Saturday, September 8, 2012


Someone asked me recently what I think of the use of masks, particularly the Guy Fawkes mask in the Occupy protests.

In general, I think that images, masks, and other kinds of public symbols very rarely, if ever, stay firmly rooted to their original condition of meaning. Visual artists, in fact, often compare their art to children who grow up, leave home, and become their own distinct creatures in their interactions with others. Hence, in general, I think meaning is as varied as interpretation, which invites all sorts of ambiguities that attach to questions we may ask about things like Shakespeare’s plays, stories in the Torah, visual images, sculptures, etc.

The degree to which meaning may be as varied as interpretation partly depends on at least two questions: (a) how vividly attached is the image/symbol to the original conditions of usage, and (b) how widespread is knowledge of the original condition?

Take an example. About two weeks ago, I read a story of a young businessman in India who opened a retail store. Guess what? He named it “Hitler,” and he used the image of the swastika as his store’s brand logo. He claimed that he did not know anything about the Nazis or the atrocities against the Jewish people. He had heard some co-workers at another company he used to work for referring to their supervisor as “Hitler” to indicate how authoritarian, stingy, and frightening his leadership style was. That was his only reference point for the name and logo. For some reason, he wanted to name his new store “Hitler” and use the swastika logo. What’s interesting about this particular incident is that if we examine issues (a) and (b) above, the connections are very tight for the swastika, its actual historical conditions of meaning, and a widely distributed public recognition of the meaning. So in this particular case, the relation between meaning and interpretation has less wiggle room.

The Italian philosopher and novelist Umberto Eco coined a phrase many years ago: “encyclopedias of reference.” He uses this phrase to describe our interpretive filter, the framework of concepts that we use to interpret all the data that comes to us in any form. It is the interaction of the incoming data and our encyclopedias of reference that gives birth to interpretation and hence our conferral of meaning onto the things we experience. These encyclopedias of reference contain so many things, but for me personally, they contain films that have deeply affected me. For example, when I think of coming-of-age stories, I think of Lasse Hallström’s film My Life as a Dog, and any time I hear of a young person struggling to make sense of his or her experiences of a complex world, I think of that film. Others probably have other kinds of things in their encyclopedia of reference that they use to interpret their experiences.

In the case described above of the unfortunately ignorant young man who opened his store with a most unfortunate name and logo, a great portion of the world, including the non-Western world, has a shared encyclopedia of reference that confers very similar understandings of the significance of a swastika. Thus, the judgment was rather swift and severe.

Now, what about those Guy Fawkes masks used in the context of the Occupy protests? For the English, their encyclopedia of reference may be the historical Guy Fawkes. But for Americans, who in general are addicted to both pop culture and Hollywood entertainment, I wonder if their encyclopedia of reference is the 2005 film adaptation of the comic book V for Vendetta. In the film, a huge host of anonymous protesters against the British government appears at Parliament wearing the Guy Fawkes mask, just like the mask of the film’s horribly disfigured anti-government protagonist.

If that is what the encyclopedia of reference is for the American protesters, perhaps they are seeing the alleged general similarities between the structure of American corporate culture and the dystopian portrait of British authoritarian government in the film. There is thus a creative fusion between the collective usage of the Guy Fawkes mask in the film and the collective usage of the very same mask in the real-life protest of the Occupy movement. Perhaps the uniformity of the mask also underscores the solidarity of the 99% against the 1%. Those of the 99% are totally anonymous, especially to the eyes of the 1%. When the 1% see the protesters with their totally anonymous masks, perhaps the message is, “Hey you! You don’t even see us as individual human beings! And we’re wearing these masks to underscore your deficit.” Hence, the masks in protest become symbolic of both the solidarity of the protesters and the moral critique of the allegedly depersonalizing vision of the 1%.

Wednesday, September 5, 2012

Occupy Protesters' fashion signals


I have been following some of the Occupy protests with some interest when I’m not overwhelmed by other duties. The practice of protesting and the use of fashion to illustrate the content of the protest have been joined for a very long time. I think of the original British colonists on the eastern shore of America (which at the time, of course, was partly composed of British colonies) who protested against the taxation from the British crown. In protest, they dressed themselves in the traditional clothing of the Native American tribes while dumping British tea into the harbors – their attire proclaiming that they identified with the growing, independent sense of America and its new interests and less so with the old British colonial enterprise. The Native American garb during this protest act was not and never was intended to be a disguise in any sense. It was what social scientists call an intentional external signal, every bit as intentional as the iconic raised black glove of Tommie Smith at the Olympics in 1968.

That’s the angle (i.e., signal theory) that helps make sense of and critique the Occupy protesters. As refined and developed as human beings are, we are still products of evolution, and as such are deeply continuous with the entire biological world.

For example, take the majestic and sometimes laughably exaggerated peacock. The brilliant colors are a signal to others (typically to females) that one is worthy of mating. The general idea is that animals, including humans, use a plethora of nonverbal signals to instruct others how and where to categorize the signaler. In fact, before the (more or less) egalitarian fashion culture we now inhabit, there was a time when particular kinds of fabrics were ONLY allowed to be worn by particular social and economic classes, usually the upper classes. Crossing that fashion line was punishable by law. In that social context, it is obvious how the nonverbal donning of a mere fabric would be a signal about where in the class system to categorize the fabric wearer in question.

In terms of fashion in the context of political protest, the human practice of signaling goes in two directions. First, the kind of fashion (clothing, hair, accessories, skin art, etc.) sends a signal to others to communicate one’s identification with a cause or ideology, but there is an equally important second signal. That second signal is the one that is sent to oneself. This sounds strange, but it becomes less strange when we are willing to grant that self-knowledge comes less from introspection of our own private thoughts and more from how we interpret our own observable actions. Example of observing one’s own action: Suppose I am at a grocery store, and a homeless person approaches me to ask me to buy him a meal. Let us say that I do so. As I am buying this person a meal, I am observing my own action, but I also interpret my own action and give it a meaning. The interpretation, let us say, is that “I am a generous and caring person” or something like that. In my memories of that action, I revive the same interpretation and thereby give my own moral identity its meaning. Hence, whenever I send a signal to others, I am also sending that signal to myself.

The connection with fashion in the context of this signal theory is that what one chooses to wear is a signal in both of the ways above. In the context of political protest, as a marker of the first type of signal, it is a message to others who witness the protest and fashion that the protester aligns herself with a particular ideology – for example, “unfettered corporate greed degrades humanity” or something like that. The reflective protester may even choose fashion that aligns herself more self-consciously and consistently with the anti-corporate ideology. She may intentionally wear items that are second-hand thrift shop items. She may intentionally wear hand-me-downs during the protest or other strategically designed signals to observers that highlight the distance between the protesters’ ideas and those of an allegedly greed-driven corporate America. The distance is highlighted by both the content of the message and the appearance of the protesters. In this way, the fashion is in a relationship of concord with the content of the protest.

This perspective of signal theory also opens an avenue for critique. If one searches through the many photographs of the various Occupy protests that have occurred, especially the ones in New York City, one will see protesters wearing clothing and accessories with the brands and logos of the very sort of corporate entities their protest is supposed to be criticizing! Mixed signals! One wonders about the degree to which they are sincere or perhaps just self-deluded about their convictions. Are they not picking up on their own signals? This curious situation of mixed signals has not been lost on international media. Many British news outlets have quipped that many of the New York protesters are actually engaged in a grassroots fashion parade that pretends to be a protest movement. The British media can raise the possibility of this insincerity only because they already recognize that form (self-presentation, of which fashion is primary) should align consistently with function (content of the protest, anti-corporate message, etc.). Where there is a disconnection, there is the invitation for suspicion.

Tuesday, September 4, 2012

rambling about Kant's shadow



Here’s an outlandish, famous saying that has a claim to being true: “Most of philosophy is a footnote to Plato.”

Time will tell if the following outlandish claim is also true: “Most of post-scientific-revolution philosophy is a footnote to Kant.” When I think about the shape of philosophy after the close of the early modern era, it really is stunning to see the length of Kant’s shadow.

When Kant (in a creative subversion of Plato’s spirit) split realms into the noumenal and phenomenal, and then argued that beliefs about the traditional realm of Being can only be justified by practical rationality (e.g., theoretical postulates about, say, The Good, required by the operation of the moral law), he affected the future state of philosophy in the United States in rather intense ways.

This idea that what many at the time considered the most important kind of philosophical beliefs (Forms, Being, God, the Good, etc. – all capitalized for drama) could only be arrived at in a very transmuted form via practical or pragmatic philosophy energized early American intellectuals such as William James.

Since pragmatic or practical rationality occurs in the context of community and society, the notion of deeply embedded social practices becomes a motif of the early American pragmatists. This is quite a different emphasis from the methodological solipsism one finds in Descartes’ method.

The intense reaction against the notion that praxis reigns supreme as a philosophical first principle was spearheaded by giants such as G.E. Moore and Bertrand Russell, who argued that some form of conceptual analysis is the way to reinterpret and thereby save in a fascinating way the spirit (if not the letter) of the old Platonic impulse for formal knowledge. The radical success of this analytic program, initially targeted against the German and British Idealists, ended up keeping American pragmatism down in the American universities for at least half a century.

The analytic program had its excesses which bred counter-excesses from the other side, such as the reductionism of the agent to her social world, a kind of socio-political-economic determinism, where the individual is really just a determined atom relative to the larger cultural nexus of causes and effects. And because social relations are, by their very nature, located in a particular place and time, this invited historicism, which to my mind is another kind of reductionism, only on the other side of the aisle. Not to be outdone, I suppose that one might consider the agent as mere smoke to neurophysiological fire as another kind of reductionism. The main difference is that the deterministic nexus is located in the skull rather than in society.

I think this big argument – this conflict between the gods and titans – about (i) the so-called “big questions” about being, goodness, truth, and agency (throw in beauty if you distinguish it from these previous big ideas) and (ii) the method for answering them is still alive and kicking. The analytic paradigm has evolved in amazing ways. The pragmatic, praxis-oriented program has also moved in some amazing directions, and the delightful thing about this joint evolution is that the best versions of each side have moved in lockstep with the advance of empirical non-reductionistic science.

Maybe this is a sign that progress can happen, even in philosophy.

Tuesday, July 10, 2012

the new normal for western civ

It’s the middle of summer, and like many of my colleagues, I’m realizing that I need to get to serious work. So what do I do? I procrastinate! I post a long, rambling blog entry that is more like free associating. Well, perhaps it’s my own version of therapy.



Every semester, I teach a team-taught course on western civilization. The teaching team, consisting of faculty from Philosophy, History, Political Science, and Religion, attempts to convey a coherent narrative (NOT a totalizing one!) that does some justice to the thorny, complicated, and contradictory relations between the phenomena of western civilization.

Here’s the kicker: This class is entirely populated by freshmen from every major in the University!

It has been a challenge to me as a thinker because I’m more of a “special problems” kind of guy – meaning, that I like to focus on technical puzzles in philosophy or narrowly constrained phenomena. The notion that there is a “broad sweep of intellectual history” is a bit daunting, but that’s exactly what I’m tasked to help convey in this team-taught course.

From a personal perspective, it’s been quite enlightening. I am a person of religious faith (of some [progressive-liberal] Christian flavor), but I confess that at least two days out of the week, I find myself wondering if I “drank the Kool-Aid.” I am well aware that not just from a psychological but from a rational point of view the naturalist perspective of reality (or indeed, say, the Buddhist view) makes sense and is quite coherent. I very easily switch between these alternate interpretations of reality, much like the Gestalt perspectives of the famous “duck-rabbit” or the Necker Cube.

I know I’m not alone in this; in fact, several of my colleagues from several departments of my faith-based school resonate deeply with this sensibility.

The benefit of teaching in this team-taught course is that it has helped clarify for me at least a few factors that explain this.

First, I want to disavow any commitment to the following standard story: Human progress = demystification tout court. The standard story suggests that there is a rational core of human reason that is buried under several layers of superstitious husk. Progress involves sloughing off the layers that somehow impede essential human rationality from perceiving The Real as it is, where The Real = Naturalized-with-no-remainder. That strikes me as too simplistic. It downplays the way that social realities are products of creative forces – emphasis on creation. The standard story asks us to believe that there is some view from nowhere that is obscured by our benighted superstitions. This is the distortion that is the root of the “science versus religion” trope that is so overplayed.

Second, I want to acknowledge that the rise of the sciences from medieval optics to the golden age of the 17th century did play a huge role, but not in isolation. Here’s an analogy: Ask a room full of historians what caused World War I (or WWII). A few usual suspects will make an appearance, but beyond naming them, there’s not likely to be overwhelming interpretive consensus. The same goes for invoking the scientific revolutions of the early modern period. Yes, these were huge, but they were able to play the role they did because of the intersections with all sorts of other secularizing influences.

Here’s a thumbnail sketch (in media res! still coming together in my head):

The scientific revolutions did play a role. They were a final nail in the coffin of the view of the universe as “haunted.” Back in the old days, the idea that spirits (good or bad) could exercise power on the human agent was taken very seriously. In the distinctively Christian appropriation, this is the root of sacramentalism, pilgrimages, and relic veneration. It's the world of bad-magic versus good-magic. It's the world of stories about the bronze serpent Moses raised on a staff to heal poisonous snakebites. It's a world of stories where long hair + Nazarite vows = superhuman strength. It's the world of stories about miraculous handkerchiefs that, imbued with the magic of St. Peter and St. Paul, heal the sick. In these foregoing stories, the idea was that the good-magic from God was absolutely essential to protect oneself from the bad-magic of the bad spirits, because haunted worlds are dangerous places where the ordinary person is perpetually vulnerable to forces beyond understanding and control. The best recourse is to secure some loan on the magic of God. It's a world of stories about the Sons of Sceva who get their asses kicked because their good-magic was not powerful enough (perhaps the wrong lender) to combat the bad-magic.

With the mathematization and mechanization of nature, this magical view of material substance is diminished. There is neither a great chain of being nor a qualitative difference in matter or its properties, whether in Heaven or on Earth. Heretofore “spiritual” phenomena are liable to an alternate form of explanation that is intelligible and predictable under conditions of technological progress. Rather than relying on the inscrutable purposes of spirits, a way is open to interpret and even control to a degree the exigencies that beset the human condition. Similarly, the cognate metaphysics of Aristotelian forms, and along with it teleological explanations, suffers a diminishing. This is a great boon for the agent, because she is now no longer hopelessly vulnerable to forces beyond either understanding or control. There is now at least the promise that she can exercise, from her own power rather than supplication to a spirit or inspection of a moribund metaphysics, some degree of control over her own destiny.

This is all occurring at the same time that social hierarchies are collapsing along with the great chain of being. The realization that one is not locked into a fated level in the cosmos and the Protestant emphasis on the sanctity of ordinary life and work (Weber later appropriates this in his analysis of the Protestant work ethic) combined to create new social realities, new moral visions of (a) goodness divorced from Church interpretations of hierarchy, (b) goodness exclusively connected to what was formerly considered “lower” or “secular” considerations, and (c) the continual eviction of ghosts from the natural world that emancipated folks from fear of the unpredictability that comes from (the bad) spirits/demons.

(You don't really have to worry about evicting the benevolent ones, right? I find it funny that folk sometimes talk literally about their “guardian angels” but not about their “pestering demons.”)

Because of these considerations, for perhaps the first time in modern Europe, the raw materials are in place for a positive, content-rich moral vision of a human life in a universe that does not need to make explicit reference to the supernatural or divine as a justification for its orientation as a life philosophy. The power of old religious authorities to police such new realizations is largely gone. Other social structures are in place that wield greater sway over the sentiments, hopes, and dreams of ordinary folk – emphasis on “ordinary,” as in mundane.

This is the centuries-prior seed for Dawkins’ claim that Darwin made it possible to be an intellectually-fulfilled atheist (a true statement). In this case, it’s about being a morally-fulfilled secular person, or, at any rate, a morally-fulfilled non-Christian European. In the background, it goes without saying that the Protestant Reformations and the subsequent Catholic Reformations deconstructed the notion that there was some unitary moral vision of human life that was credible once and for all.

Fundamentally, I think it’s these new moral possibilities and visions that drove the epistemological and social changes. Of course it’s in reality a messy cycle, but some feedback loops are stronger than others.

In this rambling blog post, I’ve tried to articulate some of my intuitions/interpretations drawn from teaching in this team-taught course about western civilization. I started by asking how it can be the case that someone like myself can be simultaneously committed to a particular religious point of view, all the while acknowledging the rationality and indeed compelling force of alternate, incompatible interpretations of reality – perspectives that I find myself drawn to quite powerfully several days of the week (this is how philosophers experience doubt, which is different than skepticism). In previous centuries, this tension was less pronounced, rarer, and for many, unthinkable. In our century, it’s so utterly normal as to be hardly worth mentioning.

I’m sure that there are going to be those who would promote this as a virtue and others who would say that it’s a damnable vice. For persons of faith such as myself, I’d say it’s both in different respects.

Sunday, July 1, 2012

two kinds of inference errors


We’re in the midst of a searing heat wave in Minneapolis. It’s about 91 degrees, and I’m sitting in my backyard enjoying a tasty Hefeweizen while thinking about cognitive errors.

Here are two different kinds of breakdowns in inference (among the multitude of types).

The first one is really simple to understand, but an example is better than an explanation.

type one

In a game of chess, the bishop can only move diagonally. This rule governing its movement is built into the chess universe. This rule has no exceptions.

A player has two bishops, one on a red square and one on a black square.

If one of the given bishops starts on a red square, then one might be tempted to conclude that the bishop must always be on a red square and never can be on a black square. It can seem like this corollary is every bit as exceptionless as the diagonal-movement rule.

In many cases, a bishop that starts on a red square will stay on red squares for as long as the game is live.

But there are exceptions. Suppose the bishop is captured. The player who lost her bishop may promote a pawn that manages to cross to the other shore. Suppose that the pawn is promoted on a black square. That reincarnated bishop is a black-square bishop, contrary to the ersatz rule that it can only be on a red square.

This illustrates, in a stupendously nerdy way, the threat of overly hasty inference in a game context where more complex but accessible rules would otherwise block or temper the mistaken conclusion.
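The nerdy point can even be run as a sketch. Here is a minimal, hedged illustration in Python (the coordinates and helper functions are my own invention, not a chess library’s API): the color-invariance “rule” holds under every diagonal move, yet promotion quietly produces a counterexample.

```python
def square_color(file, rank):
    """Color parity of a square on a standard 8x8 board: 0 or 1."""
    return (file + rank) % 2

def diagonal_move(file, rank, d_file, d_rank):
    """A diagonal step changes file and rank by the same magnitude."""
    assert abs(d_file) == abs(d_rank) != 0, "bishops move only diagonally"
    return file + d_file, rank + d_rank

# The hasty corollary: a bishop's square color never changes.
f, r = 2, 0                          # c1, using 0-indexed (file, rank)
start_color = square_color(f, r)
for step in [(3, 3), (-1, 1), (2, -2)]:
    f, r = diagonal_move(f, r, *step)
    assert square_color(f, r) == start_color   # holds for every diagonal move

# The exception: a pawn promoted to a bishop appears on the eighth rank,
# on whichever file the pawn happens to reach -- either color is possible.
promoted_color = square_color(4, 7)            # promotion on e8
assert promoted_color != start_color           # the "rule" is broken
```

The invariant follows from parity: adding equal magnitudes to file and rank leaves (file + rank) mod 2 unchanged, which is exactly why the corollary feels exceptionless until promotion enters the picture.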

Again, this seems to be the most common and straightforward kind of inference breakdown.

type two

There is, however, another kind of breakdown in inference that has less to do with failing to account for accessible rules. It’s subtler and more interesting in that the complete rules themselves are not quite available, yet the person who makes inferences cognitively acts under the fiction that they are. But how does this occur? It seems that it has to do with a blindness to the changing conditions of observation, where the conditions change because of either a technological or cultural innovation.

Again, an example is better than an explanation.

For a long time in physics (immediate qualifier: limiting the discussion to scientific realism), it was thought that the mass of an object does not change even under conditions of motion. From the point of view of the crude observations of macro-sized objects, this appears true. Even at the micro-level of most observations that any of us would make, this is sufficient. Even objects that move extremely fast by animal standards retain, for all practical purposes, the same mass while in motion.

Generate a rule of physics: The mass of an object is independent of speed.

This rule about mass and motion is mostly correct modulo the context of ordinary conditions of observation. In fact, for a long time (not too long ago, in terms of the scale of the history of science), it was sufficient for physicists!

This changed when our technology changed. This changed when we became able to inspect a bit deeper into the nature of material substance.

Contrary to the former rule, mass does appreciably increase with velocity, but only at velocities near c (speed of light).
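The correction can be made concrete with the Lorentz factor, γ = 1/√(1 − v²/c²), by which the relativistic mass exceeds the rest mass. A small numerical sketch (the function name is my own):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2); relativistic mass = gamma * rest mass."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds the correction is invisible, which is why the old
# rule ("mass is independent of speed") served physicists for so long.
airliner = lorentz_factor(250.0)     # gamma differs from 1 only around
                                     # the thirteenth decimal place

# Near c, the departure from the old rule is dramatic.
near_c = lorentz_factor(0.99 * C)    # gamma is roughly 7: the moving
                                     # mass is about seven rest masses
```

The same formula covers both regimes; only our conditions of observation determine whether the deviation from the old rule is detectable.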

It’s not that Nature changed; rather, our conditions of observational access to Nature have evolved in a good way. As our evolution moves us forward in our observational powers, we reform old rules, replace them, and find new mysteries of material substance that suggest that even our reformed rules may be overturned in time. Orthodoxy in physics is (or should be) always hypothetical – meaning, that allegiance is a complex negotiation between Nature’s presentation of herself, our observational technologies, our theoretical models of explanation and prediction (prediction is explanation with time’s arrow imaginatively reversed), and a judgment about acceptable alienation between the theory’s predictions and current anomalies that have yet to become recalcitrant.

The nice thing about some scientists (and some philosophers) is that this complex relationship involving both observation sentences and value sentences (the epistemic and aesthetic values of simplicity, parsimony, elegance, etc., that inextricably are entangled with theory assessment) is front and center of their theorizing. There is no need to hide what is obvious under even minimal analysis.

This more subtle epistemic orientation is a nice contact point for science and philosophy, whose interrelationships have been written about voluminously. While not immune to breakdowns in these more subtle epistemic orientations, I see more numerous breakdowns occurring elsewhere.

The possible breakdown in inference in these sorts of contexts is when an agent makes a mistake about the quality of his observational placement. He is not properly taking into account the likelihood – much less, the possibility – that further evolutions in observational conditions may provide further insight into the very nature of the subject matter about which he exhibits dogma.

I’m sad to report that this happens noticeably in reasoning about religion and God/god/G-d/g-d/gods/The-Real.

Over the years, I’ve come to see the affinities between on the one hand the Kuhnian (but not the more baroque readings) and Lakatos-ian interpretations of the history of science and on the other hand the notion that Christian theology should operate with some formal family resemblances to the earlier aforementioned complex negotiation of rationality that exists in science. (I focus on Christian theology only because (i) I’m more familiar with it, and (ii) I self-identify as some form of Christian 4 out of 7 days a week.†)

But this dynamic isn’t even acknowledged – much less fretted about – by a huge contingent of religious folk.

I think I’m beginning to understand Christian Fundamentalists* a bit more, in that they tend to make both types of inference errors: (a) mistakenly believing that they have the right rules (as opposed to the hypothetical “right now” rules) and (b) not taking into account the probability that the conditions for observation evolve and in fact alter (and should alter) our allegiance to hypothetical “right now” rules (or propositions or creedal statements or…). They forget the motto Ecclesia semper reformanda est (“The church is to be always reforming.”).

Please allow me to head off a red herring at the pass. The bogeyman of “relativism” does not apply at all, precisely because the whole enterprise is premised on progress. In science (well, in scientific realism, to which I subscribe), the march towards better and more accurate representations of The Real is prosecuted hand-in-hand with – for lack of a better phrase – epistemic humility on the part of the scientists and philosophers of science I resonate with most. In the philosophy of science, we even invented a term to describe this orientation: verisimilitude.

In religion, I don’t see why it can’t be the same. (Actually, I see why, but it has nothing to do with rationality.) One specific analog for religious programs and religious ethics of the new observational possibilities and technologies in science should be the complex cluster of new social, political, and economic possibilities that exist for women, racial/ethnic minorities, and other types of under-represented or historically-repressed groups in the West and increasingly other global places. These new “encyclopedias of reference” allow for deeper insight into religiously important phenomena such as scripture, hermeneutics, nature/function of religious communities, etc., while simultaneously correcting a gross distortion – namely, the distortion that religious/doctrinal history floats somehow free of other social and historical realities such as the political economies that largely determined the modern consciousness, to name just one. (This is not a reductively Marxist claim! Adam Smith could say the same!)

The character of these new insights cannot be controlled, and hence they may and perhaps should be expected to subvert older “rules.” None of this can be legitimately controlled in advance, unless “legitimacy” is narrowly defined as “not contradicting orthodox understandings of Scripture.” That narrow definition wears on its sleeve the logical fallacy of begging the question, since the very conversation that is of most interest to religious believers whose brains are turned on revolves around questions such as “In what ways might Scripture gesture beyond the confines of its historically situated representations?” and “What new discoveries and semantic horizons – possibly in tandem with evolving conditions of observation/reading/interpretation – does Scripture have for us who sincerely seek to live from a religious point of view?"

Well, it’s getting hot out here, and this simple blog post has inflated beyond my intention. I’m headed back indoors to retreat to air-conditioned bliss.

Signing off on this really hot Sunday afternoon…

Peace be with you.

* Any form of Fundamentalism (religious or otherwise, left or right, etc.) will do, since various forms of Fundamentalism share the same tendencies towards inference errors.

† Gospel of Mark 9:24.

Friday, June 8, 2012

moral psychologies


I finally picked up Jonathan Haidt’s book The Righteous Mind: Why Good People Are Divided by Politics and Religion.

Like many books of this scope, it contains a few over-extensions of explanatory reach, but that ground has been covered by many good book reviews in print.

Despite the overgeneralizations, there are many suggestive lines of thought concerning moral psychology.

For instance, consider the following hypothetical scenario:

A family suffers the tragic loss of the family pet, in this case, a dog that is hit by a car. Rather than let perfectly good meat go to waste, the family decides to cook the dog for supper.

Personally, I find this scenario distasteful, if not disgusting. However, I would be hard pressed to say that it is morally wrong, and in fact I do not think it is.

Scenarios like this are meant to illustrate an instinctive distinction that many make between a taboo and a morally wrong action.

Here’s the interesting thing: In some cultures outside of America and the West in general, respondents would claim that the action was actually morally wrong, not merely a taboo.

Even though most cultures distinguish between taboos and actions that are morally wrong, the precise activities that fall into each category, or into both, differ depending on several factors.

One such factor tracks with the emergence of newer conceptions of autonomy and individualism. One might call philosophies inspired by autonomy, liberty, and individualism (e.g., the West generally) egocentric. The moral “bottom line” revolves around the harm principle (i.e., “One can do what one pleases as long as one does not harm another.”). So… in this more egocentric moral psychology, taboos (such as eating the family pet) are avoided because our culture or society judges them grotesque or impolite, not because there is any morally wrong content in the action.

In many parts of the world, however, the moral bottom line has a different focus. It’s not that a harm principle is lacking. Rather, it’s that the harm principle is articulated differently, through the group’s interpretation of what counts as harm. Such philosophies of life are more sociocentric, where the concepts of duty, loyalty, and fidelity to tribe are paramount. As such, sociocultural taboos are group markers of inclusion (or exclusion), and the tendency to see the world in moral terms extends much wider, classifying more types of behavior as moral matters than one sees in a more egocentric moral psychology.

Hence, what an egocentric moral psyche may see as a mere taboo may be interpreted as a morally charged issue for someone who is more sociocentric in their moral psychology.

Furthermore, it’s not the case that the sociocentric psyche sees these taboos as contingently possessing moral content (i.e., having moral evaluative properties because of their particular cultural evolution, etc.), but rather the tendency is to universalize their moral outlook, believing that their moral demarcations are normative for everyone else including those outside of their tribal affiliations, no matter their distinct cultural histories.

Now, you can probably already see the dangers lurking for overgeneralization and over-reaching explanations with this distinction in moral psychological outlook, and those dangers are real.

But as a research project into moral psychology, this is still a promising heuristic.

For example, it helps me make sense of why early forms of Judaism include so many taboos within their religious and moral laws, with no sharp distinction between these ceremonial forms of life and the moral point of view. Various “abominations” related to food choices, interracial marriage, fashion accoutrements, and human sexuality are lined up side by side. These all make more sense in the context of the sociocentric pressures, at a pre-pluralistic time, to reinforce and police tribal affiliations and “the Other.”

With the rise of egocentric moral psychologies, which personally I take to be a real advance in our moral outlook (I would say that, wouldn’t I?), the category of “mere taboo” has grown to encompass more of these types of tribal affiliations. The evolutionary arc then takes over and shaves away even many of these old taboos, resulting in a more tolerant, morally minimalist society – or at any rate, a more tolerant one, apart from a few holdout zones.

I see some of the religious culture wars recapitulating this dynamic with patterns of retrenchment of liberty on the side of the sociocentric moral psyches and the extension of liberty on the side of the egocentrics. Yes, that sounds like the gross oversimplification of a classical liberal, but when empirical data points plot overall in a curve shape …

Saturday, April 14, 2012

downshifting in a very creative way


Alan Hill lives inside an abandoned auto plant in Detroit (he does so legally):


Tuesday, April 10, 2012

the passions


Wow, it’s been ages since anyone has posted here.

I flip-flop over which of two perspectives on the passions I find more interesting: the generally Stoic and the generally Cartesian.

(Point of clarification: I think they’re both false when measured against human experience, but that’s consistent with being fascinated with them. I’m fascinated by false things all the time.)

For the Stoics, the passions were assimilated to a type of false belief. The goal of a good life is to eliminate them from the cognitive architecture, because it’s always better to have fewer false beliefs. (My psychologist friends would here press me to define “better.”)

For Descartes, the passions are classes of divinely engineered response functions that, when operating at peak efficiency, move us to act appropriately given the appropriate conditions. They are like the trigger mechanisms for what V.S. Ramachandran and S. Blakeslee call the “vigilant self.” The goal is not to get rid of them. Rather, the goal is to bring them under the authority of reason. What does it mean for them to be under the authority of reason? For Descartes it means that they are under the control of the will. Perhaps someone can dispute this, but I don’t know how else to interpret what it means for reason to be sovereign in each and every token of a joint action between reason and passion. I say this because of the connection Descartes draws to disinterest or disengagement. In every particular decision tree we encounter, reason tells us what we ought to prefer (pace Hume), yet we should be, strictly speaking in terms of our passions, detached from the actual outcome. Reason then informs the will to prefer noble decision branch X and tells the passions, “Sic ‘em!” Thus is born a fully rational, intentional action. (Okay, these last two sentences are clearly rhetorical overkill.)

At the end of the day, I find both of these proposals fascinating and hopelessly naïve. The former turns humans into cyborgs. The latter retains the passions, but at the (huge) cost of a homuncular view of the self.

But I also think they’re two of the coolest views on the block. Why are all the cool views false?