A Curious Miscellany of Items Philosophical, Historical, and Literary

Manus haec inimica tyrannis.

Wednesday, August 25, 2010

Jon Elster, "Ulysses and the Sirens"

Methinks this blog has become a little too obsessed with economics. It was certainly never the intention of The Spectacled Avenger to run an economics blog, but unfortunately that is the channel in which his slender genius hath run of late.

I have decided it’s time to switch gears a bit, but like any habit, breaking this one can be accomplished only gradually. As a first step, I propose to discuss the work of a man who is possibly the living thinker with the greatest influence on my thought. He is the Norwegian philosopher, economist, political theorist, social scientist, and all-around polymath Jon Elster. Anyone who has been reading this blog for any length of time should be surprised that I’m such a fan of Elster’s. He is, after all, an analytical Marxist. On the other hand, he’s my favourite kind of thinker: the adventurous type, not afraid of Big Ideas, and also not afraid to use the insights of authors like Montaigne and Stendhal in the service of economics and social science. He has written on such diverse topics as addiction, the emotions, Alexis de Tocqueville, constitutional design in the former communist countries of Eastern Europe, economics, jazz, film, and the conceptual foundations of the social sciences.

So smitten am I with Elster’s work that I could not limit myself to just one of his books, so I will take up what I consider to be his two best, Ulysses and the Sirens and Sour Grapes. As this will be done over two posts, I’ll begin with the former title.

I consider discussing Elster a natural step in weaning myself from my late obsession with economics, because his work has implications that sap the very foundations of economics as a rational science. It also stands in the vanguard of many more recent – and in my opinion less insightful – books in the burgeoning field of what has come to be called “behavioural economics”, possibly the most sensational (and unsatisfying) of which is Richard Thaler and Cass Sunstein’s Nudge (Yale University Press, 2008).

Ulysses and the Sirens: Studies in Rationality and Irrationality (Cambridge: Cambridge University Press, 1979).

In Ulysses and the Sirens: Studies in Rationality and Irrationality, which is perhaps his most-cited book, Elster critiques the notion of rationality in the economist’s sense, as a faculty that is concerned with maximizing the satisfaction of agents’ present preferences. He contrasts this notion of locally maximizing rationality with what can be called globally maximizing rationality. This latter concept is perhaps best illustrated by those interesting situations where the best “strategy” is irrationality.

An example of this is the sort of Cold War nuclear strategy that Nobel Prize-winning economist Thomas Schelling famously explored: as a nation you make a nuclear threat, but the threat cannot be made credible because your opponent knows that you would be irrational (in the locally maximizing sense) to carry through on the threat. So you put in place mechanisms that effectively take the decision to launch out of your hands, mechanisms that will automatically trigger a launch after a certain point has been reached, and which cannot be overridden. This is the idea behind the “fail-safe” deterrent. The deterrent is globally maximizing (or so it is postulated). There is a gain from the deterrent that can only be achieved by seemingly non-rational means. In sum, if you want people to leave you alone, act crazy. And the best way of getting people to believe that you’re crazy is by actually being crazy. The paradox, of course, is that it is no mean feat to go mad on purpose.

On a more mundane level, "rational irrationality" occurs when you make any kind of intertemporal threat or promise in which carrying out the threat or promise involves some cost to yourself. Let’s imagine that Alice threatens Bob at time 1 with X at time 3, if Bob doesn’t do Y at time 2. Let’s further assume that carrying out X involves some cost to Alice. If Alice is perfectly rational (in the maximizing sense), and if Bob knows this, then if Bob is also rational he won’t do Y – in other words, Alice’s threat will have no effect. This is because at time 3 Alice will no longer have an incentive to carry out her threat. After all, doing so would now represent a net cost to her. The damage has been done; there’s no point in adding to it by incurring a cost that no longer serves a purpose.

Notice that the same incentive structure applies to promises: Alice promises at time 1 to do X for Bob at time 3 if Bob does Y for her at time 2. Assuming both are rational and that X comes at some cost to Alice, Bob will not rely on Alice’s promise, because he knows that at time 3 she’ll have no incentive to hold up her end of the bargain. Once Alice has got what she wants, why would she bother to incur the cost of giving Bob what she promised to give him?

(Of course, we should note that this whole dynamic changes where there is the prospect of repeated interactions between Alice and Bob. We're only contemplating one-off interactions here.)
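Since the reasoning here is nothing more than backward induction, it may help to see it worked through with numbers attached. Below is a minimal sketch in Python; the payoffs are entirely hypothetical and of my own devising (Elster supplies nothing of the sort), but they capture the structure of the one-off case: carrying out the threat is costly to Alice, so her time-3 self relents, and Bob, anticipating this, defies.

```python
# A toy version of the Alice-and-Bob threat game, solved by backward induction.
# Payoff numbers are hypothetical; all that matters is that defiance tempts Bob
# and that carrying out the threat is costly to Alice.

# Payoffs as (alice, bob) for each terminal outcome.
PAYOFFS = {
    ("comply",):        (3, -2),   # Bob does Y at time 2; no threat to carry out
    ("defy", "punish"): (-1, -5),  # Alice carries out X at time 3, at a cost to herself
    ("defy", "relent"): (0, 0),    # Alice lets it go
}

def alice_time3_choice():
    """At time 3, after Bob has defied, Alice simply picks her better payoff."""
    punish, relent = PAYOFFS[("defy", "punish")][0], PAYOFFS[("defy", "relent")][0]
    return "punish" if punish > relent else "relent"

def bob_time2_choice():
    """At time 2, Bob anticipates Alice's time-3 choice and acts on it."""
    defy_payoff = PAYOFFS[("defy", alice_time3_choice())][1]
    comply_payoff = PAYOFFS[("comply",)][1]
    return "comply" if comply_payoff > defy_payoff else "defy"

print("Alice's response to defiance:", alice_time3_choice())  # relent
print("Bob's choice at time 2:", bob_time2_choice())          # defy
```

Read “punish” as “carry out X”; substitute “keep the promise” for “punish” and the same unravelling reproduces the promise case above.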

What kind of a society could we expect if it were impossible to make credible threats and promises? In a large and impersonal society like ours, most market exchange in the form of contracts with intertemporal performance (which is to say most if not all contracts) would be impossible. In short, if economic coordination and market exchange depend on the making of credible promissory contracts, and on making credible threats against breaches of those contracts, then such coordination and exchange would be impossible if human nature really matched the economist’s homo economicus, the rational maximizer. Luckily we’re not perfectly rational. We have the ability to bind ourselves to actions that are not strictly rational from the narrow, locally maximizing point of view. By “bind ourselves” I mean to bind our selves. In the example given above, it would be helpful if Alice could somehow bind her later self to carry out the intentions of her earlier self. Such self-binding can be done, broadly speaking, in two ways.

Endogenous self-binding. This relies heavily on the inculcation of moral norms, and on emotional responses to those norms. This can come through upbringing or through the kind of character formation recommended in Stoic philosophy. Either way, it is dependent on appropriate emotional response. (Contrary to popular misconception, the Stoics did not advocate the extirpation of the emotions, but rather their harmonization.) In the example of Alice and Bob, Alice might be motivated to carry through on her promise by wishing to avoid the emotional cost of the guilt or shame she would incur for breaking it.

There is a good reason why emotions play this crucial role. Emotions are largely autonomic, meaning they happen whether or not we think it’s in our interests to have them. In our rationalistic culture we tend to view this as a bad thing, as a weakness in which passion overcomes our better judgment. But in the kinds of cases I’ve been describing, our “better judgment” is not better at all, at least not in the overall global sense. If Alice were red-faced with anger, Bob would have a signal that she is capable of carrying out her threat despite her better judgment. The signal gains its efficacy by virtue of the fact that it can’t easily be faked. There are good functional reasons why such mechanisms have been evolutionarily selected. You see, within a framework of strategic interaction, evolution selects for global maximization. And although some responses can be faked some of the time, evolution has also selected for human beings with an ability to sniff out the fakes.

Exogenous self-binding. This is best illustrated by the example of Elster’s chosen title. In order to be able to hear the song of the Sirens, a sound which drove men mad and made them steer into the rocks, Ulysses had his crewmen put wax in their ears and bind him to the mast of his ship. The crewmen were to have their swords drawn and were to ignore any appeals Ulysses might make to be untied. Rather than draw on internal resources for resisting the call of the Sirens, and assuming that he would be weak under its influence, Ulysses relied on externally imposed constraints.

In the example of Alice and Bob, Alice might be motivated to keep her promise because of the existence of an institution like contract law that attaches heavy penalties to such breaches of trust. Similarly, if I want to quit drinking, it might help if I give the keys to my liquor cabinet to a friend. If I want to quit smoking, I might place a hefty side bet with friends so that I’ll incur a financial penalty if my later self gives in to temptation. Thus, my later self will have an incentive to stay quit. If I need to save money for Christmas presents, I might open a savings account that does not allow me to make withdrawals before December, in the knowledge that I am likely to be tempted to spend the money before then. If I am a nuclear-armed nation, I may have a computer system rigged up to launch missiles upon detection of a credible and impending threat, in the knowledge that my later self might have doubts or lack the guts to press the launch button. The difference between endogenous and exogenous self-binding is that while the former depends on internal resources for binding, the latter depends on what could broadly be called "external technologies", whether in the form of artificial incentives or determinative mechanisms. Another term for such technologies of exogenous self-binding is “precommitment”. They constitute precommitment because they effectively determine me on a course of action before the occasion for choice even arises.
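The incentive-based devices all work in the same way: they alter the payoffs facing my later self at the very moment of choice, so that the globally preferred action becomes the locally rational one. Here is a toy sketch, again in Python, using the smoking side bet; the figures for the craving and the bet are invented purely for illustration and are no part of Elster’s argument.

```python
# Sketch of how an exogenous precommitment device changes the later self's incentives.
# All figures are invented for illustration.

CRAVING_VALUE = 5   # how much the later self values giving in to temptation
SIDE_BET = 8        # the penalty the earlier self has attached to giving in

def later_self_choice(penalty=0):
    """The later self simply maximizes its own payoff at the moment of choice."""
    give_in = CRAVING_VALUE - penalty
    stay_quit = 0
    return "gives in" if give_in > stay_quit else "stays quit"

print("Without the side bet, the later self", later_self_choice())        # gives in
print("With the side bet, the later self", later_self_choice(SIDE_BET))   # stays quit
```

The same arithmetic describes endogenous binding, with the anticipated guilt or shame standing in for the side bet as the penalty term.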

One can overemphasize the distinction between endogenous and exogenous self-binding. I may refrain from doing something I am tempted to do because of moral principles that have been internalized (i.e. made endogenous) through some process of exogenous moral training (e.g. reward and punishment structures inculcated by parents).

Elster’s work in this area dovetails with work by others, notably psychiatrist George Ainslie and philosopher Derek Parfit, on the notion of “multiple selves”. Put (over)simply, the idea here is that the human agent is best conceptualized not as one overarching self who has an ordering of preferences and chooses accordingly, but rather as an indefinite number of intertemporal selves. In many of the examples we have been considering, the preferences of earlier selves may be thwarted by later selves that give in to temptation.

In situations like addiction, it is taken for granted that the earlier “pre-craving” self knows what is best for the later “tempted” self and so is in a position to constrain the latter through precommitment devices. But one can also conceive of cases where the later self is in danger from the irrational choices of the earlier selves. Indeed, even with addiction, we can take a broader view in which the earlier rational self binds the later irrational self in the interest of some still-later self.

I must admit my own reservations here about the “multiple selves” view of agency. It seems to me difficult to make sense of these various selves wishing to manipulate each other unless we preserve some notion of unified agency, in which the intertemporal selves somehow retain identification with one another. Why would I now wish to go to such great lengths to “legislate” for my later selves, unless I identify them as me? After all, by the time the occasion comes for a later self to act, the earlier self will no longer exist. There seems to be an incoherence lurking here. Still, the “multiple selves” notion is provocative, with far-reaching implications.

Some of these implications are ethical, rather than merely metaphysical: if I am not a single me, but rather a series of intertemporal selves, then it would seem that the relation between successive selves is no different from the relation between simultaneous agents, i.e. between different agents at a given time. If that is the case, what right does my earlier self have to limit the choices of my later selves? It would be no different from my claiming the right to limit your choices. This is paternalism at best, tyranny at worst.

(On the other hand, we might view a person in the throes of addiction as exercising precisely such a self-tyranny: sacrificing the interests of later selves to the arbitrary desires of the present self. An analogous kind of tyranny occurs at the aggregate level, when a society sacrifices the interests of future generations through deep or prolonged deficit financing for current consumption.)

I could go on exploring the ethical and metaphysical difficulties of the “multiple selves” approach, but I’ll save it for the post immediately following the next one.
