Tuesday, February 23, 2010
Here is a recent example. I was listening to a radio phone-in show a few weeks ago, where the topic of discussion was something like “The Life Worth Living”. It naturally gravitated around questions of ethics and the good life — the traditional purview of moral philosophy. One caller, in response to a question about how one decides what is ethically right, launched into a learned disquisition on the nature of moral reality, meta-ethics, moral realism, moral non-cognitivism, non-natural moral properties, the naturalistic fallacy, blah blah blah… I immediately recognized the professional jargon he was manipulating and vomiting forth, and it took me no time to surmise that this young man (for so he sounded) was a graduate student in philosophy.
The host of the show, to his credit, was quite perceptive, and after a certain point brought the caller up short with the following simple question: “That’s very interesting. Now how do you apply this in your own life when making difficult ethical decisions?” The learned caller was literally speechless. The poor fellow had not the first notion of how to step down from the lofty heights of meta-ethical speculation to address the earthy topic at hand, which was, “How should I live?” If academic philosophy can no longer engage with this question, which is arguably the only truly useful task philosophy has left to it, then maybe the discipline should be put out of its misery.
The irony is that many callers who sounded much less educated than the young philosopher gave answers to very much the same question that were a million times more sensible and informative. Although they were not learned, these were people who “knew what’s what, and that’s as high / As Metaphysick wit can fly” (Samuel Butler, Hudibras (1663), I.i.149-150).
Still, despite all my protestations to the contrary, philosophy remains in my blood, which is why I continue to follow “Leiter Reports”, a blog directed at academic philosophers and written by Brian Leiter, a philosopher who teaches in the law faculty at the University of Chicago. Back in March of 2009 there was a posting on the site that pretty much confirmed my suspicion that academic philosophy is moribund (with apologies to Professor Leiter, who produces what is actually a very engaging blog, given the subject matter).
The posting was a list of the top forty “most important philosophers” of the past 200 years, the results of a poll of professional philosophers, who in total cast 600 votes. I reproduce the list below, after which I’d like to subjoin a few remarks on what the list might tell us about academic philosophy today.
* * *
1. Ludwig Wittgenstein
2. Gottlob Frege
3. Bertrand Russell
4. John Stuart Mill
5. W. V. O. Quine
6. G. W. F. Hegel
7. Saul Kripke
8. Friedrich Nietzsche
9. Karl Marx
10. Søren Kierkegaard
11. Rudolf Carnap
12. John Rawls
13. David K. Lewis
14. G. E. Moore
15. Donald Davidson
16. Martin Heidegger
17. Edmund Husserl
18. Hilary Putnam
19. William James
20. Charles Sanders Peirce
21. Alfred Tarski
22. J. L. Austin
23. P. F. Strawson
24. Karl Popper
25. G. E. M. Anscombe
26. Jean-Paul Sartre
27. John Dewey
28. Wilfrid Sellars
29. Arthur Schopenhauer
30. Henry Sidgwick
31. Alfred North Whitehead
32. Michel Foucault
33. Bernard Williams
34. Gilbert Ryle
35. Maurice Merleau-Ponty
36. Franz Brentano
37. Michael Dummett
38. Jürgen Habermas
39. Hannah Arendt
40. Simone de Beauvoir
Now, first off, notice that there are only three women on the list: Anscombe, Arendt, and de Beauvoir. Of these, the latter two just squeeze onto the list in spots 39 and 40, respectively. This bizarre exclusion of women is a continuing shame and blot on the profession, especially as there are many women philosophers of first-rate ability (e.g. Christine Korsgaard, Onora O’Neill, and Philippa Foot). Women can be every bit as good as men at philosophy, so there must be something else going on. The sad fact is, even within academic circles the profession has the reputation of being unwelcoming towards women. I think part of the problem is that philosophy is fundamentally antagonistic, and philosophers are an unpleasant lot, much given to petty and aggressive arguing, which creates an uncivil atmosphere that women are probably wise to shun. (For more on this, see Norman Swartz’s classic essay “Philosophy as Blood Sport”.)
Next, to my knowledge, there is not a single person of colour on the list, which is too bad, given that there have been, and are, some top-notch philosophers of one colour or another (e.g. Alain Locke, Cornel West, Kwame Anthony Appiah, Amartya Sen, and Jaegwon Kim). The continuing lack of visibility of women and minorities in philosophy is probably related to the aging of the profession (see below).
Gender and colour aside, there are also what I take to be some unusual exclusions from the list. No Jeremy Bentham, Auguste Comte, Ralph Waldo Emerson, Henry David Thoreau, Herbert Spencer, or Mikhail Bakunin — any of whom is easily more “important” (by almost any standard) than, say, Lewis, Sellars, or Strawson.
Even if we stick to the evident modern and Anglo-centric biases of the respondents, there are still other interesting omissions: R. M. Hare, W. D. Ross, Isaiah Berlin, David Gauthier, F. H. Bradley, R. G. Collingwood, H. L. A. Hart, Ronald Dworkin, Thomas Nagel, and Robert Nozick. As these names indicate, there also seems to be a relative lack of moral and political philosophers on the list, which is strange, as arguably philosophers in this area are more likely to be important in the “real world” than philosophers from other areas (although here my personal bias may be speaking). Obviously, “important” for the purposes of this list means important to academic philosophers, and to the work they do. If “important” meant something like “having an impact outside the academy”, I imagine the list would be very different indeed.
Instead, the list is heavily weighted towards philosophers in the analytic tradition, which I suppose is understandable, given that respondents may be presumed to be overwhelmingly English-speaking. But why are so many of them logicians? Four of the top five, and five of the top ten, are primarily logicians. In total, ten of the forty, or one quarter, are primarily logicians. This tally uses my own very narrow conception of “logician”; on a broader one, I calculate that the number could go as high as sixteen of the forty.
Also, I suspect that many of the respondents suffer from the characteristic parochialism of modern academic philosophy in having a very narrow definition of who counts as a “philosopher”. If we were to broaden this definition, then the absence of, say, Marshall McLuhan, Noam Chomsky, Friedrich Hayek, Charles Darwin, et al. might seem rather strange. Again, even accounting for the relative dearth of philosophers from the continental tradition on the list, how can the absence of Jacques Derrida be explained other than by the fact that he is so deeply hated within the Anglo-American analytic tradition? (Although I confess that I too am no fan of Derrida!)
What is also somewhat disconcerting about the list is that only four of the people on it are still living (Kripke, Putnam, Dummett, and Habermas), and given how old they are, only just. Why is this? It’s possible that there is simply no “important” philosophy going on anymore (and I would agree that there’s precious little of it). On the other hand, it might be the result of professional jealousy, with no one wishing to nominate a potential rival.
However, my hunch is that, with universities hiring fewer and fewer new tenured and tenure-track professors, philosophy faculties are aging significantly. Although I no longer teach, I still work in a university philosophy department, and sometimes I feel like a care worker in an old age home, except that, instead of changing sheets and giving sponge baths, I teach old philosophers how to use computers and fax machines. Thus, I suspect that to a significant number of the poll’s respondents, names like Strawson, Rawls, and Quine still seem current (indeed, the latter may even have been their teachers), whereas to my generation, these names might as well belong to contemporaries of Plato or Aristotle.
The conclusion I’ve come to is that philosophy needs to reform itself, and fast. However, I doubt that this will happen. Instead, I imagine it will continue to die its slow and inglorious death, mewed up in its quiet sickroom, shielded from the sordid noise on the street outside by the thick walls of academe, whining all the while about how no one listens to philosophers. The death of academic philosophy could even be a literal physical one, given how its practitioners are aging. And when the death finally happens, will there be anybody left who will notice or care?
On the brighter side, something that looks like philosophy may end up flourishing outside the universities. After all, some of the most glorious movements of thought and culture have happened when intelligent and curious people have grown dissatisfied with what was going on behind those high academic walls. For the most part, the Enlightenment was not constructed in schools but in salons, clubs, associations, and newspapers — in short, in the public sphere. Perhaps blogs will play midwife to the next great philosophy (remember folks, you read it here). Sadly, universities are supposed to be maternity wards for ideas, but when it comes to the humanities, they’re really more like funeral homes.
My opinions about philosophy (and philosophers) are obviously shared by many, otherwise so many contributions to Professor Leiter’s blog would not need to be devoted to defending the discipline from precisely such complaints. Indeed, rather than defend it against such complaints, some contributors actually glory in them, proud of their professional reputation for nastiness and incivility, and writing off critics like me as lazy-minded dullards. These latter seem to represent the “macho” persona of the profession, the one that probably turns off potential female participants. And of course, if professional philosophers can continue to somehow bamboozle various institutions, foundations, and government agencies into granting them funding, then why should they care about what we dullards think?
If such arrogance and posing makes philosophers feel good about themselves, then they are welcome to behave so, but they thereby renounce the privilege of lamenting the fact that the public doesn’t care about what they do.
In an ironic sense, maybe the gradual death of the profession represents a win-win scenario for all concerned. After all, with fewer professional philosophers around, those that are left will probably experience the gratifying sense they are more important than they really are, while the rest of society will suffer little from their thinning ranks. That sense of self-importance is certainly palpable on blogs like Leiter’s (again, with apologies to Leiter himself, who, no doubt, is a thoroughly decent chap).
Wednesday, February 17, 2010
This definition is fine as far as it goes, but upon reading Gordon Tullock’s book The Rent-Seeking Society (Indianapolis: Liberty Fund, 2005), a collection of his essays on the subject, I have come to see that more is required to make the concept clearer. As it stands, my definition captures too much.
For example, even though a patent is a kind of time-limited monopoly, seeking a patent is not necessarily an example of rent-seeking, despite surface similarities. Offering a patent to the first firm that finds a drug that will cure a certain disease provides an incentive for different firms to invest in research and development, in the hope that at the end of the process they will have a drug from which they can profit exclusively. Now, insofar as this incentive causes several firms to duplicate each other’s research in the scramble for the patent, patent-seeking behaviour on the part of firms has caused waste and inefficiency.
(On the other hand, one can only imagine the waste involved in having a government scientific planning bureau dole out the funds to a single firm. After all, what’s to guarantee that they’ll bet on the right horse? Maybe the firm they fund will turn out to have been on entirely the wrong scientific trail.)
Perhaps we are inclined to see the problem not in the waste incurred in multiple firms seeking the same objective, but rather in the “excess” profits that will accrue to the firm that wins the race. Fair enough. If the patent were effective in perpetuity there would be some truth in this.
On the other hand, it is not necessarily the existence of a monopoly per se that constitutes a rent. For example, Tullock points out that a monopoly may arise in a fully competitive market because it can produce something more efficiently or offer it at a lower price than anyone else. If it ceased to do so, new competitors would arise to fill the vacuum. In the absence of artificial monopoly-creating interference by government, it is rather difficult for a monopoly to make “excess” profits in a competitive market.
The reason the drug company’s seeking after a patent is not an example of rent-seeking is because its activities constitute a net social benefit. We are all presumably made better off by the existence of the drug, which might not exist if the incentive of the patent were not available. Similarly, the reason that the competitive monopoly is not an example of rent-seeking is because it constitutes a net social benefit, in that goods are provided to the public in the most efficient way, at the lowest possible price.
Now imagine a different scenario. A drug already exists, but instead of finding a way to manufacture it cheaply and offer it at a lower price, its manufacturer instead decides to invest its resources in lobbying the government to block the sale of a competitor’s drug, one which is cheaper and more effective. This would be rent-seeking, because the firm’s activities result in negative social benefit. The public now only has access to a less effective drug, and at a higher price than it would otherwise pay. The firm might make large profits, but at the public’s expense. As such, those profits represent a misallocation of resources that could have been more efficiently spent or invested by the public elsewhere.
Thus, we can refine the above definition of rent as “profits accruing to persons or organizations which are not otherwise available for purchase through the operation of a free and open market, and which represent a negative net social benefit.”
The Harm of Rent-Seeking
We have seen that rent-seeking, when successful, results in a net loss of social benefit. Still, it might benefit the rent-seeker very much. But we can imagine situations where this is not the case.
Imagine firm A is lobbying a politician to support a restrictive tariff to protect its home market from foreign competition. Further imagine that such a policy would hurt firm B, which depends on the lower-priced product imported by one of A’s foreign competitors. B now has an incentive to lobby for a countervailing policy to block A’s rent-seeking attempt. The dynamic here can quickly become a classic prisoner’s dilemma: both may be locked into redoubling their efforts, because now, if firm A is unsuccessful, things will not simply go back to the status quo, but rather the firm will fall back to a worse position than it started from. The money, time, and resources expended will have been for naught.
In such a dynamic, both firms may end up investing more resources in rent-seeking than the rent itself is worth. Both firms would have been better off if they had never begun to seek rents in the first place. They are like two children tearing apart the teddy bear they are fighting over. In effect, such rent-seeking represents a negative-sum game.
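The negative-sum structure of this lobbying race can be sketched as a simple game. The numbers below are entirely hypothetical (a tariff worth 100 to firm A that would cost firm B the same amount, lobbying campaigns costing 40 each, and a coin-flip outcome when both sides lobby), but they show how mutual lobbying becomes the equilibrium even though it leaves the two firms jointly worse off than the status quo:

```python
# A hedged sketch of the rent-seeking "prisoner's dilemma" described above.
# All numbers are hypothetical: the tariff is worth RENT to firm A and costs
# firm B the same amount; each lobbying campaign costs LOBBY_COST, win or
# lose; if both firms lobby, assume each side prevails with probability 0.5.

RENT = 100        # value of the tariff to A (and loss to B) if it passes
LOBBY_COST = 40   # cost of a lobbying campaign, paid regardless of outcome

def payoffs(a_lobbies: bool, b_lobbies: bool) -> tuple[float, float]:
    """Expected payoffs (A, B) relative to the status quo of no tariff."""
    if a_lobbies and b_lobbies:
        p_tariff = 0.5          # contested: coin-flip outcome
    elif a_lobbies:
        p_tariff = 1.0          # unopposed lobbying succeeds
    else:
        p_tariff = 0.0          # no campaign for the tariff, no tariff
    a = p_tariff * RENT - (LOBBY_COST if a_lobbies else 0)
    b = -p_tariff * RENT - (LOBBY_COST if b_lobbies else 0)
    return a, b

for a_l in (False, True):
    for b_l in (False, True):
        a, b = payoffs(a_l, b_l)
        print(f"A lobbies={a_l!s:5}  B lobbies={b_l!s:5}  "
              f"payoffs=({a:+.0f}, {b:+.0f})  joint={a + b:+.0f}")
```

With these invented numbers, lobbying is A’s dominant strategy, and once A lobbies, B’s best reply is to lobby back (an expected loss of 90 beats a certain loss of 100). The resulting joint payoff of −80, against 0 for mutual restraint, is the negative-sum game described above.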
Although such rent-seeking arms races seem particularly wasteful, they might be relatively easy to fix. Since ultimately neither firm in the situation described stands to gain, both might at some point welcome outside intervention to stop the race. If this claim sounds odd, recall that many cigarette companies were actually relieved when various governments placed severe restrictions on their advertising. Because the restrictions applied to all cigarette companies alike, none of them had to keep expending so much of its resources on advertising merely to capture a larger piece of a shrinking pie. Re-investing those huge advertising budgets elsewhere was a net gain for the companies (and perhaps for society).
(Although advertising is not exactly an example of rent-seeking, it does share one important trait with the latter: aside from a few rare cases, advertising produces little or no net social benefit.)
So far we have described cases where rent-seeking causes economic agents to misallocate resources, thereby losing out on the opportunity to become more efficient. But there are also cases where the prospect of gaining economic rents not only causes firms to ignore investment in increased efficiency, but can also cause them to purposely “invest” in decreasing their efficiency. After all, if you propose to distribute a large amount of money as charity, there will always be those who will make the effort to become an object of that charity.
Tullock’s best example of this has to do with local government. When various US state governments propose to distribute funds to local governments to fix and upgrade roads in disrepair, many local governments have been shown to purposely let their roads fall apart in order to qualify for the funding. Less money would have been spent in the end had the roads been maintained in the first place, so this rent-seeking by local governments represents a net social loss.
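The road example can be illustrated with purely hypothetical figures: suppose steady maintenance costs the locality a small amount each year, while a failed road must be rebuilt at a much greater total cost, most of which is covered by a state subsidy:

```python
# Hypothetical numbers for the road example: steady local maintenance
# versus letting the road fail and qualifying for a state rebuilding subsidy.

YEARS = 10
MAINTAIN_PER_YEAR = 5      # annual upkeep, paid entirely by the locality
REBUILD_COST = 80          # cost of rebuilding a failed road
SUBSIDY_SHARE = 0.9        # fraction of the rebuild paid by the state

maintained_total = YEARS * MAINTAIN_PER_YEAR        # 50, all borne locally
neglect_local = REBUILD_COST * (1 - SUBSIDY_SHARE)  # about 8, the local share
neglect_total = REBUILD_COST                        # 80, borne by society

print(f"maintain: total cost {maintained_total}, local cost {maintained_total}")
print(f"neglect : total cost {neglect_total}, local cost {neglect_local:.0f}")
```

On these invented numbers, neglect is individually rational for the locality (a local cost of about 8 against 50) but socially wasteful (a total cost of 80 against 50), which is exactly the sense in which the rent-seeking locality resembles Tullock’s beggar: the action is rational, yet it lowers the welfare of those involved.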
(I suspect a similar phenomenon has been going on in the city in which I live for some time now. Its entire infrastructure is crumbling away while it awaits funding from the federal and provincial governments, and yet, there always seems to be money lying around for whatever new hare-brained scheme the mayor cooks up, while never seriously contemplating cuts to the city’s profligate budget.)
In Tullock’s memorable words, “The local community that allows its road system to deteriorate in order to qualify for state subsidies or that runs down its hospital system in the expectation that the federal government will replace it is in exactly the same situation as the Chinese beggar who mutilates himself to obtain charity from passers-by. In both cases, the action is rational. In both cases, the effect is to lower the welfare of those involved” (p. 25). (Tullock is not being racist here. The passage follows upon an anecdote he tells about his experience while stationed in China with the US Foreign Service.)
Governments do not only seek rents from other levels of government. They may also seek them from the private sector. An example of this is where a legislator moves to introduce legislation that would hurt some private interest, in the expectation of being offered a rent for killing the legislation. Here it is government that is the rent-seeker, while the private interest is more properly a rent-avoider. And in case you think the scenario is far-fetched, Tullock cites studies and insider accounts to the contrary — though the examples are mostly from the US political system.
Sometimes the relationship between government and private interests is so symbiotic that the distinction between rent-seeking and rent-avoiding becomes blurred. For example, a government might grant a tax exemption or monopoly to a firm in exchange for a share of the future profits, to be used as a source of government revenue. No doubt the firm has extracted a rent, but so too has the government. Who is the seeker here, and who is the avoider?
The political dimension of rent-seeking is important to emphasize. When distributing rents, the goal of a politician is to make sure that they are distributed to those who can influence voting in his electoral district, while having the costs borne by those outside of it. Unfortunately, when all politicians are doing the same thing, we must all be the losers. Even if, in the best-case scenario, the various redistributions of wealth via rents were to cancel each other out, the arrangement would still represent a net social loss, for we would all have been better off had the resources spent on rent-seeking been allocated to countless better uses.
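The arithmetic behind the claim that mutually cancelling rents still leave everyone worse off can be sketched with hypothetical numbers: suppose each of N districts secures a rent R for its local interests, funded equally by all districts, at a lobbying cost of C apiece:

```python
# Hypothetical sketch: N districts each secure a rent R for local interests,
# funded equally by all districts, and each spends C on lobbying to get it.

N, R, C = 10, 100, 30

transfer_in = R                            # what each district's interests receive
transfer_out = (N * R) / N                 # each district's share of funding all rents
net_transfer = transfer_in - transfer_out  # 0: the redistributions cancel out
net_position = net_transfer - C            # yet each district is out its lobbying cost

print(f"net transfer per district: {net_transfer:.0f}, "
      f"net position: {net_position:.0f}, total lobbying waste: {N * C}")
```

The transfers net to zero for every district, yet each district is out its lobbying cost, and society as a whole has burned N × C on the contest. Those are the resources that could have been allocated to countless better uses.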
I hope to explore some possible remedies for rent-seeking in a future post.
Wednesday, February 10, 2010
I recently finished reading James D. Wallace’s Ethical Norms, Particular Cases (Ithaca, NY: Cornell University Press, 1996). It is a wise little book of moral philosophy, not profound or ground-breaking mind you, but containing more real truths than most other books on the subject. In it, Wallace defends a version of what is commonly called, in philosophers’ parlance, moral particularism, the basic idea of which is that moral decision-making is more properly done in light of detailed knowledge of the moral situation, not by appeal to abstract, general rules.
Towards the end of the book, Wallace makes a point with which, at first, I found myself nodding in agreement. But then small doubts came creeping in. The point, roughly, is that when faced with some practice or activity that seems morally wrong — Wallace’s example is widow-burning — we have two choices: we can either force the other party to stop, or we can try to convince them of the wrongness of what they are doing. If we cannot do the latter by giving them concrete reasons why the practice is wrong, then appealing to universal moral principles is even less likely to produce the desired effect.
Someone whose moral sensibility is too dulled or perverted by bad customs to be able to see that widow-burning is an abominable practice is not going to change his beliefs on the basis of a lecture on Kant’s Categorical Imperative. We are better advised to stick to the particularities of the situation at hand. “We may suppose that we are articulating universal moral norms that apply to everyone, whatever their way of life, but it is apt to appear to our hearers that we are doing something more parochial” (p. 148). In short, our universal moral principles are apt to sound rather un-universal to those who will not or cannot accept them.
Anthropology versus Moral Philosophy
So much I will happily grant Wallace. However, there is a further claim he makes that I am not so comfortable with. He says that if we want to be effective at giving particularistic reasons to a widow-burner, we are required “to understand their activities, purposes, and norms, their way of life”.
Must we? After all, we are moralists, not anthropologists. We can put this into perspective by dividing human customs and practices into roughly three categories, which can be characterized according to the kinds of reactions they tend to evoke in outsiders.
First, there are many human practices one could be faced with that, though they seem strange, would likely elicit only a shrug from us. We might even take part in them if invited to do so, for example, the Italian practice of kissing people, both men and women, on both cheeks — after all, when in Rome…
Second, there is the class of practices that drift a bit beyond “strange” and into the realm of “disgusting” or “revolting”. Perhaps the commonest examples here are the countless dietary practices found across cultures, or perhaps the various forms of “beautification” by self-mutilation (of which, incidentally, our own culture provides numerous examples). In the case of this class of practices, although our reactions to them might be strong and visceral, we do not feel the urge to put an immediate halt to them. Although it is unlikely that we can be brought to take part in them, others are welcome to do so if they wish — after all, live and let live…
With these first two categories of practices, I would agree that there’s probably little harm done in taking the anthropologist’s stance. Strictly speaking, I would argue that they are not really part of the domain of moral philosophy anyway.
However, things are very different with respect to the third category, which consists of practices manifesting fundamental differences in moral values (widow-burning being an example). Here, the anthropologist must yield to the moral philosopher, for these are cases where the practice is not just strange, or revolting, but also should not be done by myself or others.
To summarize, then, here are the three categories of practice (P) and the reactions they evoke:
1. P is strange, but I might engage in it myself, given the right circumstances.
2. P is disgusting, and I probably would not engage in it under any circumstances, but others are welcome to if they wish.
3. P is morally wrong, and no one ought to engage in it, and I would stamp out P if I could.
Now, it is sometimes the case that we misclassify an instance of class 1 or — more often — of class 2, as a case of class 3. This is where forbearance has a role to play, for on such occasions we might be confusing disgust with moral disapprobation. Where this is the case, we should examine our own judgment; however, we are not required to “understand” (in Wallace’s sense) the other’s practice, or if so, then only very tangentially. Certainly, we needn’t go so far as to play the anthropologist here.
Furthermore, such examination of our judgment is more appropriately done according to the criteria of our own moral values, beliefs, and principles, not according to the criteria of the moral values, beliefs, and principles of the people in question. To use an analogy, we judge something to be a crime in light of what our law says, not according to what the laws of other nations say, and certainly not according to the standards and objectives of criminals themselves.
Taking the Pill
There is not just something wrong-headed about trying to “understand” practices like widow-burning. I wish to make the controversial claim that to even try to understand it is itself morally suspect. The anthropological approach misunderstands the nature of morality and moral judgment, lumping moral norms with other types of norms, despite fundamental differences.
What makes moral norms fundamentally different from other social or cultural norms? This can best be seen by trying a little thought experiment. Ultimately the success of the experiment will rely on your intuitions about the situations described matching mine, which is always a danger in this kind of exposition. But here goes…
Imagine you are a graduate student in anthropology. You are doing research in the field, and the culture you are studying presents you with a class 2 practice that you find personally revolting. Let’s say it involves eating some disgusting kind of food, perhaps something like a giant hissing cockroach, and maybe while it’s still alive and wriggling. You are invited to partake, and it might be insulting to refuse. Besides that, the objectives of your research require that you integrate into this culture as much as you can in the time available to you for study. And yet, because of your visceral feeling of disgust, you can’t bring yourself to eat the many-eyed, many-legged, hissing, wriggling thing on the end of the chopstick in front of you.
However, before you left on your journey, your PhD supervisor gave you one of his special pills, the effect of which is to block feelings of revulsion, much as aspirin blocks headache pain. Would you take the pill? If your intuitions are like mine, then although you feel disgust while gazing upon your intended meal, you would be sufficiently motivated to take the pill. After all, it’s in the service of science.
Now imagine this second scenario: you ate the bug, went on to get your PhD, published a book about your experience, and now you have gone on another trip to study a different culture, to write a different book. This time the people in question present you with a class 3 cultural practice. They are about to roast a live and screaming baby on a fire and they invite you to place the baby on the fire yourself, a ceremonial office of great honour. Not only do you feel revulsion, but you feel you must do what you can to put a stop to this horrible practice. You certainly believe that you ought not to take any part in it.
However, before you left on your journey, a colleague gave you one of her special pills, the effect of which is to block out feelings of moral aversion, much as aspirin blocks headache pain. Would you take the pill? If your intuitions are like mine, then no, you would not take the pill. Someone with enough motivation to take the pill must, I contend, be morally suspect ab initio.
You see, morality doesn’t only require having the right values and adhering to the right principles. It also involves an added element, which is having a certain attitude of commitment towards those principles and values. You cannot really call yourself a loyal spouse if you’d be willing to take a pill that enables you to cheat on your husband or wife guilt-free. If the lack of a pill is all that stands between you and immorality, then your integrity must be open to question. It is intelligible to want to overcome squeamishness. It is less intelligible to want to overcome one’s moral values — unless they aren’t really moral values in the first place.
My unwillingness to take the pill, even in the face of countervailing temptations, is a fairly reliable sign that the values in question are moral values, rather than some other kind. Nobody said morality is easy. As a matter of fact, more often than not, it is precisely the purpose of morality to act as a bulwark against the frequent and strong temptations to transgress. If nobody ever did anything wrong, there would be nothing for moral philosophers to talk about.
If I am committed to certain moral values, it’s not just that I don’t wish to experience the guilt or shame associated with transgressing those values. More fundamentally, I don’t want to be the kind of person who would transgress them.
Tout comprendre c’est tout pardonner
Let’s change the roasted baby example somewhat. Rather than removing moral disapprobation, let’s instead say that the pill in question makes you understand the practice, like a good anthropologist should. Would you take it now?
This question is a bit tougher. It would very much depend on what is meant by “understand” in this context, and what other effects such “understanding” might have. If, as the French saying goes, tout comprendre c’est tout pardonner (“to understand everything is to forgive everything”), then I would have to say no. Such understanding might be fine for practices of class 1 or 2, but not for class 3.
I suppose we could distinguish between two kinds of “understanding”. Understanding1 happens at a purely intellectual level. It finds expression in sentences like “I understand that culture C practices P for reasons x, y, and z”, or “Hitler wished to exterminate the Jews because he blamed them for Germany’s humiliation in the First World War”. Understanding2, on the other hand, happens at a deeper level, involving a certain degree of acceptance, forgiveness, justification of, or participation in, a practice. It is the root of the French tout comprendre, etc.
Perhaps Wallace merely wants us to understand1 class 3 cultural practices. On the surface, there wouldn’t appear to be a problem with this. After all, I can understand1 Hitler’s reasons for implementing the Holocaust without thereby assenting or becoming an anti-Semite in the process. On the other hand, I’m dubious about any suggestion that understanding1 would have been of any assistance in convincing Hitler to stop. Even if Hitler were amenable to persuasion, presumably this would be achieved by making the discourse run in the opposite direction, by making him understand2 my reasons as to why his practice is wrong. If the goal here is persuasion, why do I need to know his reasons? After all, Hitler’s reasons are irrelevant to my purpose… unless I am trying to remain open to the idea that I might be mistaken. And here there is danger, for it is exactly in such circumstances, where I lack the courage of my moral convictions, that understanding1 can drift into understanding2, and once this happens, I have crossed a moral Rubicon; I have chosen to take the pill, as it were.
Put another way, when we’re confronted with practices that violate our moral norms, we are not in the position to have the kind of gentlemanly academic discussion where we can be “open” to being persuaded by the other. The other is simply wrong, and we must show him why he is wrong. Wallace has not made it clear why “understanding” the other’s motivations is necessary for this task.
Those who, like me, have been raised in a liberal society, and have been taught to be tolerant (to a fault) of just about everything, will be uncomfortable with my conclusions. But trust me, there are others raised in less tolerant atmospheres who are not so squeamish, and the only way to hold our own against them is to have the courage of our convictions. And a conviction is a conviction precisely because it’s non-negotiable.
All of this leads to an obvious question: Does my point of view lead inexorably to a relativistic war of all against all on the basis of conflicting values, in which rational argument must be put aside in favour of brute struggle? I’m not sure, but I hope not. We might take heart in the fact that many have been brought around to “our” moral point of view, and there are others who would gladly embrace our values without being forced if given the intellectual and political freedom to choose.
I say our moral values instead of “Western” values, because the West can no longer necessarily claim them as its exclusive achievement. Ideas that were first articulated in Western nations for historically contingent reasons have found resonance with others, who are embracing them even while the West is losing confidence in them. The future of “Western” ideas and values may lie outside the West.
* * *
I recognize that I have probably attempted to juggle too many ideas in this posting, and that I have raised more questions than I have answered. However, they reflect the rush of ideas that crowded into my head after reading Wallace’s nice little passage.
Monday, February 1, 2010
“Back in the 1960s Lord Patrick Devlin, a prominent English judge, wrote a book entitled The Enforcement of Morals, in which he defended the position once held by the Victorian magistrate Sir James Fitzjames Stephen that the government has the legitimate right to intervene in matters of morality. This position has come to be called "legal moralism" (though some of its critics would sneeringly call it "legal paternalism"). Many critics, most notably the great legal philosopher H. L. A. Hart, tried to refute him. The general intellectual fashions of the time ran very much in a liberal vein, and so it was commonly accepted by those who don't think very deeply that those critics scored a definitive victory. But they simply haven't. I do not have the space here to rehash this old debate, but I promise to do so in a separate posting in the near future” [italics added].
Instead of “I promise”, I should have written “I hope”. In any case I hereby give notice that I now write in partial fulfillment of that promise. Since the topic is apt to swell dangerously, I will focus on an earlier stage of the debate on legal moralism, that between Fitzjames Stephen and John Stuart Mill. I do this partly to conserve space, but also because I would like to resurrect the reputation of Stephen, who I believe was a formidable critic of what seems since to have become the received liberal consensus engendered by Mill. Maybe, maybe, I will visit the Hart-Devlin debate in the future.
Most of Stephen’s book Liberty, Equality, Fraternity (1873) is a response to Mill’s famous essay On Liberty (1859). I have had to read and teach On Liberty in many an undergraduate philosophy course. It seems for whatever reason to have become the liberal’s Bible on the topic of public intervention in the sphere of private morality, which I find perplexing, because the work is flawed in so many ways. I confess I have never been much convinced by what Mill has written in ethics and political philosophy. In any case, before proceeding further, I should give a rough sketch of Mill’s liberal philosophy.
Mill begins by outlining his proposed task, which is to delineate the proper limits of intervention in the private sphere. (What constitutes the “private sphere” is inadequately defined by Mill.) In short, he proposes to discuss the limits of liberty. However, he distinguishes between civil or social liberty and political liberty, proposing to deal mainly with the former. Political liberty deals with political participation, and was the object of the long struggles to replace arbitrary royal government with some measure of popular participation in the conduct of the affairs of the state. That battle, says Mill, at least in England, has largely been won. Political liberty has been achieved.
The new threat to liberty comes not from government, but from one’s fellow citizens, who, by forming a majority, could potentially exercise a tyranny over the minority beyond anything which a single tyrant could have dreamed of. Mill calls this, in his memorable phrase, the “tyranny of the majority”. Mill takes it as his task to set the legitimate bounds beyond which society may not interfere in the affairs of the individual citizen. In other words, he is concerned with social or civil liberty.
It is in this connection that Mill introduces his famous Harm Principle:
“[T]he sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not sufficient warrant.”
It’s a simple enough idea. You are at liberty to do what you wish, so long as you do no harm to others. You may harm yourself as much as you like. If you harm others, then the law may step in and curtail your liberty. Mill distinguishes between self-regarding and other-regarding actions. Your purely self-regarding actions are not to be interfered with, but those other-regarding actions which cause harm to others are the legitimate object of legal and moral coercion.
Some Criticisms of Mill’s Theory
Mill’s ideas seem straightforward, mostly because we as a society have more or less swallowed them hook, line, and sinker, despite the fact that they were subjected to some valid criticisms very early on. One of the most able of these critics was Stephen.
Mill has a nasty habit of introducing quite bold claims (like the Harm Principle passage, quoted above), and then quickly backtracking in light of criticisms. For example, after introducing the distinction between self- and other-regarding actions, he seems to realize that there are some seemingly self-regarding actions which he would still like to see penalized (e.g. habitual drunkenness). He deftly makes allowances for this by admitting that many actions seemingly self-regarding have more distant repercussions, which enables them to be conveniently classed as other-regarding. For example, if the drunkard has a family which must suffer financially by his folly, then society may take action.
In thus weakening his distinction, Mill casts doubt on its validity. Certainly Stephen is skeptical of it: “Men are so closely connected together that it is quite impossible to say how far the influence of acts apparently of the most personal character may extend.” Purely self-regarding actions are at best much rarer than Mill seems to believe, a point which, if true, would greatly extend the area of human conduct of which society may rightly take cognizance.
Stephen also took Mill to task for another inconsistency. The latter had claimed that paternalism, intervening in the affairs of others for their own good, was unjustified according to the Harm Principle. But then, again, he introduces some exceptions. One of these is the case of people in “backwards” states of civilization. He is not entirely clear about what is meant by “backwards”. He is even less clear about why they form an exception to his principle. As far as Stephen is concerned, this exception constitutes an inconsistency on Mill’s part. If we are all entitled to make mistakes in our individual “experiments in living” (the phrase is Mill’s), then why may not “backwards” peoples also be free to make mistakes? This is an example of Mill’s maddening habit of introducing bold claims, and then so hedging and qualifying them as to render them either nonsensical or trivial.
In On Liberty, Mill is largely concerned with a democratic state in which a majority may potentially exercise tyranny over a minority through the coercion of laws. But why the fascination with just this type of coercion? Stephen notes that there are many kinds of coercion that can be brought to bear, among which, the legal kind is by no means the most important. After all, it’s thankfully only a very few of us that are kept on the moral straight and narrow by fear of the law:
“Criminal legislation proper may be regarded as an engine of prohibition unimportant in comparison with morals and the forms of morality sanctioned by theology. For one act from which one person is restrained by the fear of the law of the land, many persons are restrained from innumerable acts by the fear of the disapprobation of their neighbours, which is the moral sanction; or by the fear of punishment in a future state of existence, which is the religious sanction; or by the fear of their own disapprobation, which may be called the conscientious sanction, and may be regarded as a compound case of the other two.”
Now, Stephen’s very last claim in this passage is dubious. Conscience is not necessarily reducible to fear of divine or social disapprobation. As a matter of fact, I’m not so sure it’s reducible to fear at all, unless aversion or disgust is a form of fear, which seems a stretch. Nonetheless, Stephen’s main point here is that there is more than one kind of coercion, and so if one of them (i.e. the legal form, which he says is the least efficacious) is deemed illegitimate, then why not the others? In other words, why doesn’t Mill consider moral and religious sanctions illegitimate in the same way he does legal sanctions?
A society is largely held together by three things: some degree of mutual concern, a system of shared beliefs and values, and a complex system of coercive means to maintain that solidarity of mutual concern and belief. I say that the system of coercion is complex, because such coercion is based on force, and force comes in many forms, of which legal force is just one manifestation.
Given such a broad conception of force, even a seemingly liberal and democratic society has recourse to force more often than Mill would care to admit. As Stephen notes,
“the difference between a rough and a civilized society is not that force is used in the one case and persuasion in the other, but that force is (or ought to be) guided with greater care in the second case than in the first. President Lincoln attained his objects by the use of a degree of force which would have crushed Charlemagne and his paladins and peers like so many eggshells.”
There is no way that a principle like Mill’s Harm Principle can mark off a clear distinction between when force is warranted and when it is not. All state action involves the use of force, and even the most liberal of states must have recourse to it. The art of government lies not in avoiding the use of force, but in knowing when and in what degree it is prudent to employ it. This is, of course, a question not of legitimacy but of policy.
Stephen’s Alternative Principle
One area where Mill and Stephen were in broad agreement was in their acceptance of utilitarianism as a moral and political theory. Mill believed that his Harm Principle would be conducive to the maximization of overall utility, for reasons I haven’t space to explore here. Stephen too was a utilitarian, and so he thought it was the duty of the state and its officials to do whatever was necessary to maximize utility. But he thought that Mill was on the wrong track, and that it was dangerous to tie the hands of the state in the way Mill was proposing.
For Stephen, the proper question is not, “When is it legitimate for the state to employ force?” but rather, “When, and to what degree is it prudent for the state to employ force?” In answer to this, Stephen had a principle of his own, one which bears a striking similarity to a principle employed in Canadian constitutional jurisprudence, the so-called Oakes test. Stephen states his principle thus:
“Compulsion is bad:
1. When the object aimed at is bad.
2. When the object aimed at is good, but the compulsion employed is not calculated to obtain it.
3. When the object aimed at is good, and the compulsion employed is calculated to obtain it, but at too great an expense.”
Compare this to the so-called Oakes test, so named from the case of R. v. Oakes, [1986] 1 S.C.R. 103. The Canadian Charter of Rights and Freedoms guarantees Canadians certain rights. Section 1 of the Charter contains a “reasonable limits” clause, declaring that those rights are “subject only to such limits prescribed by law as can be demonstrably justified in a free and democratic society.” David Oakes was charged with violation of s. 4(2) of the Narcotic Control Act, for intended trafficking of an illegal substance. Section 8 of the same Act shifted the onus of proof onto the accused to prove that he did not intend to traffic. Given that this “reverse onus” provision was an infringement of his “presumption of innocence” right under the Charter, at issue was whether the infringement was justifiable under the “reasonable limits” clause of the Charter.
In giving judgment, Dickson CJ laid out the following steps, all of which must be satisfied in order for a violation of Charter rights to be consistent with the “reasonable limits” clause of s. 1 of the Charter:

1. There must be a pressing and substantial objective.

2. The means employed must be proportional, that is,

(i) the means must be rationally connected to the objective,

(ii) there must be minimal impairment of rights,

(iii) there must be proportionality between the infringement of rights and the objective — the infringement should not be more harmful than the ill it purportedly aims to address.

This test has come to be known as the Oakes test. (Incidentally, the Court decided in Mr. Oakes’ favour, as it was found that s. 8 of the Narcotic Control Act foundered on the “rational connection” part of the test.)
We can see that Stephen’s 1 roughly corresponds with Oakes’ 1. Stephen’s 2 roughly corresponds with Oakes’ 2(i) and 2(ii). Stephen’s 3 roughly corresponds with Oakes’ 2(iii).
There are, however, a couple of differences. Stephen’s test is framed negatively (“compulsion is bad when the object is bad”), while Oakes is framed positively (“compulsion is good when the objective is good”). Also, Oakes stipulates that the objective must be “pressing and substantial”, while Stephen makes no such stipulation other than the less restrictive requirement that the objective must not be “bad”.
For Stephen, “bad” would likely be cashed out in terms of the objective’s failure to be conducive to general utility. But it’s a matter of degree. General utility always justifies compulsion; it’s only a question of how much and how directly it is wise to employ it.
Stephen’s Conception of Liberty
Mill’s conception of liberty was negative, in that it was basically thought of in terms of non-intervention. This is a common way of conceiving of liberty in the liberal tradition. Stephen’s notion of liberty is quite different. He illustrates it by frequent use of the metaphor of flowing water. For example, he writes that
“the life of the great mass of men, to a great extent the life of all men, is like a watercourse guided this way or that by a system of dams, sluices, weirs, and embankments. The volume and quality of the different streams differ, and so do the plans by which their flow is regulated, but it is by these works — that is to say, by their various customs and institutions — that men’s lives are regulated.”
For Mill, liberty would mean the removal of all these “dams, sluices, weirs, and embankments” that seemingly hinder the flow of life. But for Stephen, these waterworks represent customs, traditions, and institutions, without which there can be no flow, no direction or purpose to one’s life:
“I confine myself to saying that the utmost conceivable liberty which could be bestowed upon them [men] would not in the least degree tend to improve them. It would be as wise to say to the water of a stagnant marsh, ‘Why in the world do not you run into the sea? you are perfectly free. There is not a single hydraulic work within a mile of you. There are no pumps to suck you up, no defined channel down which you are compelled to run, no harsh banks and mounds to confine you to any particular course, no dams and no floodgates; and yet there you lie, putrefying and breeding fever, frogs, and gnats, just as if you were a mere slave!’ The water might probably answer, if it knew how, ‘If you want me to turn mills and carry boats, you must dig proper channels and provide proper waterworks for me.’”
To speak of liberty in Mill’s negative sense, is to speak of a mere nothing. We cannot conceive of it any more than we could conceive of a doughnut hole without reference to doughnuts. Or as Stephen puts it,
“discussions about liberty are in truth discussions about a negation. Attempts to solve the problems of government and society by such discussions are like attempts to discover the nature of light and heat by inquiries into darkness and cold. The phenomenon which requires and will repay study is the direction and nature of the various forces, individual and collective, which in their combination or collision with each other and with the outer world make up human life.”
Stephen as a Proto-Communitarian
There is another criticism of Mill’s liberalism made by Stephen, one which was very prescient in light of the rise of communitarian political thought in the twentieth century. In standard utilitarian thought, as espoused by Bentham and Mill, right action is defined by the maximization of overall utility, universally and impartially considered. As Bentham put it, in calculating utility, each is to count for one, and nobody is to count for more than one.
This has some unfortunate consequences. If there are two drowning children, and one of them happens to be my son or daughter, I am not allowed to favour my child over the other one, unless overall utility would for some reason be better maximized that way. Similarly, if I can either give money to charity or spend it on a family vacation, the vacation must wait, because, from the point of view of overall utility, the money can almost always be better spent elsewhere.
And yet, it is very ironic, says Stephen, that utilitarianism should demand so much of us, because its demands would end up failing to maximize utility, by making all of us miserable.
Stephen considered himself to be a utilitarian, but of a kind which we might nowadays call an indirect utilitarian, for he seems to be of the view that utility is not best maximized by directly pursuing its maximization. Rather, it is best pursued through customs, institutions, and certain kinds of relationships of exclusivity and mutual affection. Such a view means that we are all collectively made happier by our involvement in family (which gives more consideration to the utility of kin than of strangers), friendship (which gives more consideration to the utility of friends than of strangers), and community or nation (which gives more consideration to the utility of fellow countrymen than of strangers).
We must all live in a community if we are to flourish. But community cannot subsist without shared values and mutual ties of affection and concern. Mill’s atomistic liberalism does not give this fact enough weight. Instead, it seems to imagine that we can all live and flourish in a state of mutual unconcern.
Such unconcern, says Stephen, also lies at the heart of liberalism’s preoccupation with tolerance, particularly with tolerance of views and values which diverge from those of the community. Most of us in the West are heirs of Mill’s liberalism, so we tend to think of tolerance as an unequivocally good thing, and indeed it probably was of considerable value in an era of religious wars and persecutions. However, for Stephen, tolerance is a threat to that system of shared values and beliefs of which society is composed. And complete tolerance is the negation of society as such, for “complete moral tolerance is possible only when men have become completely indifferent to each other — that is to say, when society is at an end.” A collection of strangers is not the same thing as a society.
Modern communitarians often make a similar critique of liberalism: that it’s too bloodless, that it doesn’t offer enough “glue” to hold a society together. A society must have some basis in shared values, and, as Stephen and others would point out, sharing values means the willingness to defend those values and to regard incompatible values as inimical. Of course, Stephen would also advise us to pick our battles wisely, and not attack other value systems head on if we cannot reasonably hope to win, or if winning would be too costly. For him, “tolerance” does not consist in “accepting” or “embracing” the other, but rather in making strategic retreats, or fighting by more indirect means. Toleration is an attitude one takes with an enemy one cannot decisively defeat. It is not a virtue.
It sounds harsh, this view of life as inescapable struggle between systems of value, but Stephen would say that it’s a reality liberals ignore at their peril, for there are other inimical, non-liberal value systems waiting to take advantage of this central liberal weakness.
Sadly, politics abhors a vacuum, and if liberalism creates a vacuum of negative liberty, something very illiberal is apt to fill it.