A Curious Miscellany of Items Philosophical, Historical, and Literary

Manus haec inimica tyrannis.

Tuesday, December 13, 2011

Arthur Seldon, “Capitalism”

Lord Ralph Harris, Arthur Seldon, and Friedrich Hayek
Arthur Seldon, Capitalism (Oxford: Basil Blackwell, 1990).

I can honestly say that until I read Arthur Seldon’s Capitalism several years ago I was a socialist, at least in that vague and unreflective way that many socialists are socialist. It would have been difficult for me not to be, given that I was a scholar who had spent much time in an academic environment, and among academics for whom socialism is almost a conditioned reflex.

To this day I cannot remember what possessed me to read Seldon’s book, since at the time its very title would have held so little appeal to me, given my intellectual interests and proclivities. The experience of reading it ended up being a sort of smallish vision on the road to Damascus, my tiny conversion experience. I could easily entitle this post “How I Learned to Stop Worrying and Love Capitalism”. I have since steeped myself in literature of a free market and libertarian bent, but none of it (except Hayek perhaps) has hit me over the head with a hammer in the way this understated book has. In fact, it produced such a profound effect upon my views that I always make sure I own at least two copies, one for my own use, and another that I can give away to potential converts to the religion of capitalism (much like my optometrist, who keeps a stack of Bibles on his desk in case one of his patients should manifest any signs of receptivity to the “Good News”).

To be frank, there are things not to like about Capitalism. It is very much a book of its time, and many of the political events and figures it deals with are now rather dated. And being of its time, it oozes a sense of triumphalism that can only be found in a book about capitalism published in 1990, at the very moment that global communism was collapsing. To be fair, it is understandable that Seldon would be in a gloating mood. He had spent many decades as a voice in the wilderness, defending what had been — and is again — an unfashionable cause before living to see its ultimate triumph.

Seldon came from very humble beginnings, an orphan from the East End of London whose Jewish immigrant parents died in the Spanish flu pandemic of 1918 when he was two. He was adopted by a childless cobbler and his wife, but the cobbler died when Arthur was eleven. His adoptive mother, now widowed, provided for the family by selling stockings out of their home until she remarried, to a tailor. Being a gifted student, Arthur managed to get himself a scholarship to the London School of Economics, where he met and studied under Friedrich Hayek. After the LSE and subsequent wartime military service (where he observed widespread government waste and inefficient allocation of resources first-hand), he worked for a while as a consultant in the brewing industry and as an editor of a trade journal, until in 1958 he was called upon by his friend Ralph Harris to join the free market think tank, the Institute of Economic Affairs (IEA), marking the beginning of a decades-long friendship and collaboration. The IEA had been started up in 1955 by Sir Antony Fisher; Harris was its General Director, with Seldon acting as Editorial Director.

Nowadays we tend to think of right-wing think tanks — with considerable justification — as establishment bastions, lavishly funded by corporations and moneyed elites with a vested interest in legitimating the market order. But this was not always the case. To be pro-market and anti-socialist in postwar Britain was to be very much on the wrong side of history (or so it was perceived at the time) and very much outside mainstream thinking. Britain had just come out victorious from a devastating World War, which was won by a massive national effort involving the nationalization of industry and economic central planning. Since such socialistic efforts had helped defeat the Nazis in wartime, it was thought that they could accomplish even more in peacetime. Socialism had become a mainstream idea, and the centralizing government policies of the Labour and Conservative parties in this period were practically indistinguishable from each other (which is why Seldon was a lifelong Liberal Party member, at least until they too gave in to the siren call of statism).

Indeed, the IEA owed its very existence to Hayek’s advice. Alarmed by the message of Friedrich Hayek’s The Road to Serfdom, Fisher had contacted its author with the idea of running for Parliament in an effort to turn back the relentless advance of the state. Hayek told him he would be wasting his time in politics, since politicians of all stripes had long ago come to a consensus on the welfare state. In short, Fisher would get nothing done in Parliament. Instead, said Hayek, he would be better advised to turn to the painstaking but ultimately more effective strategy of influencing the intellectuals, whom he referred to as “the second-hand dealers in ideas”. In other words, before political change could be effected, a change in the general culture was required.

Thus, for many years, the free market policy prescriptions of the IEA were either ignored or ridiculed by the state’s kept intellectuals. Seldon and his colleagues toiled in relative obscurity. However, at least one person of influence was listening to them: Margaret Thatcher. When she came to power, the IEA suddenly had access to the ear of government and became a major source of public policy.

Seldon’s working-class origins are an important component of his thinking: throughout Capitalism he is at great pains to point out the hypocrisy of socialism’s inflated claim to be the self-appointed guardian and savior of the working class while it advocates high-minded social schemes that do much to hurt that same class. Take, for instance, the following pungent criticism of the prevailing socialist intelligentsia:

“I have therefore not withdrawn my criticism of the mostly middle-class academics and writers who, for over a century since the time of Sidney and Beatrice Webb to this day, urged socialist solutions without suffering their coercions nor their foregone living standards, but who continued their comfortable lives as teachers in state-financed universities, politicians in state institutions, writers of ‘fiction’ that transparently condemned capitalism, or beneficiaries of government grants in academia, the arts and cultural life provided in part by the poor whose living standards their teaching and administrations have repressed.” (p. xii)

Scathing stuff, but working in a socialist university, I can identify with Seldon’s barely-concealed rage at the ignorance and self-righteous hypocrisy he saw around him: half-baked ideas put forth by privileged people who mostly know less than nothing about economics, but who have learned that bad arguments can be given a sheen of surface plausibility so long as they are prefaced by hot words such as “equity” and “social justice” (whatever that means).

In reality, Seldon argues, the working class was doing just fine before bourgeois socialism appeared on the scene. As he outlines in Chapter 11 of Capitalism, entitled “The Galloping Horses”, the story of the working class in liberal Gladstonian England is one of gradual improvement without the help of government programs. Literacy was very high and getting higher, and this without a government-run school system. Pretty much every child had access to a primary school education, and it was a quality education, unlike today’s public school system (using “public school” in its North American sense). Quite serviceable pension and insurance schemes were available, self-organized by various private and working men’s associations. Contrary to popular mythology, just about everyone had some access to health care, making due allowance for the less advanced state of medical knowledge and technology at the time; many doctors served the poor and working class either pro bono, through reduced fees, or by payment in kind — something more feasible in a private system that was more personal than public health care is today. Doctors back then could make up the difference by charging higher fees to their richer clients, arguably a more equitable way of doing things than, say, the NHS, in which the relatively affluent middle class gets a free ride, over-consuming scarce health care resources at the expense of both the rich and the poor. I doubt that today you will find many doctors who would accept payment for their services from the poor in chickens or in kind, even if they wanted to. The system simply doesn’t allow it.

Seldon’s point is that the working classes had access to various services or had found their own creative solutions to problems in the absence of government assistance. Not every problem requires a government “program”. Unfortunately, Fabians and other do-gooding socialist elites decided that they knew better than workers what was good for them, and the self-organization of the working class was strangled in its cradle in order to create a socialist utopia that turned out to be a feeble offspring. Seldon waxes wistful about what could have been if only the socialists had allowed these working-class institutions to develop naturally. I guess we’ll never know. In truth, Seldon’s account of the working class in the pre-welfare state sounds a little too perfect to me, and it is rather at odds with George Dangerfield’s classic The Strange Death of Liberal England (1935) — in Dangerfield’s account it is in part the workers themselves who contrive the downfall of the liberal state for lack of a living wage.

Of course, Seldon would reply that he has no doubt there was suffering among workers and the poor in Gladstone’s Britain, just as there is today, and as he himself knew from personal experience. But where socialists and Marxists saw the degradation of the working class, the truth was that things were gradually improving. A difference of perspective, I suppose. But at least Seldon knew whereof he spoke: all his life he had little patience with the description of his own childhood as “deprived”, and the sense of optimism shared by Seldon and those he grew up with in his East London neighbourhood is rather touchingly portrayed in the afterword of Capitalism, entitled “Envoi: A Promise Kept”.

There is one central theme of Seldon’s Capitalism that is right on the money (pardon the pun), and which has had a great influence on my own thinking. Seldon observes — and my own experience tallies with this — that if you read socialist literature, in addition to the obligatory denunciation of the market and of capitalism, you will typically find a lot of rhapsodizing over the limitless possibilities of socialism. In other words, says Seldon, capitalism as it exists is found wanting, while socialism as it supposedly could be is found to be desirable:

“The critics of capitalism have persisted in the device of contrasting imperfect capitalism as it is, or has been, with a vision of socialism as it has not so far been, and could not be in the foreseeable future…. This is an act of faith that sustains the intellectual effort long performed by the critics of capitalism to show its contrasts with the alternative of faultless socialism. The familiar non sequitur, ‘capitalism as we have known it, bad; socialism as it could be, good’ (or at least better) still permeates most socialist writing.” (p. 223)

This is hardly a fair comparison, for a number of reasons. First, capitalism exists nowhere in anything like its pure form, overlaid as it is by decades of socialist experimentation and adulterated by state tampering. Second, socialist experiments have been tried in various places, so shouldn’t these be the relevant examples to juxtapose against existing capitalism? Third, if we want to speculate on an ideal socialist world, shouldn’t the relevant comparison be an ideal capitalist one?

In Capitalism Seldon from the very beginning proposes a fairer comparison: socialism as it exists/has existed with capitalism as it exists/has existed. On every score Seldon makes a compelling case that in this fairer comparison capitalism comes out smelling like a rose, while socialism comes out smelling like week-old unrefrigerated salmon. Perhaps this was a much easier case to make in 1990 than it is today, but I think that if we were to repeat the exercise in good faith, capitalism would still look pretty good.

Seldon never tries to argue that the fair face of capitalism has no blemishes, and he even makes room for government functions that would not be popular among more doctrinally pure libertarians (e.g. environmental protection, the preservation of cultural heritage, and adaptation to new technologies). But according to another strand in his argument, capitalism has at least one crucial virtue that socialism lacks: corrigibility.

Consider this: The well-functioning of markets depends on the free flow of information, not just of capital. Capital can only be put to its most efficient uses if people have the means of finding out what those most efficient uses are. This requires information. Some of this information will come through the mechanism of price signals in markets (another advantage of capitalism over socialism), and some of it will come through people communicating directly with each other. The best way of facilitating the latter kind of communication is through the free institutions of an open society.

By contrast, in a socialist society information is either unavailable or more costly to obtain, for at least two reasons. First, the absence of free markets (or presence of over-regulated ones) means that price signals are missing or distorted. Second, the political institutions necessary to carry out the task of centralized economic planning will necessarily be coercive, and thus closed rather than open. In a coercive state, dissent is not tolerated. But dissent is a form of information signaling and in many circumstances it is the best way a government has of finding out that its policies are not working.

(And that may be where I have some limited support for the Occupy protests — the understanding and proposed solutions of the Occupiers may be simplistic, but the protesters are at least managing to signal that all is not well in the economy. After all, no system is incapable of being improved upon, not even capitalism.)

Thus, although there is no perfect or necessary connection between capitalism and freedom, capitalism will tend to work best where there is open communication and openness in institutions. A country like China seemingly represents a prima facie counter-example to this trend, but I predict it will be a relatively short-lived one: China will either become more open or it will regress economically; “capitalism with Chinese characteristics” is an unstable transitional state.

Getting back to Seldon’s point about corrigibility: capitalism is corrigible because a) it tends to facilitate access to the information necessary to correct mistakes, and b) it tends to have more open and flexible institutions that can respond to the need for change. Unfortunately, if that change comes in the form of socialism, the result will likely be another historical cul-de-sac from which it will be difficult to escape.

Since 2008, we have been moving backward. We have lost hold of that triumphalist “Spirit of 1990”. We have lost the faith which Seldon strove most of his life to instill, and we are reverting to the old gods of planning, government “programs”, and the uninspiring vision of a society of social workers and their clients. Seldon did what needed to be done: a comparison of socialism as it has been with capitalism as it has been. What we need now are more capable voices to give us inspiring visions of capitalism as it could be, as an antidote to all those voices, so prominent now, who focus exclusively on the faults of capitalism and too easily forget its gifts, all the while airing their complaints through media which capitalism has made possible.

Bibliographic Note

The original edition of 1990 has become a little difficult to get hold of and is to my knowledge out of print, though used copies are to be found. However, a more recent and affordable edition was published in 2004 by Liberty Fund as Volume 1 of The Collected Works of Arthur Seldon. The volume is entitled “The Virtues of Capitalism” and combines Capitalism with Seldon’s earlier book Corrigible Capitalism, Incorrigible Socialism. An electronic version is made available for free through the publisher’s “Online Library of Liberty”.

Thursday, December 1, 2011

A Canadian Vice

Viscount Haldane
Anyone who reads a casebook on Canadian constitutional law, or on Canadian public law in general, must be struck by the prevalence of reference cases. These are cases submitted to a court, not at the instance of parties to a suit, but rather by a single party (usually a provincial or federal government) in order to seek a judicial opinion before any suit is brought. This will often be done in advance of some piece of legislation or executive action which the government proposes to enact or perform but which it suspects might be subject to legal challenge.

The submission of reference cases is a peculiarly Canadian practice, and fundamental questions of Canadian constitutional law have been “decided” by references (the reason for the scare quotes will become apparent later on).

The Canadian-ness of the practice was recognized in Great Britain as long ago as 1928. I recently came across the following portion of a speech from Viscount Haldane in the context of a debate in the House of Lords on a Rating and Valuation Bill, which would have empowered a Central Valuation Committee, if it appeared that a question of law had arisen or might arise with regard to a valuation, to refer the question to the High Court for judicial opinion. And no, I haven’t taken to reading old British parliamentary debates for pleasure. The extract of Haldane’s opinion that follows appeared in Lord Hewart of Bury’s The New Despotism (London: Ernest Benn, 1929), pp. 126-127:

“VISCOUNT HALDANE: I referred on the last occasion to the liking which had grown up in Canada for submitting abstract constitutional questions to the Courts there and ultimately to the Privy Council [at the time, the Privy Council in London was effectively Canada’s supreme court]. In my opinion experience of that course has led to enormous inconvenience, and successive Lords Chancellor have objected to and denounced it. The late Lord Herschell said some strong things about it, and at times refused to give an opinion. The late Lord Loreburn was even stronger, and other Lords Chancellor and other judges in the Judicial Committee have expressed themselves without restraint upon a system which they deemed to be very mischievous…”
Why did all these British Law Lords view our peculiar Canadian custom with such suspicion? Because, Haldane continued,

“it invited the Court to go beyond the particular case which it had to decide, and to say things beyond the facts to which the decision would be applied, which might prejudice future suitors…. I think this clause [in the Rating and Valuation Bill] is an objectionable one also as drawing the judges into the region of administration…”
There are several threads of argument that might be teased out of these deceptively short passages, leading to serious concerns about the practice of obtaining courts’ reference opinions on potted questions posed by the government.

First, note that when a government refers a question to a court, it is in effect setting the terms of the court’s deliberations. The government frames the question, presents the relevant facts (or what it characterizes as the relevant facts) to the judges, and invites whomever it wants to take part in the proceedings.

It is not difficult to see how this process might be open to abuse by government and administrative officials, who might be tempted to use it as a sort of machine to crank out whatever decision they’d like to see on a question they refer to a court. They can set the parameters to generate a preordained answer, by exploiting the fact that the court can only consider the question and the arguments put before it.

The executive can underdescribe the fact-situation on which it wishes the court to give its opinion. It can carefully select precisely the experts it would like the court to hear. But even when the executive makes a good faith effort to accurately present all relevant facts, backed by impartial and expert opinion, it is still easy to see how the spirit of the legal rule audi alteram partem — to “hear the other side” — might not be adhered to. For is it realistic that the executive will be as adept at thinking up all the possible opposing arguments against its position as an actual opponent would be? The best test of the legality of the executive’s exercise of its powers is to have that power challenged by an adversary at trial. An adversary who has an interest in the outcome of the challenge is likely to be more creative and thorough in constructing counter-arguments than is the executive’s own legal counsel, who would essentially be acting as a mere advocatus diaboli. Judicial review works best in an adversarial context.

This leads to a second problem: the reference process can make it look as if the courts are simply an administrative arm of the executive branch of government. The courts are put in the awkward position of appearing as if they are at the beck and call of the executive, to help and advise it in the execution of its administrative duties. This is especially so where the process is being abused in the fashion outlined above: if the courts are giving the expedient kinds of reference opinions an unscrupulous executive has primed them to give, the traditional and fundamental separation of powers between the executive and the judiciary will be blurred, to the detriment of our constitution.

This Canadian love affair with the reference case seems to me to be in considerable tension with other areas of our national legal sensibilities too. For example, the process by which Supreme Court justices are appointed in Canada has been a source of controversy for its lack of transparency. In 2004 the government was looking into ways to reform the appointment process. Some had floated the idea of US-style parliamentary confirmation hearings. The Canadian Bar Association was adamantly against such hearings. In its submission to the Prime Minister, the CBA noted that a “U.S. type confirmatory process seeks to predetermine how a prospective judge would decide cases.” No doubt this is true. And to date, no such process has been attempted. But isn’t such pre-judging of cases essentially what courts are being asked to do when the government places a reference case before them?

Such pre-judging of cases is dangerous for reasons already outlined: it is not a perspicuous way of examining legal issues because there is no adversary present, and it erodes the constitutional separation of powers between the executive and judiciary. One effect of this is that it trespasses on the traditional principle of judicial independence. Even where judicial independence isn’t being violated in fact, the practice can contribute to the perception that it is, which is just as damaging in the long term.

Let us not forget the close etymological and semantic relation between “pre-judging” and “prejudice” or “prejudicial”. When one is prejudiced against someone, one is not inclined to hear their case, to consider that they may have a right, or to look for their merits. Similarly, when one’s case has been pre-judged, it has been prejudiced, in that its merits are not heard or given due consideration. A reference case is a hypothetical case, put by a party (i.e. the government) who has an interest in finding out the answer to its hypothetical question. This interest is likely to be more than intellectual, more than merely hypothetical. Possibilities for future legislation and future executive action will depend on whether it gets the answer it wants to hear. And citizens are apt to take the answer in the reference case for good and current law, even though it is no such thing.

This leads to the last point that Lord Haldane made in the excerpt I gave. Reference opinions might have a “chilling effect” on parties who might otherwise wish to bring a suit before the courts. Where a reference case has been submitted to the Supreme Court, and where the Supreme Court has offered its opinion, that opinion is apt to be taken for law, even though it most certainly is not such (for various reasons already outlined). This might discourage people from bringing their suits to court, despite having a reasonable case, based on facts not perhaps considered in the reference, and backed by reasonable arguments not perhaps considered by judges or government counsel in the course of the reference.

I don’t know if these dangers have ever been empirically verified. The “chilling effect” in particular would be difficult to demonstrate empirically, since it’s obviously more difficult to study cases that aren't brought to court than cases that are. Nevertheless, the dangers of the practice are so great, and its advantages so few (and nefarious), that I would recommend the submission of reference cases be abolished.

Friday, November 18, 2011

Expert Blind Spots

The Act of Settlement, 1701
So, I’ve been reading an introductory casebook on Canadian public law, entitled Public Law: Cases, Materials, and Commentary (Toronto: Emond Montgomery, 2011). Don’t ask me why; I’m not exactly sure myself. In part, I suppose it’s because I realize that I now know more about American constitutional law than I do about Canadian, which I say with some degree of shame, since Canada is my home and native land. But it has actually ended up being an engrossing read so far. In particular, I’m finding interesting the intellectual blind spots of respected jurists, and the rubbish that consequently makes it into the textbooks they write. (Of course, to be fair, experts from every discipline will invariably display peculiar blind spots, so it’s not just a jurisprudential shortcoming.)

Here is an example. It comes in the context of a discussion of the famous (in Canada at least) “persons” case of Edwards v. AG Canada [1930] AC 124, which turned on s. 24 of the British North America Act (1867), the provision empowering the Governor General to appoint “qualified persons” to the Senate. Now the question that came up in this case was whether or not the BNA Act, one of the most fundamental of this nation’s constitutional documents, intended “persons” to include women. In 1930 Canada’s Supreme Court was not yet supreme in anything but name. Rather, the Privy Council back in London decided matters of fundamental justice on our behalf (one of many reasons why I find it amusing that 1867 is considered Canada’s national independence year). Luckily, in this case the Privy Council was wiser than the Canadian courts that had heard the matter, for it found that women are indeed persons, and are thus eligible to sit in the Senate.

The editors of Public Law pronounced this a triumph. No doubt it was a step forward, but it was hardly a basis for them to trumpet that “while gender discrimination is no longer part of the Canadian Constitution, the appointments process [to the Senate] continues to fuel substantial controversy” (p. 155). For the sad fact is, the ultimate constitutional source of all legislation in this “country” is the British Crown. And the even sadder fact is, no woman can wear that Crown so long as she has a brother with a pulse, even if that brother is a mere babe in arms. For some reason, the editors of the text allowed this plain fact to slip their minds.

It also slipped their minds at another place, while they were discussing the Act of Settlement (1701): “This venerable statute bars Catholics from assuming the Crown, and even precludes the monarch from marrying a Roman Catholic. Furthermore, the monarch must be in communion with the Church of England. The Act’s dictates are clearly discriminatory, viewed from the optic of modern human rights law” (p. 150). The term “venerable” was obviously used with a sense of the utmost irony, and the editors rightly deplore such a discriminatory piece of legislation, which is still in force as I write. But again, nary a mention of the fact that it discriminates against women. The editors, it seems, are more concerned about the rights of Papists than of the fair sex.

Public Law was put together by a team of seven editors, only one of whom is a woman, and so perhaps this partly explains the text’s gender blind spot. But the discussion of the Act of Settlement is illustrative of a blind spot that is shared by many in Canadian society. Allow me to explain what I mean.

At a recent meeting of the Commonwealth nations, a reform was approved which would remove the gender discrimination in the laws of succession to the British Crown: women will now be able to succeed to the Crown in preference to their younger brothers (but not their older ones). This was hailed by the media as a giant leap forward into modernity, as if the monarchy were a fundamentally modern and just institution that needed only a bit of tweaking to bring it into the 21st century. In reality, the removal of gender discrimination from the laws of succession is just a red herring, meant to make us forget that the very institution is inherently discriminatory, regardless of whether or not the person wearing the Crown happens to possess a penis. You see, in Canada, I still cannot teach my child that if (s)he works hard and plays by the rules, (s)he can someday hope to become our monarch. This is not because of gender. It is not even because (s)he is Canadian rather than British (although this is true too). It is because (s)he was born in the wrong family.

Imagine a job posting concocted by a malicious and demonic HR hiring manager, the qualifications for which are so peculiar, so narrow, and so seemingly capricious that only one person on earth could possibly qualify for the job. Imagine further that these qualifications are such that you could never have possessed them, no matter how hard you tried to acquire them. And finally, imagine that even if you are that one qualified person, you have the job whether you applied for it or not. That, brothers and sisters, is the core injustice of hereditary monarchy. And it cannot be removed save by the removal of the institution itself. No amount of reform can scrub away this moral blemish on the fair face of our constitution.

I suppose that there is this to be said for the monarchy: It is pretty much the only political appointment process in Canadian society that discriminates against both rich and poor equally. You can be rich as Crassus, yet you will still never even have a shot at becoming Monarch of Canada. Of course, the fact is that the one unique person who is qualified to be King or Queen will ipso facto be rich as Crassus.

I can’t forbear mentioning another injustice attached to the institution of monarchy in this country, although at least this one is not inherent to the institution and could in principle be reformed. Take a look at s. 17 of the Interpretation Act (1985), the legislation governing how the terms of federal statutes are to be construed. There you will find that, unless otherwise stated, a statute is to be construed as not binding on the Crown. The Queen is not to be considered bound by the laws she enacts in right of Canada unless Parliament includes in the bill words to the effect that she is so bound. In other words, the Crown’s default position in our constitution is that it is above the law. This I find repugnant to every republican bone in my body.

Returning to the blind spots of writers of legal texts, here is another one. There is a (disproportionately) long chapter in Public Law devoted to the concept of judicial independence. Much here is said on the need for judges to be well-remunerated, and for this pay to be independent of political decision-making. Unseemly bargaining over pay between judges and politicians is to be avoided at all costs. There then follows an excerpt from a couple of cases, including Reference re Remuneration of Judges of the Provincial Court of Prince Edward Island et al. [1997] 3 SCR 3.

Both the writers of the text and the court in the reference case seem inordinately concerned with making sure judges are not underpaid. The great fear is that their independence would be compromised if legislatures were to have the power to reduce their pay. I find two striking blind spots here, both of them so obvious to the non-expert, that they can only be the result of a kind of self-serving false consciousness on the part of the legal experts.

First, the jurists concerned show no awareness of the possibility of legislatures overpaying judges. After all, if politicians can threaten judges with financial sticks, can they not just as easily bribe them with financial carrots in the form of promised pay raises contingent on good political behaviour? And yet, it is only pay reductions that are spoken of as a threat to judicial independence. It seems these jurists’ high-minded scruples about judicial independence show a remarkable “quantitative easing” at the prospect of greater remuneration.

Second, why in the reference case (and others like it) do the judges involved never once question the legitimacy of their bearing responsibility for deciding cases involving the remuneration of themselves and their peers? Isn’t there something rather strange about having the bench literally act as judges in their own case? It seems to violate the most fundamental maxims of justice. Apparently, judges (along with the textbook writers) are not overly scrupulous when it comes to possible conflict of interest on the part of the legal profession — though they are untiringly vigilant in sniffing it out on the part of politicians.

Of course, what do I know? I am, after all, a mere layman, not an expert.

Monday, October 17, 2011

“The Federalist”

Hamilton, Madison, and Jay, The Federalist (New York: Modern Library, 2000).

In Philadelphia in the summer of 1787, delegates gathered to come up with suitable changes to the Articles of Confederation, the document that had outlined the terms of cooperation of the thirteen American colonies in their struggle for independence from Great Britain. During the Revolutionary War and in the years immediately following, the Articles had proved woefully inadequate. They bound the confederacy too loosely. Many states were not paying their share of war costs, and the Articles provided no powers for forcing them to do so. Interstate commerce was being hampered by state-imposed tariff barriers that were self-canceling at best, self-destructive at worst. And Shays’ Rebellion in Massachusetts seemed to indicate the need for some centralized military power to maintain public order and security.

Even so, the delegates to the Philadelphia Convention went well beyond their brief. Rather than simply improving on the Articles of Confederation, they instead ended up drafting a new Constitution that would form a more perfect union between the states. Many people today, especially Americans, tend to believe that this Constitution emerged fully formed from the heads of the framers. In reality, as the notes of the convention taken by James Madison make clear, the room in which they worked became a veritable sausage factory of political deal-making.

The Federalist papers were an attempt to sell the resulting sausages to the American people during the ratification process that followed. Yet despite the papers’ purpose as propaganda, never has political hackery contributed so much to Western political thought.

The papers offered wise and sometimes profound meditations on the human political animal that ventured far beyond their limited purpose as a defense of a particular constitution. They were also very persuasive, although at the time their persuasiveness lay as much in the sheer pace of their publication — often three papers per week, leaving opponents hardly any time to reply — as in their arguments. Anti-federalists simply couldn’t keep up with the fertile minds of Hamilton and Madison.

I say “Hamilton and Madison” while leaving out Jay because in reality the latter only ended up penning five of the papers (numbers 2, 3, 4, 5 and 64), before dropping out of the project due to ill health. Thus, of the other numbers, Hamilton authored fifty-one, Madison authored twenty-six, and the remaining three were co-authored by the two of them.

The papers were originally to have been signed “A Citizen of New York”. However, since Madison, who joined the team, was a Virginian, they were instead subscribed “Publius”, the name taken from one of the founders of the Roman Republic, Publius Valerius Publicola (Publicola = “friend of the people”). Since there were eighty-five papers altogether, there isn’t space here for a thorough analysis of all of them, or even for anything like a comprehensive overview. Instead I will discuss three sample papers, the first two of which are stand-alone classics of political theory.

Federalist No. 10 (Madison)

Number 10 is a tour de force of republican political argument and is possibly the most famous of the Federalist papers. It has also become a canonical text for the so-called “public choice” school of economics, of which I consider myself an amateur devotee.

Public choice economists such as James M. Buchanan have criticized mainstream economists, who too often adopt the standpoint of policy adviser to some benevolent despot (i.e. government). Buchanan has spent much of his career arguing that the despot is neither benevolent nor disinterested, and that this fact must be taken into account when designing or proposing to reform political institutions. Buchanan professes himself to be a follower of Madison, who in Federalist No. 10 (and elsewhere) argued that there is no such thing as a disinterested legislator. For one thing, legislators are human beings, just like anyone else, so why should we expect that they alone would be exempt from the self-interest that motivates the rest of us? As Buchanan would put it, government is an actor in the market, not a spectator standing outside it.

For another thing, legislators legislate, and all legislation involves taking sides. Legislation is not impartial. And because it is not, we can expect the realm of politics to be a battleground where competing interests duke it out. The most that can be hoped for is that the battle remains more or less civilized. In the pursuit of their competing interests, people will form factions. As long as people are free to pursue their ends, competition and factions are natural concomitants of a free society:

“Liberty is to faction, what air is to fire, an aliment, without which it instantly expires. But it could not be less folly to abolish liberty, which is essential to political life, because it nourishes faction, than it would be to wish the annihilation of air, which is essential to animal life, because it imparts to fire its destructive agency.”

In other words, there are no utopias in politics.

It was common for eighteenth-century political thinkers to abhor factions or parties, as these were signs of interest, and politics was ideally supposed to be disinterested and public-spirited. But for Madison, the pursuit of factional self-interest was natural, and since there was no impartial agency that stood outside the fray to mediate, the best that could be hoped for in a free society was to have well-designed institutions that could limit the damage factions can do:

“Is a law proposed concerning private debts? It is a question to which the creditors are parties on one side, and the debtors on the other. Justice ought to hold the balance between them. Yet the parties are and must be themselves the judges; and the most numerous party, or, in other words, the most powerful faction must be expected to prevail…. The inference to which we are brought, is, that the causes of faction cannot be removed; and that relief is only to be sought in the means of controlling its effects.”

This is especially necessary in a democracy, where there always looms that most dangerous of factions, the democratic majority. Imagine a democracy of three people, A, B, and C. It is always a possibility for A and B to form a voting bloc to deprive C of his wealth or rights or liberty. The best way to avoid this outcome is to have institutions and constitutional constraints which prevent A and B from effecting their intentions. Such a constitution may provide for a list of protected rights and liberties that may not be infringed upon by a democratic legislature. Or it may provide that in order for anyone’s rights to be infringed, unanimity is required. Or it may give citizens of C’s description two votes to compensate. Or it may provide for more than one legislative house, with different rules of representation to ensure that one may act as a check upon the other.
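Madison’s worry can be made concrete with a toy sketch (my own illustration, nothing from the Federalist itself): under simple majority rule a two-person faction can expropriate the third voter, while a unanimity requirement hands the would-be victim a veto, at the cost of making collective action much harder.

```python
# Toy illustration (not from the Federalist): three voters, A, B, and C,
# decide whether to confiscate C's wealth and split it between A and B.
# Each voter supports the proposal only if it makes them better off.

ballots = {"A": True, "B": True, "C": False}  # A and B gain, C loses

def proposal_passes(rule, ballots):
    yes = sum(ballots.values())
    if rule == "majority":
        return yes > len(ballots) / 2   # two of three suffices
    if rule == "unanimity":
        return yes == len(ballots)      # C's single "no" is a veto
    raise ValueError(f"unknown rule: {rule}")

print(proposal_passes("majority", ballots))   # True:  the A-B faction prevails
print(proposal_passes("unanimity", ballots))  # False: C is protected, at the price of gridlock
```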

 Whatever the precise constraints a society chooses, Madison’s point is that men are — or should be assumed to be — imperfect and selfish creatures, and that there is no power that can be assumed to be impartial. Interest must oppose interest because there is no disinterested power to oppose it. These considerations lead naturally to a discussion of Federalist No. 51, another great masterpiece by Madison.

Federalist No. 51 (Madison)

Because humans are self-interested, and because their interests often clash, there must be some power that can restrain their pursuit of interest. In most political theories, this role has been played by government. But governments are composed of human beings, so they too are subject to the same self-interested motivations and must be restrained. As Juvenal rightly asked, sed quis custodiet ipsos custodes? (“But who will guard the guardians themselves?”). Here is how Madison famously stated the paradox in Federalist No. 51:

“But what is government itself but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: You must first enable the government to control the governed; and in the next place, to oblige it to control itself.”

One of the ways of doing this was by a constitution which provided for separation of powers and functions in different branches of government, branches which could act as checks upon one another. It is this that has become the central hallmark of Madisonian republicanism. But just as Madisonian is his encouragement of pluralism: The more groups and subdivisions there are, the less likely it is that any one of them can form a majority and overpower the others. In our three-person democracy, it is very easy for A and B to form a faction to prey upon C. But in what Madison and Hamilton called an “extended republic” a vast number of citizens can form a vast number of factions, no one of which is likely under those conditions to become powerful enough to dominate the others.

This line of thinking informs Madison’s reflections on the separation of church and state. For example, regarding religious pluralism, even though Madison was probably the least religious of the Founding Fathers (he may even have been an atheist), he encouraged freedom of worship because he wanted as many religions to flourish as possible: the more of them there were, the less dangerous they would be to the state. With no single religion having any hope of dominating the others, religious groups would spend less time trying to take over the reins of government, and more time jealously making sure that no other sect tried to do so.

Federalist No. 72 (Hamilton)

This work of Hamilton’s is part of a running argument defending a strong executive branch. This is in response to anti-federalist arguments that an executive magistrate should only hold power for a brief term and should not be subject to re-election. It is to the topic of re-election that Hamilton’s attention turns in this paper. I chose to look at this piece because it well illustrates the Federalist’s adeptness at designing republican institutions through incisive analysis of the springs of human political behavior and the incentive structure of political office-holding. In other words, beginning with how humans can typically be expected to behave in political contexts, institutions are proposed that, in the words of Hamilton, best balance the “energy of government” with the “liberties of the people”.

This tension between strong government and individual liberty would in the coming years be reflected in the growing tension between Hamilton’s and Madison’s respective views of republican government. Hamilton tended to stress the necessity of energetic government, seeing a strong federal government as the guarantor of individual liberty (while implicitly viewing the state governments as liberty’s greater threat). Madison, on the other hand, grew increasingly jealous of energetic government’s ability to trample on individual rights. His solution was to weaken government through intricate institutional checks and balances, having power oppose power. Despite their collaboration in 1787-88, a rift would eventually form between the two men that was never healed. But that rift still lay in the future when Hamilton wrote Federalist No. 72.

Excluding the executive from seeking re-election would have several pernicious effects, argued Hamilton. For one thing, it would reduce the magistrate’s incentive to good behavior; since he had no hope of being re-elected, he had less incentive to be exemplary in the performance of his office. “There are few men who would not feel much less zeal in the discharge of a duty when they were conscious that the advantages of the station with which it was connected must be relinquished at a determinate period, than when they were permitted to entertain a hope of obtaining, by meriting, a continuance of them.”

In addition, Hamilton argued that the exclusion from re-election would result in a more urgent temptation of the office-holder to make the most of his short time in office financially: “An avaricious man, who might happen to fill the office, looking forward to a time when he must at all events yield up the emoluments he enjoyed, would feel a propensity, not easy to be resisted by such a man, to make the best use of the opportunity he enjoyed while it lasted, and might not scruple to have recourse to the most corrupt expedients to make the harvest as abundant as it was transitory.”

A greater danger than peculation was the possibility of usurpation. An unscrupulous magistrate might simply decide he doesn’t want to give up his office. Without any hope for continuance in office, “such a man, in such a situation, would be much more violently tempted to embrace a favorable conjuncture for attempting the prolongation of his power, at every personal hazard, than if he had the probability of answering the same end by doing his duty.”

I have personally been in favour of limiting the tenure of elected officials to a single term, of whatever length. One of the reasons I have favoured such a policy is the notion that the officeholder would be more independent and willing to make hard decisions if he did not have to worry about re-election. To this, Hamilton conjectures that such a magistrate might actually be just as inclined to pander to the public as one with an eye on re-election, since at the end of his term he would be compelled to return to the people as a private citizen: “May he not be less willing by a firm conduct, to make personal enemies, when he acts under the impression that a time is fast approaching, on the arrival of which he not only MAY, but MUST, be exposed to their resentments, upon an equal, perhaps upon an inferior, footing? It is not an easy point to determine whether his independence would be most promoted or impaired by such an arrangement.” This argument may have been more convincing in the eighteenth century, when a gentleman ran in smaller and more localized circles than the typical politician does today. I am not convinced by it.

Finally, and most dangerously, there may be times when the people themselves wish the person to remain in office. What happens when the wishes of the people for a favourite son are thwarted by such a rigid constitutional restraint? “There may be conceived circumstances in which this disgust of the people, seconding the thwarted ambition of such a favorite, might occasion greater danger to liberty, than could ever reasonably be dreaded from the possibility of a perpetuation in office, by the voluntary suffrages of the community, exercising a constitutional privilege.” Hell hath no fury like a mob scorned. The people may simply decide to pull down completely the constitutional barrier separating them from their fancied man. Again, I do not find this argument of Hamilton’s overly convincing. Organized collective action of that nature takes considerable effort and enough time that the fickle mob is likely to grow indifferent to their erstwhile favourite.

Despite their later divergence in thinking, Hamilton’s approach is similar to Madison’s in assuming that human beings are largely self-interested and that because politicians are humans too, they should also be assumed to be self-interested. Institutions cannot be designed as if people are angels. Instead, writes Hamilton, because “the desire of reward is one of the strongest incentives of human conduct… the best security for the fidelity of mankind is to make their interest coincide with their duty.”

Monday, September 12, 2011

Après moi, le déluge

Gerontocracy


Back in March I discussed some problems of intergenerational justice. Some of that discussion was framed in fairly abstract terms, even dealing with millennial timeframes. In this post, I’d like to return to the theme of intergenerational justice. I recently read an interesting argument on intergenerational justice which I will present for you to mull over. I don’t buy it myself, but I’d certainly be interested in your thoughts.


It’s a well-known fact that in Canada and the US, older citizens are more likely than younger citizens to vote. Several reasons have been offered for this, none of which I intend to delve into here. However, it is safe to say that the elderly have the attention of politicians and that policy is shaped accordingly. Thus, looking with trepidation towards the future, as our population ages we can expect policies to be adopted that are favourable to the old. For example, I’m sure we can expect increased funding for our medical system, since an older citizenry will be more reliant upon it.


Now, increased funding may be music to the ears of many, but the costs will probably be exorbitant, and they must be funded somehow. Unfortunately, demography will work against viable funding, since the number of taxpayers will shrink relative to the number of those who are demographically the greatest consumers of tax revenue. The simple fact is that fewer young people will have to work a lot harder to support the increased health care requirements of a growing number of old people — old people who, for the most part, are no longer contributing to their own health care needs.


In his book The Constitution of Liberty (1960), Friedrich Hayek predicted that a time would come when the burden of funding public medical and pension schemes would be so high that the young would simply baulk at continuing to contribute to them. Economist Gordon Tullock has made a similar claim, predicting that a point would be reached when the realization kicks in among the young that over their lives they will contribute more to such schemes than they can expect to receive from them. As soon as that happens, the whole pyramid scheme will collapse. Although our current society is one that eats its young, it may eventually swing to the opposite extreme, becoming a society in which the old are pushed out onto the proverbial ice floe to die. I am not quite so pessimistic, but there are serious structural problems to be dealt with.


I recently read a paper by Philippe van Parijs which floats a thought-provoking idea, although it is an idea that has no hope of ever being put into effect. Still, it is worth pondering, not so much for its feasibility (it has none), but because of the issues of social justice it brings to the fore. The paper is called “The Disenfranchisement of the Elderly, and Other Attempts to Secure Intergenerational Justice” (Philosophy and Public Affairs 27 (1998), 292-333). The title gives more than a hint as to the proposal it contains.


When we vote, we are exercising a right to have a say in policies that will have effects on others besides ourselves. For example, suppose I vote in favour of a policy that will raise prices in the industry in which I work, or that artificially subsidizes it. If the policy is passed, my vote will redound to my benefit, but to the detriment of consumers generally, who will now have to pay artificially higher prices for the goods my industry produces. A perfect example of such a policy would be agricultural price supports.


The point here is that, much like economic activity, such political activities as voting can have negative externalities. However, unlike in most economic externalities, at least in voting the parties affected get some say in the matter.


Or do they? When I vote for some expensive social program that is to be financed by borrowing, I reap the benefits of that program, while leaving the cost of it to be paid by future persons (among whom I may or may not be included). Again, this is a situation where voting creates a negative externality. However, in this case the people negatively affected do not get a say, because they are not yet born or are perhaps too young to vote. This is essentially a form of taxation without representation. Don’t get me wrong. I’m not against expensive social programs per se. If they have broad and informed support, then in principle I don’t have a problem with them. I just think that they should be financed through current revenues. In other words, we should have to pay for them, not others who have no say in the matter and who will possibly never reap any of the benefits.


Which brings me back to van Parijs. He argues that since voting can result in such negative externalities, either extra weight ought to be given to the votes of those who will bear the costs of those externalities, or else the votes of those who stand to benefit without bearing the full costs ought to be discounted. Now here’s the catch: There is often a temporal dimension to externalities. Many policies have distant cost “horizons”, in which benefits are experienced now, while real costs are experienced quite far off in the future. Because humans have finite lifespans, where the cost horizon of a policy lies beyond the lifespan of the person who votes for it, an incentive is created for the old to vote for such policies and the young to vote against them.


Imagine that you are a very old person in a society without a publicly-funded defined benefit pension scheme. Such a scheme is offered as a policy by some clever politician. Given how close you are to retirement, you would have every reason to vote for this policy, since you stand to contribute much less to it than you would receive in benefits from it. Sooner or later it must be paid for, but when the final bill becomes due, someone else will be stuck with it.


The same goes for publicly-funded health care: since the elderly are disproportionately greater consumers of health care resources, if a public system were being set up from scratch, although all might have some incentive to vote for it, the old would have a greater incentive, since they would use it much and contribute to it relatively little. In effect, they would be free riders on the system, even if that is not their primary intent.


Returning to the pension scheme example, we might note that the politician proposing it also has a perverse incentive. Generally speaking, in any such proposed policy, the higher the defined benefit relative to the expected contribution, the greater the likelihood that people will vote for it. The old will do so for obvious reasons. The young may vote for it in the hope that, so long as over their lifetimes they will take out more than they put in, they will have entered the pyramid scheme early enough to cash in. Thus, the system most likely to be proposed will be one that is not viable in the long term (which is in fact the case in most Western nations today), but is at least viable enough in the short or near term to induce a majority of people to vote for it. After all, if a system were proposed whereby you got precisely what you put in, most rational people would opt out, since they could do better through a private scheme. And of course, if the cost horizon is far enough off, the people who will be around when the bill becomes due are not present here and now, when the policy is to be voted on.
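To make the incentive structure concrete, here is a minimal back-of-the-envelope sketch. It is my own illustration, not anything taken from van Parijs or from actual pension data: the retirement age, life expectancy, contribution, and benefit figures are all invented for the example. It simply computes what a voter would pay into, and draw out of, a hypothetical pay-as-you-go defined-benefit scheme, depending on how old the voter is when the scheme is introduced.

```python
# Hypothetical figures for illustration only.
RETIREMENT_AGE = 65
LIFE_EXPECTANCY = 85
ANNUAL_CONTRIBUTION = 2_000   # paid in each working year after the scheme starts
ANNUAL_BENEFIT = 10_000       # drawn in each year of retirement

def net_gain(age_at_introduction: int) -> int:
    """Lifetime benefits minus lifetime contributions for a single voter."""
    working_years = max(0, RETIREMENT_AGE - age_at_introduction)
    retired_years = LIFE_EXPECTANCY - max(age_at_introduction, RETIREMENT_AGE)
    return retired_years * ANNUAL_BENEFIT - working_years * ANNUAL_CONTRIBUTION

for age in (25, 45, 60, 64):
    print(f"introduced at age {age}: net gain {net_gain(age):+,}")
# introduced at age 25: net gain +120,000
# introduced at age 45: net gain +160,000
# introduced at age 60: net gain +190,000
# introduced at age 64: net gain +198,000
```

Everyone comes out “ahead” in this toy example only because the scheme is not actuarially balanced; the shortfall is implicitly borrowed from contributors who are not yet in the electorate. And the closer a voter is to retirement, the larger the personal gain, which is precisely the perverse incentive described above.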


Van Parijs’ corrective is disarmingly simple, at least in theory: the old should somehow be disenfranchised, either absolutely, or by a relative weighting of votes, since they are too apt to vote for policies that impose negative externalities on the young and on future generations. Old people simply do not bear the full cost of their voting decisions and should therefore be prohibited from voting, or should at least have their votes count for less.


Of course, there are difficulties with this idea, quite apart from the practical barriers to its implementation, not least of which is the question of what exactly the cut-off age should be. Perhaps instead of a precise cut-off age, the disenfranchisement could be introduced gradually over a person’s lifetime, with the relative weight of one’s vote bearing an inverse relation to one’s age.
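One hypothetical way to cash out such a gradual weighting (my own sketch, not anything van Parijs specifies; the life expectancy and voting age figures are assumptions) would be to let a ballot’s weight taper linearly with age, in rough proportion to the years a voter can expect to live with the consequences of the policies voted on:

```python
LIFE_EXPECTANCY = 85   # assumed average lifespan
VOTING_AGE = 18        # assumed age of enfranchisement

def vote_weight(age: int) -> float:
    """Relative weight of one ballot, tapering from 1.0 at the voting age
    down to 0.0 at the assumed life expectancy."""
    if age < VOTING_AGE:
        return 0.0
    remaining_years = max(0, LIFE_EXPECTANCY - age)
    full_horizon = LIFE_EXPECTANCY - VOTING_AGE
    return remaining_years / full_horizon

for age in (18, 40, 65, 85):
    print(f"age {age}: weight {vote_weight(age):.2f}")
# age 18: weight 1.00
# age 40: weight 0.67
# age 65: weight 0.30
# age 85: weight 0.00
```

Any such function is, of course, arbitrary in the same way a sharp cut-off age would be; the linear taper merely makes the arbitrariness smooth rather than abrupt.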


Van Parijs also considers the other option, namely increasing the value of young people’s votes. Unfortunately there are limits to this. For example, how do you give votes to those who are yet unborn? And although the voting age could also be lowered, thereby allowing more young people into the voting pool, giving voting rights to eight-year-olds might be a cure worse than the disease. Eight-year-olds are not likely to understand the issues or policies in any great depth (indeed, most adult voters are intellectually unprepared for democratic participation). What is more likely to happen is that most of those newly empowered eight-year-olds would simply not bother to exercise their voting rights at all, which upon consideration is probably a good thing.


There is the additional danger of children voting for whomever mommy or daddy tells them to. Ironically, this might not be such a bad outcome, since, assuming the parents are below the age of, say, fifty, it would effectively give them extra voting weight. The downside is that it would arbitrarily privilege parents, likely leading to a system that inclines towards policies overly favourable to parents and their offspring (in my opinion, the current system is already too favourable to parents). After all, parents can impose their own forms of voting externality if given the opportunity. For example, parents might use their beefed-up electoral weight to vote for lavish publicly-funded daycare schemes that would impose involuntary costs on the childless.


And this brings us to what I think is the crux of the problem. Van Parijs focuses on the unfair advantage old people get from their demographic strength, their relatively high voter turnout, and the low voter turnout of the young. But there could just as easily come a time when demography works out differently, when the young vastly outnumber the old at the polling booths and push them out onto the proverbial ice floe. How far should we go in disenfranchising people on the basis of what could prove to be a relatively short-lived demographic fluctuation?


Right now the old vote for policies that benefit them disproportionately. We can also safely predict that young people — at least the ones who bother to vote — would display a tendency to vote for policies that would disproportionately benefit the young. If the current upside-down demographic pyramid were suddenly to be right-sided, instead of our current gerontocracy we would merely have a different kind of intergenerational injustice, where policies benefit the young at the expense of the old. Van Parijs’ immodest proposal is a temporary bandage for current demographic circumstances. It is not a viable plan for long-term intergenerational justice.


On the fairly conservative assumption that the political behaviour of each voter is at least moderately self-interested, just about any demographic subset would benefit itself by imposing costs on others if it were in a position to do so. Age is not the core problem from the point of view of justice.


Usually such free-riding is not even done consciously. In order to get buy-in, every group tells itself that its favoured projects — i.e. the ones from which it coincidentally stands to benefit the most — are good for everyone, and that therefore everyone ought to contribute to them, even when this is clearly not the case. For example, many farmers back agricultural price supports, telling themselves (and the rest of us) that it’s in our interest to have profitable small farmers. In reality, such price supports tend to favour larger farmers even more. Meanwhile, the majority of us who are non-farmers pay more for food. At the end of the day, it would probably be more efficient and rational to simply take money directly out of the pockets of non-farmers and put it into the pockets of small farmers. But if such a pill is to be swallowed by the public, it must be wrapped in conventional pastoral poetry about the superior moral virtue of the agriculturalist and how his valour somehow ennobles us all.


That is just an example. My aim is not to ridicule or demonize farmers. My point is to show that old people are not the only group in society that lives off the rents of others. Handicapping the elderly would probably only serve to empower a new group of people to leverage government for rents.


The sad fact is that we would all be rent-seekers if we could. We all have things we’d like and we’d all prefer it if other people paid for them. That is why in the final analysis, rather than disenfranchising people, we should instead concentrate on structural political reforms that take away government’s power to distribute rents, so that when people vote, they do as little damage to others as possible.

Wednesday, August 24, 2011

The Morality of Rioting

The Chav Spring?
Over the six or so years I spent (but not necessarily wasted) in graduate school studying ethics in my quixotic attempt to someday become an academic, I noticed some things that, over time, led to a disintegration of my faith in academic moral philosophers and what they had to say about moral matters.

One of these is the tendency among professional ethicists in their writings to deploy what I like to call the ethicist’s “royal we”. They do this on those rare occasions when they actually dare to make an evaluative moral statement (rather than just talking about the nature of evaluative statements in the abstract). Examples usually take the form of such locutions as “We all believe that it is morally right to X” or “We all believe that Y is morally wrong”.

Deployment of the ethicist’s “royal we” tends to occur in cases where the ethicist uttering it either (i) is not entirely convinced that everyone really does believe that Y is morally wrong, and so needs to bolster his claim with bandwagon rhetoric and a little group auctoritas to cow his audience into assenting, or (ii) intends the “we” to refer to other like-minded people, who are usually affluent, white, articulate academics who are rarely troubled by the problems of the mass of the world’s people, and who will most likely assent to his claim in any case. Actually, he may not even intend his “we” to refer in this way; more often than not it is simply subconscious.

The latter case of the ethicist’s “royal we” can be quickly inferred where the ethicist employs some potted situational example to illustrate his claim that Y is morally wrong. Most often these examples will be horribly contrived and underdescribed, betraying a palpable lack of engagement with lived experience. If I bring up the term “trolley problems”, ethicists will know what I’m getting at. Besides contrived situations, they will also trot out those rather genteel examples that mostly concern the fraught etiquette of the faculty meeting or the proper measure of justice to be observed in the grading of student papers. These are examples that will evoke a nod of the head from the academic reader, but will leave outsiders simply bored or puzzled at the fussing when there are real ethical problems in the world.

In truth, these are rather minor sins against good moral philosophy, for at worst ethicists who indulge in such things as trolley problems and faculty meeting ethics simply fail to engage with the lived experience of many. They make themselves irrelevant, and at least their irrelevance is a kind of harmlessness. What they write and say will typically be ignored outside of academic circles on account of its sheer lack of utility. It is much more dangerous when an academic has an ethical opinion that does engage with the real world, but which is at the same time patently absurd, at least to someone who has not yet managed to have her moral sense smothered by breeding or education. The danger is that such ideas will be adopted by the general public on the basis of the utterer’s authority and putative good intentions. Bad ideas, you see, too often display a tendency to filter down from the educated to the less educated, a point to which I shall return.

This has added poignancy for me in light of the recent riots in the UK and an article on them by essayist Theodore Dalrymple. I knew sooner or later that Dalrymple would hold forth on the riots. For those unfamiliar with his writings, Dalrymple (a.k.a. Dr. Anthony Daniels) is a retired psychiatrist who spent much of his career working in prisons with the criminal underclass in Britain. He writes much about the moral bankruptcy of Britain’s poor, sometimes with acid wit, sometimes with despair, and occasionally with great humanity. Dalrymple sees Britain’s social degeneracy (across all social classes) as the end result of a sort of trickle-down effect, through which bad ideas that become fashionable among the intellectual and social elite are adopted (with predictable bowdlerization) by the lower strata of society.

To be fair, Dalrymple writes about the moral bankruptcy of elites as well. The problem, he says, is that bad ideas are relatively harmless to the affluent folks that adopt them. The affluent can afford to make a few mistakes in life without it destroying their long-term prospects. But those same bad ideas, when they filter down, can have devastating effects on the poor when the poor adopt them, or indeed when they are applied by the rich to the poor (or when the affluent have stupid ideas about the poor, which the poor then internalize).

If verbosity reliably tracked depth of knowledge, then the affluent would doubtless be the world’s great experts on what is good for the poor, since they seemingly have so much to say about it (while actually doing very little). Conservatives and liberals alike have some pet theory for why the poor occasionally behave badly. Conservatives tend to blame it on the poor themselves, attributing it to laziness or a lack of moral fibre. This is, of course, simplistic. Yes, many poor people are lazy and, frankly, stupid. But many are not. There are smart and hardworking people who are poor too. Virtue and hard work won’t always make one rich. And needless to say, one can be stupid and lazy and rich.

Liberals tend to have more elaborate theories about why the poor behave badly, ones in which ultimate responsibility lies with others, or with “society”. Such liberal explanations are likewise simplistic. However, a good liberal will rarely let truth get in the way of a good narrative and a chance to pat himself on the back for his virtuous intentions and his costless charity. It is mostly such liberal theorizing that Dalrymple’s writings set out to deflate.

Dalrymple is often excoriated by progressives for being an elitist. Many of the reader comments accompanying Dalrymple’s recent column on the riots were hostile to precisely this perceived elitism. They are right in one sense: he is an elitist. But being elitist is not the same thing as being wrong. A mere label cannot invalidate an argument. And the label is a bit unfair, since it is applied to a man who has spent so much of his professional life among poor people, something that cannot be said of most of the progressives who are so quick to give us their armchair theories about them. The “elitist” label is also ironic because Dalrymple’s writings make clear he believes elites are every bit as immoral and intellectually lazy as the underclass they’ve spawned.


But I digress. Returning to the affluent and their very bad ideas. Among these is the notion that none of us is really responsible for our actions, and that poverty or “society” or genetics or brain chemistry or [insert pet theory here] is responsible for whatever bad things we do. When such ideas spread and become pervasive among all classes, immorality (let’s call it what it is) is excused. Indeed, it may even garner the evil-doer sympathy and respect, while his victims are ignored and forgotten. And because in such an intellectual environment the evil-doer rarely experiences the ill consequences that his actions would naturally earn him in a more moral society, he has little incentive to become better. He need feel no shame, because after all, his actions are not really his. In short, misguided liberalism in moral matters has us drifting towards a shameless society, or at least a guiltless one.

(Of course, human nature being what it is, there is cognitive dissonance here too. While we are quick to disavow our bad actions as the result of poverty, discrimination, society, Mommy-issues, and the like, we are equally quick to take moral credit for our good actions, and even for those happy events that put us in a good light but that are in reality the result of mere fortune. And there is the double standard effect: we are quick to blame others for their bad acts, while we excuse our own by attributing them to exogenous factors. But again, I digress.)

If we were to take liberal morality seriously and believe that none of our actions are really ours, we could safely predict that freedom as a political concept would likewise be demoted in our value system. Or else we would value freedom (license?) for ourselves, while believing others should be controlled. In either case, the resulting society runs the danger of being very illiberal. Furthermore, is there not a strange tension between the liberal aspiration of liberty and the liberal cant of deterministic non-responsibility for our actions? Even liberal liberty becomes deterministic: a sort of freedom to indulge the brute passions and desires that would control us like puppets if we let them.

In any case, returning to the UK riots, one might have predicted the mass outbreak of liberal hand-wringing from well-intentioned (and well-educated) folks who have been quick to “explain” (or “excuse” in my opinion) the rioters as poor down-trodden youth with no jobs or futures, as if the riots were a predetermined act of righteous protest. In my mind, those who have respect and good intentions for the poor should be very careful about resorting to this kind of quasi-deterministic “explanation” of (anti)social behaviour.

For one thing, it hopelessly confounds the descriptive and the normative. One implies causality when one makes a descriptive statement like “poverty leads to rioting”, but there is also a normative implication that such rioting is excusable, since we should have seen it coming, and since, in a sense, those who riot were “driven” to do so by supposedly intolerable conditions. The descriptive and the normative should be kept distinct. Even if it were indisputably true that poverty leads to rioting (it isn’t), this fact cannot entail the claim that rioting is therefore morally excusable. An action can be causally determined and still be wrong. Aberrant sexual drives may “cause” someone to commit a rape, but we do not on those grounds excuse the rape. I would condemn the rape and the person who committed it, and without resorting to the ethicist’s “royal we”, I hope you would condemn it too, otherwise I do not much covet your acquaintance. Those whose moral praise and blame are too dependent on the issue of causality will inevitably find themselves forced to excuse things they wish to condemn — and what is just as bad, failing to praise what is praiseworthy.

Also, once you go down the road of attributing people’s actions to exogenous causes, you begin to strip them of their moral agency. They may become mere objects of scientific enquiry, and (more ominously) maybe even objects of scientific intervention. Punishment, as atavistic as the concept may sometimes seem, is at least driven by the notion of there being some proportionality between misdeed and penalty. But when immoral conduct is instead turned into a disease, we may then be licensed to apply aggressive therapy to “correct” it. Perhaps this is a weak slippery slope argument, but it is something to consider. The fact is that, somewhat paradoxically, moral blame at least treats the person blamed as a person, as a subject rather than an object, as a “thou” rather than an “it”.

The other problem with “explaining” the riots as a reaction of protest against poverty and/or oppression is that the facts simply don’t back it up. There was a conspicuous lack of protest signs. Nor did the “protesting” seem directed at anybody who had done the “protesters” any particular wrong. They seemed more interested in getting their grubby paws on luxury consumer goods than in sending a message about poverty. Even if they were trying to send some kind of a message, the message likely didn’t get through to most people, who, misguided liberal intelligentsia aside, were rightly appalled at their conduct.

They were criminal looters. That is all. Let us not valorize their acts. As a matter of fact, they looted and burned down the shops and homes of people who were equally poor or only marginally better off. “Yes, but this was an irrational reaction to their own degradation,” I imagine the liberal will argue. “They were articulating, in their own way, the frustration they are experiencing.” This is very dubious. Irrational crowd action happens when a group gathered for some purpose gives in to drives stirred by the setting of the crowd, manifesting itself in mass acts of misdirected anger. The actions of these looters were something rather less spontaneous. These are people who got dressed up for the purpose in hoodies beforehand, and who cleverly coordinated their activities through social networking. Between the poverty and the action lies the intention, and it is the intention which ought to be judged. In this case, the intention seems to have been criminal, no more and no less.

Third, when we speak of “poverty” here, we should be clear with ourselves that we are talking about relative rather than absolute poverty. I doubt very much that the absolutely poor can afford the Blackberries these people used to coordinate their rampaging. Finally, and perhaps most importantly for those who are actually concerned for the poor, to attribute the kind of criminality we saw in England to poverty is to insult those millions of poor people in England and elsewhere who work hard, live upright lives of dignity and meaning, and play by the rules.

In truth, I don’t have a theory for why these people did what they did. Many were poor; some were not. The majority of the poor took no part in the mayhem. Thus, poverty alone cannot explain it. I care less about what caused the riots than I do about making sure they are stopped. In the social sciences, as opposed to the physical sciences, it is often the case that what will stop something has little or no relation to what caused it in the first place. Even if poverty did cause the riots, there are other ways — quicker and cheaper ways — to stop them than fixing poverty. First of all, despite the billions of pounds a year the British welfare state has thrown at the problem, poverty (in its relative sense) still exists and thrives in Britain. So it is not something that can be solved right away. And the solution will probably take something more innovative than simply throwing good money after bad. And how exactly do you give jobs to people who are simply not qualified educationally or possibly even morally to do them? You could offer them an education, but would they take it? As it is, many of them are not qualified to do the work that many Eastern Europeans have willingly accepted, having dropped out of the (admittedly sub-standard) publicly-funded education system.

In the meantime, people who have done nothing to deserve having their homes torched and their businesses looted have every right to expect the state to protect them. If the state cannot perform this one fundamental function, then woe to civilization, for we are doomed. It is time liberals spent more time sympathizing with the real victims of the rioting rather than with the rioters who victimized them.

Three Remedial Lessons on Morality

Many intelligent and extremely well-educated moral philosophers have said some of the strangest things about morality. I submit to you, dear reader, that this is often the outcome of the kind of abstract thinking that clouds sound moral sense. Let us think back to how we learned about morals, long before any of us went to university.

I can remember three lessons I learned about morality as a child that have stood me in good stead as I consider moral problems today, as a moral philosopher. They stand me in good stead as I consider the UK riots too.

Lesson One.
When my parents taught me that something was bad or morally forbidden, they used language that was not hedged or qualified, as that would only have served to confuse me. For example, when I was about four years old, I was with my mother in the produce section of the grocery store. I took a little loose piece of the green plastic material that separated the different kinds of fruit from each other. I have no idea why I did this. It must have struck my four-year-old fancy, I suppose. It was of absolutely no value to anyone, and it was really a tiny piece. I may as well have stolen pocket lint, it was that insignificant. Nevertheless, my mother saw it and tore a strip off me, because I had stolen it. I was taught that it is wrong to steal. Full stop. I was not taught that it is wrong to steal unless it is something nice or unless you are poor and can’t afford it. There was no “unless” for me in this lesson. The Ten Commandments are couched in similar absolutist terms, as are most criminal statutes.

Children generally first learn moral propositions in the form of such absolutes. Once we get a little older and wiser, we learn that these absolutes can sometimes have exceptions. But these exceptions are relatively rare, and it is still wise policy to think of morality in absolute terms, and to encourage others to do the same. It may be permissible to steal bread if you are starving to death (although even this has been debated). However, it is not okay to smash a window and steal $200 sunglasses because you can’t otherwise afford them. And yet, this is effectively what I’ve been hearing liberals claim it is okay to do when they excuse rioters on grounds of poverty. Put in these terms, we can see it is wrong. But frame the circumstances in bowdlerized social-scientific jargon, and we suddenly become inoculated against good sense.

Lesson Two.
When I was a very young child, I would sometimes throw my candy wrappers on the ground. My parents told me it was wrong to litter (again, notice the lack of an “unless” clause here). When I told them I didn’t understand why it was wrong, my parents told me that litter makes things ugly. They also asked me to consider what would happen if everybody littered. We ought to ask ourselves the same question with regard to the rioting and looting: What if everybody smashed shop windows and stole what they wanted? Or burned down buildings when they were angry?

Lesson Three.
When I was about five years old, I punched my cousin in the face because she was being bossy. I was punished for it, which I felt was a grave injustice, since she was asking for it. My mother told me I deserved to be punished because hitting is not the way to solve problems. More importantly, she told me to think how I would feel if she had punched me in the face. In retrospect my mother was (mostly) correct. At the very least, before you go punching someone in the face, you should do a little “in the other person’s shoes” thinking.

The rioters should have done the same before they decided to torch homes and loot businesses. Equally as important, liberals should do a little “in the other person’s shoes” thinking about the terror and loss the victims of this mass violence experienced before being so quick to “understand” the motivations of the supposedly downtrodden perpetrators.

Addendum

I’m aware that this post has long outrun your patience. However, I cannot stop myself from sharing a few of the appalling reader comments on Dalrymple’s column (with charming misspellings and bad grammar preserved for effect), if only to scare back into moral common sense those who would otherwise be sympathetic to the liberal sob stories. I would also encourage you, as a sort of homework exercise, to think about how the three moral lessons discussed above can be applied to what these readers had to say.

First, consider this chilling assessment by someone styling himself “Terminalcityman” who thinks that the police are just too judgmental when it comes to young people: “If were [sic.] a young, unemployed man in a place with that kind of finger wagging going on all the time, you’d through [sic.] a brick at a cop too given the chance.”

I certainly hope I wouldn’t. If I were young and unemployed (which I have been), that is absolutely the last thing that would be on my mind. I would be concerned about finding a job, at whatever pay I could find. That’s what separates morally decent people — whether rich or poor — from morally bad people.

Some of the reader comments contained a lamentable but entirely typical tincture of racism and xenophobia as well. And remember that I’m not necessarily talking about right wing neo-Nazis here. If anything, they’re misguided leftish bleeding heart types. Take the following comment by one styling himself “mg4011”: “A lot of the lower paying jobs are taken up by citizens of the European Union, many from East Europe. If the UK did not belong to the EU, businesses would have to hire locals. These young people in England have no choice, there is no work for them. So what do you do when your mind is not focus on something productive? You do RIOT [sic.]. The only to blame [sic.] is the GOVERNMENT.”

This is an example of the usual nauseating bleating about how hordes of Poles and other assorted Slavs are invading the UK and taking jobs away from decent British blokes. If the argument has any truth to it at all, it speaks volumes about the sense of entitlement among British youth. If a Pole can’t find work in Poland, he will emigrate to where he can. He might go to the UK and work at a job a British person would turn his nose up at. But according to this reader’s reasoning, when an Englishman can’t find work in the UK, rather than emigrate or accept a job at a lower wage, he’d rather burn and loot his own city. And, says this cretinous line of “argument”, he’s right to do so. Why? Well, because his “predicament” is the fault of the government, of course. As if it is the government’s duty to give everybody a job. And not just any job, but the job they want at the wage they want! (And to people who for the most part couldn’t be bothered to stay in school long enough to become even remotely qualified for such work.) This is infantile rubbish; there is simply no more charitable way to describe it.

Both the Pole and the Brit are presumably poor. And yet their respective responses to that fact are very different. One has a sense of the intrinsic dignity of work — at whatever wage — and has no expectation that someone else will provide his bread for him. The other only has a sense of entitlement. But what could ever legitimize such an entitlement except the dignity and moral worth of him who is entitled? The rioters’ very behaviour demonstrates that they are entitled to nothing. And the difference in conduct between the poor Pole and the poor Brit belies the canard that poverty simpliciter causes rioting.

The next line of argument is exemplified in the comments of those like “Ken in Paris”, who discounted the rioting as “mere” property damage. So a few houses burned down and some goods were stolen. Who cares? No real harm was done. Property is replaceable. And besides, insurance will cover most of the damages anyway.

I would advise Ken to do a little “in the other person’s shoes” thinking here, and ask himself whether he would be singing the same tune if it were his home or business that was burned to the ground by a bunch of thugs, possibly while he was still in it. Contrary to what Ken says, property matters. If I work my fingers to the bone to save up for years to buy a home, who is he or anyone else to say it doesn’t matter if a thug burns it down, and that I shouldn’t complain because at least it’s not my body or my life? It certainly does matter. A lot. Even if insurance covers it, dealing with the mess and the associated bureaucracy is no picnic, and in the long run insurance premiums will go up for those living in the neighbourhood, people who are not responsible for the rioting, but are more likely to be its victims. Contrary to Ken’s cretinous worldview, there is no such thing as a free lunch. Somebody somewhere pays sooner or later. And there are things that insurance cannot cover, like photographs, heirlooms, perhaps an urn containing the ashes of a deceased loved one.