August 19, 2009

Capitalism and Communism: A Few Things I Changed My Mind About

I sat over lunch under an apple tree with some old comrades. We reminisced about the U.K. referendum on EEC membership back in 1975. At the time, we all campaigned against. I mentioned that since then I had changed my mind. Why? Because, I offered, the EU had done more to spread and consolidate democracy in central and eastern Europe than any other factor or force. I'm not sure, but I think someone close to my right ear muttered "Shame!" That, and a few other remarks, made me realize that some of those I was sitting with might not have changed their minds about much, despite the passage of a third of a century.

Some things I have kept. I was brought up in a high-minded atmosphere of nineteenth-century rationalism. Now, I would not recommend this for everyone. It was not a lot of fun. I did not really learn how to party, for example. However, I did absorb a lot about the sanctity of truth and the beauty of logic. As for politics, my mother, a lifelong Liberal, imbued me with the notion that:

Whoever could make two ears of corn or two blades of grass to grow upon a spot of ground where one grew before would deserve better of mankind and do more essential service to his country than the whole race of politicians put together (from Gulliver's Travels by Jonathan Swift).

That was the "rationalism" side of my upbringing. The "nineteenth century" bit was the optimism that came with it. I had instilled in me a belief in the possibility of progress -- that we, the human race, could learn from experience and reasoning to make things better for everyone.

These things I still believe.

But some I don't. One thing I used to believe was that the government could always fix things -- at least, if not the government, then some other government.

I lost faith in this idea gradually, a bit at a time. To begin with, I believed it wholeheartedly -- as did we all ("we" being those of us who were studying economics in Cambridge, England, in the late 1960s). The only problem would be if the government was mistaken in fact or logic. If so, it was our job to put it right! We all saw government service as the highest calling of a professional economist. I nearly went that way, but I got bitten by the economic history bug.

A little later, my view of politics had darkened. I no longer trusted the government -- our government, the capitalist government, that was. I became a revolutionary socialist, and then a communist. (By this point I had forgotten about the two blades of grass.) It was still the government's job to fix things, but it had to be a government of the people, by the people, for the people. This outlook wasn't anarchistic, but it was libertarian. I wanted a world, foreshadowed by Marx in the Communist Manifesto of 1848, where,

In place of the old bourgeois society, with its classes and class antagonisms, we shall have an association, in which the free development of each is the condition for the free development of all.

What kind of government would that be? Well, although a communist, I did know it wasn't the Soviet government of the day. I had lived and studied in Moscow; I knew it was a police state and didn't much like it, although there were other things I was ready to admire. But the voices from the Soviet bloc that I listened to were the Czechoslovak and Polish reformers (some of them now exiled to Britain) and, in the Soviet Union, democratic Marxist dissenters like Roy Medvedev. This was the now forgotten era of Eurocommunism which, germinated by the 1960s, blossomed briefly during the 1970s. Italian and Spanish communists put forward the daring view that Soviet socialism had something missing from its makeup. The Russian Revolution of 1917, although not a mistake, had driven a wedge between democracy and socialism. In Britain some communists, but by no means all, took this up. It was our job to put democracy and socialism back together. (We failed.)

We debated the mistakes and crimes of Stalinism. This debate turned out to have some unexpected twists. In the Great Terror of 1937, Stalin had murdered a million people. No one really wanted to defend this. Those who wanted to support the Soviet Union on principle generally divided into two. One lot went into denial: some real enemies had been justly executed, and the rest was a fabrication. Others accepted the truth, but stuck to the line of Khrushchev in 1956: it was the fault of Stalin and a few leaders, who had died or been got rid of, and everything else was basically healthy, so that made it okay.

More disturbing, if anything, was the problem of the far more numerous victims that Stalin didn't intend, but killed anyway: for example, the five to eight million deaths resulting from the famine of the early 1930s. There was no plan to kill them, but they died because Stalin's drive to industrialize the country took too much food from the villages, leaving not enough to keep the rural population alive. Their bones were buried in the foundations of socialist construction. This was harder for some to face up to than premeditated mass murder. If a death was a crime you could convict the murderer, but killing by mistake placed the whole Soviet system on trial.

We wanted to heal the rift between socialism and democracy. We were failing, but we didn't know it yet. For the mid-1980s saw the coming to power in the Soviet Union of a leader who walked and talked like us: Mikhail Gorbachev. Like us, Gorbachev wanted to put socialism back together with democracy. The Soviet Union could become a free, democratic society! We were re-inspired, briefly.

It wasn't all philosophy and infighting. While disagreeing on history and the Bolshevik Revolution, we lived in our own country in the present. Putting differences aside, we engaged in many campaigns. We fought for jobs and full employment, opposed racism, supported strikers, marched for peace, campaigned for votes, and worked to enliven and empower our local communities.

Some other beliefs that I still held at that time mirrored my faith in political action to put things right. One was that fairness matters more than efficiency. In the late 1980s, shortly before his final illness, I became friends with Peter Wiles. We soon understood each other pretty well. Given our different starting points -- in many ways he was a classic liberal -- he was exceptionally kind to me. But even when he was no longer quite sure who I was or why I was there, he would turn to me suddenly and say: "Efficiency! You've never paid enough attention to efficiency! Efficiency is very important!" And he was right. Because, the more efficiency you have, the more blades of grass and ears of corn you have, and the easier it is to be fair. At the time, this was something that I was still thinking about.

Then the Soviet experiment came to an abrupt end, a complete and total failure. Sometime early in 1991, I decided that the era ushered in by the Bolshevik Revolution was over. It was time to move on. I didn't know where, but I knew I couldn't stay where I was. I turned in my party card, and that was it.

A few years later, I was still stuck with nineteenth-century rationalism, but I had changed allegiance from Marx and Engels to Smith, Ricardo, and Mill. In economics and politics I had become a liberal. I was happy -- as most liberal economists are -- with progressive redistribution through taxes and benefits, and tax-financed health and education services. I still had an optimistic belief in progress. But I no longer thought the government could drive progress, or fix everything, and I didn't even want it to do these things any more.

Political economy and the study of bureaucracy helped me to this view. Politicians and government officials, I realized, are not to be judged by their high-mindedness. Whether capitalist or socialist, under democracy or a dictator, political leaders and civil servants are self-interested. If the incentives align their private interests with those of society as a whole, well and good. Mostly, however, this is not the case. I ceased to believe that good government needed only correct facts and correct logic. I began to grasp the possibility that governments could fail systematically, perhaps more often than markets could fail.

From there it was a short step to the idea that a good way to organize society is to place the government under strict constitutional constraints, and let the citizens govern themselves as much as possible.

There were plenty of things I struggled with then, and still do today.

One is climate change. If climate change is (in the words of the Stern report) "the greatest market failure the world has ever seen," then it is clear that without some kind of political action there is no solution. (You can actually read me struggling with this in my first and only article about climate change, written way back in 1991. I had figured out the political action problem, although in a crude and overdramatic way, but not yet the coordination problem that goes with it.)

Another is military intervention. I still thought military force had a purpose in the modern world and, to be perfectly honest, I still do. That doesn't mean I know exactly what that purpose is. Here's an example. I was in favour of the U.S.-led invasion of Afghanistan in 2001, and less surely in favour of the invasion of Iraq two years later. What do I think now? There is a lot of evidence now to suggest I was wrong. I still think Saddam Hussein brought defeat on himself by pretending to have weapons that he wanted to have, and had tried to develop, but did not in fact possess. Also, I think the full consequences will not be known for many years, and could well differ greatly from what seems obvious now. Still, that is to anticipate hindsight that we don't yet have.

More to the point is this. I never forgot a conversation about Iraq with an American friend and fellow economic historian. I visited his university in November 2004 when Bush had just won his second term. Depressed and angry, Tim exploded at me: "You ex-coms are all the same!" (I wondered how many he knew.) "When it comes to military intervention you still think the state can fix everything." I think he had me just right. I was skewered.

A third thing I struggle with is who gets my vote. I favour policies that are economically conservative, socially liberal, tolerant and generous in international affairs, interventionist when forced but always reluctant and mindful of the perils of selective intervention. The only party that would be all these things is a party that is not interested in power. No party is all of these things in any country that I can think of. But if we don't vote, I believe, they will take our liberties anyway.

The last thing I want to mention is what it has meant to me to have spent the last eighteen years working in and with the Soviet state and party archives. First, a wonderful privilege: what luck, that I was granted such an opportunity. I have used it to work on a wide range of topics -- statistics, economic planning, growth and development, wartime mobilization, defence planning and procurement, decision making, information, secrecy, lying, cheating, whistleblowing, and repression. There is so much to study! This was a state of 200 million people and one sixth of the world's land surface that recorded everything of note in millions upon millions of documents over 70 years.

And second, a strange voyage of discovery, hard to define in a few words -- but I'll try. In general, no great surprises. The documents show a vast, centralized dictatorship with a mailed fist and a decaying metabolism. But we knew that, already. The fact is that academics and writers older and better than me, the dissidents and scholars of Peter Wiles's generation, had already worked out the main dimensions and characteristics of the Soviet system, its politics and economics. This was a state that just had too much power. 

In specifics, though, my sense of shock, accompanied by a full span of emotions from grief to laughter, is continually renewed by the opening of each new file. Two examples: First, how did I get interested in secrecy? I was working on Soviet military procurement. Every year the government gave the Red Army a cash budget to buy new equipment. Soldiers toured the factories to work out what weapons were available and at what price. Industry was supposed to sell weapons to the military at cost price. So, the officers' first question tended to be: "How much does that cost?" And the standard answer? "We can't tell you. It's a military secret." It sounds ridiculous! But it worked! Year after year and decade after decade, it worked. That told me there was something interesting and remarkable in the operation of Soviet secrecy that needed to be understood.

Second example: Earlier in the summer I took a first look at the files of the Lithuanian KGB, newly acquired from Vilnius by the Hoover Institution archive. Every year the KGB second administration, responsible for counter-intelligence, made a plan of work and a report of work. They enumerated the thousands of "objects" that, in the course of the year, they would aim to monitor, intercept, warn off, compromise, recruit, blackmail, or arrest, and the hundreds of informers they would deploy to achieve the plan. This is what the KGB did in Lithuania year after year, right up to the end of the 1980s. The term "object" is no mistake; they coldly manipulated "the lives of others" with casually understated brutality. Suddenly at the end of the 1980s the endgame arrived, and a hundred thousand people were on the streets, demonstrating for independence -- half of them, party members! They were taken completely by surprise! They'd been watching the wrong people! (Or had they? Again, there's a story in this.)

And finally, an inner struggle between the calls of science and morality. As a social scientist, my first duty must be to understanding. Understanding comes from new knowledge, and there is so much new knowledge in those dusty files and blurred microfilms! Judgement should come later. But there is also a feeling that spreads involuntarily from my gut, a voice that I can't shut out: Reagan was right. This was an evil empire.

Do I regret my past associations or activities? No. I believed or did many things that seem silly or misguided with hindsight, but I did not betray anyone or do anything really wrong. Many good people who inspired me, both as idealists and as activists, belonged to the communist party. From them I learned how to translate ideals into action, and how to work with people of differing views, build cooperation, and get things done in the face of criticism and opposition; it is hard to imagine that I could have learned these things in any other way. One thing I learned was always to start from the world as it is, not as you would like it to be. This was one reason I did not write off the Soviet Union at the time. Which brings us to mistakes. Well, they are supposed to help you learn. I made many, many mistakes and this just gave me plenty of scope to learn from them. Of course, I probably did not learn all that I should have, and I probably made many more mistakes than I ever recognized.

No doubt there is some degree of self-serving fiction in my story. The way I tell it, I remained true to the values I got from my mother: truth and reason before everything else. The facts changed, so I changed my reasoning. The world changed, and I moved on. But there could be other versions.

My children might say: In his youth, Dad was a free thinker. He got older and more established, put on weight, and settled for a comfortable life in an armchair.

The old comrades I lunched with might not go along with that. After all, they got older too, but they did not settle for comfort or accommodate to new times. They remained true to the cause. Among them, some might tell a story of treachery and betrayal, in which I began with my heart in the right place, but eventually sold out the cause in return for academic status and reputation. Others might wonder if I wasn't always a middle-class revisionist, just playing with politics, an enemy within from the start, never a true comrade. Somewhere in this tangled tale lies the golden thread of truth -- but where? You choose.


August 18, 2009

Prince Charles: A Time to be Silent?

Writing about web page http://www.guardian.co.uk/uk/2009/aug/17/prince-charles-national-trust-patronage

Recently I thought: I long for Prince Charles to be king. Then, he will have to be silent.

This scientifically uneducated, culturally backward-looking, self-indulgent, self-pitying moral coward believes that he has a mission to educate the nation and control its thinking about the environment, health, aesthetics, and family values.

Have I overstated things? Perhaps Prince Charles is no worse than anyone else. He has made mistakes, as we all have done. But he is no better than the rest of us, for sure. Nothing in his life qualifies him to tutor us, other than the accident of his birth.

Prince Charles surrounds himself with advisers, none of whom has the courage to tell him that there is a time to be silent.

My hope was that, once he became king, Charles would have to step back from controversial posturing, so that we would hear from him less.

But then I realized that, as king, he will have even more opportunity than now to meddle in secret, behind the scenes.

I have always been a republican, but not a passionate one since I came to appreciate the value of long-established institutions.

Prince Charles's behaviour may increase my enthusiasm for constitutional change.


August 14, 2009

Health Care: A Letter to Americans

Writing about web page http://www.telegraph.co.uk/news/worldnews/northamerica/usa/barackobama/6001372/Sarah-Palin-calls-Barack-Obamas-health-reform-plans-evil.html

Working on both sides of the mid-Atlantic discipline of economics, I find I spend a certain amount of time trying to interpret America to my British friends and colleagues. Less often, I have to do it the other way round. Now seems to be one of those times. Britain needs to explain itself to America.

It seems Britain has become an issue in America: specifically, our national health service. Recently Chuck Grassley, a Republican on the Senate finance committee, was quoted as saying:

I don't know for sure. But I've heard several senators say that Ted Kennedy with a brain tumour, being 77 years old as opposed to being 37 years old, if he were in England, would not be treated for his disease, because end of life – when you get to be 77, your life is considered less valuable under those systems.

Now, I have only limited claims to offer any insight worthy of note. Specifically, I'm not a health professional (or a health economist). But I am a person, so from time to time I get sick. As a parent I had to see my children, now grown up, from birth traumas through the usual illnesses and accidents of childhood and adolescence. Recently I turned 60, so I have started to encounter conditions that go with aging. Within the last 20 years I experienced the deaths of both parents, both of advanced age, one after a long illness and the other after a short one. Along with all that, I sometimes worry about nothing. For all these reasons, I have had plenty of personal experience of health care in the U.K. and consider myself qualified to talk about that. So:

Dear Americans, here are some personal answers to some of the questions that seem to be on your minds.

  • Has the British national health service ever been a bad experience for you?

No. It has looked after me and those I love with unstinting professionalism.

  • Has the British national health service ever denied you or your family treatment on grounds of cost?

No. We have occasionally been faced with delays for testing or treatment that were a little longer than was comfortable. It was more urgent in our eyes than in the eyes of the medics and managers.

  • Do you worry about whether the state will pull the plug on you when you get older?

Absolutely not. I do worry that I will be kept alive beyond the point where I would prefer to slip away. That's another story, not for now.

  • Is that just because you're a national treasure? Do the national health service bureaucrats just give out privileges to famous people?

Hmm, let's think about that. It sounds like the Stephen Hawking argument -- that Hawking survived our health service only because he was famous. I'm fairly sure my NHS doctor doesn't know I'm famous. That's suggested by the fact that last time I saw him he asked me what I did -- and then wrote it down. My wife and children like to think I might be famous but for some reason I'm keeping it from them. In fact, I'm keeping it under wraps so effectively that even I don't know I'm famous. Nope, I think I'll be treated exactly the same way as everyone else. Just like Stephen Hawking, in fact.

  • Doesn't the NHS ration health care?

Sure. The truth is that health care is rationed everywhere. In a society without public provision, it tends to be rationed by price, or by the insurance premium. Those that can't afford it, don't get it. In the national health service, affordability is reckoned at various levels, national and local, but not in terms of the depth of my personal pocket.

  • Is the NHS perfect, then?

No. It has many of the imperfections of government provision. For one thing, it can be squeezed by budgetary limits. As a result, bringing in expensive new hi-tech treatments can be at the cost of basic procedures or attributes such as cleanliness and diet in hospitals. Results are uneven: Britain is not very good at diagnosing and treating some cancers, for example. I could go on. The main thing is that, despite many imperfections, it basically sort of works. Specifically, it has been working for half a century without leading Britain to Nazi eugenics or a communist dictatorship.

You have to notice that David Cameron, who leads the most free-market mainstream political party that we have, is vociferous in defense of the national health service. Why? Because he knows it is very, very popular. It's popular because, despite the imperfections, it works.

  • If you have to be in hospital, wouldn't you rather be in hospital in the United States?

Yes. It's a no-brainer -- American hospitals are the best in the world. At least, that would be my preference, conditional on having full insurance cover. But if you changed the question slightly, my answer would change. If the question was: "Unconditionally, which system would you prefer to live under?" then my answer would be the British one, because then I would not worry about needing treatment for conditions that were not covered by my insurer, or about exhausting the limits of my cover, or about possibly losing my job and my cover with it -- not only mine, but my family's cover too. Here, I am not afraid to be ill or injured. Of course, that's my personal choice; others might choose differently. But it is not an irrational choice.

  • How do you square acceptance of tax-financed health care with free market economics?

The market economy can solve most problems, but not this one. There are three reasons. First, according to market principles, the consumer is sovereign. In the market for medical services, most of us face a huge difficulty in trying to enforce this idea: the doctor knows best! We are too ignorant, and too emotionally involved, to be the best experts in our own care. That's why it makes sense for a powerful intermediary to buy medical services for us. That intermediary can be either the government or a private insurer. This leads to the second reason: private insurers like to cherry-pick their risks. Poor people have consistently worse health outcomes, and so make poor risks. When the only intermediaries are private insurers, they will inevitably tend to price poor people out of the market. Only a government scheme can make sure that poor people are included. The third reason is that poor people should be included. This is on several grounds: social justice, including justice for their children, who are not to blame for their parents' life choices; and the practical ground that otherwise poverty will spread untreated diseases through the community.
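To make the cherry-picking point concrete, here is a minimal sketch in Python. It is only an illustration of the logic, not a calibrated model: the income distribution, the health costs, the insurer's loading, and the affordability limit are all numbers I have made up.

```python
# A minimal sketch with made-up numbers: risk-rated premiums plus an
# affordability limit can price low-income (higher-risk) households
# out of private cover.
import random

random.seed(1)

N = 10_000
LOADING = 1.2        # insurer's margin over expected cost (assumed)
AFFORDABLE = 0.10    # max share of income spent on cover (assumed)

covered = {"poor": 0, "not poor": 0}
count = {"poor": 0, "not poor": 0}

for _ in range(N):
    income = random.lognormvariate(10.0, 0.6)             # annual income, pounds
    group = "poor" if income < 20_000 else "not poor"
    expected_cost = 2_500 if group == "poor" else 1_500   # assumed: worse health when poor
    premium = LOADING * expected_cost                      # risk-rated premium
    count[group] += 1
    covered[group] += premium <= AFFORDABLE * income       # buys cover only if affordable

for g in covered:
    print(f"{g}: {covered[g] / count[g]:.0%} privately insured")
```

With numbers like these, the group with the worst expected health is exactly the group that private cover prices out, which is the gap a tax-financed scheme closes.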

Within our national health service I am in favour of the unevennesses that give us individual choices. It's a good thing if all doctors, hospitals, therapies, and procedures are not exactly the same. This lets us compare results and make choices among them. I'm also in favour of the internal markets that let doctors choose between consultants and facilities from which to purchase care for their patients.

In short, the NHS does violate free market principles, but with health care these principles are going to be violated anyway -- even in a free market. Health care raises issues of market power, information, and health spillovers that do not arise in most markets. It is an exception.

  • Doesn't government health care create huge bureaucratic overheads?

Well, yes. Interestingly, however, the overheads of government-financed medicine may not be as large as the overheads of insurance-based health care. As far as I understand it, my country commits a much smaller proportion of its GNP to health outlays -- and gets considerably better average outcomes than the United States, measured by life expectancy and many kinds of morbidity. Of course, there are confounding factors that complicate our understanding of the causes. Government purchasers are not necessarily any better than private insurers at holding down underlying costs. But it is not hard to see that taking ability to pay (or insurance status) out of the equation cuts out a lot of bureaucracy.

  • Still, wouldn't you prefer to pay a voluntary insurance premium over compulsory taxes?

I pay both. And I do so very willingly. My taxes go to the national health service, which ensures that I and my loved ones are fully covered both for emergency treatment and for all other available procedures, though not necessarily exactly when we want them. My insurance premium then lets me bypass many queues if I need to. Moreover, the two systems mesh smoothly, allowing me to switch back and forth between them -- as I did recently when my NHS doctor recommended some tests that could not be done instantly within the national health service. My insurer paid so that I could have them done privately, and I took the results back to my NHS doctor.

In other words, it's not a question of government versus private provision. You can have both working together -- and in fact, in the United States, you do have both. It's a question of the right balance. The balance we have in the U.K. right now may not be perfect, but it is not a bad balance.

  • Don't you mind that your taxes also pay for the care of needy and feckless people that pay no taxes themselves?

No. In fact, I'm very happy that the lazy scumbags get health care too. This is partly on pragmatic grounds, so that they do not pass their diseases onto others, and so eventually to me. Another reason is moral: poor and needy people have children who themselves can be in need of medical care. And from the moral to the personal: the poor and needy of the future might turn out to be my grandchildren! Or even my children! (I didn't mean to say that, it just popped out.)

  • Does the NHS explain your bad teeth?

No, I obtain my dental care privately under an insurance scheme. My bad teeth are connected not with socialized medicine, but with the fact that I spent my childhood in Britain in the 1950s.

In conclusion, dear Americans, you must make your own minds up. We Brits can understand perfectly well the importance of private versus public, free market versus government, and individual versus collective responsibility. These are big important things that all of us should and will debate freely.

What we don't get is the depth of anxiety with which some of you face the prospect of wider sharing of health care. British experience gives plenty of food for thought. We may not have got it exactly right. But the choices we have made are well within the parameters of a society that is free as well as modestly equitable.

Keep well, Mark


August 06, 2009

How Can We Get to See What's Coming Round the Corner?

Writing about web page http://www.wehc2009.org/programme.asp?find=world+in+2030

At a meeting I attended earlier this week, some of the world's best economic historians discussed how the world might look in 2030, based on their knowledge of the long-run trends at work over the last couple of centuries (or, in the case of China, the last couple of millennia). People talked about trends, models, and forecasts, using a lot of numbers and graphs. The picture was generally optimistic, in a moderate sort of way.

One speaker worried about the small number of big things that, although they happen only rarely, might completely derail our visions of the future: things like deadly pandemics, Great Depressions, and global wars. Another speaker worried about the rise of nationalism, how far it might go, and whether it could then have unpredictable effects.

This made me think about people that haven't had the benefits of a training in economics -- most historians, for example, but of course not only them. It occurred to me that such people often think about the future in a completely different way, a way that is much more intuitive than economists' trends and models. They think about the future by telling stories taken from the past. (This is not a completely original thought. I began to think about it after reading "The Political Economy of Hatred," by Ed Glaeser, published in 2005 in The Quarterly Journal of Economics 120:1, and more recently Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism, by George A. Akerlof and Robert J. Shiller, published by Princeton University Press in 2009.)

How do people use stories from the past to think about the future? First, they think about the conditions obtaining in the world today. Then, they scan the past for stories that began with initial conditions somewhat like these. They ask: "What happened next?" Then, they let the story unfold. From the story, they work out what might be about to happen in our own future. In this way, they try to see what's coming round the corner but still hidden from direct view.
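For what it's worth, this procedure can be caricatured as a nearest-neighbour search over past episodes. The sketch below is mine, not something anyone at the meeting proposed; the episodes, the condition scores, and the outcomes are invented purely to show the mechanics.

```python
# A toy sketch of analogical forecasting: describe today as a vector of
# conditions, find the most similar past episode, and read off "what
# happened next". Episodes, scores, and outcomes are invented.

# each condition scored 0-1: (financial crisis, nationalism, stalled democracy)
episodes = {
    "Germany, 1912":       ((0.2, 0.8, 0.9), "great-power gamble, world war"),
    "world economy, 1930": ((0.9, 0.6, 0.3), "depression, protectionism, war"),
    "oil shock, 1973":     ((0.4, 0.3, 0.1), "stagflation, then recovery"),
}

def closest_analogy(today):
    def distance(past):
        return sum((a - b) ** 2 for a, b in zip(today, past))
    name, (conditions, outcome) = min(
        episodes.items(), key=lambda item: distance(item[1][0])
    )
    return name, outcome

today = (0.8, 0.5, 0.4)   # invented scores for the present
name, outcome = closest_analogy(today)
print(f"closest analogy: {name} -> {outcome}")
```

What the caricature makes explicit is that whichever episode happens to lie nearest gets chosen, and its ending is read off as the forecast, whether or not anything dramatic would really follow -- a point I come back to below.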

Here are two examples.

  • Today, we are in the early stages of a Great Recession that was preceded by a financial crisis. That's somewhat like the world economy in around 1930 or 1931. What happened in the Great Depression was a global contraction followed by the breakup of world markets, the rise of nationalism, the attempt of Germany, Italy, and Japan to carve out new empires, and World War II. Is that what might come next?
  • Russia today is a great power that began the transition from totalitarianism to democracy -- and got stuck half way. The Russian political elite feels encircled by an old enemy, NATO, in the West, and a new rival, China, rising rapidly in the East. With a shrinking population, Moscow may well feel that time is running out. That's somewhat like Germany in around 1912. Germany was stalled half way from Prussian absolutism to parliamentary democracy. Germany was a rising power, but with the sense of being encircled by old enemies, Britain and France, in the West, and new rivals, Russia and Japan, that were rising even faster in the East. What happened next in that case was that the German political elite took an immense gamble. They launched World War I, not because they were confident of winning, but because they feared that time was running out. They feared the consequences of doing nothing -- the certain continuation (as they saw it) of peaceful decline -- more than the combined risks of victory and defeat if they started a military adventure. Is that what might come next?

Story-telling like this has some remarkable features. First, it is indeed a way of thinking about the possibility of rare and unpredictable events. As such it is immensely powerful. Its intuitive appeal is much greater than models, charts, and numbers. It speaks the language of nations and politics: shared experiences, common destinies, collective rights and wrongs. It is easily voiced by leaders and heard by followers untrained in statistical thinking about trends and standard errors. As a result, while politicians may turn to economists for technical advice, they get historians to help write their speeches -- Arthur Schlesinger Jr (John F. Kennedy), Richard Pipes (Ronald Reagan), and Norman Stone (Margaret Thatcher).

Second, story-telling is deliberately selective. When we scan history for stories, we look by definition for sequences of events that have a beginning, a middle, and an end. In the middle, something happens that is out of the ordinary, dramatic, and unexpected. Invariably, we rule out all those past historical circumstances that also somewhat resembled the present day, but after which there were no surprises and nothing much happened.

Third, story-telling typically sounds an alarm. In history, dramatic events are rarely good news. The good news in history has generally been made up of the slow, steady progress of emancipation, literacy, and prosperity. Such good news is easily illustrated by statistics and trends, but does not make good stories. It is the bad news of crises and wars that makes good stories.

Fourth, exactly because story-telling is alarmist, an entirely legitimate purpose of stories may sometimes be to sound the alarm about the risks we face and so avert their realization. Putting it in its best light, we sound the alarm of another Great Depression so that governments will take the actions necessary to save us from having to relive the experience. We warn of the danger of a new war so that governments will change their policies to a peaceful track. One result is that it is generally very difficult if not impossible to test the efficacy of story-telling as a form of prediction.

Fifth, some stories can be self-fulfilling. There is a particular kind of collective story, for example, that communal identity politicians like to tell (Glaeser wrote about this in "The Political Economy of Hatred"). These are stories of past hate crimes allegedly committed by some other ethnic or religious group against their own group: Black against White, Germans against Jews, Jews against Palestinians, Protestants against Catholics, Sunni against Shia -- and, in all cases, vice versa. Such stories can be extrapolated into predictions of future hate crimes yet to be committed, and then into justifications for hateful and violent action to preempt the future crimes.

Related to all this is the problem that we control the initial conditions of the stories we select only very imperfectly. The world economy today is only somewhat like the world economy of 1930; in many ways it is quite different -- and the differences may well turn out to be crucial. In the same way, Russia today is only somewhat like Germany in 1912. And so on. Thus, the stories we tell have the capacity to be deeply misleading about the true underlying risks in the world today.

If the risk of war or depression illustrated in the story does not materialize, this could be because telling the story stimulated effective action to avert the risk, or because the risk, although real, happened not to materialize this time (i.e. we were lucky), or because the risk was nonexistent in the first place; we may have no idea which is the case. If the risk of community violence does materialize, we don't know whether the underlying story merely predicted it -- or actually precipitated it.

In summary, such stories are powerful. They have great potential to illuminate the risks we face, but this potential is also dangerous; it is a power to accentuate risks, as well as to illuminate them.

I draw two simple conclusions from this. One is that economists and economic historians interested in addressing the wider public should think carefully about the stories that can put our messages across. The other is that we should pay close attention to the stories that others narrate in public about what has happened in economic history: we should look out for these stories, identify them, test them carefully against the evidence that we have, and then report the results to the public ourselves.


July 29, 2009

The Social Work Taskforce: Why Not Just Pay Them More?

Writing about web page http://publications.dcsf.gov.uk/eOrderingDownload/DCSF-00752-2009.pdf

The British government's Social Work Task Force was set up to review "frontline" social work practice and to recommend improvements and reforms of the social work profession. Its interim report, out today, is entitled Facing Up to the Task.

Everyone can see that social work in our country is in a mess. If social workers fall down on the job they are treated like murderers; if they try to do it properly they get treated like the Gestapo. If they spend all their time on the "front line" they have no time left to talk to each other and to other agencies; if they talk to each other the way the government requires, they spend all their time doing paperwork and have no time for their clients. In the words of the report (page 12):

Widespread staffing shortages mean that social work is struggling to hold its own as a durable, attractive public sector profession, compromising its ability to deliver consistent quality on the frontline. There is no robust, standing system for collecting information on local and national levels of vacancies, turnover and sickness, and for forecasting future supply and demand. Local authorities are finding it hard to identify effective methods for managing the workloads of frontline staff. Staff shortages and financial pressures are making these challenges harder still.

In other words, a big problem facing social work managers today is that demand exceeds supply. That sounded to me like an economic problem. As an economist, without a background in social work, I thought about the Economics 101 solution: if demand exceeds supply the price should rise. Maybe social work would become more manageable if we paid our social workers more?

Sounds simple -- maybe too simple. How would it work? Well, in several mutually reinforcing ways:

  • With higher salaries, more people with better qualifications would be attracted into training for the profession. (In today's Guardian, David Brindle quotes Sue Berelowitz, deputy children's commissioner for England, as saying some universities accept students on social work courses with E grades at A-level; some courses have pass rates for essays and exams of just 30%.)
  • There would be fewer unfilled vacancies.
  • A larger number of better qualified and more competent social workers would share out the work, so workloads would become more manageable.
  • Properly managed, with higher salaries and lower workloads, even existing social workers ought to become more effective.

Given higher salaries, it is true, social service departments would probably have to reduce their social work staff complements. But this would be a price worth paying. With fewer posts unfilled, the number of social workers actually in post ought to increase. Departments would be spared the expense of frequent resort to expensive agency workers and consultants to make up for staffing shortfalls. And at least some social work catastrophes would be avoided, sparing everyone those other sorts of costs that then arise: deaths and injuries, investigations and trials, commissions of inquiry, imprisonments and sackings.
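For what it's worth, here is the Economics 101 point as a back-of-envelope calculation in Python. Every number in it -- the baseline salary, the stocks of posts and workers, and the two elasticities -- is an assumption of mine, chosen only to show the direction of the effect, not to estimate its size.

```python
# A back-of-envelope sketch, not a calibrated model: a higher wage narrows
# the gap between funded posts (demand) and social workers willing to fill
# them (supply). All baseline figures and elasticities are assumptions.

BASE_WAGE = 30_000        # assumed current salary, pounds
BASE_SUPPLY = 80_000      # assumed workers willing to work at that wage
BASE_DEMAND = 100_000     # assumed funded posts at that wage
SUPPLY_ELASTICITY = 0.7   # assumed: % change in supply per 1% change in wage
DEMAND_ELASTICITY = -0.3  # assumed: % change in funded posts per 1% change in wage

def market(wage):
    change = (wage - BASE_WAGE) / BASE_WAGE
    supply = BASE_SUPPLY * (1 + SUPPLY_ELASTICITY * change)
    demand = BASE_DEMAND * (1 + DEMAND_ELASTICITY * change)
    return supply, demand

for wage in (30_000, 35_000, 40_000, 45_000):
    supply, demand = market(wage)
    unfilled = max(demand - supply, 0)
    filled = min(supply, demand)
    print(f"wage {wage:,}: posts {demand:,.0f}, filled {filled:,.0f}, unfilled {unfilled:,.0f}")
```

On assumptions like these, a higher wage trims the number of funded posts a little but fills far more of them, which is the trade-off just described.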

Given higher salaries, why would the existing social workers perform better? There are two reasons. First, the existing workers would have more, better people around them with whom to share the work. The other reason is that, the higher your salary, the greater is the cost of losing your job. Assume that bad social workers eventually lose their jobs. If so, then a higher salary would increase the cost of being a bad social worker, and so make existing social workers work harder to avoid being seen as bad. Of course, this depends on good performance management being in place so that bad social workers are actually let go.

All this is first-year economics. I wondered what the Social Work Task Force would make of the first-year answer. I note that the report emphasised the need to achieve "a much more sophisticated understanding of supply and demand." I looked for what this might involve. I found two things (both on page 18):

First, numbers of workers supplied and demanded:

A better future for social work depends on an appropriate supply of suitably qualified applicants into stable teams with the right mix of experience. The supply, recruitment and retention of social workers is therefore a central issue for reform. As a prerequisite for improvement, there need to be robust and durable arrangements for understanding and forecasting supply and demand across training and the job market.

I think what this means is that, in the view of the task force, one of the main instruments for bringing supply and demand into balance in the long term is forecasting demand and then increasing training places to match. The reference to retention, however, suggests an important role for pay, the factor that an economist would see as bringing supply and demand into balance. This brings us to the second thing I found (which actually came before the first one):

Social worker pay has also been raised in a number of different ways with the Task Force.

  • Levels of pay are felt by some to be too low and not reflective of the importance of what social workers do and the pressures they currently work under. However, others have argued that levels of pay in themselves are not necessarily a decisive issue but assume importance because of wider problems with status, recognition and investment in training, support and the working environment.

  • Pay differences within local authority teams between permanent staff and agency staff (who may not be handling the same complexity of cases) are a source of some frustration and disillusionment.

  • Shifts and variations in pay between local authorities are causing some dissatisfaction and may be contributing to movement and turnover in the workforce, with authorities competing to attract staff and address shortfalls through localised improvements in pay and conditions. This has led some to suggest that the profession needs a single national framework for pay and other conditions of employment in the statutory sector.

Now, I understand very well that pay is not everything. If it were, I wouldn't be an academic. People come to many jobs, especially those involving education, health, and social care, because they are drawn to the work itself rather than the pay packet. In fact, people who care only about money would make bad academics and probably bad social workers too. Because of this, not offering very high pay can be a way of screening out people that care only about money (at this point we've moved from first-year economics to the second year).

However, it does not look to me as if the main problem in British social work today is that the profession is being invaded by money grubbers. On the contrary, there is an equal risk from offering relatively low pay: it can be felt as society's way of saying that the job is unimportant and a professional motivation is rubbish.

Because of this, for the sake of their motivation, it is important to pay people in proportion to their responsibilities -- and frontline social work is a very responsible job. If social workers are to match up to their responsibilities, commitment alone is not enough. The profession also needs to attract people that, in addition to being committed, are organized, fair-minded, team-oriented, competent, knowledgeable, and decisive, qualities that are valued highly -- and often highly rewarded -- in business. All this suggests that raising salaries could be part of the solution.

Raising salaries would seem to be a much more promising line of advance for the profession than the failed route of responding to crisis through frequent and costly reorganizations, reforms, and commissions of inquiry and task forces.

To give an example, every time there is a disaster, we are told that social workers failed to talk to each other and to other agencies. But competent, organized, knowledgeable, motivated social workers who are not crushed by overwork will talk to each other and to doctors and teachers without being told to do so. It becomes necessary to force social workers to do these things, and to create artificial channels for them to do so, only because they are underpaid, underskilled, and overloaded.

To return to the report, if local authorities are being forced to raise salaries in order to compete for scarce social workers, isn't that a good thing? To judge from the tone of the task force report, "authorities competing to attract staff and address shortfalls through localised improvements" is being presented as a negative; "a single national framework for pay and other conditions" is put forward as the alternative to employer competition. When employer competition is pushing up pay, it looks like there are those that would prefer to hold it down.

A final thought on good and bad uses of money. One of the task force's headline recommendations is "The creation of a national college for social work" (page 40):

We are therefore exploring the case for a new organisation to support social work, which can play a role similar to that of the Royal Colleges that support the medical and allied professions. This might take the form of a national college for social work in England. ... In particular, the Task Force is interested in the potential for the national college to have a key role in driving learning and best practice in social work and provide a strong voice which speaks to the media about the profession. We are also considering the roles it might play bringing coherence to the professional and occupational standards which underpin different aspects of social work training and practice, and in relation to regulation of professional practice, training and education.

This Royal College of Social Work (say) would be in addition to the bodies that already exist: the General Social Care Council (GSCC), the relevant sector skills councils, the Social Care Institute for Excellence (SCIE), and the British Association of Social Workers (BASW).

How much would this cost? In 2007/08 the Royal College of Nursing had 400,000 members and an annual budget of around £80 million, so around £200 per head of the profession it serves. In the same year the GSCC spent about £42m and the SCIE another £8m, so £50m for these two bodies to cover around 100,000 registered social workers and social work students. In other words, social workers were already paying around £500 per head for their own statutory regulation. (I'm not sure why it already costs so much more to regulate and support social workers than nurses.)
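For clarity, the per-head arithmetic can be spelled out in a few lines of Python, using only the figures just quoted:

```python
# Per-head cost of professional bodies: annual spending divided by the
# number of people in the profession each body serves (2007/08 figures
# as quoted above).
rcn_budget = 80_000_000                        # Royal College of Nursing budget
rcn_members = 400_000
nursing_per_head = rcn_budget / rcn_members    # about 200 pounds

social_work_bodies = 42_000_000 + 8_000_000    # GSCC plus SCIE spending
social_workers = 100_000                       # registered social workers and students
social_work_per_head = social_work_bodies / social_workers   # about 500 pounds

print(f"nursing: {nursing_per_head:.0f} pounds per head")
print(f"social work: {social_work_per_head:.0f} pounds per head")
```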

My question: wouldn't we all be better off if, instead of creating yet another expensive statutory professional body, we abolished them all and used the money saved to pay social workers more? Maybe there's a lesson in Economics 101 after all.


July 27, 2009

Rationalising the Macroeconomy

Writing about web page http://www.ft.com/cms/s/0/478de136-762b-11de-9e59-00144feabdc0.html

In The Financial Times on July 21, Paul de Grauwe published the best comment I have read so far about the crisis in macroeconomic policy. If your time is scarce, don't read on; click the link and read him.

De Grauwe makes a fundamental argument, which I will summarize in four steps.

  • Today, macroeconomists are distributed along a spectrum from "Keynesian" at one end to "Classical" at the other. They tend to clump at the extremes, so there are many passionate Keynesians and passionate Classicals, as well as less passionate scholars in between.
  • The Classical macroeconomists expect the macroeconomy to bounce back quickly from a major disturbance (for example, a credit crunch) of its own accord; government intervention is more likely to hinder than help. The Keynesians believe the opposite.
  • For practical purposes, both schools model the behaviour of the people in the macroeconomy as follows: their behaviour is based on expectations of the future that are guided by the model, whether Keynesian or Classical. Classical macroeconomists assume that people expect the outcome of the Classical model to be fulfilled; Keynesian macroeconomists assume the same of the Keynesian model.
  • In both models, these expectations are self-fulfilling.

De Grauwe's punch line:

So what? Does it matter that economists disagree so much? It does. Take the issue of government deficits. If you want to forecast the long-term interest rate, it matters a great deal which of the two camps you believe. If you believe the first [Classical] one, you will fear future inflation and you will sell long-term government bonds. As a result, bond prices will drop and rates will rise. You will have made a reality of the fears of the first camp. But if you believe the story told by the second [Keynesian] camp, you will happily buy long-term government bonds, allowing the government to spend without a surge in rates, thereby contributing to a recovery that the second camp predicts will follow from high budget deficits.

In short, in a Keynesian model, the agents are assumed to expect that a credit crunch will have lasting adverse consequences. As a result they will rein in consumption (because households expect lower incomes) and investment (because firms expect depressed markets). The economy will stay depressed until government action flips the economy back to normal. But in a Classical model, the agents are assumed to expect that a credit crunch will soon be overcome, provided markets are allowed to work normally. Do nothing, and any damage to confidence will soon be repaired. Unnecessary government action, however, by enlarging public spending and debt, will depress long term expectations and so inhibit the restoration of confidence.
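Here is a toy version of the self-fulfilling mechanism, written as my own illustration rather than taken from de Grauwe. Output after a credit-crunch shock depends partly on what agents expect it to be; the weight on expectations and the two belief settings are assumptions.

```python
# A toy model of self-fulfilling expectations: output next period is a
# weighted average of what agents expect and a fixed "fundamental" level.
# If agents believe the slump will fade fast, it does; if they believe it
# will persist, it persists. All parameters are made up.

FUNDAMENTAL = 100.0   # normal level of output
WEIGHT = 0.8          # assumed weight on expectations in determining output

def simulate(believed_persistence, periods=10):
    output = 90.0                     # level just after the credit-crunch shock
    path = [output]
    for _ in range(periods):
        # agents expect the current gap to shrink at the rate they believe in
        expected = FUNDAMENTAL + believed_persistence * (output - FUNDAMENTAL)
        output = WEIGHT * expected + (1 - WEIGHT) * FUNDAMENTAL
        path.append(output)
    return path

classical = simulate(believed_persistence=0.2)    # believe gaps close quickly
keynesian = simulate(believed_persistence=0.95)   # believe gaps persist
print("classical beliefs:", [round(x, 1) for x in classical])
print("keynesian beliefs:", [round(x, 1) for x in keynesian])
```

In each run the realized persistence of the slump tracks the persistence that agents believed in, which is the self-fulfilling element de Grauwe describes.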

This point is not new. I'm not sure who made it originally. It has been around a long time. I checked my notes from 1998/99, the first year I lectured to first year undergraduates at Warwick on this particular topic. I found the following passage:

We’re trying to explain a state of the world in which at least some unemployment is involuntary, money isn’t instantly neutralised by price change, and business cycles last anywhere between 5 and 9 years. The fundamental problem of the RE [rational expectations] approach is that it proves this state of the world can’t exist. Underlying this are some basic conceptual faultlines.

  • Learning from experience may be more difficult than RE theory assumes. Large experiments are rarely if ever repeated under controlled conditions (e.g. joining, then leaving the ERM). Large shocks (e.g. oil shocks, monetary shocks) make it hard to discern the underlying things which remain the same.

  • What is the true model of the macroeconomy? RE theorists tend to assume that most people adhere to a Classical philosophy. But since economists have such difficulty deciding how best to model the economy, it’s not clear why rational non-economists should be different. Policy demonstrably does affect the real economy, so why should rational people believe it won’t?

  • This is particularly important since the outcomes of actions based on RE tend to force the world to conform to the model, not the other way round. What is created here is a "guessing the winner" problem: what’s important in forming rational expectations is not "how does the economy work?"; nor even "how does the economy work in my opinion?"; but "how does the economy work in most people’s opinion", bearing in mind that in forming their opinions they are all asking themselves the same question.

I claim absolutely no credit for this; I was not saying anything original. I got the argument from somewhere or someone else. My point is that the basic paradox in rational expectations has been understood for a long time, but the horrendous policy implications are perhaps only now fully apparent.

How bad does that make economists? Ten years ago I told my students that the idea of rational expectations, although not wrong, contained a paradox. I had no idea how to resolve it, however. One route the profession has taken has been to consider that, just as economists learn, so do non-economists. As a result, macroeconomic models have been developed that incorporate heterogeneous expectations -- when different people in the macroeconomy start out with different models of how the economy works and so different forecasts of the future -- and model how they might then learn from experience. A recent review by George W. Evans and Seppo Honkapohja is here.
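As a crude illustration of what such models do -- a sketch of the general idea, not of the Evans-Honkapohja framework itself -- suppose two groups start from different beliefs and each revises its forecast a little toward whatever actually happens:

```python
# Adaptive learning with heterogeneous expectations, in caricature. The
# realized outcome depends on the average forecast; each group updates its
# forecast toward the realized outcome. All parameters are assumptions.
import random

random.seed(0)

A, B = 20.0, 0.8           # outcome = A + B * (average forecast) + noise
GAIN = 0.1                 # how fast each group revises its forecast
REE = A / (1 - B)          # rational-expectations equilibrium = 100

beliefs = {"pessimists": 60.0, "optimists": 130.0}

for t in range(200):
    average_forecast = sum(beliefs.values()) / len(beliefs)
    outcome = A + B * average_forecast + random.gauss(0, 1)
    for group in beliefs:
        beliefs[group] += GAIN * (outcome - beliefs[group])

print("rational-expectations benchmark:", REE)
print({group: round(value, 1) for group, value in beliefs.items()})
```

With a setup like this, both groups are eventually drawn toward the rational-expectations benchmark; how fast, and by what path, depends on the initial disagreement and the gain, which is the kind of question this literature studies.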

This takes me well outside my comfort zone. I thought about it, however, when a friend forwarded some lines from an internet discussion including the suggestion:

Until the "science" of economics detaches itself from econometrics and unilateral modelling and realises that humans are "rationalising beings", not "rational beings", then the predictions and opinions stemming from its adherents should be treated with caution.

In the context I took the gap between "rationalising" and "rational" to reflect some shortfall of cognition or computation. It wasn't that I disagreed; the suggestion seemed almost trivially true (apart from the reference to econometrics, which seemed silly). What it made me think is this: If humans are "rationalising beings," then so too, being human, are economists. All economic models have cognitive and computational limits. They model reality; they don't and can't reproduce it.

In the often misquoted words of George Box and Norman Draper (from Empirical Model-Building and Response Surfaces, New York: John Wiley 1987, p. 63):

All models are wrong; the question is how wrong do they have to be to not be useful.


July 18, 2009

Afghanistan's Future Lies in the Past

Writing about web page http://www.guardian.co.uk/world/2009/jul/12/paddy-ashdown-afghanistan-policy

A few days ago Nicholas Watt, chief political correspondent of The Guardian, reported Lord Ashdown in the following terms:

In remarks likely to fuel the debate about the future of the war, Ashdown accused Britain and other European countries of setting "ludicrously ambitious targets" of attempting to turn Afghanistan into a fully democratic and progressive nation.

I thought to myself: Of course, Ashdown's right. Then, I recalled a short paper that I wrote and circulated more than 7 years ago (my first draft was dated 4 December 2001), when blogs were in their infancy (and years before I started one). Anyway, no one noticed what I had to say at the time, that's for sure.

My argument was simple: For Afghanistan, real democracy (and real capitalism) was going to be a step too far. If not democratic capitalism, what should we aim for? My answer was: For a country like Afghanistan, even feudalism would be a step forward -- as long as it was of the right kind.

This is a very general argument. It is about our goals, more than the strategy or tactics required to achieve them, or the value and costs of doing so. I don't have a clue about important details like whether we should have fortified bases in Helmand or even whether we are currently winning or losing.

Still, I believe the merit of my general argument has, if anything, been strengthened in the years since 2001. If nothing else, Paddy Ashdown agrees! So, I decided to reprint it. Starting here is the full text, exactly as I revised it on 9 January 2002.

What Afghanistan Needs is the Right Kind of Feudalism

Afghan warlords have been meeting in Germany to try to agree their country’s future. This future looks bleak. Afghanistan lies ruined by decades of foreign intervention and civil war. Its territory is being redivided among heavily armed rival warlords with dreadful records of human rights abuses based on ethnic and religious factions that hate and mistrust each other. How, under these conditions, can Afghanistan’s economy be rebuilt? How can purpose and prosperity be returned to its people?

It might be thought that what Afghanistan needs is a powerful dose of democracy and liberal capitalism. This isn’t going to happen, at least not for a century or so. For a start, take democracy. We think of democracy as “majority rule”. But majority rule by itself is not enough. It’s also important that the majority doesn’t rule by exterminating minorities. In a democracy, minorities have rights that cannot be overridden: rights of free speech, criticism, and opposition. In Afghanistan there are many minorities, but there is also too much hatred and there are too many guns for any minority to be sure of these protections.

Then take capitalism. For a capitalist market economy to work ownership rights must be taken for granted most of the time. Business grinds to a halt if you have to spend all your time guarding your property with guns, or paying lawyers or bribing officials to get what you’re due. If warlords, thieves, or bureaucrats take a cut too frequently, economic life will slow down or come to a near stop. Too much of a cut and the only activity that’s left is when people grow their own food and then hide it until they can eat it. Afghanistan has already come to this.

Feudalism is the best that Afghans can hope for right now. Feudalism emerged in Europe from the Dark Ages, when the costs of fighting became so heavy that warlords got together and chose rulers to keep order among them. The result was a more stable form of society in which everyone had prescribed rights and responsibilities and everyone knew their place. In fact, everyone was fixed in place: peasants in their villages, squires in their manors, monks in their monasteries, kings in their courts. The farmer served the noble by providing him with food and labour. The noble served the king by providing him with taxes and men. Kings and nobles provided justice and protected those under them. As a result, economic progress became possible. Feudalism of the right kind proved prosperous and stable, and left monuments of art and culture that are still admired and loved after many centuries.

It’s true that feudalism was an unfree society. People could not choose where to live, what to believe in, whom to serve, or with whom to trade. Peasant revolutionaries saw it as organised robbery. Women and children were subjected to domestic tyranny. Yet it was not the worst of all possible worlds. For what feudalism restricted first of all was the universal freedom of each to rob and kill all others.

The late American historian Mancur Olson put it like this: such rulers are thieves, but a thief who stays in one place and settles down is better than one who plunders and moves on. The reason is that the thief who settles and rules the territory around him has an interest in its prosperity. He protects the people under him in his own self-interest, because they are his assets. To enlarge his own revenues he gives them legal rights and provides them with services to encourage the economic activity that he can tax. Thus as European society was stabilised on feudal lines, dukes and kings built roads and towns, provided schools, and organised trade. They taxed the trade, which was bad, but they prevented others from taxing or robbing it and this at least was good. They spent heavily on armies and navies; they also patronised the arts and sciences. They provided law: the laws were biased in their favour, but at least there were laws, not just the law of the jungle. Even the hereditary character of the lords and monarchy, which now seems laughably antique, had an important function. By agreeing to the hereditary principle, the nobles gave incentives even to a dying ruler to care about the stability and prosperity of the kingdom he would bequeath to his children, and ensured that the ruler’s death would be followed by an orderly succession, not civil war.
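
The arithmetic of the stationary bandit can be sketched in a few lines. This is a minimal illustration of my own, with invented functional forms and numbers, not Olson's formal model: the more the ruler takes, the less people bother to produce, so the revenue-maximizing rate stops well short of taking everything.

```python
# A minimal sketch (invented numbers) of the stationary bandit's problem:
# the more the ruler takes, the less people produce, so the revenue-
# maximizing rate is well short of confiscating everything.
import numpy as np

def output(tax_rate: float) -> float:
    """Assumed response of production to the share taken by the ruler."""
    return 100.0 * (1.0 - tax_rate) ** 0.5

def revenue(tax_rate: float) -> float:
    return tax_rate * output(tax_rate)

rates = np.linspace(0.0, 1.0, 1001)
best = rates[int(np.argmax([revenue(t) for t in rates]))]

print(f"revenue-maximizing rate: {best:.2f}")                      # about 2/3
print(f"ruler's take: {revenue(best):.1f}")
print(f"output left to the villagers: {output(best) - revenue(best):.1f}")
print(f"take everything instead (rate 1.0): revenue = {revenue(1.0):.1f}")
```

The last line echoes the point above: take everything and there is soon nothing left to take.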

It will be a step forward if the Afghan leaders meeting in Bonn can agree upon this kind of society. But they will only do so if their self-interest lies there, so that they recognise the alternatives of unbridled rivalry and civil war without end as worse. The thief who controls a province promotes its prosperity only while he can be sure some other thief will not invade it and drive him out. The Afghan warlords need to agree some rules of mutual self-restraint. Everything must start from this. Otherwise there will be no rules at all and Afghanistan will return to civil war; or a dictator will emerge to restrain the warlords by force, and Afghanistan will have to undergo a new tyranny.

What kind of feudalism? It was important for European countries that their feudalism was of the right kind. The ruler had to accept limits on his power in relation to both the nobility and all citizens. In England the king’s responsibilities were agreed among the nobles and written down in the Magna Charta of 1215, which also set out principles of justice, ownership, and trade. The Magna Charta stopped the king from robbing, imprisoning, and killing without lawful reason. The result was that the English monarchy became more pluralistic than tyrannical. The state and religion, although not fully separate, at least retained separate powers. Women had some rights, although fewer than men. There was a Parliament, although at first it was only for the nobles.

For England the next few centuries included a necessary civil war to cut down the ambitions of the Tudors and Stuarts, and a period in which burgeoning democracy nearly descended into wholesale corruption. There were many colonial wars in which Englishmen behaved badly to the Scots, Irish, Africans, Indians, and others. At the same time they continued to mistreat large numbers of English women and children and also each other. British history has not been a Sunday School picnic. Still today there is enough poverty and discrimination at home that we cannot afford to be complacent. But by global standards Britain has become a relatively prosperous and stable society. If you were born here or can get in, it is a freer place to live than most.

Is feudalism really the best that Afghans can hope for? I do not mean that we should silence Afghan democrats when they ask for elections or Afghan women when they demand education and a visible role in society. Part of the deal should be their right to speak and be heard. But we should not blame Afghan rulers who do not deliver this immediately and in full. There are worse things they can do, such as return Afghanistan to a perpetual state of internal warfare.

There are implications for the west. The decisive step in reconstituting Afghanistan is to establish the rights of the rulers, not of the ordinary people. If ordinary Afghans are to gather more rights than they have at present, they will flow at first from the self-interest of the rulers, not from ideals of citizenship and democracy or conventions on human rights. Only a stable division of rights among the rulers will provide this. We shouldn’t expect too much from Afghanistan's new rulers. The big rewards that the west can offer, such as dollars for reconstruction and development, should be delivered to those who show commitment to peace, to rule governed by law, and to the gradual separation of religion from the state.

As for democracy and a market economy in Afghanistan, these lie in the future. Mancur Olson also wrote that democracy had the best chance to evolve when it was hard for any one ruler or group to impose their will on all the others and establish an absolutist dictatorship. To encourage power-sharing, and make it difficult for new big or little tyrants to emerge, we should distribute aid to projects and communities that cross the boundaries of each warlord’s domain, valley by valley. Perhaps a period of enlightened, pluralistic feudalism may then permit Afghanistan to evolve over the next few centuries into a more decent place for its citizens to live.

This version:  9 January 2002. First draft: 4 December 2001


June 22, 2009

Moats, Mole Catchers, Munchies, and Mortgages

Writing about web page http://www.guardian.co.uk/news/datablog/2009/jun/19/mps-expenses-houseofcommons

What on earth were they thinking of, as they filed their claims for moat cleaning, mole catchers, munchies, and non-existent mortgages? Where was their sense of decency? Of reality?

The answer is most likely that members of Parliament are just ordinary people, doing what ordinary people do. Sometimes, ordinary people who join together in a crowd do things that they would not do individually. Their sense of normality comes to differ from that of others, precisely because it is drawn from the crowd they belong to, not from outside; because of this they come to behave in ways that other ordinary people, not in the group, find ridiculous or abhorrent. The "in" crowd are not bad people, at least to start with, but they may end up doing bad things.

This is part of a growing field at the interface of economics and psychology -- the psychology of crowds, and the economics of herd behaviour. A recent article by Andrew Oswald on Herds, Housing, and the Crisis provides an application to the housing market bubble. It can also apply elsewhere. Recently, MPs have had a bubble of expense claims -- or is it that the public has overinvested in the trust we place in our representatives? Either way, the bubble has been well and truly pricked.
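
For readers who want the mechanism rather than the metaphor, here is a minimal sketch of my own, a toy version of the standard information-cascade model of herding (Bikhchandani, Hirshleifer, and Welch); the accuracy figure and the number of agents are invented. Each agent gets a noisy private hunch about whether acting is wise, watches what everyone before them did, and goes with the balance of evidence. After a short run of identical choices, private hunches stop mattering and a herd forms.

```python
# A toy information-cascade simulation (my illustration, invented parameters):
# each agent receives a private signal that is right 60% of the time, observes
# all earlier choices, and acts on the balance of evidence. Once earlier
# choices lean strongly one way, private signals are ignored -- a herd.
import random

random.seed(1)

TRUE_STATE = 1        # acting really is a good idea
SIGNAL_ACCURACY = 0.6 # each private signal is right 60% of the time
N_AGENTS = 20

choices = []
for _ in range(N_AGENTS):
    signal = TRUE_STATE if random.random() < SIGNAL_ACCURACY else 1 - TRUE_STATE
    # Balance of observed evidence: +1 for each earlier "yes", -1 for each "no".
    balance = sum(1 if c == 1 else -1 for c in choices)
    if balance > 1:
        choice = 1        # herd on "yes", whatever the private signal says
    elif balance < -1:
        choice = 0        # herd on "no"
    else:
        choice = signal   # evidence roughly balanced: follow own signal
    choices.append(choice)

print("choices in order:", choices)
```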

Yet the truth is that our MPs do represent us. We are, most of us, capable of being like most of them.

The best evidence of this comes from a recent study of MBA students carried out by Scott A. Snook, an associate professor at Harvard Business School. (His study will be published soon by Harvard University Press as Becoming a Harvard MBA: Confirmation as Transformation). Looking at their ethical maturity, Snook found that his students fell into three roughly equal groups.

One group of students had reached the fully developed, self-authored adult perspective that we would like to think we all have. Deciding to undertake some action or other, they would judge: It's okay because, having fully weighed the costs and benefits to others, I have decided that it is. At the other extreme, there was a group that operated from a largely "transactional" view of the world. They would act, having judged: It's okay if it benefits me. Of greatest interest to me was the intermediate group, described as predominantly other-directed. These would act on the basis: It's okay if others do it too.

It's really not difficult to construct the dialogue that most new MPs must have been through. "Can I really claim that?" "So, do the others claim it?" "Oh. Okay."

It doesn't make it right. It just makes it understandable. To me that's a good thing; I'm a social scientist, not a judge. Have we put too much trust in our elected representatives? Of course. Do we expect too much of them? For sure. Do they need new rules? Absolutely. But we need to remember they are just like us, even in ways that we might find uncomfortable; for one thing, they are no more (and no less) moral than we are.

In fact, they are just like us in every way -- but two: First, they are more ambitious than the average, and that is how they get to be where we put them. Second, we put them in a special club, the Houses of Parliament, a club where they learn from each other and learn to follow each other.

We need to get used to it.


May 31, 2009

Comrade Frumkin Was Right

This is about a forgotten prophet of the twentieth century. In 1951, the case of party member M. S. Frumkin came for investigation to the party control commission in Moscow. Frumkin was accused of adopting "a Trotskyist standpoint on matters of building socialism." 

The scandal arose in the context of a lecture that Frumkin gave on April 11, 1951, to teachers of the USSR transport ministry college for commanders of its armed security forces. The unpromising subject of Frumkin's lecture was "The conditions of material life of society." In the course of the lecture Frumkin remarked:

Transitional forms of production relations can exist not only during the transition from capitalism to socialism but also, conversely, during the transition from socialism to capitalism.

This opaque remark caused uproar. As the investigator commented afterwards, Frumkin had contradicted Stalin's "entirely clear" teaching on the transition from capitalism to socialism; according to Stalin, transitional production relations arose only in the context of movement from a lower form of society to a higher form -- not the other way around. The listeners protested. What was this "transition from socialism to capitalism"? One commented:

Comrade Frumkin's statement contradicts the laws of historical development of society ... it would follow from this formulation that the socialist system should be replaced by the capitalist [system].

Another asked:

Why was so much blood spilt in the struggle for socialism, if a return to capitalism is inevitable?

Instead of recognizing his mistake, however, Frumkin went on to defend it to the listeners, giving three historical examples of transition from socialism to capitalism:

  • The fall of the Paris Commune (1871)
  • The crushing of the Hungarian Soviet Republic (1919)
  • And the defection of Yugoslavia to the camp of imperialism (1948)

As I read the report (in the Hoover Institution's Archives of the Soviet Communist Party and State collection, RGANI, fond 6, opis 6, file 1643, folios 26 to 28), my interest mounted. These seemed like good examples to me. How would the listeners respond? But they objected: "These examples are incorrect!" Frumkin took a step back: the issue had "not been worked through and was for discussion."

Over the next few weeks Frumkin maintained this position. During this time he was first criticized at a party committee meeting in the college, and then reprimanded by the township party committee "for the political error that he committed and for reluctance to correct it at the proper time."

When the matter came finally to the party control commission, Frumkin accepted his mistake, putting it down to a "slip of the tongue." He claimed that he had confused it with the possibility of a violent capitalist restoration from outside, of the sort that Stalin himself had admitted in a letter "On the Final Victory of Socialism in the USSR," published in Pravda on February 14, 1938. Since Frumkin now accepted his mistake, and had been penalized within the party, the party control reporter proposed no further action.

Who was Frumkin? We are given only a few details. We know his initials but not his given name or patronymic. He was born in Russia in 1903; his family background was working-class. He joined the communist party in 1925. In 1935 he graduated from the Lenin Military-Political Academy in Leningrad. From there he was sent to teach in military schools in Briansk, then Gor'kii.

In 1943 Frumkin was taken into the Red Army where he served until 1946 as deputy chief of the political department of the 153rd rifle division. On demobilization he was appointed deputy chief of administration of educational establishments for the RSFSR ministry of trade, and then section chief of the ministry of transport college where the incident took place. By the time of the investigation he had been moved on -- or down -- to be a political instructor in the security establishment of the Moscow-Riazan railway.

In short, Frumkin was a functionary of his time; there were millions like him in everything but that instinct that led him, for a few weeks in 1951, to defend the idea that history could go in reverse. Frumkin would have turned 88 in 1991, so he is unlikely to have lived to see his prophecy come true.


May 18, 2009

What Should Every Econ Grad Student Read?

We sat round the table discussing what is missing from the reading lists of today's graduate students in economics. Today's syllabuses concentrate heavily on stocking up their mathematical and econometric toolkit. I don't have a problem with that. On the contrary, I regret the technical deficiencies in my own background, and I regret them more as it becomes less likely that I'll ever make them good.

Still, we worried: do today's syllabuses neglect a broader understanding of how institutions have evolved and of what history shows? What should every economics graduate student read?

There has been a lot of comment recently on how to educate today's kids in animal spirits, neuroeconomics, and behavioral stuff. But that was not at the centre of our concern, important though it is (I wrote about it recently here). This is a correction that is already under way. When Thomas Sargent says that rational expectations is "oversimplified," it won't take long for that to trickle down into advanced macro.

What bothered us were deeper issues: are today's graduate students learning, discussing, and debating how successful market economies have evolved, how and why markets work, what stops them working, and how best to let them work?

One suggestion was that the graduate students should all read The Wealth of Nations by Adam Smith (1776). The only Nobel laureate at the table (for this was Stanford) dismissed the idea with a wave of the hand. "Too eighteenth century," he said.

Overawed, I kept my mouth shut. Here is what I thought afterwards.

My first recommendation is an article called "The Use of Knowledge in Society" by Friedrich von Hayek (1945). Here, Hayek explained how markets economize on information. In a market economy, supply and demand allocate resources without outsiders or superiors needing to possess complete information about individual preferences or firms' capabilities. Bureaucracies, in contrast, need to know everything about you, me, and everyone else, before they can make decisions. Where market economies thrive on information, bureaucracies choke on it -- something that I see daily, sitting in the Hoover Archive among the millions of documents bequeathed to history by the Soviet command economy.
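
A toy simulation of my own (invented numbers) makes Hayek's point concrete: in the sketch below, each trader knows only their own willingness to pay or cost, and the price is adjusted using nothing but the aggregate imbalance between buyers and sellers. No one ever needs to collect the underlying valuations.

```python
# A minimal price-adjustment (tatonnement) sketch: valuations and costs are
# private, and the price responds only to the excess demand it observes.
# All numbers are invented for illustration.
import random

random.seed(42)

buyers = [random.uniform(0, 100) for _ in range(500)]   # private willingness to pay
sellers = [random.uniform(0, 100) for _ in range(500)]  # private production cost

def excess_demand(price: float) -> int:
    demand = sum(1 for v in buyers if v >= price)
    supply = sum(1 for c in sellers if c <= price)
    return demand - supply

# Nudge the price up when there is excess demand, down when there is excess supply.
price = 10.0
for _ in range(200):
    price += 0.05 * excess_demand(price)
    price = min(max(price, 0.0), 100.0)

print(f"price found without reading anyone's mind: {price:.1f}")
print(f"remaining imbalance: {excess_demand(price)} units")
```

A planner trying to reach the same allocation without the price would need to collect every one of those thousand private numbers first.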

Having read that, a natural question, particularly in our present-day context, is: what should be done when markets nonetheless fail? Here I turn to Oliver Williamson (1985), who proposed the idea of the "impossibility" of selective intervention. Most people think we should aim to combine the best of market forces and political action (I do too). Let the market economy do its wonders where it can; where it can't, let the government intervene and fix things. Williamson points out that, even in principle, this cannot work. The reason is that there is intrinsic uncertainty about where political action can allocate resources better than markets. If you give politicians the power to intervene selectively, it is certain that some of their interventions will make things worse. (And they do! Look around you!) As a result, no government, democratic or otherwise, can commit to intervene only when the result will improve social welfare.
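
Here is a back-of-the-envelope sketch of the uncertainty part of that argument (my illustration, not Williamson's own formalization, and all the numbers are invented): the true benefit of each candidate intervention is unknown; the government sees only a noisy estimate and intervenes whenever the estimate looks positive. A sizeable share of the interventions it undertakes then turn out to be harmful, even if the policy pays off on average.

```python
# A sketch with invented numbers: noisy ex-ante assessments guarantee that a
# "selective" intervention rule still undertakes some harmful interventions.
import random

random.seed(7)

N = 100_000
undertaken = 0
harmful = 0
net_welfare = 0.0
for _ in range(N):
    true_benefit = random.gauss(0.0, 1.0)             # unknown to the policymaker
    estimate = true_benefit + random.gauss(0.0, 1.0)  # noisy ex-ante assessment
    if estimate > 0:                                   # intervene "selectively"
        undertaken += 1
        net_welfare += true_benefit
        if true_benefit < 0:
            harmful += 1

print(f"interventions undertaken: {undertaken}")
print(f"of which made things worse: {harmful} ({100 * harmful / undertaken:.0f}%)")
print(f"average welfare effect per intervention: {net_welfare / undertaken:+.2f}")
```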

Despite this, governments do intervene. When they do, do they improve things on balance? An essential handle on this question is provided by an article on "The New Comparative Economics" by Djankov et al. (2003). I do not know whether this article has truly founded a "new comparative economics" but it does conceptualize and model a fundamental idea. This is that every society faces its own trade-off between losses from political action and inaction. The absolute losses can be large or small, depending on each society's institutional arrangements, but every society has its own optimum. There is no guarantee that an optimum will be reached, however. Learning where we are in relation to our own optimum is similar to understanding whether we are suffering from too much or too little intervention.
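
The trade-off can also be sketched in a few lines. This is my own illustration in the spirit of the article's "institutional possibility frontier"; the curve and the numbers are invented. Losses from disorder and losses from heavy-handed state control lie on a convex frontier, and each society's problem is to pick the point on its own frontier that minimizes the sum of the two. A society with better civic endowments has a frontier closer to the origin, so its optimum involves smaller losses all round.

```python
# A sketch (invented curve) of the trade-off: choose the point on the frontier
# that minimizes total losses from disorder plus losses from state control.
import numpy as np

def state_loss(disorder_loss, scale):
    """Assumed frontier: convex and decreasing. A smaller 'scale' stands for
    better civic endowments, i.e. a frontier closer to the origin."""
    return scale / (0.1 + disorder_loss)

disorder = np.linspace(0.01, 10.0, 2000)

for scale in (1.0, 4.0):   # two societies with different endowments
    total = disorder + state_loss(disorder, scale)
    i = int(np.argmin(total))
    print(f"scale {scale}: best mix is disorder loss {disorder[i]:.2f} + "
          f"state loss {state_loss(disorder[i], scale):.2f} = total {total[i]:.2f}")
```

The point is not the particular curve, which is invented, but that each society's optimum, and the size of its unavoidable losses, differs with its institutions.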

Finally, although selective intervention is impossible, governments have historically intervened and have often required the advice of economists to do so wisely. And they will continue to do so. Therefore, graduate economists need to understand how their advice can affect both economic policy and the economic lives of millions. In particular, every graduate student should know more about the Great Depression. No one has written a better account than Peter Temin (2000), and the story he wrote ten years ago has vivid, extraordinary relevance for the present day. I hope he is proud of it; he should be.

Unless you have a better idea ...

References

  • Djankov, Simeon, Edward Glaeser, Rafael La Porta, Florencio Lopez-de-Silanes, and Andrei Shleifer. 2003. The New Comparative Economics. Journal of Comparative Economics 31:4, pp. 595-619.
  • Hayek, F.A. 1945. The Use of Knowledge in Society. American Economic Review 35:4, pp. 519-530.
  • Smith, Adam. 1776. An Inquiry into the Nature and Causes of the Wealth of Nations. In four volumes. Edinburgh.
  • Temin, Peter. 2000. The Great Depression. In The Cambridge Economic History of the United States, Volume III: The Twentieth Century. Edited by Stanley L. Engerman and Robert E. Gallman. New York: Cambridge University Press.
  • Williamson, Oliver E. 1985. The Economic Institutions of Capitalism. New York: The Free Press.
