January 29, 2010

The Tyranny of the Authority: Blair at Chilcot and the worrying undertone of autocracy

History is written in terms of winners and losers - but Tony Blair's day at the Chilcot inquiry has come and gone, and the annals will have to call it a draw. It was a Frost/Nixon experience without the collapse at the end: while the questions took us on a tour of the failures of the invasion of Iraq, Blair deftly sidestepped the more scathing accusations and at times made the inquiry panel look rather weak and obsequious. He didn't shine enough to deliver a crushing blow to his adversaries, but he was confident throughout and never expressed a hint of regret. Both Blair's supporters and critics will have found plenty to support their arguments, and nobody's opinion is likely to have changed.

Now, although my political alignment is typically 'student' in nature, I've never been the most ardent critic of Blair. For an example of that, have a look at Michael Rosen's Twitter. (That isn't a dig; the man is a legend.) Yet Blair's stint in front of the panel today, the way I see it, leads to a chilling indictment not simply of Blair but of our attitudes towards politics, leadership and the role of government.

The more Blair said, the more it emerged that the main reason the United Kingdom went to war in Iraq, in a nutshell, was that Blair wanted us to. There wasn't a hidden deal, a conspiracy to rob the world of its freedom and resources - it was the conviction of one person who, not to put too fine a point on it, felt like it. The swathes of empirical evidence, the significant possibility of illegality, the legion of voices opposing the decision, were judged as insignificant; the unilateral 'World Police' attitude reigned supreme. This became clear relatively early on: when talking about his meetings with Bush prior to the start of the campaign, Blair told the panel of his thoughts that, if the US was going to occupy, "we had to be involved", had to be "right alongside" them. That's how most of us think about the FA Cup final or Blur's gig at Glastonbury, not a war that kills thousands. A personal rhetoric, not a political one. 'You had to be there, man, it was the best war of the decade!'

It bears pointing out that Blair did make wildly conflicting statements on this issue where it suited him. While one of his concluding statements was to the effect that he stood by "the decision I took", he earlier shifted a huge chunk of responsibility onto Lord Goldsmith by insinuating that Goldsmith's testimony was the main reason the war went ahead. Anti-Blair voices will certainly find something to raise an eyebrow about in the idea that 'it was Blair's personal decision except when it wasn't'. That's pretty bad, but it isn't what I find especially worrying - in a six-hour interview with a politician who was professionally trained as a lawyer, one can hardly be surprised to find a few dodges and logical solecisms. If I were writing to highlight those (and rest assured there are plenty who will), I could be here all night.

No, what gets to me is not simply that one person made a pivotal political decision governing the lives of countless people, but that this is an admissible defence, if not at a trial, then to an inquiry panel and to a vast proportion of the public. It seems wrong that it is a possible and widely supported course of action to ignore the mounting (perhaps even overwhelming) evidence that invasion was not justifiable - and, of course, with the news that Iraq didn't have any WMDs, the sceptics eventually turned out to be right on this one - because of one's own borderline-fanatical belief. The official line that Britain has abandoned the old ways of theocracy and autocracy, and embraced democracy, is now another little bit harder to believe.

Right on cue, the Blair supporters were out in force reciting this very mantra. Some comments on the BBC website call Blair "a brave man" and "a great leader, who had so many important decisions to make". Of course, for every one of these comments there is another criticising or vilifying Blair. But why does this debate even take place? In making his personal decision to take the country to war, Blair used reasoning that was specious, cognitively biased and unscientific, but his intentions were still readily accepted and enacted. One could reply that, as the elected leader of the country, Blair was chosen by the people to make the decisions, and so it was his right to do so. But democracy does not end at the voting booth, and when a decision as momentous as this is made without consulting (or even fully informing) the people, I cannot help but wonder why the decision was solely his to make in the first place. Why was ignoring the evidence even an option? That is the root of the issue: while there are innumerable commenters of both pro- and anti-war leanings, they are seen as two equal sides of an issue to be contested. The people who opposed the Iraq war were, in terms of its initial rationale, proved correct - but they were treated as just one side of a debate, and since there is always a debate about government actions, either side can be picked and will be equally accepted in the end.

That's how it goes with regard to our attitude towards leadership. If there is one idea that applies here, it is the typical left-wing motto that a central part of citizenship is to be sceptical of one's own government. But time and time again it is demonstrated that, since we have appointed officials to lead us, we will unquestioningly accept whatever they decide. There was an expert legal opinion that the Iraq war would not be justifiable; there was a public whose support could at best be called 'divided' (and which continued to diminish as the occupation dragged on). Why, when the evidence points one way, do we think it's 'just the way things are' when a leader decides to go the other? Why is it not the default norm for a country's actions to be swayed more by scientific, legal, or other expert consensus and the body of public opinion than by one person, simply because we have placed them on the revered pedestal of 'leader'? Blair may have kept his supporters and his 'great leader' image today, but his testimony shows that when massive, worldwide, countless-life-affecting decisions are to be made, our processes need to become far more democratic, not authoritarian.

(A brief caveat: you might find it rather cavalier that I'm focusing on the petty matter of the democratic processes in this country, rather than what you might think to be the real disaster, which is the broken country, occupied for oil, that still doesn't have regular electricity. You're probably right, but so many commenters inevitably focus on the latter that I decided to be different, to address an issue that had come up for me today while watching the inquiry. This is a personal response, not a holistically political one.)

November 05, 2009

The Long Tale, a post script

At last, a truly happy ending!

Having heard a number of breathless tales of the visit of Mudimbe to the university, I checked out his stats on Amazon. Turns out The Invention of Africa doesn't do so badly in the Flesch stakes - it scores 19.9, which is just slightly (by a couple of percentiles) easier than Kant. Not too shabby, but check out this gem, written by Richard E. Lee and edited by Mudimbe. It leaves Kant and everyone else in the dust, scoring 5.5, which puts it into the hardest percentile. Hurrah! Amazon further informs us that the book has a mean of 1.9 syllables per word, and 40.3 words per sentence. Is it even possible to write something with an average sentence length of over 40 words? Perhaps I shall try to write such a piece.

October 10, 2009

The Long Tale: Amazon's hard-to-read books in the Flesch

Oh, joyous day: Amazon has provided me with another means of needlessly quantifying subjective information. For a large number of their books they now publish readability statistics, in the form of the Flesch Readability Index, a number based on the mean lengths of sentences and words. An easier-to-read text gets a higher score; a score of 60 to 70 means a text is readable by an average 13-15 year old. This method of testing readability has actually become quite ubiquitous across America - the U.S. Department of Defense uses the Flesch index as its official standard and requires that its official documents meet certain score standards. More relevantly to me, Microsoft Word has also bundled Flesch for some time, and this led to my compulsively checking the scores of my undergraduate essays every five minutes, certain that an incredibly hard-to-read essay would translate directly into a high mark. (Detractors of literary criticism may have something to say about that, but who cares about them? They probably haven't even read this far.)

What Amazon does that is particularly noteworthy is to turn the Flesch score into a relative percentile value - you can see what percentage of books are easier or harder than the one you're reading. Naturally, that leads to the tantalising question, "What is the hardest-to-read book in the world?" I scoured Amazon, determined to find and then immediately read the offending book so I could fill myself with the smug, self-satisfied assurance that everything I ever read would be a comparative walk in the park from now on. Unfortunately, it wasn't as easy to find ultra-difficult books as I had thought it would be.

Let's start with that old confounder of literary intentions, James Joyce. One might expect Ulysses to garner a particularly low score, but no such luck: it gets a score of 68.1. Finnegans Wake, then? It is harder, but not by much: it scores 60.0, which Amazon informs us is easier than 67% of books out there. Other books with reputations for difficulty fare just as badly. Moby Dick scores 57.9; War and Peace, 60.2; Gravity's Rainbow, 60.4. (Would you set Gravity's Rainbow as reading for someone in Year 10? I don't think my 14-year-old self would ever get past the part where Katje and Brigadier Pudding... well, you know). All these, apparently, are comfortably in the easier half of all books ever written. Is every book I can think of, comparatively speaking, really a breeze? Even my previous blog entry, according to Word, manages to score in the thirties. Am I really twice as hard to read as Joyce? That's not much of an endorsement.

I became fearful that the detractors would score another point on me by virtue of scientific or legal textbooks being far harder than anything an English student might lay eyes upon. However, this is not the case. To give a few examples: a contract law textbook selected at random scores 53.5 (57% of books are harder), a book on Markov chains scores 62.2, and the formidably named "An Introduction to Magnetohydrodynamics" barely breaks into the harder half, with a score of 46.5 (43% of books have lower scores). Of course, a fool like me can't tell the difference between an easy maths textbook and a hard one, and there are, I am sure, many obscure and inaccessible science books which would be deep in the lower percentiles of reading ease, but this type of book is generally not included in Amazon's "Look Inside" scheme, so its stats aren't calculated and presumably won't affect the average.

The question, then, is: Where are all the really hard books? If I just picked books at random, about half of them should be harder than the selection above. And one in every hundred should score in the hardest percentile - something I've got absolutely nowhere near. Two more pieces of non-fiction did give slightly better results: Marx's Capital scores 43.3 (36% are harder), and the selected writings of Baudrillard score 32.5 (21% are harder). It's an improvement, but still, one of every five books I look at should be harder to read than our friend Jean - so why can't I find any? Perhaps this is an error with the calculation itself. The formula for Flesch reading ease, according to Wikipedia, is:

206.835 - (1.015 × average words per sentence) - (84.6 × average syllables per word)

It strikes me that a computer can't count syllables as easily as it can, say, letters in a word. Neither Amazon nor Microsoft Word actually produces a total syllable count for a given text (Amazon gives you an average syllable count, spuriously rounded to one decimal place, while Word doesn't even do that) - if they are just approximating, or guessing, based on word lists or the number of letters in a word, then this might not be that accurate a gauge in the first place. But if that is the answer, it is not a very satisfactory one. The very essence of this blog is playing the numbers game on words, applying unreliable numerical explanations to things that are wholly unquantifiable or subjective; given that we are frivolously adopting such inappropriate modes of thought to begin with, to conclude that they are indeed inappropriate seems to beg the question somewhat pathetically. And even if Amazon's Flesch index is entirely off-kilter, surely there would still be some books at the very hardest end of the scale, even if they are there undeservedly?
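To see where the guesswork creeps in, here is a minimal sketch of the whole calculation in Python. The syllable counter is a naive vowel-group heuristic of my own invention - an assumption for illustration, not how Amazon or Word actually do it (real tools use pronunciation dictionaries or cleverer rules):

```python
import re

def count_syllables(word):
    # Naive heuristic: drop a silent trailing 'e', then count runs of vowels.
    # This miscounts plenty of words, which is exactly the point.
    word = word.lower()
    if word.endswith("e") and not word.endswith("le"):
        word = word[:-1]
    vowel_groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(vowel_groups))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

print(flesch_reading_ease("The cat sat on the mat."))  # ≈ 116.1
```

Even this toy version shows that two implementations with slightly different syllable heuristics will hand the same book different scores, so a percentile ranking built on them is shakier still.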

Thankfully, one writer finally came to the rescue: Kant. His Critique of Pure Reason scores in the hardest 10% of books - a feat that was breathtakingly unequalled, until two minutes later I checked the Critique of Judgment and found it to have a score of 18.4, putting it in the top three percent. Success! It appears that extreme difficulty is a facet of the writer and not of the subject matter, for other Kantian philosophers get much easier scores - it's not philosophy, but Immanuel himself, that is a tough nut to crack. Statistically I should still have been able to find many books of similar difficulty, but perhaps I'm just not looking hard enough; maybe you will have better success. In the meantime, I will be using the frontispieces of Kant - not Joyce, Pynchon or any other writer of fiction - to hide my comic books in.

(A quick postscript: I had intended to create some sort of graph of how hard to read a book is against how good it is, using Amazon's star rating as a measure of the latter. It didn't work out, however, because almost every book on Amazon has a very similar average rating (around four out of five). This is because by and large people only review a work if they love it or totally despise it; therefore most books have a ton of 5- and 4-star ratings, quite a few 1-stars, and almost no ratings of 2 or 3. So, on these continuous linear scales, books vary widely in terms of difficulty, but every book is roughly as good as every other.)
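The arithmetic behind that flattening is easy to demonstrate. Here are two entirely invented rating distributions - one for a well-liked book, one for a more divisive one - and both means still land near four:

```python
# Invented rating distributions (not real Amazon data): reviewers cluster
# at the extremes, yet the averages end up almost identical.
book_a = [5] * 40 + [4] * 25 + [2] * 1 + [1] * 8   # well-liked
book_b = [5] * 30 + [4] * 20 + [1] * 12            # divisive

def mean(ratings):
    return sum(ratings) / len(ratings)

print(round(mean(book_a), 2), round(mean(book_b), 2))  # 4.19 3.9
```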

September 30, 2009

Source of a revolution: in praise of the internet

The internet is our greatest hope for the future. There; contest it, laugh, berate me for being polemical - it's true. I will avoid producing a cute list of the internet's more annoying phenomena, which seems a little too obvious at this point; instead, be assured that I'm not talking about image macros and 4chan-spawned memes, but the democratisation of knowledge and opportunity, the transfer of privilege from the few to the many. The future, literally in front of us. Time magazine began to realise it some time ago, so it's not even a novel idea and shouldn't be controversial - but the popular opinion is still, more often than not, to reject it in favour of the internet-as-tacky-distraction, the grisly melange of pornography and lolcats. No - not while I'm logged on, which is embarrassingly often - the future, connected to you by a series of tubes (I'll stop now).

This piece of unadventurous futurism has been my personal opinion for some time, born of a mixture of social theorisation and solipsism - in equal parts the hope for a more egalitarian, less plutocratic society and the desperate wishing that something, anything constructive or even tangible could come from all those hours spent idly browsing reddit and TV Tropes. But recently it has passed from mere thought into academia, or at least something roughly in that vicinity. Wikinomics: How Mass Collaboration Changes Everything by Don Tapscott and Anthony D. Williams covers the bases well, taking the reader on an articulate tour through all the successes of the internet's collaboratively created content - Linux, craigslist, blogs, InnoCentive, Second Life - as well as the (partial) defeats of those forces such as DRM that aim to halt the collectivist revolution. Using the term 'prosumers', they describe the ways in which the end-users of technology have become the makers of its creative content rather than just passive receivers (a point since made irrefutable, I note, by the unprecedented success of the iPhone's app store). Naturally, Wikipedia is used as the flagship for this new model of production - it is the biggest success of them all, a colossal peer production network that blew all its more traditional competitors into the past and is so ubiquitous that it verges on becoming a proprietary eponym, a Kleenex, a Hoover. And it's hard to argue - when was the last time you fired up Encarta?

Yet when it comes to the troublesome subject of money, the story is altogether different. Money has traditionally been where the internet-idealism comes unstuck - any sceptic you happen to be sitting next to will at this point nod smugly and inform you, 'There's no money on the internet'. And in accordance with this weak spot of the web, the authors are uncharacteristically timid: the politics becomes markedly more strait-laced, fitting the collaborative model into conventional economic practices rather than - as might fit better with the tone of the rest of the book - seeking to break free of them. The chapter "The Wiki Workplace" details a few instances of how the web has translated into 'real world' profits (such as Best Buy's Geek Squad, which apart from turning profits is also hated by many for charging extortionate amounts to perform simple tasks), and postulates that web-based collective efforts could soon be turning over huge sums of money. Elsewhere, a sub-chapter entitled "Why the Open Source Critics are Wrong About Free Enterprise" outlines the benefits of peer production for businesses, rejecting the supposition that it might undermine the typical capitalist model.

I find these sections less appealing. The evidence is not there for the conventional economic success of peer production: of all the collaborative social networks, none is profitable in itself apart from Second Life, which has become so by creating its own in-game economy that users enter into financially - something Facebook, YouTube et al are unlikely ever to do. Nobody makes any money directly from Wikipedia. And in all this futurism let us not forget the recent past: the dotcom boom, which was a thrilling financial frenzy until the dust settled and revealed a desert strewn with unprofitable websites and shattered hopes. No, the old ways are not the only ones.

I personally find Dan Pink's analysis of the situation to be more applicable: as the internet moves the goalposts further towards creativity and co-operation, the traditional financial motivators are no longer all-powerful. We cannot rely on money to bring about the next set of revolutions in knowledge, production or social interaction - but as Wikipedia, Firefox, Linux and so many others have demonstrated, we do not have to, and in fact we should not, for it stifles thinking and encumbers progress. It is undeniable: there is very little money on the web - but this only makes it more of a dynamic and crucial force.

September 20, 2009

The long short list: a numerical Booker

Big book, big evil. So goes the alleged proclivity of the Man Booker Prize judging panels of recent years towards picking slim volumes as winners and leaving the doorstops without a look-in. A recent issue of Private Eye speculates that the 2009 award may halt this tendency due to the head judge being James Naughtie, whose capacity for huge reads has been expressed on more than one occasion. But does this make a difference? (And does any of this really matter?) The issue can only be resolved through the method I know best: a bout of meaningless numerical grinding.

There follows a graph of the lengths of Booker prize winning novels since the award's inception. In each case, the edition of the book in question is the one most readily available today (thanks, Amazon!). This is rather subjective and unscientific, but books don't tend to vary all that wildly in length from one edition to another, so the graph's integrity isn't irreparably damaged. Also, on the two occasions where the prize was awarded jointly to two books (1974 and 1992), both winners are on the graph in arbitrary order.

Booker prize winners by length

(list of winners and nominees)

The first observation, one that is as necessary as it is dull, is that there is barely a long-term trend to be found. Well, I didn't promise fireworks. That said, the insubstantial novellas that we hear are being picked to win recently aren't actually that short: with the exception of The Sea (which, and apologies for this churlishness, seemed a lot longer than 200 pages) they have been hovering close to the 300-page mark for the past seven years. Rather than a general shortening of winners, this is more like a convergence on the middle: compare it to the wild oscillations from the late seventies and all through the eighties, wherein the Booker panels tended to tire themselves out with a long book and rapidly switch back to letting a tiny one win. (The main exception to this is 1990-2, when in an extraordinary show of resilience the panels picked three long books in a row. The graph spike would look more impressive if we elided The English Patient, which actually scrapes into the longer half of all the prizewinning books, but looks tiny by comparison.)

So, if you want to write a Booker winner and aren't sure how long it should be (I know I have suffered a few sleepless nights), then there really are no strict guidelines, but recent records show it's preferable to hedge your bets. That, or write a book of exactly 288 pages - with four books to its name, it is the most successful pagination, followed by 336 pages which has produced a respectable three winners.
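For the record, the tallying itself is a one-liner. Here's the sort of thing I mean, with placeholder page counts rather than the actual list of winners:

```python
from collections import Counter

# Placeholder page counts for a run of winners - invented, not the real Booker data.
winner_lengths = [288, 336, 214, 288, 480, 336, 288, 150, 672, 288, 336, 214]

# most_common ranks values by frequency, most frequent first.
print(Counter(winner_lengths).most_common(2))  # [(288, 4), (336, 3)]
```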

But wait! Back to Naughtie and this year's shortlist. Private Eye may not have been so wrong after all, for the books on this year's shortlist average around 450 pages. It is time to break the trend! Who will win? It doesn't look good for Coetzee, whose book is both the shortest of the lot (224 pages? Pah!) and the only entry from a foreign author in an unusually British list, making it something of a loner. Neither does Foulds's The Quickening Maze look a likely winner, as it will surely be seen as chronically undersized. No, for a clue we must look to the past, to the 1990 award, when Hilary Mantel was on the judging panel that gave the prize to A.S. Byatt's Possession, the longest book on the shortlist. This year, Mantel has evidently remembered this and got one up on her fellow nominee: poor old A.S. might have thought she'd clinched it with a 624-page epic, but Mantel trumps her, clocking in at 672. Yes, the Naughtian predisposal to verbosity combined with Mantel's shrewd lengthiness can lead only to the selection of Wolf Hall to be the Booker winner, and in fact the longest book ever to win the prize.

Of course, since the judging panel changes from year to year, the appearance of any lasting trends of any sort is bound to be more coincidental than anything else. Furthermore, most people would agree that the frivolous schoolboyish business of comparing lengths is no substitute for actually reading the books and deciding which is the best-written; then again, it doesn't require shelling out for six hardcovers, which makes it infinitely preferable to me. And they say investigative journalism is dead.
