Disarmament Insight

www.disarmamentinsight.blogspot.com

Friday, 2 November 2007

Putting Predictions to the Test


As part of its work, UNIDIR's Disarmament as Humanitarian Action project is currently preparing a publication examining some of the constraints on multilateral negotiations, and how these might be overcome or at least mitigated.

One conundrum for policy makers in general is that the outcomes of complex political and economic processes are very difficult to predict accurately, and this has implications for their work. Multilateral practitioners are no exception.

Meanwhile, it's often very difficult to get a handle on how good "expert" political judgment really is. But it's not impossible. A recent book by Philip E. Tetlock, an American psychologist, has revealed through a careful decade-long study that people who make prediction their business - people who appear as experts on television, who are quoted in newspaper articles, or who advise governments and businesses - are, on average, no better at it than the rest of us.

Tetlock's research team asked various specialists to judge the likelihood of a number of political, economic and military events occurring within a specific timeframe (about five years ahead). Close to 300 specialists took part, making around 27,000 predictions in total. As Nassim Nicholas Taleb observed in his book, The Black Swan:

"His study revealed an expert problem: there was no difference in results whether one had a PhD or undergraduate degree. Well-published professors had no advantage over journalists. The only regularity Tetlock found was the negative effect of reputation on prediction: those who had a big reputation were worse predictors than those who had none."

Ouch. But Tetlock's study didn't end there. His focus wasn't so much to show the real level of competence of experts as to investigate how they spun their stories. Tetlock discovered various cognitive mechanisms for generating ex post explanations - mostly in the form of belief defence, or self-esteem protection.

Prominent among these were phenomena that psychologists call motivational biases - something Ashley Thornton has explored in previous posts on this blog. These include the self-serving attributional bias (the human tendency to blame unfavourable outcomes on external causes but take credit for favourable outcomes) and the confirmation bias (we tend to seek out and process information that confirms our pre-existing beliefs).

The New Yorker put it succinctly: "The experts' trouble in Tetlock's study is exactly the trouble that all human beings have: we fall in love with our hunches, and we really, really hate to be wrong."

Food for thought. More about Tetlock's work in future posts.


John Borrie


References

Philip E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton University Press, 2005).

Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007).

For a neat introduction to Tetlock's work, see the review of his book in the 5 December 2005 edition of The New Yorker.

Picture by F33 retrieved from Flickr.
