Expertise

The WSJ discovers the unreliability of wine critics, citing the fascinating statistical work of Robert Hodgson:

In his first study, each year, for four years, Mr. Hodgson served actual panels of California State Fair Wine Competition judges--some 70 judges each year--about 100 wines over a two-day period. He employed the same blind tasting process as the actual competition. In Mr. Hodgson's study, however, every wine was presented to each judge three different times, each time drawn from the same bottle.

The results astonished Mr. Hodgson. The judges' wine ratings typically varied by ±4 points on a standard ratings scale running from 80 to 100. A wine rated 91 on one tasting would often be rated an 87 or 95 on the next. Some of the judges did much worse, and only about one in 10 regularly rated the same wine within a range of ±2 points.

Mr. Hodgson also found that the judges whose ratings were most consistent in any given year landed in the middle of the pack in other years, suggesting that their consistent performance that year had simply been due to chance.
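To get a feel for what those numbers imply, here's a minimal simulation sketch. This is not Hodgson's actual analysis; the per-judge palate noise, the number of triplicate pours, and the normal-noise model are all illustrative assumptions.

```python
import random

random.seed(0)

NUM_JUDGES = 70
WINES_PER_JUDGE = 30   # assumed: wines each judge scores in triplicate

# Assumed: judges differ in how noisy their palates are.
judge_noise = [random.uniform(0.5, 4.0) for _ in range(NUM_JUDGES)]

consistent_judges = 0
for sd in judge_noise:
    always_tight = True
    for _ in range(WINES_PER_JUDGE):
        quality = random.uniform(84, 96)                        # the wine's "true" score
        scores = [random.gauss(quality, sd) for _ in range(3)]  # three blind pours
        if max(scores) - min(scores) > 4:                       # outside a +/-2 band
            always_tight = False
            break
    if always_tight:
        consistent_judges += 1

print(f"{consistent_judges}/{NUM_JUDGES} judges kept every wine within +/-2 points")
```

Only the judges at the quiet end of the noise range survive the ±2 bar, which is the shape of Hodgson's result: consistency is the exception, not the rule.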

It's easy to pick on wine critics, as I certainly have in the past. Wine is a complex and intoxicating substance, and the tongue is a crude sensory instrument. While I've argued that the consistent inconsistency of oenophiles teaches us something interesting about the mind - expectations warp reality - wine critics are merely one instance of a much larger category of experts who vastly oversell their predictive powers.

Look, for instance, at mutual fund managers. They take absurdly huge fees from our retirement savings, yet in any given year the vast majority of mutual funds underperform the S&P 500 and other passive benchmarks. (Between 1982 and 2003, there were only three years in which more than 50 percent of mutual funds beat the market.) Even those funds that do manage to outperform the market rarely do so for long. Their models work haphazardly; their success is inconsistent.
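To make the fee arithmetic concrete, here's a back-of-the-envelope sketch; the 7 percent market return and the fee levels are illustrative assumptions, not figures from any particular fund:

```python
def final_balance(principal, annual_return, annual_fee, years):
    """Compound a balance at (return minus fee) for the given number of years."""
    balance = principal
    for _ in range(years):
        balance *= 1 + annual_return - annual_fee
    return balance

START = 100_000   # illustrative retirement savings
MARKET = 0.07     # assumed average annual market return
YEARS = 30

index_fund = final_balance(START, MARKET, 0.001, YEARS)   # ~0.1% passive fee
active_fund = final_balance(START, MARKET, 0.015, YEARS)  # ~1.5% active fee

print(f"Passive: ${index_fund:,.0f}")
print(f"Active:  ${active_fund:,.0f}")
print(f"Fee drag: ${index_fund - active_fund:,.0f}")
```

Even before any underperformance, the fee difference alone compounds into a six-figure gap over a working lifetime.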

Or look at political experts. In the early 1980s, Philip Tetlock at UC Berkeley picked two hundred and eighty-four people who made their living "commenting or offering advice on political and economic trends" and began asking them to make predictions about future events. He had a long list of pertinent questions. Would George Bush be re-elected? Would there be a peaceful end to apartheid in South Africa? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits were asked to rate the probability of several possible outcomes. Tetlock then interrogated the pundits about their thought process, so that he could better understand how they made up their minds. By the end of the study, Tetlock had quantified 82,361 different predictions.

After Tetlock tallied up the data, the predictive failures of the pundits became obvious. Although they were paid for their keen insights into world affairs, they tended to perform worse than random chance. Most of Tetlock's questions had three possible answers; the pundits, on average, selected the right answer less than 33 percent of the time. In other words, a dart-throwing chimp would have beaten the vast majority of professionals. Tetlock also found that the most famous pundits in his study tended to be the least accurate, consistently churning out overblown and overconfident forecasts. Eminence was a handicap.
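To see what "worse than a dart-throwing chimp" means in practice, here's a minimal sketch; the 30 percent pundit hit rate is an assumed illustration, not Tetlock's published figure:

```python
import random

random.seed(1)

QUESTIONS = 10_000
OPTIONS = 3   # most of Tetlock's questions had three possible answers

# The dart-throwing chimp: picks uniformly at random.
chimp_hits = sum(random.randrange(OPTIONS) == 0 for _ in range(QUESTIONS))

# A hypothetical pundit who is right 30% of the time - an assumed rate,
# just below the one-in-three chance baseline.
PUNDIT_RATE = 0.30
pundit_hits = sum(random.random() < PUNDIT_RATE for _ in range(QUESTIONS))

print(f"Chimp:  {chimp_hits / QUESTIONS:.1%} correct")
print(f"Pundit: {pundit_hits / QUESTIONS:.1%} correct")
```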

But here's the worst part: even terrible expert advice can reliably tamp down activity in brain regions (like the anterior cingulate cortex) that are supposed to monitor mistakes and errors. It's as if the brain is intimidated by credentials, bullied by bravado. The perverse result is that we fail to skeptically check the very people making mistakes with our money. I think one of the core challenges in fixing our economy is to make sure we design incentive systems to reward real expertise, and not faux-experts with no track record of success. We need to fund scientists, not mutual fund managers.


I agree wholeheartedly. Experience and hubris should not qualify as grounds for trust and reward. While past success is important, I wonder if a comprehensive history of mistakes and lessons learned could serve as a better indicator of predictive skill.

"Proves, yet again, nobody can predict the future! It is not possible to extrapolate a future data from historical data."

Not so fast there! There are plenty of places where past success is indicative of future success. Let's say you're scouting NCAA basketball - you've got a senior and a true freshman with the same stats. Who will go on to have the best NBA career? Based on the performance of thousands of other basketball players, it's vastly more likely to be the freshman.

Too obvious? How about actuarial tables? Given your age, sex, weight and smoking status, we can predict your life expectancy to within a few years with pretty good confidence.

Still not convinced? If you tell me how many hours you sleep on average and what time you went to bed last night, I can tell you what time you probably got up this morning.
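That last prediction is pure arithmetic; a minimal sketch, assuming the only inputs are last night's bedtime and average hours slept:

```python
from datetime import datetime, timedelta

def predicted_wake_time(bedtime: str, avg_sleep_hours: float) -> str:
    """Add average sleep duration to bedtime; rolls past midnight automatically."""
    went_to_bed = datetime.strptime(bedtime, "%H:%M")
    woke_up = went_to_bed + timedelta(hours=avg_sleep_hours)
    return woke_up.strftime("%H:%M")

# Illustrative values: in bed at 23:30, averages 7.5 hours of sleep.
print(predicted_wake_time("23:30", 7.5))   # -> 07:00
```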

Every outcome is probabilistic, but some things are more random than others. Taleb's point is that the downside of being fooled by randomness in financial markets is massive; in wine-tasting, nobody cares.

"It's easy to pick on wine critics"

Which of course in no way means that it shouldn't be done.

But I do take the point that the love should be spread around.

By xenoforth (not verified) on 16 Nov 2009 #permalink

I believe we did hire scientists - or at least physicists. Let me point you to a NYT article from this past September. In it, those scientists seemed to make the error of assuming there was no friction in the markets - meaning no room for people and their profoundly dumb and random choices. And then there were those pesky derivatives that seemed to exist on paper only.

http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html?pagewanted…

Armed with their new models and formidable math skills - the more arcane uses of CAPM require physicist-level computations - mild-mannered business-school professors could and did become Wall Street rocket scientists, earning Wall Street paychecks.

Proves, yet again, nobody can predict the future! It is not possible to extrapolate future data from historical data. A good read on the subject is Nassim Nicholas Taleb's "Fooled by Randomness."

You are misinterpreting Taleb's book. Taleb doesn't argue that the future cannot be predicted at all; he argues that historical data can predict only events of a kind already contained in that data, and that certain sorts of events cannot be predicted because the historical record holds nothing similar to them. Future events that resemble past ones can be predicted from historical data, and Taleb does not contest this. (In addition, Taleb's critique applies only to statistical prediction; predictions derived from physical (dynamical) models have quite different strengths and weaknesses, and some dynamical models have correctly predicted events unlike anything in the historical data.)
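That limitation is easy to demonstrate: an empirical-frequency estimate assigns zero probability to anything absent from its sample. A minimal sketch, with assumed illustrative returns:

```python
import random

random.seed(2)

# Assumed "history": 1,000 daily returns drawn from a calm market.
history = [random.gauss(0, 0.01) for _ in range(1_000)]

def empirical_prob(sample, threshold):
    """Estimated probability of a daily return below the threshold."""
    return sum(r < threshold for r in sample) / len(sample)

# A 10% one-day crash never appears in this history, so a purely
# statistical forecast calls it impossible...
print(empirical_prob(history, -0.10))   # -> 0.0

# ...while events of a kind already in the sample are estimated reasonably well.
print(empirical_prob(history, -0.01))   # roughly 0.16 for normal(0, 1%) returns
```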

Experts are straw targets. It is how we respond to the information that matters. Also, predicting the future is only one part of what we take away from experts. Surely there is some value to knowledge, experience, wisdom, etc., even if humans are no better than dart-throwing chimps at predicting the future, at least on a statistical level. I wonder if there is an example of a living, breathing person who has entrusted his own savings to a system that randomly picks investments.

"In Mr. Hodgson's study, however, every wine was presented to each judge three different times, each time drawn from the same bottle."

Wine changes with time, and the same bottle can taste DRAMATICALLY different in the first 5 minutes after it's opened vs. after it has been open for 30 minutes, an hour, or 10 hours (depending on the wine). I agree with the gist of this article, but the premise seems not to hold up. Unless he presented each critic the same wine three times in a row, the comparison doesn't have much statistical merit.

On pundits: the worst punditry devolves into a horse-race spectacle, in which political fortunes are predicted and handicapped. The real value of these people is their expertise on the political system and the perspectives they bring to ideas.

On fund managers: I'd still like to know how in the world shareholders get suckered into paying out such absurd salaries and bonuses. The standard explanation is that good ones can bring in millions for a company. But what does that have to do with the market value of each hire?

I assume that prospective hires can demonstrate past success - and that compensation structures have to be competitive with other companies. But it all seems so arbitrary. It is a fact that the financial industry is incredibly nepotistic - many traders have little or no education and were hired based largely on who their father, friend or neighbor was.

It seems shareholders could easily pay millions of dollars less for people just as talented & who will work just as hard. Yet one must have a successful record, and thus have worked for some company - and that company was already in the position of paying outrageous sums to their employees. What are the starting salaries at these places? How did the incentive structure become so absurd?

Do fund managers or political pundits get paid for accurate predictions?
No.
They get paid for selling their sponsor's products (financial, political, commercial, or whatever).
That explains their income.
If they can maintain such utter cluelessness as not to know this (Bush), or even believe their own patter (Palin), or just lie with convincing arrogance (Greenspan), all the better.

By Janus Daniels (not verified) on 22 Nov 2009 #permalink

I think what's most incredible about Tetlock's research is that in the early 1980s, not only did he predict that George Bush would be elected, he also predicted that there would be a "dot-com bubble". Having said that, I guess it's telling that not one of the pundits questioned him on what a "dot com" was... fascinating...

By Nit-Picker (not verified) on 23 Nov 2009 #permalink

Some things are predictable, some aren't. Bateson said convergent sequences are predictable, and divergent ones aren't. In his examples, there was always a trigger, and this was the part that eluded prior analysis. The grosser a thing was, the more predictable it was. Fracture in a rock was easier to predict than in finely ground glass.

I think I remember something about the wine tasters: we can only keep four flavors straight at a time, and the experts were describing more notes than four. Similar story with the pundits and fund managers. Some dumb thing is always gonna come out of left field.