There are two contradictory headlines today on Google News, both regarding someone I couldn’t care less about. However, they nicely illustrate one of my key concerns about the internet: the pervasive illusion that the “wisdom of crowds” is in fact wisdom, or in fact fact.
Both stories involve the heinous Jon Gosselin, who as far as I’m concerned is a waste of attention. You may have heard that the former reality TV star had his apartment trashed over the holidays, and that no one knows who’s responsible. But if one turns to Google News, one can see that People Magazine appears to have an answer: “Readers: Jon Gosselin Apartment Trashing Was a Stunt.” It turns out the article is simply reporting the results of a reader poll – there is no evidence whatsoever that the title is actually true. But it’s vitally important nonetheless that we learn that 70% of People readers are certain Jon Gosselin trashed his own place. Thank goodness that’s cleared up – now I don’t have to formulate my own opinion!
But wait. Directly above the People article, there’s a headline reading “Hailey Glassman Behind Jon Gosselin Break-in, PopEater Readers Say.” This article appears proud of its utter lack of factual basis:
Even though there is no evidence to tell if his ex-girlfriend Hailey Glassman actually robbed Gosselin’s pad at this point, 20,000+ readers voted and believe she’s behind the whole hoax. Over 70 percent said they were sure Glassman was responsible.
Wow. Now I have to decide whether to listen to PopEater readers or to People readers! Such a hard call! How could the “wisdom of crowds” fail me like this? (Assume much gnashing of teeth and rending of garments here.)
Let’s step back for a moment to describe the “wisdom of crowds” concept. Basically, it suggests that if you take a large group of people and ask them a question – the number of beans in a jar, for example – the average of their guesses is almost certainly more accurate than any one person’s guess, and likely better than experts’ guesses. (That’s why the “Ask the Audience” lifeline on “Who Wants to Be a Millionaire” is so popular.) The idea was popularized by James Surowiecki and has been discussed by many others, including Daniel Tammet and Cass Sunstein. “Collective intelligence” and crowdsourcing are related concepts. And the “wisdom of crowds” is a legitimate phenomenon – if certain criteria are met. (Not all crowds are wise.)
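The bean-jar effect is easy to demonstrate with a toy simulation. The numbers below are entirely made up; the key (and load-bearing) assumption is that each guesser’s error is independent and unbiased, so individual errors cancel when averaged:

```python
import random

random.seed(42)

TRUE_BEANS = 850   # hypothetical true count of beans in the jar
N_GUESSERS = 1000

# Assumption: each guesser is noisy but unbiased -- errors are
# centered on zero, so they cancel out in the average.
guesses = [random.gauss(TRUE_BEANS, 200) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_BEANS)

# Typical individual error, for comparison.
mean_individual_error = sum(abs(g - TRUE_BEANS) for g in guesses) / len(guesses)

print(f"crowd estimate: {crowd_estimate:.0f} (off by {crowd_error:.0f})")
print(f"average individual error: {mean_individual_error:.0f}")
```

The crowd’s average lands within a few beans of the truth while the typical individual is off by well over a hundred. Note what the sketch quietly assumes: if every guesser shared the same bias (say, everyone systematically lowballs), averaging would faithfully reproduce that shared error rather than cancel it.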
So what if two different crowds’ “wisdom” produces incompatible verdicts?
One way to reconcile the difference is to look at poll methodology. People’s poll gave readers three options: Jon did it, Hailey did it, or it was a random crime. (Are those really the only options?) PopEater’s poll simply asked whether Hailey had anything to do with it – a leading question.
Another way to explain the difference might be to investigate the demographics of PopEater readers vs. People readers – perhaps PopEater readers are more likely to be male, and thus more likely to have sympathy for Jon Gosselin. Who knows.
But of course the best answer here is that neither group has any idea what they are talking about. The “wisdom of crowds” concept only applies if the members of the crowd are slightly more likely to be right than not. So I should give zero weight to these efforts by Big Media to generate meaningless content through polling people on topics they know nothing about. Pretty much a no-brainer, right? But the problem is that a lot of people do think that if you ask 20,000 people any question, they’re more likely to get it right than not – and more importantly, that those 20,000 people have a democratic right to have their opinions heard and weighted equally with so-called “expert opinions”. Why else are reporters always interviewing people like Bob, a hardware salesman in Nebraska, to sound off on global warming and how it’s all Obama’s fault? Remember how Joe the Plumber became a political pundit during the campaign?
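That “slightly more likely to be right than not” condition is the whole ballgame, and it’s easy to see why with a quick simulation in the spirit of the Condorcet jury theorem. The voter counts and accuracies below are illustrative assumptions, not data about any real poll:

```python
import random

random.seed(0)

def majority_accuracy(p, n_voters=1001, n_trials=500):
    """Estimate how often a simple majority of n_voters, each
    independently correct with probability p, reaches the right answer."""
    wins = 0
    for _ in range(n_trials):
        correct_votes = sum(random.random() < p for _ in range(n_voters))
        if correct_votes > n_voters // 2:  # odd n_voters, so no ties
            wins += 1
    return wins / n_trials

# Individual accuracy just above, at, and just below a coin flip.
for p in (0.45, 0.50, 0.55):
    print(f"individual accuracy {p:.2f} -> crowd accuracy {majority_accuracy(p):.2f}")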
Of course this “ask the common man” trope didn’t originate with the internet. But because the internet constantly aggregates opinions in obvious (polls) and nonobvious (Wikipedia) ways, the internet is constantly bombarding us with messages about what the “typical” person believes, and encouraging us to weigh in on topics we should admit we know nothing about. It takes nothing more than a click and thirty seconds of typing to announce that “vaccines cause autism, male pattern baldness, and swine flu, and I’ll be damned before I let my kids be vaccinated for some stupid disease like measles or mumps that doesn’t exist anymore!” If enough people do that, suddenly it starts to look like consensus. And given that we’ve told the public over and over that science is built on consensus, people may start to think an internet consensus of biased non-experts is a valid one.
Tim O’Reilly says, “like Wikipedia, blogging harnesses collective intelligence as a kind of filter. What James Surowiecki calls ‘the wisdom of crowds’ comes into play, and much as PageRank produces better results than analysis of any individual document, the collective attention of the blogosphere selects for value.” I’m willing to grant that this process works pretty well for blog posts – because bloggers, like traditional editors, have reputations that they place on the line when they recommend a link. Good bloggers invest a great deal of time in their work, and bloggers often have relevant subject matter expertise – like the science bloggers here at ScienceBlogs. (The people most highly motivated to blog about science are probably scientists themselves, for whom science seems disproportionately central to their lives.)
But such factors don’t regulate the outcomes of polls, or even comment threads. When people without any expertise on a subject are invited to respond, and the effort necessary to do so is very low, there is no reason to expect their responses to be more accurate than a random guess. Expertise matters. Obviously, if you want to know the total volume of raindrops that will hit you as you dash from your door to the car during a thunderstorm, it would be better to poll a classroom of physics or math students than a classroom of English majors. If you want to know the odds of getting sick from eating holiday ham left out on the buffet overnight, poll microbiologists, not lawyers. I’m not sure who you should poll about Jon Gosselin’s apartment – ideally no one, because who cares? The point is that subject matter expertise matters. Your opinion on the physics of a banana flung out of the space shuttle is simply not as good as an expert’s (present company who are space banana physicists excepted, of course), because they have analytical tools you don’t and are familiar with peer-reviewed literature on the topic.
People hate to hear that experts know more than nonexperts, because it smacks of elitism. Yet we all abide by that principle in daily life: we routinely seek out professionals for their training and experience. Few among us would be brave enough to diagnose our own illnesses AND repair our own cars AND brew our own whiskey AND rewire our houses – although I know one or two people who certainly could! But the point is that expertise is perceived as valuable, or we wouldn’t pay for it. (Of course when it comes to choosing professionals, we’re likely to go to Yelp or some other site reflecting the opinion of crowds – but whose reviews do you give weight to: the reviews written by people who’ve actually hired the professional in question, or those written by people who haven’t? Not all opinions are valid or useful.)
So I leave you with this question, which I recently brought up in a meeting at NSF and got absolutely no traction with at all. When we strive to encourage public participation in science, as in “citizen science” outreach efforts, how do we guard against the misperception that the scientific consensus should reflect the opinions of nonexperts? Telling nonscientists that their input is valuable is good – it gets them invested in the process of doing science, and helps them learn. We should make it clear that science is democratic, because it’s something everyone can learn, no matter their gender or ethnicity or socioeconomic background. Scientists are incredibly diverse people. But we also need to differentiate between the way science works – consensus among experts who are actively testing hypotheses – and the popular conception of a democratic “wisdom of crowds” process for determining the “truth.”
Anti-global warming, anti-vaccine and anti-evolution advocacy groups are already using arguments based in democratic principles: if so many people doubt global warming, or vaccine safety, they say, and if science is really based in consensus, then why won’t scientists listen to the public and admit they might be wrong? The answer, we know, is that scientists have been studying these ideas for a long time, and the popular misconceptions about them do not reflect a significant divide among the subject matter experts on interpreting the data. The scientific consensus is not always right, but it’s more likely to be right than a poll of People readers, and it’s almost certainly what we should rely on in making public policy.* The question is, how do we make that clear, while still welcoming everyone to participate in science?
*Just to clarify: the average citizen’s opinions about what kinds of policies our nation should have, how we should allocate tax dollars, what values and cultural mores our government should reflect, or who the President should be, are legitimate expressions of preferences in a democracy like ours. Many (most?) public policies aren’t based on science. But polls show the public holds science in high regard, and politicians frequently use science to support their political positions. And if a policy decision is supposedly based on science – whether it’s the efficacy of vaccines, or the existence of global warming, or the existence of stem cell lines – it should be based on the actual scientific consensus on the issue, not what the public mistakenly believes, or what they would prefer the scientific consensus to be.