paradigm shift: fact-checking (journalism) vs debugging (programming)

If you've been following the Jared Diamond/New Yorker controversy, or my ongoing posts on journalism vs. blogging (here, here, here, here, here), you might be intrigued by this conversation about the culture of fact-checking in journalism, between journalism professor Jay Rosen and programmer Dave Winer, in their podcast series Rebooting the News. Consider this riddle: how is fact-checking in journalism like (or unlike) debugging a computer program?

Here's Rosen's take on it:

One of the features of a rebooted news system would actually be borrowed from the tech world. And it's the notion of bug catching, which is a very useful thing that programmers regard as normal. 'If you help us catch a bug -- if you point it out -- that's good, because it helps us make the program better.' There's no way to catch all the bugs before you release a piece of software. You need users to help you out. And for some reason, which we could talk about, that attitude has never been part of professional journalism.

Even though there are such things as corrections, and they do occasionally appear, it's actually more of a problem when you point out a bug in journalism, than a good thing. And I regard this as a defect in the culture of the profession.

Rosen isn't talking about the Jared Diamond case - he's talking about Maureen Dowd lifting a line from a blogger at Talking Points Memo (see here for Slate's summary). Winer notes that the tech community doesn't take the discovery of bugs with perfect grace, but it is at least seen as a normal, necessary process. In contrast, when a mistake is caught in journalism - particularly when it is caught by a blogger, as happened in the Dowd case - the reaction is often extreme defensiveness. Here's what Scott Rosenberg said about the New York Times' reaction to the Dowd situation:

As always, the problem here isn't the actual incident, which is hardly earth-shattering; it's the personal and institutional instinct to circle the wagons, which here has made it look like Dowd and the Times care more about preserving their reputations than leveling with their readers.

The "we stand by our story (or writer)" reflex is an old one that news organizations developed in a different era; it serves them poorly today. The reflex ought to change to a more cautious and open sequence of, first, "we'll get back to you" and then "here's exactly what happened."

Back on Rebooting the News, Rosen expanded on the differences between traditional and online news:

When something happens that shows that there's a rupture or a tear in the facade, the smooth facade of professional control, it's a big problem, whereas in the culture of blogging, where people are actually a little less pretentious about what they are doing, or less persuaded of the authority of it. . . it's easier to say "oh, I got that wrong" and fix it. So I really think that it's related to ideas about how you present authority. And there is no doubt that the online world and journalism moving online presents a challenge because it forces journalists to kind of reconfigure their authority, because they're not able to present that same facade.

Isn't this precisely what traditional media are struggling with? Instead of seeing reader participation and criticism as contributing toward a better, "truthier" outcome, they tend to see it as an uncontrolled threat to their authority - and thus to the cachet, reputation, and prestige of venerable mainstream outlets, with which blogs still can't quite compete. Embracing a model in which public criticism is welcomed may not be the best financial move for mainstream media when the boundaries between traditional outlets and the online free-for-all are already blurring. On the other hand, it may be the only way they can maintain their credibility. Who knows.

It's also interesting to compare engineering and journalism with science: how do scientists react when "bugs" are found in their research? The culture of science (hopefully) values the accuracy of the information over the reputation of any individual researcher. While honest mistakes are forgiven - artifacts, etc. - deliberate fraud is usually not. But would you say that the scientific community welcomes debugging by outsiders to a given field? I'm not sure I'd go that far. . .

Meanwhile, I opened my May 18 New Yorker to find a humorous poem by Ian Frazier. From Canto VII:

Look, I am turning forty, all right?

Let's just leave it at that.

Critics and people in the media who would ruin a celebration with this kind of "gotcha" behavior make me sick.

If you still doubt me,

Please be assured this magazine has a rigorous policy of fact checking,

And all the information in this poem has been checked,

And directly verified with me.

Enough said, really. :)

I first learned of Rebooting the News through the excellent Nieman Journalism Lab blog.

You heard about it from Nieman Journalism Lab? Not from me, on Twitter, blog and FriendFeed? From Day 1?

Sorry, Bora, but yeah - I refuse to engage in friendfeed, so it wasn't until this last podcast which they have on their new website that I actually listened. And I really can't keep up with Twitter. I don't know how you do it. I think I may have seen it on your blog but I am so behind on reading posts, I admit I have no idea what you have blogged about for the past week or so, or shall I call it "the week during which Scienceblogs was completely unusable and farked." :P

LOL - I was joking, of course. Nobody can follow everything!

Hm... perhaps also a triage system for errors, akin to that for bugs/patches? Advisory (a misspelled name or similar detail not affecting the substance of the story), Important (plagiarism of content not affecting the information conveyed), Critical (major factual errors, such as in George Will's global warming column)?

Diamond would seem to be between I&C....
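
Riffing on that triage idea: here is a minimal, purely hypothetical sketch in Python (the severity names and the toy triage() rule are my own invention for illustration, not anything a newsroom or bug tracker actually uses) of how those three levels might be modeled:

    from enum import Enum

    class ErrorSeverity(Enum):
        # Hypothetical triage levels, mirroring bug-severity tiers
        ADVISORY = 1   # e.g. a misspelled name; substance of the story unaffected
        IMPORTANT = 2  # e.g. plagiarized phrasing; the information conveyed is still accurate
        CRITICAL = 3   # e.g. a major factual error that changes what the story claims

    def triage(affects_substance: bool, original_wording: bool) -> ErrorSeverity:
        # Toy rule of thumb, not a real newsroom policy:
        # factual damage outranks everything; copied-but-accurate text outranks typos
        if affects_substance:
            return ErrorSeverity.CRITICAL
        if not original_wording:
            return ErrorSeverity.IMPORTANT
        return ErrorSeverity.ADVISORY

    # The Dowd case: a lifted line, but the facts of the column stood
    print(triage(affects_substance=False, original_wording=False))  # ErrorSeverity.IMPORTANT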

A nit: George Will's writings are analogous to virus programs. Sure, they too have bugs and for that we're generally amused and grateful. But programmers aren't interested in repairing the virus program.

By Matthew Platte on 31 May 2009

The problem with finding an error in journalism is that no matter how fast you find it, it's too late. We the readers, watchers, absorbers, have already processed it and moved on, making our judgments, associations, and choices from the misinformation. No one reads the "Corrections" column and goes back to rewrite their memories. The "suspect involved in the bombing" or "the suspected rapist" may as well be convicted before the trial, if the media makes it sound convincing, evidence be damned. We have a 15-second accumulator in our brain; beyond that, it's imprinted, and it takes a drastic shock to reverse the outcome.
The last administration made full use of this shortcoming.
That is why it is imperative that the news be accurate, and WRITTEN SUCCINCTLY, before it is published. At least in programming, once you find the bug, you can fiddle with it, try it, see if it blows up, and try it again, even before anyone sees how poorly it was written.
- T.

I don't think the software analogy really works as presented.

Even though bugs occur in software, they aren't _supposed_ to occur in released software. That's what alphas and betas are for: they allow real users to use a product in a real environment on the understanding that there may be bugs. The bugs are gradually fixed, and when things seem stable enough, the product is released.

Where you could make the analogy work is in the pre-alpha phases. Most software companies have some group of testers who dig through their product and look for problems. They do the basic due diligence to make sure that the product mostly works before clients ever get to see it. You could argue that editors and reviewers are the in-house testers. They (should) dig through the assertions made in a story, pick a handful, and verify them.

If we're talking about rebooting journalism, a peer review process may be a better model to follow. Editors are relegated to checking grammar, readability, and the technical stuff. Peer reviewers would be something like the beta-testers in the software process: they are domain experts who can judge whether a set of assertions is sane.

Of course both of those methods slow down the publication cycle, which news outlets wouldn't like.

Bugs are absolute inaccuracies, usually hidden by untested dependencies.

Journalistic errors are usually much more obvious, but only if you are already knowledgeable about the subject. Outside of fields like science, however, it's less clear cut - particularly where the magic quotes (coding pun unintended, btw) are introduced, and believed truths shade into believed opinions.

The Quality Assurance approach applied to journalists and coders is very different - both in terms of up-front direction and post production.

This is very interesting. A hypothesis I am working on regarding autism spectrum disorders is that it is a trade-off between a "theory of mind" and a "theory of reality".

The way I define them, a "theory of mind" is the cognitive structures that translate a mental concept into the data stream of language (and back). The only way that individuals can communicate is by the first individual having a mental concept, translating it into a data stream, and transferring the data stream, with the second individual converting it back into a mental concept. The two individuals must share a "theory of mind" to be able to communicate. A "theory of mind" is only useful if it matches the "theory of mind" of the person you are trying to communicate with. A "theory of reality" is only useful if it actually corresponds with reality.

A "theory of mind" can be fixed and ideally never changes. A "theory of reality" must change whenever it is found to be in error. I think the "theory of mind" gets "fixed" around the age when children lose the ability to develop a Creole language (a language synthesized from multiple pidgin languages). If a "theory of reality" gets "fixed", then the individual loses the ability to discard wrong ideas and learn new ones that contradict earlier ideas. Such individuals can't adapt to a new scientific paradigm. They may be able to do science, but only what Thomas Kuhn calls "normal science" within the scientific paradigm that they already have.

The compulsion to be correct, to conform to peer pressure and to fit within the group is what drives a cohort of children to develop a single well-formed language, a Creole, from the pidgin languages of their parents. I think this same compulsion drives MSM and people-oriented individuals (i.e. politicians) to "go along to get along". If you are thinking with your "theory of mind", people that don't match can't be understood or dealt with. This is the "reality" that the "reality makers" in the Bush Administration believed they were generating. It is actually a mirage: those with the shared "theory of mind" can't perceive ideas that cannot be represented by their "theory of mind" and mapped into their cognitive structures. They think they are making a reality, but actually they are constricting their ability to perceive anything other than their own conceptualization. So long as they can constrict everyone else to thinking only within what they conceive of as reality, they will be on top.

The problem with the analogy of error correction or finding bugs is that the "bugs" in the two different systems are opposites of each other. If you find someone whose "theory of mind" doesn't match your own, the way to fix that bug is to impose your own "theory of mind" onto them. This is essential for top-down control, as in political systems, religious systems, every system of power and privilege, as in the Kyriarchy.

In a "theory of reality" system, errors do need to be fixed when they are found. But that is only necessary for individuals who live by their own "theory of reality". If you live by your "theory of mind" (by being able to exploit those in the Kyriarchy beneath you), then you need to change the "theory of mind" of everyone else.

If you want your "theory of reality" to be correct, you have to make sure yourself that it is, by checking what you allow to be put into it.

I expand on this in the context of autism spectrum disorders on my blog.