There was a lot of talk on the ScienceBlogs back channel last week about Mike Dunford's post on President Bush's wrangling with Congress over funding the Iraq war. The post attracted a lot of attention, including many comments from readers who claimed Dunford didn't "support the troops." If they had actually read the post, they would have realized that the "troops" include Mike's wife and two brothers.
Bora Zivkovic remembered a post by Chris Clarke arguing that very few readers were willing to read very long blog posts. In the past, Clarke had written several three-part articles, requiring readers to click from page to page. He found that the first page got fifty times as much traffic as the second page, which in turn got fifty times as much traffic as the third.
So that raised the question: if a blog post is on a single page, how many readers bother to read it all the way to the end? Mike's post, at 1,700 words, was comparable in length to the articles Clarke was describing, so it seemed like the ideal opportunity to conduct an experiment.
In order to do it, a little deception was required: we claimed that last week's Casual Friday study was about political attitudes and legal knowledge, when in reality it was nothing more than a reading comprehension test. We "suggested" that readers consult Mike's post before participating in the study. Then half the participants were given a second chance to reread the post before taking the test, and half of those were also told that it really was a test of how closely they read blog posts and were encouraged to read carefully. Here are the results:
There were seven questions about the content of the post, corresponding to information given at progressively later points. If blog readers behaved like readers of multi-part posts, you'd expect them to get less accurate as the test progressed. Indeed, the readers we surprised by linking only to the blog post, without warning them they'd be tested on its content, did fare worse as the test went on: their scores on the last three questions are significantly worse than those of readers who were given a second chance to read the post, as well as those who were aware of the real purpose of the test.
But perhaps these readers simply skipped the post entirely -- in other words, they weren't just bad at the end, they were bad all the way through. Since we asked readers whether they had read the post at all, we could recalculate the scores, throwing out those from readers who skipped it. Here are those results:
The green line shows the average scores for readers who were only given a link to the post. On six of the seven questions, their scores were statistically indistinguishable from the other participants' -- even those who were told they would be tested on the reading. (They scored better on question 2, but only because that question asked how many follow-up posts Dunford had written, which could be answered only by visiting his blog.)
So Cognitive Daily readers, at least, don't appear to pay less attention to the end of a post than the beginning -- unlike the readers Chris Clarke saw when he used to break his articles into several pages.
Thus, this study also makes a very good case for not breaking articles up into separate pages: your readers are very likely to pay attention all the way through to the end of a single page, while Clarke's experience suggests they are very unlikely to do so if they have to click through several new pages.
So why did some commenters on Dunford's post not acknowledge his family's military commitment? They, like the 20 percent of our respondents who got the question about his family wrong, must have just picked the wrong paragraph to skim over.
P.S. Mike Dunford has yet another follow-up to his post here, where he recommends an excellent way for the rest of America to understand the type of sacrifice his family has been making throughout this conflict.
Interesting. I know you're opposed (I think foolishly) to displaying error bars, but for heaven's sake, can't you at least put the Ns on the graph??! That green line could represent one dude for all the reader knows.
Interesting that the red line isn't markedly above the others in that second graph, given that I was taught as an undergrad that knowing you will be tested improves recall. When I participated, I didn't read the post that carefully, and I feel like some of the questions were answerable based on a general idea of the writer's POV... so maybe, for some people, the test didn't measure attentiveness of reading so much as educated guesswork. Maybe add a control group that is just given a short overview of the argument?
So, what, js? You're telling me that this "one dude" answered 63% of question one correctly?
:) Ok, fair enough, Andrew: it could be three dudes, and you could be misreading 66% as 63% (it would also be good practice to include actual numerical values on the graph or in a separate table). The point is that not including sample sizes is just bad science. Sure, these Casual Friday posts aren't exactly scientifically sound anyway (no one is claiming that the poll respondents constitute a probability sample), but reporting sample sizes is just elementary good scientific citizenship.
Ns:
333 total, but a lot dropped out before responding. Among those actually answering the questions:
84 in the link only/read post condition
63 in the aware condition
57 in the second chance condition
122 in the link only condition
Thanks, Dave! I'm curious: were a disproportionate number of the dropouts from the link only group? I'm surprised that such a high percentage of that group read the post, and I can easily imagine participants from that group dropping out once they understood it was a reading comprehension test about something they hadn't read and weren't being given a chance to read.
I took the test last week, and I think I probably got them all correct except the title question... I read both the article and the follow-up article fairly quickly, but in toto.
I think this particular test, while it addresses a fascinating question, is lacking because the questions were really easy. In fact, I think most people would have gotten some of them correct with only a passing knowledge of the article.
A new twist on this experiment might use questions of varying difficulty, all asking readers to recall specific details of the article.
Js:
175 were in the link only condition -- of those, 122 answered the questions, but just 84 read the post. So fewer than 50 percent in the link only condition (84 of 175, or about 48 percent) actually read the post.
I followed the link...but for an Army brat who last year buried her daddy (with full honors, at Arlington National, but now I'm just bragging), the post tore at too many wounds to be able to complete the assignment.
BTW, my cousin's son is now in Bethesda after a suicide bomber shattered his femur. The hurting isn't stopping for military families.
Have you considered a follow-up? "So, we noticed you followed our link..."?
Tree,
Thank you for your comment. Our goal wasn't to trivialize the very real pain you and so many others are experiencing. While sometimes we joke about "casual" readers, there's nothing casual about what we asked you to read.
But I also don't think that it is wrong for psychologists to study (casually or not) how people read and respond to arguments about war. These arguments have everything to do with how wars get started (and stopped).
Don't overlook the sheer logistics of it. Those of you with grown-up offices and technology forget the hundreds of thousands of us, primarily in rural areas, still relying on dial-up connections. I read pretty fast, scan even faster. But an article or link has to be pretty damn compelling to get me to click through to a second page and wait for it to load, knowing that at some point I'm going to have to click back to continue reading the main blog.
For what it's worth, I did read the Dunford article and take the quiz. And here I am on the second page of this article about the results.