iPods are (insignificantly) more Popular than Beer


A poll of 1,200 undergrads at 100 colleges in the United States found that 73% of the students think iPods are "in". One-tenth of all old people know that "in" means "hip". Half of all old people think "hip" means "the thing I just got replaced". Drinking beer and stalking Facebook tied for the second most "in" thing -- scoring affirmative amongst "71%" of the students. Sorry, got a little too aggressive with the quotes; I promise it won't happen again. Given my infatuation with alcohol, I figured this problem needed to be addressed. By problem, I mean the 29% who don't think beer is the shit.

Please bear with me as I take a diversion into nerdiness (some will wonder whether I really need to divert from anything to approach nerdiness). The article notes, "The margin of error is plus or minus 2.3 percentage points." I've played around with different values, and I can't figure out what the hell they are referring to as the "margin of error". (Those quotes were necessary.) The 95% confidence intervals for the two questions, iPods and beer, are +/-2.5% and +/-2.6%, respectively. Did they just divide one side of the C.I. in half to get the margin of error?
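For anyone who wants to check the arithmetic, those intervals come from the usual normal-approximation (Wald) formula. A minimal sketch, assuming a simple random sample of n = 1200:

```python
import math

def wald_margin(p, n, z=1.96):
    """Half-width of the normal-approximation (Wald) CI for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1200
ipod = wald_margin(0.73, n)  # iPod question
beer = wald_margin(0.71, n)  # beer question
print(round(ipod * 100, 1), round(beer * 100, 1))  # prints 2.5 2.6
```

Neither one is anywhere near 2.3.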

I also remembered (from intro stats) that many surveys assume that the probability of success is 0.5 (rather than the observed values of 0.73 and 0.71) to maximize the error (making any conclusions highly conservative). If we do that, we get a 95% C.I. of +/-2.8%. Once again, there is no obvious way to get from that C.I. to the reported 2.3% margin of error. Anyone have any clue what the hell the margin of error means? Drop a note in the comments if you've got an idea.
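Plugging p = 0.5 into the same Wald formula shows where that conservative +/-2.8% comes from (same assumed n):

```python
import math

n, z95 = 1200, 1.96
# p = 0.5 maximizes p*(1-p), giving the most conservative (widest) margin
conservative = z95 * math.sqrt(0.5 * 0.5 / n)
print(round(conservative * 100, 1))  # prints 2.8 -- still not 2.3
```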

It's important to note that iPods are not significantly more popular than beer (p=0.8). Thank god. A significant difference would provide irrefutable evidence that college students are absolute tools. Please follow that link. It was the number one Google Image result for the search string beer+ipod.
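For what it's worth, I don't know what test produced that p=0.8. A naive two-proportion z-test -- which wrongly treats the two questions as independent samples, since the same 1,200 students answered both -- reaches the same verdict, just with a different p-value:

```python
import math

n, p1, p2 = 1200, 0.73, 0.71
pooled = (p1 + p2) / 2
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p1 - p2) / se                             # z is about 1.09
p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal p-value
print(z, p_value)  # p is well above 0.05: not significant
```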


Well, if you calculate a 90% confidence interval for a .5 proportion (N=1200) you end up getting a margin of error of 2.37%.
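That checks out against the standard normal-approximation formula, using the two-sided 90% critical value (z = 1.645):

```python
import math

n = 1200
z90 = 1.645  # two-sided 90% critical value
margin = z90 * math.sqrt(0.5 * 0.5 / n)  # conservative p = 0.5
print(round(margin * 100, 2))  # prints 2.37
```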

I'm as baffled as you are about how they got that number. I would have thought that maybe they were adjusting for a stratified and/or clustered sample, but such adjustments should have increased the margin of error, not shrunk it (the traditional formula taught in stats classes assumes a simple random sample and would underestimate the standard error for non-SRSs).

Thanks for the interesting post; it's a great example of how people ignore statistical significance to push an interesting story.