Rampton on Lancet study

Sheldon Rampton has a nice summary of the reactions to the new Lancet study. He concludes:

Even so, the results of the Lancet study, combined with what we know about the limitations of other attempts to count the dead, suggest that the war in Iraq has already claimed hundreds of thousands rather than tens of thousands of lives.

It is rather striking, moreover, that critics of this research have mostly avoided calling for additional, independent studies that could provide a scientific basis for either confirming or refuting its alarming findings.

Meanwhile, David Kane has started a blog just on the Lancet studies and has written a one-thousand-word post on the single pre-war death from air strikes recorded by Lancet 2.

Tim wrote:

Meanwhile, David Kane has started a blog just on the Lancet studies

[Here](http://lancetiraq.blogspot.com/), if you're interested, but the bottom line on his post is that he's not a statistician.

But what happened when the Washington Post published an article calling for the collection of better data, and explaining in detail why such data is needed?

Tim took offense at a lack of praise for the Lancet study and described it as "more poor reporting by the Washington Post."

Ah, yes -- this article.

Here's what it has to say (all it has to say, actually) about the study's methodology:

The Lancet study relied on a door-to-door survey of Iraqi households in 33 neighborhoods. The surveyors asked for details of deaths in the months before and after the invasion and found a significantly higher death rate after. But the approach was flawed. War is not like a pandemic; it comes in pockets. And the study itself qualified its conclusions, acknowledging that the figure could range enormously between 8,000 and 194,000.

While not as bad as it could have been, that is a shoddy and misleading way to describe the study. Even within space constraints, it is possible to give a general audience some idea of what a probability distribution means, and to explain that confidence intervals are standard practice in statistical reporting. And to imply that cluster sampling is invalid when the effect under study is unevenly distributed is simply wrong. As to "calling for the collection of better data, and explaining in detail why such data is needed," I sure don't see that in the Post article. Perhaps you were thinking of this:

Let's not expect the Pentagon or the president to keep a perfect score. Let's not quibble at the margins of a total that, by any honest admission, remains unfathomable. But if war is too important to be left to the generals, then war's effect on civilians is too important to be left to the pacifists.

Sewall's article appears to be generally well-intentioned, but it did a bad job of informing the public -- the point, I think, of a criticism which mentioned ... what was it, again? Oh yes: "poor reporting."

The Rampton piece is the best summary I've seen yet. The only thing I'd add is that at least some journalists also think the death toll is probably in the hundreds of thousands--Robert Fisk and Nir Rosen.

By Donald Johnson (not verified) on 18 Nov 2006 #permalink

Donald, the only reason that more mainstream journalists aren't in support of the Lancet figures is that, to quote the Glasgow University Media Research Unit, what you read in the mainstream state-corporate media "...Is not a neutral and natural phenomenon but a manufactured product of ideology". In other words, market forces and prevailing attitudes of western elites play a significant role in what the mainstream writes and who writes it. There's nothing vaguely conspiratorial about it, but the fact is that the vast majority of western journalists are sitting where they are and writing what they honestly believe EXACTLY because their views conform closely with those of their employers. The media employs a filtration system which selects against certain views or expressions of opinion that conflict with those of the privileged groups that dominate society and the state. Sure, a few courageous and honest journalists are 'allowed' through the filter to create the impression of objectivity, but for every Fisk, Pilger, Monbiot, Hersh etc., there are hundreds of pundits who defer to the cause of empire in what they write. This may explain why the so-called 'liberal' media in the UK is nothing but a myth. I find it hard to read the Guardian, Independent or Observer, or to watch BBC news, without seriously risking my health.

I suggest that you read 'Guardians of Power', by David Edwards and David Cromwell, or its classic predecessor, 'Manufacturing Consent', by Edward Herman and Noam Chomsky, which demolish the idea of a 'liberal bias' in the mainstream media. You might also check out the frequent postings on the Media Lens web site. Historian Mark Curtis also lays bare this myth in his outstanding book, 'Web of Deceit', which gives a detailed and chilling account of atrocities carried out by successive British governments since the 1940s.

By Jeff Harvey (not verified) on 19 Nov 2006 #permalink

There are a number of systematic problems with Burnham et al. that either increase the uncertainty of the result or inflate the estimated increase in deaths:

Oversampling of More Violent Areas

Violence in Iraq during and after the 2003 invasion has been very unevenly distributed. Two provinces, Anbar and Salah ad Din, with about 10% of Iraq's estimated population, accounted for 46% of coalition deaths. Violent deaths within each governorate were also unevenly distributed. The 2004 Iraq mortality study by Roberts et al. found more than 2/3 of their reported violent deaths within a single cluster out of 33 sampled, in the town of Fallujah in Anbar province. If violent areas are oversampled, then violent deaths will be overestimated. This seems to be what actually happened.

The 2006 study used a UNDP population estimate for mid-year 2004. Unfortunately, at the time the estimate was prepared, the information available to base it on was poor. The last nationwide census was in 1993. Further, there's reason to question the accuracy of the Saddam-era censuses: the ruling party had strong motives for inflating the number of Sunnis at the expense of other groups. When a list of eligible voters was prepared for Iraqi elections using the UN oil-for-food ration system, it indicated that the 2004 estimate (as well as earlier official estimates by the Iraqi government) had significant problems. Assuming that roughly half of the Iraqi population is of voting age, an assumption consistent with the UNDP/ILCS 2004 survey, the December 2005 list of eligible voters indicates that earlier population estimates substantially overestimated the share of the population in the violent Sunni triangle and underestimated the population in the relatively peaceful Kurdish north and less violent parts of the south.

Even if the UNDP data was correct, the 2006 survey had distortions in the allocation of clusters. Anbar province, where the population from the 2004 UNDP estimate might call for 2.4 clusters out of the 47 sampled, received 3, and Wasit, a southern governorate that might have justified 1.7 clusters, received only one.

Further, there has been substantial movement out of and within Iraq since 2005. The UN is now estimating that nearly 100,000 Iraqis a day are leaving the country, and it seems likely that these are overwhelmingly leaving the most violent areas rather than the safest. There is also significant displacement within Iraq. Not all of that movement would change population estimates: in many cases Sunni departures in an area might be balanced by Shiite arrivals, and vice versa. But it seems likely that the internal displacements would also tend to result in population growth in the safer parts of Iraq at the expense of the more dangerous.

Combining these factors could result in a substantial overestimate of violent death rates. If Iraqi violent deaths had a distribution by governorate similar to coalition deaths, and making fairly conservative assumptions about the impact of flight from and violent death in the more dangerous governorates (500,000 dead or fled abroad by mid-year 2006, with the population shift proportionate to reported violence in each governorate), the cluster distribution by governorate used by Burnham et al. would overstate violent deaths by 15%. Sample weighting within each governorate could be biased by an equal or greater amount for the same reason, for a cumulative overestimate of over 30%.
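
A quick check of how the two biases compound (both 15% figures are the assumptions stated above, not measured quantities):

```python
# Two independent ~15% upward biases compound multiplicatively.
# The 15% figures are the assumptions stated above, not measured quantities.
governorate_bias = 0.15   # assumed over-weighting across governorates
within_gov_bias = 0.15    # assumed over-weighting within governorates

combined = (1 + governorate_bias) * (1 + within_gov_bias) - 1
print(f"cumulative overestimate: {combined:.1%}")   # about 32%, i.e. "over 30%"
```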

These figures rely on fairly conservative assumptions about the impact of refugees and violent deaths on population distribution. Including some factor for internal displacement within Iraq, for example, would increase the likely overcount of violent deaths.

The absolute number of estimated deaths doesn't go down as much as the death rate, since using the voter list as a starting point gives a higher number of living Iraqis to multiply by the death rate. However, sample weighting is still a plausible explanation of a substantial overestimate of Iraqi violent deaths.

Failure to Sample Two Governorates

Burnham et al. acknowledge that they did not sample two governorates in 2006, Dahuk in the north and Muthanna in the south, although doing so was part of the original plan. The failure to correct the error is somewhat puzzling, since both were among the least dangerous places in Iraq. With about 5% of Iraq's population, they accounted for only 0.2% of coalition deaths. Since neither was sampled, the authors assumed that there was no increase over the pre-invasion death rate, a decision they described as conservative. But was it? The 2004 Iraq mortality study found that death rates actually declined in the Kurdish north post-invasion. This is plausible, since the Kurdish governorates were engaged in ongoing armed conflict with the Saddam regime prior to the 2003 invasion, but relatively quiet afterwards.
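
A rough illustration of the difference that assumption makes (all of the rates below are placeholders for the sake of arithmetic, not figures from either study):

```python
# Toy comparison of two ways to handle the ~5% of the population in the two
# unsampled governorates.  All rates are illustrative placeholders, not
# numbers from Burnham et al. or Roberts et al.
population = 26_000_000
unsampled_share = 0.05
prewar_rate = 5.5 / 1000           # assumed pre-invasion crude death rate, per year
assumed_postwar_rate = 5.5 / 1000  # the study's assumption: no change
declined_rate = 4.0 / 1000         # alternative: mortality fell, as the 2004 study
                                   # reported for the Kurdish north

def excess_per_year(post_rate):
    return (post_rate - prewar_rate) * population * unsampled_share

diff = excess_per_year(assumed_postwar_rate) - excess_per_year(declined_rate)
print(f"'no change' assumption adds ~{diff:,.0f} excess deaths/year "
      f"relative to the declined-mortality scenario")
```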

Recall Bias

The 2006 study uses pre-invasion mortality as a baseline. This resulted in an unusually long recall period for the study: generally, other retrospective mortality studies have used recall periods measured in months rather than years. There is a danger that respondents will fail to remember the correct date of a prewar death, and not report it because they think it happened prior to the study period. The authors discount this possibility, since they got similar prewar death rates in both the 2006 and the 2004 study. But that isn't necessarily significant because of the broad confidence intervals.

Consider this hypothetical example: actual preinvasion mortality is 8/1000/year. Recall error makes the perfectly sampled reported preinvasion mortality 6/1000/year for a 2004 survey and 5/1000/year for the 2006 survey. This is primarily because of respondents misremembering the date and thinking that some of the deaths happened before the study period. Postwar, death dates are also misremembered, but generally corrected when the death certificate is examined. Sampling error results in a sampled reported death rate of 5.5 per thousand in both surveys. The researcher falsely concludes that recall error isn't a problem because he got the same number both times.
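
Putting the hypothetical into numbers (every figure below comes from the scenario above or is an assumed post-war rate; none of them is an actual survey result):

```python
# Numbers from the hypothetical above, plus an assumed post-war rate --
# none of these are actual survey figures.
true_prewar = 8.0        # actual pre-invasion deaths per 1,000 per year
reported_prewar = 5.5    # pre-invasion rate reported after recall error
postwar = 13.0           # assumed post-invasion rate, taken as accurately recorded

true_excess = postwar - true_prewar           # 5.0 per 1,000 per year
estimated_excess = postwar - reported_prewar  # 7.5 per 1,000 per year
print(f"recall error inflates the excess-mortality estimate by "
      f"{estimated_excess - true_excess:.1f} per 1,000 per year "
      f"({(estimated_excess / true_excess - 1):.0%} relative)")
```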

While recall error can often lead to bias in the other direction because of telescoping of the recalled date, the examination of death certificates would tend to eliminate that problem in most cases.

Neither the unsampled governorates nor recall bias would affect the study's violent death estimate greatly, but they could easily have a significant impact on the study's estimate of changes in total and nonviolent deaths.

It appears to me that you cannot attempt an analysis such as Will McLean's with single examples, but must consider all cases. For example, you might classify provinces as more violent and less violent. What is the over/undersampling across the two classes?

jre,

Did you actually read the whole article, or just the first few paragraphs?

Sewall's main point is this:

Bush said the United States would make every effort to minimize civilian casualties, and military leaders stress their intent to spare Iraqi civilians. Given this emphasis, you'd assume that the military and its civilian overseers would want to know how well they're succeeding. You'd be wrong.

Most of the article discusses how the military needs better data on civilian deaths, in order to learn how to minimize them. She rightly dismisses the Lancet study as not particularly useful for this purpose.

Ragout -
I read the whole thing. You'll note that I quoted the next-to-last paragraph. And I don't want to imply that Sewall does not care about tracking civilian deaths. Just the opposite -- as you point out, she deplores the fact that the US military has not done a credible job of estimating civilian mortality caused by the war.
However, she stops short of calling for the collection of better data, saying instead that we should not "expect the Pentagon or the president to keep a perfect score." And she wrongly dismissed the Lancet study -- the very best report available on the subject -- on the grounds that cluster sampling was a "flawed" approach because "war comes in pockets" and because the wide confidence interval meant that the study's authors had "qualified their conclusions." That is careless reporting, pure and simple.

Instead of complaining that Tim's perfectly valid criticism was made because he "took offense at a lack of praise for the Lancet study", why don't you focus on what we can all agree on -- that better data are needed. Something I think you and I and Sarah Sewall and Tim can all get behind. Something along the lines of:

"We call on the US Department of Defense, the UK Ministry of Defence and the Iraqi Ministry of Health to conduct a careful study of Iraqi civilian mortality for the past five years, and to report the results."

Ron F:

Nearly 100,000 Iraqi refugees a day is indeed nonsense. What I *meant* to write was nearly 100,000 a month. Thank you for catching the error.

Eli Rabett:

You are correct that you can't do the analysis with single examples. I constructed a spreadsheet showing the number of clusters assigned to each governorate by Burnham et al., and then the number of clusters that should be assigned if certain assumptions were true. The assumptions were that the December 2005 voter list was a better guide to population distribution at that time than the 2004 UNDP estimate, that the rates of births and nonviolent deaths since then were as reported by Burnham et al., that 500,000 Iraqis had either left the country or died between that date and the survey, and that the distribution of this population loss was proportionate to coalition deaths by governorate.

I then calculated what the impact of using cluster weighting based on those assumptions rather than those used by Burnham et al. would be, and applied that factor to actual coalition deaths by governorate. The result was that the Burnham et al. weighting by governorate inflated deaths by about 15%.
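
Here is a rough sketch of that calculation in code. The two-governorate inputs are made-up placeholders just to show the mechanics; the actual spreadsheet used every governorate, with the December 2005 voter list, the 2004 UNDP estimate and coalition deaths by governorate as inputs.

```python
# Sketch of the re-weighting argument above.  The shares below are made-up
# placeholders for two hypothetical governorates; coalition deaths are used,
# as in the comment above, as a proxy for where Iraqi violent deaths occurred.
governorates = {
    "violent_gov": {"undp_share": 0.12, "voter_share": 0.10, "coalition_deaths": 300},
    "quieter_gov": {"undp_share": 0.88, "voter_share": 0.90, "coalition_deaths": 50},
}

total_deaths = sum(g["coalition_deaths"] for g in governorates.values())

# If clusters are allocated by the (older) UNDP shares but the true population
# distribution is the voter-list shares, each governorate's deaths are scaled
# by undp_share / voter_share in the survey estimate.
inflation_factor = sum(
    (g["coalition_deaths"] / total_deaths) * (g["undp_share"] / g["voter_share"])
    for g in governorates.values()
)
print(f"estimated violent deaths inflated by ~{inflation_factor - 1:.0%} "
      f"with these placeholder inputs")
```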

1) Thanks to Tim for the link. If anyone has comments or complaints about my analysis, I hope that you will post them here since Deltoid is clearly the central location for such discussion on the Internet.

2) Robert, what evidence do you have that I am "not a statistician"? Just curious.

And one more . . .

3) Thanks to Tim for providing a word count. For those not inclined to click, the short version is that the Lancet authors claim to have surveyed a family with a member killed by pre-war US bombing. Since few such casualties occurred, this is more evidence of unreliability for the underlying data. Since the Lancet team refuses to release the underlying data, there is no way for any outsider to determine how accurate the results are.

the short version is that the Lancet authors claim to have surveyed a family with a member killed by pre-war US bombing. Since few such casualties occurred, this is more evidence of unreliability for the underlying data.

My mind just blew. Extrapolating from one rare event to rubbish the reliability of the entire survey. That's statistically valid, isn't it?

The claim "Since few such casualties occurred, this is more evidence of unreliability for the underlying data" is to any statistician clearly nonsense I'd aver. Or perhaps David is about to stun me by justifying it. One more time anyway David: the way for the doubtful outsider to get a better idea where reality lies would clearly be for the outsider to arrange for another study to be undertaken. Evergreen.

"the Lancet authors claim to have surveyed a family with a member killed by pre-war US bombing. Since few such casualties occurred, this is more evidence of unreliability for the underlying data."

Mr. Kane
As a math prof., let me tell you that if any of my former students humiliated me the way you have just humiliated every math teacher you ever had, then I would kill them (if I had to rise from the dead and eat their brains to do it).

For any of my former students who are considering saying anything that BRAIN DEAD, if you know what TCM stands for, then consider yourself on notice.

Now that I have that out of my system, I feel a teaching moment coming on. Deal a bridge hand (13 cards out of 52). The odds of dealing that particular hand are 1 in 52-choose-13, which is 1 in 635,013,559,600, so you must not have really dealt that hand because it is too unlikely. All possibilities are very unlikely BUT ONE OF THEM HAS TO HAPPEN.
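
(Anyone who wants to check that number can do it in a couple of lines of Python:)

```python
# Number of distinct 13-card bridge hands from a 52-card deck.
import math

hands = math.comb(52, 13)
print(hands)                                          # 635013559600
print(f"{hands - 1:,} to 1 against any particular hand")
```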

You just missed it you know. If there were two such deaths, then you might make the case that the cluster sampling was flawed because it would be very unlikely that both would be in the same sample. But RANDOM means you don't exclude anyone (even the guy who died getting his head stuck in a can of pork and beans).

Of course, if we're concerned that the report of a pre-war death by allied bombing is skewing the figures, we should exclude it as the Lancet excluded the Fallujah cluster -- which would have the effect of reducing the pre-war death rate and therefore increasing the net increase in post-war fatalities.

By Ian Gould (not verified) on 19 Nov 2006 #permalink

elspi,

It seems to me that you are just wrong about this. I understand your point to be: suppose the Lancet study found that someone died from being hit by lightning. Although being hit by lightning is improbable, there are many unusual ways of being killed. If we ran the survey again, it might be pretty likely that someone would die in *some* improbable way, perhaps being hit by a meteor or getting his head stuck in a can of pork and beans.

But in the Lancet survey, there are not many unusual ways to die. All deaths are recorded as falling in one of a few categories. Deaths from lightning, meteors, and tin cans would all be recorded as "non-violent deaths". In fact, being killed by a bomb before the war may be the *only* improbable way to die that would be recorded by the Lancet study. No others spring to mind.

So, a single pre-war bomb death is much stronger evidence for flaws in the Lancet survey than you claim (granted it may still be fairly weak evidence).

Ragout -- you do agree that the US and its allies had been carrying out enforcement of no-fly zones in the north and south of Iraq since the first Iraq war? Do you not recall reading in newspapers that American planes had bombed radar sites and other places that had offered potential threats? I do. Do you think they were manned by robots?

"So, a single pre-war bomb death is much stronger evidence for flaws in the Lancet survey than you claim (granted it may still be fairly weak evidence)."

There were two violent deaths in the pre-war part of the survey: one "air strike" and one "other explosion/ordnance".
What makes you think that the air strike is the unlikely one of those two?
All violent deaths are unlikely in a stable society. Having two violent deaths before the war was a little high, but when you chop a survey into small pieces, the confidence intervals become [0, ∞).
The law of large numbers only applies to large numbers (>>2 for instance).
If we want an accurate picture of the causes of violent deaths before the war, we need a sample size of more than 2.
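
To put a number on that, here is a standard exact Poisson interval for an observed count of 2 -- a generic illustration, not a calculation from the study itself:

```python
# How wide is a confidence interval when you've only observed 2 events?
# Exact (Garwood) 95% Poisson interval for an observed count of k = 2.
from scipy.stats import chi2

k = 2  # pre-war violent deaths observed in the sample
lower = chi2.ppf(0.025, 2 * k) / 2          # ~0.24
upper = chi2.ppf(0.975, 2 * (k + 1)) / 2    # ~7.2
print(f"95% CI for the expected count: [{lower:.2f}, {upper:.2f}]")
# The data are consistent with anything from a fraction of a death to several
# times what was observed -- tiny subsamples tell you very little.
```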

And NO; one unlikely thing in a survey isn't evidence of flaws at all. NO unlikely things would be evidence of flaws (or rather fraud). Unlikely things happen ALL THE TIME. Two air strike deaths would be something else entirely (not a random unlikely thing but the same random unlikely thing TWICE).

Which part of "000000 is a random 6-digit number" don't you understand?

Deaths from lightning, meteors, and tin cans would all be recorded as "non-violent deaths". In fact, being killed by a bomb before the war may be the only improbable way to die that would be recorded by the Lancet study. No others spring to mind.

I don't understand this argument. Are you saying that improbable deaths should be biased toward non-violence, and therefore recording an improbable and violent death is really improbable unless the survey is biased the other way? I think you'd need to quantify that to have a case.

There's little evidence that dying from pre-war violence is so wildly improbable that the survey shouldn't have recorded even a single death. As David Kane himself mentions in the aforementioned blog post, the US military reported the Coalition dropping 606 bombs on 391 targets in the pre-war period. Depending on the targets (the US said they were artillery installations, command centers, cable relay stations, etc. while Iraq said they included civilian buildings and crop fields), casualties could easily be in the high hundreds or low thousands.

By Anton Mates (not verified) on 20 Nov 2006 #permalink

Thank you guthrie; I stand corrected.
Getting bombed in Iraq before the war was not that unlikely. It is the other violent death that was unlikely.

That's alright, Elspi -- I didn't include any data on pre-war bombings, but Anton has provided some, so that kind of makes my point much clearer. Besides, Iraq had a military, and even in Europe, with our modern and well-trained military, there are still ordnance accidents. So I don't find such deaths too unlikely, although don't ask me the odds of finding 2 such deaths in a survey like the Lancet one.

As we've seen throughout the post-war period, Iraq was riddled with hundreds of large, poorly maintained conventional weapons stockpiles.

The most likely cause of the second pre-war violent death is probably an accident at one of those depots.

By Ian Gould (not verified) on 20 Nov 2006 #permalink

elspi,

As I've already explained to you, there weren't 10 million possible responses to the Lancet survey, there were more like 10. Maybe you could make the case for a few dozen.

But now that I've read David Kane's post, I'm puzzled by the objections to it. Kane isn't suggesting that the survey be thrown out, he's suggesting taking a closer look at a few cases. You hardly need very strong evidence for that.

Let's suppose that the Lancet interviewers did record a detailed cause of death, as elspi imagines. And suppose that one of several hundred deaths was from a meteor strike. Wouldn't you take a few minutes to glance at the survey form? For example, to see if there were any other implausible answers, as there might be if a respondent didn't take the survey seriously? Hell, if I was running the survey and one death occurred at age 106, I'd double-check the survey form for typos.

It's hard to believe that Burnham & Roberts didn't double-check these pre-war bombing deaths in some way.

Anton,

600 bombs isn't very many. During the recent Israel-Lebanon conflict, it took about 10 air strikes to kill 1 person.

600 bombs isn't very many. During the recent Israel-Lebanon conflict, it took about 10 air strikes to kill 1 person.

What's your source for that ratio?

Regardless, I'm sure it's possible for several hundred bombs to kill very few people. But David Kane is arguing that it's sufficiently likely as to make it suspicious that Roberts et al. would pick up even one such casualty. That's a claim that needs backing.

Remember that the Israel-Lebanon conflict involved the evacuation of close to a million Lebanese and a quarter-million Israelis, so that most of the inhabitants of areas under heavy bombardment were no longer there to suffer casualties. Furthermore, much of the Israeli bombardment targeted unmanned infrastructure such as roads and bridges, whereas the pre-war US airstrikes targeted command centers, radars and cable relay stations, according to Lt. Gen. Moseley.

By Anton Mates (not verified) on 20 Nov 2006 #permalink

"he's suggesting taking a closer look at a few cases."

I am torn between two responses:

THE STUPID, IT BURNS.

Which part of "the law of LARGE numbers" don't you understand.
"A few" it turns out is not a large number.

elspi, burning up, argues that the law of large numbers means that one should never spot check cases with extreme values. All I can say is that I'm not going to hire elspi to do my survey work.

Anton,

In the recent Israel-Lebanon War, "The IAF ... attacked a total of around 7,000 targets." Since Israel killed about 1,000 people by all means, I conclude that on average it takes about 10 air attacks to kill 1 person. The quote is from here. Also, on a recent Crooked Timber thread brucer cited figures from Afghanistan, Vietnam and other previous US wars that were roughly similar.

*Sigh*. In this thread I see the usual hand-wringing denial of western complicity in mass murder during an illegal war against an enemy that posed no threat whatsoever to the U.S. In every Lancet thread on this site the same supporters of U.S. aggression and preventive wars pop up, claiming that the Lancet study is flawed, the JHU team are incompetent etc. *Sigh*.

What none of them say is how many civilians they reckon have died in Iraq owing to the utter destruction of the country by US-UK forces and the subsequent disappearance of any kind of social infrastructure. One hundred thousand? Two hundred thousand? How many constitutes a supreme crime for which the civilian leaders of the US and UK should be hauled before an international court and charged with crimes against humanity? More importantly, did Wolfowitz, Rumsfeld, Abrams, or any of the other war party in the Pentagon care at all about the prospect of mass carnage when they were planning this little adventure back in the 1990s? Wolfowitz is a student of the late cold war fanatic Paul Nitze (chief author of the infamous NSC Document 68 and the Gaither Report), and his 1992 piece, "Defense Planning Guidance", spells out exactly what the aim of this particular preventive war in Iraq should be about: securing access to and control over the country's oil. Forget all of the other crap that was used to justify it, as Wolfowitz was one of the two chief architects for the war.

So how does the senseless butchery now add up? It's getting harder every day for me to stomach the feeble attempts on Tim's site by those attempting to downplay the 'supreme international crime'.

By Jeff Harvey (not verified) on 21 Nov 2006 #permalink

It's silly to lump all air attacks into one category and assume they all kill x people per attack. It ought to depend on the target and how close people are to the target--if you characterized air attacks by weapons used and targets and civilian population density in the vicinity you could probably come up with a formula.

I'd doubt any figures for Vietnam, because the figures for civilian deaths in Vietnam vary by nearly an order of magnitude. As for Afghanistan, I wonder if we really know how many civilians are dying there.

But anyway, over 100 civilian deaths from 600 bombs dropped isn't unreasonable, and there'd be a roughly 5 percent chance of finding one of those 100 deaths in the Lancet survey, given that there were 13,000 people sampled in Lancet2 and that there were about 25 million people in Iraq in 2002.
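
That 5 percent figure checks out as a back-of-the-envelope number (treating the 100 deaths and the 25 million population as the rough assumptions they are, and ignoring household clustering):

```python
# Rough check of the ~5% figure: chance that at least one of an assumed 100
# pre-war bombing deaths falls among 13,000 sampled people out of ~25 million.
# Ignores household clustering -- a back-of-the-envelope number only.
deaths = 100
sampled = 13_000
population = 25_000_000

p_miss_one = 1 - sampled / population
p_at_least_one = 1 - p_miss_one ** deaths
print(f"{p_at_least_one:.1%}")   # about 5%
```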

By Donald Johnson (not verified) on 21 Nov 2006 #permalink

Jeff:

There's a popular bumper sticker in Seattle:

One death is a sin. Ten thousand is foreign policy.

Jus' sayin. But thank you for helping elucidate the idiotic mindset of the wargasm denialist set - America: f*ck yeah!

Best,

D

In the recent Israel-Lebanon War, "The IAF ... attacked a total of around 7,000 targets." Since Israel killed about 1,000 people by all means, I conclude that on average it takes about 10 air attacks to kill 1 person.

That's reasonable. However, as I mentioned before, Israel chose a different class of targets than the US, and (most importantly) most of the potential victims evacuated the areas of Lebanon which were under attack. After all, Israel destroyed something like 15,000 Lebanese houses and damaged over 100,000 more; the body count wouldn't have been anywhere near as low as it was if the inhabitants hadn't already fled. AFAIK there was no such mass evacuation in pre-war Iraq.

The quote is from here. Also, on a recent Crooked Timber thread brucer cited figures from Afghanistan, Vietnam and other previous US wars that were roughly similar.

I believe his numbers were for civilian deaths only.

By Anton Mates (not verified) on 21 Nov 2006 #permalink

Shorter thread:
Ragout: "Random = Representative"

Me: "No, Random = Random
Representative = not Random"

Ragout: "One of two prewar violent deaths doesn't seem representative to me,
so the sample wasn't random. Therefore logically Iraq has become a model of freedom and democracy and weighs the same as a duck."

Me: "It's a fair cop."

Crowd: "Burn him! Burn him!"

GWB: "Who are you who are so wise in the ways of science?"

Ragout: "I am Ragout, King of the Bible Code."

GWB: "My liege"

elspi,

Thanks for the nice summary of your argument:
"Random = Random Representative = not Random."
And you claim to be a math prof?

Random = Random, Representative = not Random."
Oh no, I missed a comma, you win.

Oh! Now I see, you weren't actually saying that "Random = not Random." That didn't seem any less sensible than any of your other "arguments," so I assumed that you meant what you wrote. It's awfully hard to understand what you're attempting to say when you insist on putting words in my mouth (for example, "representative").

So you are not stupid,
you were just arguing dishonestly.

"Nothing is more contemptible than a man who clings to an argument he knows to be false" Gauss

No, at first I thought that you were babbling incoherently. But your reply showed me that you were merely repeating the same error you've been making all along.

You may not believe it, but your clarification made your point clearer. I take it your clarifications are rarely so successful?