The Lancet is one of the most esteemed medical journals in the world. So when, last October, the Lancet published an article claiming that 650,000 Iraqis had died as a result of the US intervention (a number roughly tenfold higher than the estimates of the UN and other organizations), some people questioned the findings, and not only because of where they stood on the war.

So today the UKTimes has raised questions about the way the data was collected. The paper also asks whether the article was improperly reviewed in order to rush it into print before an American election.

To put the scandal into perspective, one should read an article in last week’s Scientific American magazine, which discussed why so many medical studies turn out to be wrong.

The problem is statistics: either your sample is biased, or you mistake a difference that arose by statistical chance for proof of a real effect, or the data you collected is simply wrong.
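On that second point, it is easy to underestimate how often pure chance produces an "impressive" difference. As a minimal sketch (with invented numbers, nothing from the Lancet paper), here is a Python simulation of many small studies comparing two groups drawn from the same population, so that any difference found is chance alone:

```python
# Toy simulation: two groups drawn from the SAME population, so any
# observed difference between them is pure statistical chance.
import random

random.seed(1)

TRIALS = 10_000   # number of simulated studies
N = 100           # people per group
TRUE_RATE = 0.10  # the real event rate, identical in both groups

big_differences = 0
for _ in range(TRIALS):
    group_a = sum(random.random() < TRUE_RATE for _ in range(N))
    group_b = sum(random.random() < TRUE_RATE for _ in range(N))
    # Call a gap of 5+ percentage points an "impressive" finding
    if abs(group_a - group_b) / N >= 0.05:
        big_differences += 1

print(f"{100 * big_differences / TRIALS:.1f}% of simulated studies found "
      "an 'impressive' difference that was pure chance")
```

On these assumptions, roughly a quarter to a third of the simulated studies show a gap of five percentage points or more, even though there is no real difference to find.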

You know the phrase "garbage in, garbage out"? Well, in science, if the data going in is not accurate, perhaps because the group sampled was not typical, or because you asked the wrong question, you end up with the wrong results.

However, the two most common errors in medical articles are these: first, a lack of coordination between researchers, and second, the tendency to publish only studies that agree with what the researchers expected to find.

So if in medicine you study a population, you have to be aware of factors that bias your study. Is the group only upper-middle-class white males? Inner-city poor people? Are blacks, whites, gays, straights, fat people, the elderly, smokers, or foreigners overrepresented in the sample? If so, you might not be able to assume the results are representative of the general population. And then there is error in collecting the data. Often people answer polls with what they think you want to hear rather than what they actually do (a lot more people go to church in polls than in reality, for example). And if the data is "too perfect", you need to be suspicious: is it perfect because it was "cherry picked" and the inconvenient data removed?
In the article about Iraqi deaths, the figures were so much higher than other estimates that the authors should have questioned whether the data was collected correctly.

And indeed, the UKTimes notes that there was a sampling error: the samples were taken from major streets. Since car bombs tend to go off on major streets, the death rate in the sample would be higher than in the general population.
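As a toy illustration (invented rates, not the study's actual data), a short Python sketch shows how sampling only main-street households inflates the estimated death rate when the violence clusters on main streets:

```python
# Toy sketch of main-street sampling bias: hypothetical numbers only.
import random

random.seed(1)

# Assume 20% of households are on main streets, where the death rate
# is taken to be three times higher than on side streets.
households = (
    [("main", 0.06)] * 2_000 +   # main-street households, 6% death rate
    [("side", 0.02)] * 8_000     # side-street households, 2% death rate
)

deaths = [(street, random.random() < rate) for street, rate in households]

true_rate = sum(died for _, died in deaths) / len(deaths)
main_only = [died for street, died in deaths if street == "main"]
biased_rate = sum(main_only) / len(main_only)

print(f"true population death rate: {true_rate:.1%}")
print(f"main-street-only estimate:  {biased_rate:.1%}")
```

Sampling the main streets alone roughly doubles the apparent death rate here; the exact inflation depends entirely on the assumed numbers, but the direction of the bias does not.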
Another expert pointed out that the study claimed each team did 40 interviews a day, with each interview taking 20 minutes, and she questioned how anyone could do "continuous interviewing for ten hours despite 55C heat, this allows 15 minutes per interview, including walking between households, obtaining informed consent and death certificates."
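To spell out the arithmetic behind her objection, using only the figures quoted above:

40 interviews × 20 minutes each = 800 minutes, i.e. more than 13 hours of interviewing alone;
10 hours of fieldwork = 600 minutes, and 600 ÷ 40 = 15 minutes per interview, travel and paperwork included.

The study's claimed pace does not fit inside its own working day.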

Indeed. Having lived in the third world, I know that the customs of greeting and socializing make data gathering inefficient, so there is a danger that one might just fudge the data a bit in order not to disappoint one's employer. But I have a further question: were those doing the interviews Sunni or Shia? Did they or their relatives work for Saddam Hussein's government? If so, in what capacity?

Once one gets past the probability of bad data, there were other mistakes in the article that call into question the professional care of the authors, mistakes that fall into the "careless" category: using the terms "casualties" and "deaths" interchangeably (traditionally, casualties means the dead plus the injured), and badly drawn graphs.

Finally, other data, such as the decrease in child mortality and the failure to find a correspondingly huge increase in hospital-treated wounded, should have made the authors suspicious that their figures were flawed.

All of this has implications that go far beyond the arguments for or against the war in Iraq.

For science is supposed to be about finding facts.

This leads us back to the Scientific American article, which notes that most studies later proven false fail because of a lack of coordination between researchers (in this case, between the ivory-tower authors and the field workers paid to complete an impossible number of interviews) and because those writing the paper, along with the editor who rushed the study into print without sufficient review, were all too eager to publish it in time to influence an American election.

The politicization of science has no place in scientific journals.

————————————–

Nancy Reyes is a retired physician living in the Philippines. Her webpage is Finest Kind Clinic and Fishmarket, and she writes longer medical essays on Hey doc Xanga Blog. She has published one article and done two other research projects that were not published.
