It is a sorry fact of science that many, many of the results reported even in peer-reviewed, published studies are wrong—by some accounts, most are wrong. By dumb luck (also known as statistical errors), something that seems to be associated with something else isn't; something that seems to cause something else doesn't; or something that seems to be the result of something else isn't. Alternatively, a study can fail to find evidence for something that, it later turns out, is indeed true. Both kinds of mistakes—false positives and false negatives—are well known to scientists, who take lots of precautions (not always successfully) to prevent them.
This would be bad enough if the areas where these mistakes cropped up were obscure, unimportant backwaters. But they're not. According to a new study (yes, I understand the irony), the more popular a science topic is, the more likely its findings are to be riddled with errors.
There are two main reasons for this, argue Thomas Pfeiffer of Harvard University and Robert Hoffmann of MIT in a paper in PLoS ONE. The first (which doesn't exactly paint scientists in a good light) is that in really hot, popular fields, there is intense pressure to publish before the other guy does—to, as Pfeiffer and Hoffmann say, " 'manufacture' positive results by ... modifying data or statistical tests." (Not being idiots—or perhaps on the advice of their lawyers—they don't give examples of this, but let me offer the classic ones in which a scientist painted a mouse to make it seem that skin grafts had worked and, of course, the Piltdown fossil hoax.) The second way that a subject's popularity might increase its rate of errors is that when you have many, many researchers looking for something, the likelihood that some will find it through chance alone becomes greater. This is called multiple testing, and it's well known to let false positives creep in: "the more often a hypothesis is tested, the more likely a positive result is obtained and published even if the hypothesis is false," write Pfeiffer and Hoffmann. "In hot research fields one can expect to find some positive finding for almost any claim, while this is not the case in research fields with little competition."
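The multiple-testing effect is easy to put numbers on. If a hypothesis is false but each independent test of it has the conventional 5 percent chance of a spurious positive, the odds that at least one test somewhere comes up "positive" climb fast as more labs pile on. Here's a quick back-of-the-envelope calculation (my own illustrative sketch, not from the paper):

```python
# Probability that at least one of n independent tests of a FALSE
# hypothesis yields a (spurious) positive result, given that each
# single test has a false-positive rate of alpha (conventionally 0.05).
def prob_at_least_one_false_positive(n_tests, alpha=0.05):
    # Chance every test correctly comes back negative is (1 - alpha)^n,
    # so the chance of at least one false positive is the complement.
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 20, 60):
    p = prob_at_least_one_false_positive(n)
    print(f"{n:3d} independent tests -> {p:.0%} chance of a false positive")
```

With 20 groups testing the same wrong idea, the chance that somebody gets a publishable "finding" is already about 64 percent; with 60 groups it's around 95 percent—which is exactly why hot fields can turn up "some positive finding for almost any claim."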
The duo test their idea in the relatively uncontroversial field of protein interactions in yeast, seeing whether the popularity of the proteins thought to be interacting (yes, some proteins are more popular than others: for you protein groupies, actin and myosin win the popularity contest) affects the reliability of the findings. They evaluated about 6,000 published claims on a little more than 4,000 interactions. Long story short: the more popular a protein or its underlying gene (as gauged by how often it's the subject of a published paper), the less reliable are claims about it. "Interactions of highly popular proteins tend to be confirmed ... at much lower frequency than interactions of unpopular proteins," the authors conclude. Results "become less reliable with increasing popularity of the interacting proteins," something they find "disquieting."
Well, yes. They caution that "when interpreting results, the popularity of a research topic has to be taken into account."
I've been searching my memory bank for examples of this. Nominations, anyone? Off the top of my head, there are the many claims of links between a gene and a disease, which have now been shown to be wrong in a disturbing number of cases (as I blogged about a couple of years ago in the specific case of genes and heart disease, and as studies such as this and this have shown). There are also countless claims about significant differences in the brains of men and women. I'll mention only the one about women supposedly having a larger corpus callosum connecting their two cerebral hemispheres, which is dead wrong. There are also numerous claims about genes and IQ, but let me mention one very high-profile claim that seems to have fallen apart. And don't get me started on claims that amyloid plaques—the very definition of a hot topic—cause Alzheimer's disease.
So, reader beware. A scary number of scientific claims are wrong, and if a field is really popular the chance that a reported finding is in error is especially high.