Tuesday, the British Medical Journal published the second of its three-part series claiming that the 1998 study in The Lancet that sparked fears that the measles-mumps-rubella vaccine could cause autism was the result of deliberate fraud and profiteering. It’s far from the first time Andrew Wakefield, the paper’s lead author, had been accused of wrongdoing: Just last year, the U.K.’s General Medical Council ruled that Wakefield’s representations were “dishonest and irresponsible” and that he’d displayed a “callous disregard” for the suffering of his test subjects. (As a result, Wakefield lost his right to practice medicine.) It’s also not the first time claims about the dangers of the MMR vaccine have been shown to be unsupported by reliable evidence: For a decade now, comprehensive studies conducted by researchers around the world have supported the vaccine’s long safety record.
But even before a British investigative journalist named Brian Deer uncovered evidence of Wakefield's malfeasance in 2004, and even before the numerous studies that resulted from Wakefield's claims had been completed, there were ample reasons to regard the results from that initial Lancet study with skepticism. (Deer is also the author of the new BMJ series.) If the press had done its research at the time, or if it had not fallen for the canard that "on the one hand, on the other hand" reporting is equivalent to objectivity, a worldwide vaccine scare that continues to have repercussions today might have been averted.
On February 26, 1998, journalists from London's dailies were invited to a press conference at the Royal Free Hospital. The occasion was the publication of a dense academic paper in The Lancet that speculated about the possibility that the MMR vaccine, gut disorders, and autism were related.
Knowing that the paper's findings would be controversial, the five experts who addressed the media agreed beforehand that they'd deliver one overarching message: Further research needed to be done before any conclusions could be drawn, and in the meantime, children should continue to receive the MMR vaccine. Once the tape recorders began rolling, however, Wakefield went dramatically off-script: "With the debate over MMR that has started," he said, neatly eliding the fact that he was, at that very moment, the person responsible for igniting the debate, "I cannot support the continued use of the three vaccines given together. We need to know what the role of gut inflammation is in autism. … My concerns are that one more case of this is too many and that we put children at no greater risk if we dissociated those vaccines into three, but we may be averting the possibility of this problem."
From the outset, there were indications that Wakefield's rhetoric did not match the reality. After an initial peer review raised questions about the quality of Wakefield's research and the soundness of his reasoning, the editor of The Lancet demanded the paper be rewritten and slapped an "Early Report" label above the title and on the header of each page. He took the even more unusual step of asking Robert Chen and Frank DeStefano, two American vaccine specialists at the Centers for Disease Control, to prepare an evaluation of Wakefield's paper that would appear in print. "Usually, when they publish a commentary, it's to extol the study, or show how it's advanced the field," DeStefano says. That was obviously not the case here. When he first read the paper, DeStefano says, his reaction was, "There really didn't seem to be that much there. It was kind of like, Why were they publishing the article?"
You did not need to have advanced scientific training to share DeStefano's confusion. Broadly speaking, there are three ways scientists collect data to test new theories. The best possible method is a randomized clinical trial, in which researchers take a sampling of a population and randomly test their hypothesis on one half while leaving the other half untouched. By engineering the test population, it is possible to control for other potentially confounding factors.
Unfortunately, randomized clinical trials are not always feasible. Sometimes this is for ethical reasons: You can't determine what quantity of a given substance is toxic in humans by administering doses that could prove fatal. Other times, it's because of logistical problems: It's impossible to test the lifelong effects of living in a given environment by transplanting half the population. The second-best option is a case-control study, in which investigators compare a group that has naturally been exposed to the factor under examination with a similar group that has not.
The least convincing data come from the type of study Wakefield had conducted: a simple case series. Generally speaking, case series are starting points for new hypotheses—and most of the time, what at first blush looks to be significant is nothing more than the result of the random nature of the universe.
To understand why a case series is a tenuous place to hang your hat, take the example of gender. Even though the population as a whole is split almost evenly between boys and girls, there are countless examples of individual families with all boys or all girls, and there are many more examples of families where a series of children born in a row are of the same sex. Now, imagine that an alien is sent to Earth to learn about what types of human offspring are born to parents living in different states. The first couple he meets, Alexander Baldwin and Carolyn Newcomb of New York, has four children: Alec, Daniel, William, and Stephen. Based on that set of data—which is the equivalent of a single case series—the alien would assume that all Earthling children born in New York were boys (and that they all were actors with a penchant for appearing in the tabloids). That conclusion would, of course, be incorrect—but the only way to realize that would be to collect more data.
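The arithmetic behind the alien's mistake is worth spelling out: if boys and girls are equally likely, any one four-child family has a (1/2)^4 = 1/16 chance of being all boys, so in a large population, all-boy families turn up constantly by chance alone. A quick simulation makes the point (the 10,000-family population is an illustrative assumption, not data from the article):

```python
import random

random.seed(0)

# Simulate 10,000 four-child families; each child is an even coin flip.
families = [[random.choice("BG") for _ in range(4)] for _ in range(10_000)]

# Count families where all four children are boys.
all_boys = sum(1 for f in families if all(child == "B" for child in f))

# Expected fraction is (1/2)**4 = 1/16, i.e. roughly 625 of 10,000 families.
print(f"All-boy families: {all_boys} of 10,000 (expected ~{10_000 // 16})")
```

An observer who happened to knock on one of those several hundred doors first, and stopped there, would draw exactly the alien's conclusion; only collecting more families reveals the underlying 50/50 split. That is the weakness of a case series in miniature.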
As Chen and DeStefano demonstrated with their 839-word evisceration, the shortcomings of the Lancet paper went far beyond the limitations of the way its data had been collected. There were overarching problems with its entire premise: Hundreds of millions of children had received the measles vaccine since it was first introduced, and the vast majority of them had no chronic bowel or behavioral problems. What's more, despite hypothesizing that the MMR vaccine led to IBD, which in turn led to autism, in most of the cases Wakefield cited, the behavioral changes preceded the bowel problems. There were methodological problems, the most glaring of which was selection bias: The parents who came to Wakefield, who was not a pediatrician and had never been clinically trained to work with children, did so because he was known as someone interested in connecting the MMR vaccine with inflammatory bowel disease. There were concerns about the reliability of the paper's data: In the three years since Wakefield first reported finding the measles virus in patients with IBD, "other investigators using more sensitive and specific assays, have not been able to reproduce these findings."
Finally, and most damningly, there was "no report of detection of vaccine viruses in the bowel, brain, or any other tissue of the patients in Wakefield's series." The entire report wasn't built on a house of cards—there wasn't any house to begin with. Several years later, when the dean of research at the London School of Medicine called the study "probably the worst paper that's ever been published in the history of [The Lancet]," he was merely verbalizing what many scientists had thought all along.
Seth Mnookin is the author of the just-released The Panic Virus (Simon & Schuster), from which this essay is adapted. Blog posts and other updates can be found at http://thepanicvirus.com and you can follow him on Twitter at @sethmnookin.