Academic Fraud

How Social Scientists, and the Rest of Us, Got Seduced By a Good Story

Neat stories are too good to be true. True stories are too boring to sell.

Almost two years ago, the field of social psychology was rocked by some astounding news: Diederik Stapel, one of its stars, had been faking his research. I don't mean that he'd been subtly altering figures to give better results, or maybe running through a series of increasingly implausible modeling assumptions until they delivered the results he'd expected. I mean he'd apparently given up doing experiments entirely. Instead he imagined experiments, imagined which results would look good, and then sat down at a computer and entered those numbers into a spreadsheet.

The New York Times Magazine has an incredible article this week describing what Stapel did, and how he did it. What's less clear is why he did it, or how he was able to get away with faking results for seven long years. To his credit, Stapel, unlike most such fraudsters, does seem to grasp that what he did was wrong, and that he alone was responsible for his misbehavior. It was a refreshing change from the complaints about witch hunts, bad childhoods, or the weakness of one's "standard operating procedures" that have accompanied most such revelations. When Stapel's parents try to excuse him by criticizing the system, he corrects them:

The unspooling of Stapel’s career has given him what he managed to avoid for much of his life: the experience of failure. On our visit to Stapel’s parents, I watched his discomfort as Rob and Dirkje tried to defend him. “I blame the system,” his father said, steadfast. His argument was that Stapel’s university managers and journal editors should have been watching him more closely.

Stapel shook his head. “Accept that this happened,” he said. He seemed to be talking as much to himself as to his parents. “You cannot say it is because of the system. It is what it is, and you need to accept it.” When Rob and Dirkje kept up their defense, he gave them a weak smile. “You are trying to make the pain go away by saying this is not part of me,” he said. “But what we need to learn is that this happened. I did it. There were many circumstantial things, but I did it.”

Would that we all had the courage to accuse ourselves when we have done wrong. And yet, he doesn't offer any satisfying insight into his decision. This is his best attempt:

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. “It was a quest for aesthetics, for beauty — instead of the truth,” he said. He described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.

This is profoundly unsatisfying. What is the attraction of orderly falsehoods? Nor does it help answer the most interesting question, which is not, after all, "Why did he cheat?" Ambition is a sufficient explanation for that. What is harder to explain is why he thought this would work. How did he think he could cheat, and not get caught? Arguably that question points to a much larger problem than Stapel's malfeasance.

While Stapel's parents are wrong to excuse his cheating, they are right to some extent: the system helped produce this screwed-up result, just as it produced Jonah Lehrer and many of the other fraudsters who have taken us in over the years.

I don't say this to excuse Stapel, mind you: you shouldn't lie and cheat, no matter how great the incentives are to do so. Nonetheless, we should try not to incentivize cheating. It is the burglar who bears the culpability for walking through an unlocked door, not the homeowner. But it is still sensible to lock your doors when you leave the house.

The Times piece strongly suggests that the field of social psychology was leaving the doors wide open and a "Welcome, Burglars!" mat on the front porch. That's all too easy to believe if you've read Nobel laureate Daniel Kahneman's scathing open letter to some of his colleagues about the sloppiness of their research practices. Well-accepted effects are turning out to be hard to replicate outside of the labs of the people who discovered them. Social science research is vulnerable to all manner of statistical shenanigans, and a number of academics seem to be exploiting those vulnerabilities, either accidentally or deliberately.

At the end of November, the universities unveiled their final report at a joint news conference: Stapel had committed fraud in at least 55 of his papers, as well as in 10 Ph.D. dissertations written by his students. The students were not culpable, even though their work was now tarnished. The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.

The adjective “sloppy” seems charitable. Several psychologists I spoke to admitted that each of these more common practices was as deliberate as any of Stapel’s wholesale fabrications. Each was a choice made by the scientist every time he or she came to a fork in the road of experimental research — one way pointing to the truth, however dull and unsatisfying, and the other beckoning the researcher toward a rosier and more notable result that could be patently false or only partly true. What may be most troubling about the research culture the committees describe in their report are the plentiful opportunities and incentives for fraud. “The cookie jar was on the table without a lid” is how Stapel put it to me once. Those who suspect a colleague of fraud may be inclined to keep mum because of the potential costs of whistle-blowing.


Worse, the system seems to be set up to select for people who do. Take a look at Stapel's criteria for creating results:

The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

The system was rewarding a very, very specific thing: novel but intuitively plausible results that told neat stories about human behavior. Stars in that field are people who consistently identify, and then prove, interesting but believable results.

The problem is that reality is usually pretty messy, especially in social psychology, where you tend to be looking for fairly subtle effects. Even a genius will be wrong a lot of the time: he will invest in hypotheses that sound convincing but aren't actually true, or come up with data that is too messy to tell you much one way or another. Sadly, the prestige journals aren't looking to publish "We tested this interesting hypothesis, and boy, the data are just a mess!" They want a story, the clearer, the better.

Academics these days operate under enormous pressure to churn out high volumes of these publications. Hitting those targets again and again is the key to tenure, the full professorship, and, hopefully, the lucrative lectures. Competition for all of those things is fierce, and it's easy to get knocked out at every step. If getting good results is somewhat random, then all those professors are very vulnerable to a string of bad luck. The temptation to make your own luck is thus very high.

Again, I do not excuse those who resort to cheating. But as consumers of these publications, we should be worried, because this system essentially selects for bad data handling. The more you manipulate your data (and there are lots of ways to massage your data so that it shows what you'd like, even without knowing you're doing it), the more likely you are to come up with a publishable result. Peer review acts as something of a check on this, of course. But your peers don't know if, for example, you decided to report only the one time your experiment worked, instead of the seven times it didn't.

It would be much better if we rewarded replication: if journals were filled not only with papers describing novel effects, but also with papers by researchers who replicated someone else's novel effects. But replicating an effect that someone else has found has nowhere near the prestige--or the publication value--of something entirely new. Which means, of course, that it's relatively easy to make up numbers and be sure that no one else will try to check.

Most cases are not as extreme as Stapel's. But if we reward only those who generate interesting results, rather than interesting hypotheses, we are asking for trouble. It is hard to fake good questions, but if the good questions must also have good answers . . . well, good answers are easy to fake. And it seems that this is what the social psychology profession is rewarding.

Nor are they the only ones. My own profession loves nothing more than a neat story, particularly if your thesis is backed up With the Amazing Power of Science. Which is how Jonah Lehrer got so successful. Like millions of other readers, I loved his books: he has a great narrative gift. In hindsight, I should have remembered that life rarely makes a great narrative.

I won't defend Lehrer either, though I can easily imagine the pressure he felt, if not the decision he made under that pressure. He'd signed a contract to write a book on a great topic that hadn't been well-covered before: creativity. Unfortunately, it seems that the reason it hadn't been well covered is that it's an impossible topic with very little solid evidence to discuss. The demand for neat, compelling stories was much greater than the supply of same. Jonah Lehrer filled that unfortunate gap--and that's why we bought his books.

Readers hate ambiguity. Write a blog post laying out the complexities of some issue, arguing that some question is hard, that the data are muddy, and that there are no answers and no obvious person to blame, and I guarantee that at least one of the first five or ten comments will say something like this:

"I can't believe anyone wasted time writing this. What's the point? That life is messy and it's hard to tell who's right sometimes? Way to state the obvious."

But this is not a trivial observation: most of the time, it's the truth. It doesn't seem to be so obvious, either. If it were obvious, we'd show more affinity for the folks with messy, lifelike data, instead of the clear, neat stories that we actually love.

You know where you find a clear, neat story with no extraneous details? Fiction. And if we demand a steady stream of such stories from our journalists and scientists, we shouldn't be entirely surprised when fictions are what we get.