In June, a writer named Jonah Lehrer got busted for recycling material on a blog at the New Yorker. Lehrer, who specialized in writing about the brain, had been writing a blog called The Frontal Cortex for six years at that point; having just been appointed a staff writer at the New Yorker, he moved it to their web site, where he promptly cut and pasted material from old posts, as well as from magazine and newspaper pieces.
At the time, I just thought he was squandering a marvelous opportunity. When I was asked to comment on the situation, I wrote that some of the things Lehrer had done were uncool, while some were fairly harmless. But Lehrer himself acknowledged that what he had done was stupid, lazy, and wrong. So I figured he’d gotten the sort of school detention that wakes you up and keeps you from getting expelled.
Four months later, I’m struck by how wrong I was.
I’m quoted in the latest of a long string of articles about Lehrer’s misdeeds, a feature in this week’s issue of New York by Boris Kachka. Kachka talked to me for a long while, and it’s clear that he talked to a lot of other people–journalists and scientists alike. He’s ended up with the best account I’ve read of this sad, strange story.
A lot of the other stories and commentaries have been twisted to showcase people’s assorted bugaboos. I’ve lost count of how many times people fussed over Lehrer’s fancy jackets and haircut, as if they were tied up in his moral standing. If Lehrer had a mullet instead, it would not diminish his misdeeds. There was a fierce passion driving people to draw lessons from Lehrer’s story–lessons, I suspect, that they had already drawn and for which they were now just looking for evidence to confirm. In a rare misstep, for example, Reuters blogger Felix Salmon declared Lehrer the exemplar of all that is wrong with TED talks: “TED is a hugely successful franchise; its stars, like Jonah Lehrer, are going to continue to percolate into the world of journalism.” In fact, Lehrer has never given a TED talk. When you’re condemning a culture that promotes the distortion of facts to fit an easy story, it’s best not to distort the facts for an easy story.
In his densely reported piece, Kachka rightly sees two major aspects to this story: Lehrer’s own misdeeds and the culture that fostered and rewarded them.
I was willing to cut Lehrer some slack at first, but as the additional evidence came in, I wondered if I was making excuses for him. The breaking point came when I read about how he had warped a story about a memory prodigy, claiming that he had memorized all of Dante’s Inferno instead of just the first few lines. When someone noted the error, Lehrer blamed it on his editor, but kept on using the enhanced version of the story in his own blog and on Radiolab (which later had to correct their podcast). It’s easy to slip up with facts, but we have an obligation to admit when we’re wrong and not make the same mistake again. It would have been bad enough that Lehrer distorted the facts and continued to do so after having the facts pointed out to him. But he was also willing to damage other people’s reputations along the way. That’s when I signed off.
As for the other side of the story–the culture that fostered Lehrer–I appreciate that Kachka avoided silly sweeping generalizations–that all popular writing about neuroscience has become the worst form of self-help, that speaking about science in public is the intellectual equivalent of pole-dancing. Kachka instead reflects on the trouble that arises when a science writer reduces complex science to a glib lesson. He’s right to zero in on Lehrer’s 2010 New Yorker article “The Decline Effect and the Scientific Method” as an example of this error. For years, a lot of scientists and science writers alike have grown concerned that flashy studies often turn out to be wrong. But Lehrer leaped to a flashy conclusion that science itself is hopelessly flawed.
That makes for great copy (29,000 people liked the story on Facebook), for which I’m sure his editors were grateful. But Lehrer himself didn’t believe what he was writing. If scientific studies were fundamentally unreliable, then why did he continue to publish articles and a book full of emphatic claims about how the brain works–all based on those same supposedly unreliable studies?
The reality is more complicated. After Lehrer’s piece came out, the Columbia statistician Andrew Gelman was asked what he thought of it. “My answer is Yes, there is something wrong with the scientific method,” he wrote–adding (and this is crucial)–“if this method is defined as running experiments and doing data analysis in a patternless way and then reporting, as true, results that pass a statistical significance threshold.”
In other words, this is not a matter about which we should simply issue Milan-Kundera-like utterances, as Lehrer does in his article: “Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.” In fact, this is a matter of statistical power, experimental design, posterior Bayesian distributions, and other decidedly unsexy issues (Gelman explains the gory details in this American Scientist article [pdf]).
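Gelman’s point can be made concrete with a toy simulation (my own illustrative sketch, not anything from his article, with made-up numbers for the share of real effects and for statistical power). If only a small fraction of the hypotheses being tested are true, and studies are underpowered, then most results that clear the 0.05 significance threshold are false positives–which is exactly why a literature filtered on significance produces “flashy studies that often turn out to be wrong,” no broken scientific method required:

```python
import random

random.seed(42)

def simulate(n_labs=100_000, true_effect_rate=0.1, power=0.2, alpha=0.05):
    """Toy model: many labs each test one hypothesis.

    Only `true_effect_rate` of hypotheses are real. A real effect is
    detected with probability `power`; a null effect still comes up
    'significant' with probability `alpha`. Returns (true positives,
    false positives) among all significant results.
    """
    true_pos = false_pos = 0
    for _ in range(n_labs):
        is_real = random.random() < true_effect_rate
        if is_real:
            significant = random.random() < power   # correctly detected
        else:
            significant = random.random() < alpha   # false alarm
        if significant:
            if is_real:
                true_pos += 1
            else:
                false_pos += 1
    return true_pos, false_pos

tp, fp = simulate()
print(f"true positives: {tp}, false positives: {fp}")
print(f"share of significant results that are false: {fp / (tp + fp):.0%}")
```

With these (hypothetical) numbers–10% real effects, 20% power–we expect roughly 2,000 true positives but about 4,500 false ones per 100,000 studies: well over half of the “significant” findings are wrong, even though every lab followed the rules.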
Kachka understands there’s no easy way out of this dilemma, quoting Daniel Kahneman, the Nobel-prize-winning, best-selling Princeton behavioral economist: “There’s no way to write a science book well. If you write it for a general audience and you are successful, your academic colleagues will hate you, and if you write it for academics, nobody would want to read it.”
I put it to Kachka in a similar way, referring to writers like Lehrer: “They find some research that seems to tell a compelling story and want to make that the lesson. But the fact is that science is usually a big old mess.”
And the very way we choose to read about science makes it hard to convey that messiness. I will use my own work as an example of that failure.
In the current issue of Discover, I examine electroconvulsive therapy. I had about 1,500 words to write about it, and so I focused on only a single study recently published in the Proceedings of the National Academy of Sciences. I think it’s an important piece of research, because it uses fMRI for the first time to look at what happens to the brain when ECT pulls people out of major depression.
But it’s also true that the study was necessarily small, that the particular method of fMRI they used is very new, that for now the study remains unreplicated, and that there’s a lot of debate in scientific circles (not to mention beyond) about some of the impacts of the treatment.
In the end, I probably oversimplified, leaving people with too much of a feeling that ECT is a perfect cure (it’s not) and an impression that we know exactly how it works (we don’t). But, to paraphrase Kahneman, there’s no way to write a science article well.
Still, the article I wrote was, I believe, the best of my options for discussing the subject. I didn’t have ten thousand words to use to explore its full complexity. I certainly wasn’t going to get many readers if I wrote a scientific journal paper. And waiting for fifty years to see if this research holds up seems like a worse option as well. So I had to fall short. Again. And I will take the criticism that my article triggers and try to do a better job the next time around.
I don’t mean to sound hopelessly fatalistic. Writers can either tackle this dilemma with eyes wide open, or they can look for a way to cut corners and pretend that the dilemma doesn’t exist. And readers can improve things too. When you find yourself captivated by someone talking to you about science in a way that makes you feel like everything’s wonderfully clear and simple (and conforms to your own way of looking at the world), turn away and go look for the big old mess.