How many people are "not everyone"? Some thoughts on scientific debates and smackdowns

You may have heard about the new paper on how people tend to pick friends who carry a similar gene variant. If true, it would be very cool. But in Nature, Amy Maxmen quotes scientists who don’t like the study at all:

“If this was a study looking for shared genes in patients with diabetes, it would not be up to the standards of the field,” says David Altshuler, a geneticist at the Broad Institute in Cambridge. “We set these standards after 10 years of seeing so many irreproducible results in gene-association studies.”

Because most genes have modest effects on behaviour or health, many scientists assume that thousands of SNPs — rather than six — need to be analysed before a correlation to any trait can be confidently made. Geneticists are often hard-pressed to find one SNP in a million that reproducibly correlates with a disease, says Altshuler. “It’s like the team bought six lottery tickets and won the megabucks twice — this is not how things work.”

Stanley Nelson, a human geneticist at the University of California, Los Angeles, agrees, adding: “It certainly is a provocative study — I would have loved to have seen it done with information from the rest of the genome.”

Kudos to Maxmen for digging deep, rather than reprinting a press release. But I found it odd that these critics were relegated to the end of the article, introduced with the following:

But not everyone is convinced.

I find this an annoyingly vague phrase. If a hundred experts read a paper and pass judgment, what does it mean for "not everyone" to be convinced? It could mean that 99 out of 100 love it, or 0 out of 100, or anywhere in between. If the quotations in the article are a representative sample, "not everyone" means "almost no one." In addition to Altshuler and Nelson, Maxmen quoted one other scientist who said it was an interesting study, "assuming it's right." Hardly a ringing endorsement. If almost no one thinks much of the paper, shouldn't that be the lead?

Maxmen faces a challenge for which I know of no simple solution. How does one report research? If it's gone through peer review, is it enough to just explain the results and wait for other scientists to test them? Or does one seek out criticism from an expert in the field, simply to demonstrate that "not everyone" is in agreement? And if you contact a lot of people and they mostly say a paper is no good, can you go further and make their collective judgment the story?

That’s what I did recently when I reported for Slate on arsenic-based life: I got in touch with a bunch of scientists, almost all of whom criticized the paper. Is it true that “not everyone is convinced”? Well, yes, but it’s also true that not everyone can bench press five hundred pounds.

In that case, I stand by my journalistic decision–especially given the emails I’ve since gotten from a number of experts who read the article and agreed with it, as well as the absence of a spirited defense of the paper from someone who’s not a co-author.

But in other cases, the decision may be trickier. It’s easy to find biologists who will criticize papers in evolutionary psychology–see, for example, University of Chicago evolutionary biologist Jerry Coyne rage about some recent stories.

Coyne writes, “You can bet your sweet tuchus that had Carl Zimmer written something like this, it would have included a lot more bet-hedging.”

Leaving tuchuses aside (tuchi?), I read Coyne's post and wonder, how should bets be hedged? In the case of the friend-gene paper and the arsenic paper, the critics were getting into the details of how these kinds of studies are supposed to be done. But it seems that Coyne doesn't think that evolutionary psychology can be done, period–or at least, he thinks the whole field is pretty lousy. [Update: This sentence is wrong. My apologies to Coyne.] And since his post, evolutionary psychologist Robert Kurzban has counter-attacked, observing that Coyne and others hold evolutionary psychology to an unreasonable standard that they do not impose on other areas of research. My instinct in such cases is to write about a debate, rather than a critique. But there's no hard and fast rule about when a story shifts from one to the other. On that, everyone–not just "not everyone"–should agree.