How Research-Driven Organizations Become Thought Leaders

Slate, Science & Research Dead-Ends

Are the big journals obliged to consider publishing replication studies for their highest-impact papers?

If yes, what does that say about researchers’ perceived dependency on those few journals to marshal attention and achieve impact?

Four researchers just published a long article in Slate on how Science refused to send their paper out for review. Boo-hoo, you might think.

But their paper isn’t just any paper: it’s a replication study of another paper published in Science a decade ago, which found that conservatives and liberals perceive threats (such as images of spiders) differently.

That 2008 Science paper got tons of media coverage and changed the way many people think about political differences — as hard-wired in the brain, not culturally shaped and thus malleable or responsive to reason and argument.

The new study’s authors got the cooperation of the 2008 study’s authors for their replication efforts, including the use of the original study’s images. They also used more than four times as many subjects as the original study.

Their findings? “No correlation between physiological reactions to threatening images…and political conservatism—no matter how we looked at the data.”

Science, however, told the authors that “the field has moved on” and that their study was “better suited for a less visible subfield journal.”

Science, not surprisingly, is getting mildly roasted on Twitter for not being very scientific in this case. As the Slate authors put it:

We believe that it is bad policy for journals like Science to publish big, bold ideas and then leave it to subfield journals to publish replications showing that those ideas aren’t so accurate after all. Subfield journals are less visible, meaning the message often fails to reach the broader public. They are also less authoritative, meaning the failed replication will have less of an impact on the field if it is not published by Science. 

I get it. But wait.

The whole idea of research “messages reaching the broader public” seems to rest here on journals such as Science grabbing media attention.

And how does it grab that attention? With studies like the 2008 paper — not with replication of those studies.

The business of replication — getting it done, publishing it, and engaging the media with it — undermines the model of public science engagement through journal publication.

Journalists hate retractions. They will quickly tire of publishing study results when they have to look over their shoulders for the replication. Too many failures to replicate will soon mean no more initial headlines.

And if it’s bad policy for Science not to publish replications of its splashy papers, it’s also bad practice for researchers and research institutions to rely on Science and its ilk to “reach” their key audiences, instead of building their own channels and authority to attract and retain those audiences.

The irony is: Slate isn’t likely to publish another article about how unfair Science is to researchers trying to replicate studies published in that journal.

You get to write that article only once. Then you have to do something else about the problem.
