John Timmer of Ars Technica recently announced that Ars had established a formal policy on how it reported retracted papers. The announcement has weight: Timmer (Ars’ senior science editor) is highly respected as one of the most rigorous, hype-resistant science reporters on the planet.
But will the policy really do any good? I don’t think so — not just because getting people to stop believing something is very, very hard, but because it doesn’t attack the root cause of the problem: breathlessly reporting papers as more than they are in the first place.
Is It Enough to Slap “PAPER RETRACTED” on a Headline?
Here’s what Ars’ new policy looks like in action for an article reporting a paper that was subsequently found to be based in part on fraudulent data:
Ars added “PAPER RETRACTED” to the headline and a prominent note in a grey box under the photo explaining the retraction. That’s it.
For another kind of retraction — a more innocent one based on faulty conclusions reached because of the bias in analytic software the authors used — Ars took the same tack: adding “PAPER RETRACTED” in the headline and an explanatory grey box. Nothing else.
This wasn’t enough for some commenters on the article announcing the new policy — I’ll let commenter GS sum up their objections:
The way I would handle a retracted paper at Ars: Add documentation of the retraction to the top of the original article, with the reasons, author(s) position if available, a link to the original retraction notice, and the events leading up to it as much as available. The headline: Retracted Paper: (original title). The modified article needs to be forced into the stream of articles as of the modification date since it documents a new event. Not all of this is obvious from the article.
The point seems compelling: the article is still there, still indexable by search engines, and a skimming or non-savvy reader might very well miss the retraction flags and still think the paper was valid.
Here’s Timmer’s response in the comments to GS:
A number of people have suggested some variant on this, so I’m going to explain why we’re not doing it.
A number of studies have indicated that, when people are provided with information that shows common beliefs are false, they often actually end up having their belief in the original falsehood strengthened. This is in keeping with what we know about memory consolidation, which indicates that recalling a memory is a necessary part of consolidating it. So, rerunning the exact same article again, no matter how you flag it, runs the risk of doing the opposite of what you want: reminding people of information we now know is wrong.
Here, we’re hoping that providing information in a completely new context will make it easier for people’s memories to hang on to the idea that information was in error.
Why Journalism Doesn’t Like Retractions
One problem with this logic, of course (aside from the fact that it depends completely on journals to retract a paper, which as we’ve seen can be challenging for many big ones): the article above doesn’t provide information in “a completely new context.”
Saying that “rerunning the same article — no matter how it’s flagged — is worse than flagging the article but not rerunning it” doesn’t make sense. The erroneous reporting and information are still there for anyone to turn up in search; Ars just isn’t posting them afresh.
Which suggests another issue with Timmer’s explanation: it’s not entirely honest about the full context of Ars’ formal policy, which nests inside Ars’ conventional journalism business model.
Journalism doesn’t want to “rerun” old stories just because the underlying paper has been retracted. It doesn’t want to “rerun” old stories because they’re old, and journalism is in the business of new. “New” with regard to a paper might be a Twitter dustup about fraud or methods. If a retraction is part of that story, I’d bet Ars would make room to cover it somehow.
When Science Is Just Papers
I’m not condemning Ars’ retraction policy. If we take seriously the research showing that repetition (even for the purposes of debunking) reinforces erroneous information, we have to go into retraction exercises with seriously limited expectations about their effectiveness.
I am condemning Ars (and almost every other science media outlet, along with the big journals) for their complete dependence on covering new papers and promoting “discoveries.” Retractions are noteworthy because headline science encourages a culture of individual papers and breathless “breakthrough” announcements. Everything is “moon or bust” because we don’t have a better evaluation system (like a research confidence index) to tell us what the science says instead of what just a paper says.
Science Magazine’s new list of the top scientific breakthroughs of 2019 is refreshingly (and surprisingly, because it’s Science) almost free of single-paper stories. It concentrates instead on techniques that changed our perspective (like the photograph of the black hole) and convergences of research onto discoveries (such as the supplement that stimulates beneficial gut bacteria).
Yes, it’s less newsworthy — under the current definition of that term — to cover replication, which is part of why replication doesn’t get done as often as it should. On the other hand, it’s hard to retract a convergence. Journalists and science communicators should wait — not just longer, but more often — until we have them.