Fast science in a pandemic — that’s a good thing, right?
Often yes. But sometimes no.
And right now, we have no idea how big that “no” is.
More than 10,000 research studies have been published on COVID-19 since January — more than 3,500 of them as preprints.
This one — authored by 11 psychologists — caught my eye last week:
“Psychological Science is Not Yet a Crisis-Ready Discipline.” Right from the abstract, it pulls no punches:
Psychology research on COVID-19, we argue, is unsuitable for making policy decisions. We offer a taxonomy that lets our science advance in Evidence Readiness Levels to be suitable for policy; we caution practitioners to take extreme care translating our findings to applications.
The authors say psychology needs to emulate — believe it or not — rocket science. That’s because NASA (unlike psychology) has a systematic framework for ranking any new technology on a scale from 1 to 9, based largely on how rigorously it has been tested under real-life conditions: its Technology Readiness Level, or TRL.
At TRL 6, testing in actual applications begins. At TRL 9, the science is “flight proven.” Psychology, the authors note, wouldn’t even reach TRL 1 — where basic principles have been reliably observed, reported, and modeled in simulation.
So, psychology isn’t ready for pandemic prime time. What else isn’t?
Well, that French preprint paper on the positive effects of hydroxychloroquine in COVID-19 patients — you know, the one Donald Trump apparently heard about. And a letter to the Lancet on the dangers of ibuprofen, ACE inhibitors and ARBs for people with COVID-19, which prompted the French health ministry and WHO to warn against using these drugs. Those conclusions had to be walked back — but I (having not heard the news) was still telling my wife not to use ibuprofen over the weekend.
Then there’s the now-infamous preprint study Stanford researchers did showing COVID-19 infection rates in California’s Santa Clara County were far higher than anyone thought — which of course would mean the virus was far less deadly than thought. Everything about this study — its methodology, the tests it used, its conclusion, and its funding (in part from the CEO of JetBlue, not exactly a disinterested party in ending the shutdown quickly) — was roasted online by statisticians and epidemiologists. Yet, as two journalism professors showed in a New York Times op-ed, the conclusions took off on conservative Twitter and then in conservative media, finding a huge audience within 24 hours.
Subhead of that Times op-ed: “How coronavirus research is being weaponized.”
Homolak et al. looked at 3,631 articles on COVID-19 from PubMed, both peer reviewed and not, and found that submission-to-publication (SP) times for most journals had decreased by a factor of 10. The median SP turnaround time was five days; 11% of all articles had an SP time of less than 24 hours.
The authors are worried:
Even though we believe standard SP times are overstretched and extremely counterproductive for science in general, a massive reduction seen in the case of COVID-19 articles is more likely to be in correlation with poor information quality than with high peer-review process efficiency… Despite the tremendous effort, editing and peer-review, usually considered as foundations for verification of scientific soundness, in this context ended up as merely a shell of their original purpose. As a consequence, the quality of scientific content published during the peak of the COVID-19 crisis was of significantly lower quality and should be carefully reexamined in retrospect once the pandemic subsides.
So get ready for a COVID-19 replication crisis, if we make it that far.
To be clear: this is not just a preprint problem. Peer review, as Brian Nosek of the Center for Open Science told the Times, doesn’t guarantee good science, either.
And we just need to stop saying that the public and/or the press need to get more discerning about the science they consume/cover. We are living in, as WHO put it in one of its more constructive moments this year, “an infodemic.” Science needs to police its own.
I like this solution, put forth by Daniël Lakens in Nature: “Pandemic researchers — recruit your own best critics.” Subhead: “To guard against rushed and sloppy science, build pressure testing into your research.”
Lakens understands the problem: Calling out bad studies on Twitter isn’t enough. We need to build in criticism “before the data were collected and the results were shared… With preprints, the boldest peers will still criticize the work, but only after mistakes are made and, often, widely disseminated.”
Lakens advocates a “red team” approach “that integrates criticism into each step of the research process”:
A red team is a designated ‘devil’s advocate’ charged to find holes and errors in ongoing work and to challenge dominant assumptions, with the goal of improving project quality. The team has a role similar to that of ‘white-hat hackers’ hired in the software industry to identify security flaws before they can be discovered and exploited by malefactors. Similarly, teams of scientists should engage with red teams at each phase of a research project and incorporate their criticism. The logic is similar to the Registered Report publication system — in which protocols are reviewed before the results are known — except that criticism is not organized by journals. Ideally, there is a larger amount of speedier communication between researchers and their red team than peer review allows, resulting in higher-quality preprints and submissions for publication.
But, you say, I already send my papers around for comment before submission. Yes, but do you offer financial rewards for finding errors in your “experimental design, materials, code, analyses, logic and writing,” as Lakens is now offering potential red team members for his forthcoming preprint? Or pledge to donate $100 to top charities for each new “critical problem” the team finds?
In other words, have you truly incentivized a peer group to swarm over every aspect of your research and pressure test it?
I didn’t think so.
COVID-19 hasn’t created the problem of sloppy research. Rather, it’s revealed a problem science has had all along: the pretense that peer review was somehow protecting us from poor science and/or overframed claims about the meaning and impact of individual papers.
A “red team” approach isn’t just made for this moment. It could be an important part of science’s future, conferring competitive advantage on pressure-tested research. If your paper has made it through a proper red team, it should say so — and perhaps even be certified. Then we have one more reason to trust its findings.
In the interim, let’s root for the fruits of fast science — and as Lakens suggests: “Maybe it’s time for a ‘worst COVID-19 study competition.’”