Getting the public to read scientific papers has long been a fever dream of many scientists interested in public engagement — and the many consultants who claim to teach scientists how to write better.
Time to break the fever.
The science writer Carl Zimmer this week published a guide on “How You Should Read Coronavirus Studies, or Any Science Paper” in The New York Times.
The headline really should have been: “Why You Shouldn’t Read Any Science Paper.”
That’s because Zimmer’s piece is a catalog of the daunting obstacles facing lay readers of scientific papers: bad writing, buried narratives, crucial information hidden in supplementary files, the unreliability of preprints, the unreliability even of papers that do make it through peer review and (probably most of all) readers’ lack of grounding in statistics.
Of course. Because, after all, scientific papers are specialist communications. As in: written for other specialists.
In contrast to his catalog of obstacles (and contrary to the piece’s headline), Zimmer offers very little advice for the rest of us on how to fruitfully read scientific papers. But here’s the nub of it:
When you read a scientific paper, try to think about it the way other scientists do. Ask some basic questions to judge its merit. Is it based on a few patients or thousands? Is it mixing up correlation and causation? Do the authors actually present the evidence required to come to their conclusions?
In other words: Be more of a scientist.
Following Zimmer’s advice, of course, one could reasonably have judged as sound The Lancet study published in May linking hydroxychloroquine with an increased risk of death, before more than 100 other scientists raised questions about the database the study’s authors used and The Lancet retracted the piece yesterday. (Please do not take that last sentence as an endorsement of hydroxychloroquine for anything other than prescribed uses, or as an endorsement of those who endorse off-label uses.)
I’m using The Lancet study as an example of how lay readers can easily go wrong because, well, Zimmer did too, citing it as an example of how other scientists went wrong on that study, at least in peer review. (I love this quote from an NBC News article on The Lancet retraction: “A first-year statistics major could tell you about major flaws in the design of the analysis.” The peer reviewers must have slept through those classes. And 99.9% of lay people never took them.)
Google “how to read a scientific paper” and reams of guides appear, all trying to be helpful. Many of them are far more detailed than Zimmer’s, but not one that I’ve come across is practical for non-scientists — in terms of base knowledge or time to invest (unless you’ve baked all the sourdough you can eat and have nothing else to do). All of them reveal the huge gap between what scientists think is reasonable to undertake (e.g., “summarize the background in five sentences or less”) and what the rest of us think.
The part of Zimmer’s piece I found most discouraging comes when he tells us that “making judicious use of social media” can help us separate the coronavirus research wheat from the chaff:
Leading epidemiologists and virologists have been posting thoughtful threads on Twitter, for example, laying out why they think new papers are good or bad. But always make sure you’re following people with deep expertise, and not bots or agents of disinformation peddling conspiracy nonsense.
In case you were having trouble with that “deep expertise” or “agents of disinformation” thing, David Leonhardt of the Times followed this up by collating a list of 24 trustworthy academic experts on COVID-19.
If you’re a journalist, “follow these people on Twitter” is the hammer for every nail — because your job as a journalist, increasingly, is to be on Twitter. If you don’t have that job, “be on Twitter more” just compounds your problem — and evades the responsibility science journalists have to highlight, help vet and contextualize the important new research for the rest of us who don’t have the time or expertise to do those things.
Imagine if Zimmer or another science writer for a major media outlet decided to summarize a COVID-19 paper a day (or the five most important of the week on Friday), covering:
- What open review on Twitter is saying about the paper;
- How the paper fits into and advances our existing knowledge on these topics;
- What this all means for us — how we should now act, what we should now see differently; and
- Who in power might need to respond to the new knowledge and what are the options for those responses.
Now that would be a useful resource and service.
The through-line here … is that the “scientific method” (and the associated publishing and distribution system) doesn’t automatically translate into IMPACT. IMPACT is the thing: making a difference in the world. Especially today, when we have “alternative experts,” expertise by itself is no longer enough. One has to expand from DATA to MEANING.
Part of my professional mission is to guide researchers to translate their research into compelling ideas, frameworks and solutions. And it’s clear to me that so much research needs to be much better written — not for my benefit, but simply to accomplish its modest goal of communicating with other specialists. Much clearer, much more compelling and, whenever possible, much more vivid.
But telling lay people to read — or how to read — the scientific literature in a crisis is like telling people trying to put out a house fire to go learn more about hydrodynamics.
For whatever reason (unfortunately, often because they don’t want to do the work of translating their or others’ research into something more useful to nonspecialists), researchers, research communicators and (apparently) some research journalists still think sending us to the literature is a good idea.
It never was. It particularly isn’t now. We need to tell them what’s important and why and what they should do. We need to be the authorities.