How researchers get heard

Avoiding Science’s Schlock Pile

What if researchers had to rate — publicly — the evidentiary robustness of the major points they made in books, opinion pieces…all their communications for non-specialists?

What if research communicators had to do the same?

Oh, what a wonderful world. I think.

Christie Aschwanden reports for Undark on a new book by Stanford psychologist Jamil Zaki, “The War for Kindness: Building Empathy in a Fractured World,” which Aschwanden writes “is written in the breezy, accessible style typical of pop science bestsellers.”

With a crucial difference: Zaki tries to separate “The War for Kindness” from the schlock pile by numerically rating the scientific weight behind the book’s major claims.

Zaki didn’t rate them himself — that would be biased. He hired a Stanford psychology doctoral student, Kari Leibowitz, to subject each claim to an independent “miniature literature review.” Aschwanden reports that Leibowitz then “rated each claim on a scale of one to five (from weakest to strongest evidence) and wrote up a rationale for that rating before sending it to Zaki for discussion.” Zaki rewrote parts of the book based on Leibowitz’s ratings — removing claims that weren’t strong and shoring up others.

Leibowitz’s ratings were subjective, of course, but I find them compelling. Claims earning a rating of five, for instance, had “dozens of studies…often evidenced by many review papers and/or meta-analyses.” Fours had “very consistent results, but none or very few meta-analyses to back (them) up.” Threes had only a “handful of studies to support the claim or there was disagreement about it in the literature.” Crown, the book’s publisher, has made Leibowitz’s list of source material, along with the claims and their ratings, available on the book’s website.

Unfortunately, the experts Aschwanden quotes reacting to the ratings are generally skeptical that a) they actually reflect whether a claim has scientific merit and b) the approach can scale to other books while retaining accuracy. They’re basically saying: we don’t know whether the underlying science is sound enough to support the claims, and the tool is too easy for unscrupulous authors to game. Aschwanden also wonders how many readers will be swayed by an index stuck in a book’s appendix.

These are fair points. But they come off as fatal friendly fire, which is odd coming from people otherwise alarmed by empty TED talks, the replication crisis and pop science claims too good to be true. Yes, we should be criticizing ideas like Zaki’s and Leibowitz’s — but in order to make them scalable and incorruptible. Science needs a scalable, incorruptible, widely accepted research confidence index; it’s a perfect project for a foundation to underwrite.

Takeaway: In the meantime, look at Leibowitz’s rubric and ask: how could I apply a research confidence index to differentiate my own work and my organization’s thought leadership, and to boost non-specialists’ confidence in them?

Because science’s schlock pile keeps growing.