How Research-Driven Organizations Become Thought Leaders

Take Your Gloves Off

Every Tuesday, I do a TTLT — Tuesday Thought Leadership Teardown — for my mailing list. Here’s my latest.

Evidence or expertise alone doesn’t make your ideas compelling.

You make them compelling.

“You” meaning: your voice. Your style, coupled with a strong argument and point of view.

It’s one of the awful paradoxes research communications faces. Just as research teams get bigger, the world increasingly demands authority embodied in personal brands.

AOC. Not IPCC.

Researchers and research-driven organizations can bemoan this sad turn in civic discourse and the erosion of science’s once unquestioned authority.

Or they can start writing, communicating and marketing better. A lot better.

Add voice to evidence — the evidence of research, and a researcher’s hard-won expertise — and you’re hittin’ it hard from the yard.

Take away voice, and you have…well, most of The Conversation, or The New York Times’ new feature “Studies Show.”

If you’ve been on this list for more than two weeks, you’ll know that I’ve been waiting a long time for something like “Studies Show” — an elite media feature invented to tell people what the body of research says on personal health and nutrition questions. To move beyond the confusing new-study-by-new-study drip method of science communication and give us the bigger picture of what the literature actually says.

In fact, I read writer Kim Tingley’s intro to the first “Studies Show” (published last week) with the kind of thrilled disbelief I usually reserve for one of my native Milwaukee sports teams having a once-in-a-three-decades turn of success (go Bucks!):

Sorting through the latest research on how to optimize your well-being is a constant and confounding feature of modern life. A scientific study becomes a press release becomes a news alert, shedding context at each stage. Often, it’s a steady stream of resulting headlines that seem to contradict one another, which makes it easy to justify ignoring them. “There’s so much information on chocolate, coffee, alcohol,” says Nicholas Steneck, a former consultant to the Office of Research Integrity for the U.S. Department of Health and Human Services. “You basically believe what you want to believe unless people are dropping dead all over the place.”

Scientific studies are written primarily for other scientists. But to make informed decisions, members of the general public have to engage with them, too. Does our current method of doing so — study by study, conclusion by conclusion — make us more informed as readers or simply more mistrustful? As Steneck asks: “If we turn our back on all research results, how do we make decisions? How do you know what research to trust?” It’s a question this new monthly column aims to explore: What can, and can’t, studies tell us when it comes to our health?

Be still, hopeful heart.

So why am I both really disappointed and still interested in “Studies Show”?

Because it turns out to be really boring — boring in a structural way that we in research comms need to pay attention to.

Specifically: The first installment of “Studies Show” is an explainer run amok.

It details the development of a body of knowledge — what we know and don’t know about the risks of alcohol use — and all the caveats therein, with a focus on a recent Lancet study that concluded any level of drinking alcohol is unsafe.

It’s all explanation, no authority.

To make it to the end of this piece, you have to have just downed a Venti Iced Mocha or be really, really interested in methodology. (And most people, I assure you, are not.)

Researcher Aaron Carroll did this piece a few months ago (interestingly, for The New York Times), and it was so much better. I’ve written about Carroll before. I’ve also written about this specific Carroll piece.

But what is it that makes Carroll’s piece so much more compelling?

He knows exactly what he wants to do (use his expertise to help people) and exactly the way he can best do that (by writing plainly, with gloves off).

So his piece isn’t just explanatory. It has a strong point of view (about the limitations of meta-analyses; about how real people living their lives should interpret the conclusions of alcohol studies; about how scientists need different research tacks to give people better answers about how harmful low levels of alcohol consumption actually are).

Which makes it authoritative.

Which makes it thought leadership, not just science journalism.

Here’s research thought leadership — here’s voice:

Food is not medicine. Neither is alcohol. Alcoholism is terrible. There’s a balance, and we could spend lifetimes arguing over where the line is for many people. The truth is we just don’t know. If these studies are intended to drive population-level policy, we should use them as such, to argue that we might want to push people to be wary of overconsumption.

Too many people interpret them individually, however, with panic-inducing results.

There’s so much good journalism colonizing research thought leadership — publishing verticals like Vox’s Future Perfect, Axios, Atlantic Media’s City Lab, much of FiveThirtyEight — that I haven’t given up yet on “Studies Show.”

But its debut reveals the secret to the genre. Not more explanation. More authority. More voice.

Takeaway: Explaining isn’t enough. Develop a voice. You do that by publishing, speaking and otherwise putting out content; by paying attention to others who already do it well; and by getting expert guidance from professionals who understand narrative and audience.