The science about how to communicate research is eternally promising, tantalizing and…frustrating.
For this science to be helpful to communicators, its researchers need to:
- Use study subjects that closely resemble the ones practitioners target;
- Provide findings that are close to shovel-ready; and
- Test whether and which communications strategies and tactics can prompt behaviors that would translate into real world impact.
But too often, it provides none of the above.
Case in point: the new Climatic Change paper I wrote about yesterday.
Its takeaway: Stories containing messages about climate change and the environment are more likely to lead to behavior change than data and facts alone.
That message fits with what most communicators already take to be gospel.
When you peel the paper apart a little bit, though, the clean takeaway gets more complicated.
No question: Across all three studies included in the paper, stories got more engagement than what the researchers termed “factual narratives.”
But…in the first study, the factual narrative was as dull as dishwater. It wasn’t written by a Joe Romm. It’s debatable whether it even qualified as a narrative.
In the second study, the MTurk participants reported feeling more “narrative transportation” (defined as activation of the audience’s “imagination through an empathic connection with the characters”) from the climate change videos that an independent panel had scored higher in narrative structure.
But…these narrative-rich videos didn’t actually motivate the research subjects to donate time to further climate change research. (The researchers dismiss this finding because many MTurk workers are “professional survey takers and likely to be cynical about requests for participation outside the MTurk environment.”)
In the third study, videos with high narrative structure and downer endings lowered participants’ heart rates. These videos also prompted subjects to donate more of their earnings to the non-profit producing the video than low-narrative-structure videos did.
But…the money the subjects could donate was supplemental earnings, accrued per video they watched and paid out in increments of 100 pennies.
Who wants 100 pennies? Or 500?
And all the subjects were recruited from a pool who live in and around “a university town.” Should we not suppose those subjects might already be biased to take action on climate change? What about subjects with cultural cognition that biases them against climate action?
While the researchers were quick to explain away the MTurk participants’ reluctance to donate time, they don’t discuss this potential bias at all.
I noticed the study getting a number of retweets yesterday, with the simple message: We knew it!
But…what does the study actually prove?
And what are the real-life takeaways for practitioners of climate science communication?
As I argued yesterday, the study treats all high-narrative structure stories equally.
But stories that foreground the audience as protagonist might be more effective than others. We just don’t know.
And, given how loosely much of the science of research communication connects to actual communication practice, I wouldn’t bet on us finding out any time soon.