How researchers get heard

Bad Research Communication: A Partial Taxonomy

There’s a strong happy-talk, booster culture in research communications, especially on Twitter — it’s all good! get out there and communicate!

That culture stands as a corrective to the still-common attitude in science that research communication is at best an afterthought and certainly nothing reputable scholars need invest in.

But that culture also produces a lot of misguided and downright bad research comms:

  • Overframed research communication — communication that makes a piece of research sound bigger or more important than it really is.
  • Mindless research communication — communication of research that, lacking in wider relevance, didn’t really need to be communicated to anyone outside the research community.
  • And pointless research communication — communication by researchers that misses a larger opportunity to be relevant to decision making (as well as a prime chance to build their own authority), sometimes because it's restricted to what their research says, and sometimes because the researcher isn't being guided on how to think critically about their communication. In other words: it fails to land a punch. To make a point.

The above short list by no means exhausts the taxonomy of misguided or downright bad research comms. Nor are “overframed,” “mindless” and “pointless” mutually exclusive categories. In fact, here’s a great example of what I think might be simultaneously overframed, mindless and pointless research comms: Jon-Patrick Allem’s piece for The Conversation on his research discovering that Twitter bots are tweeting out lots of misinformation about the health benefits of cannabis.

The research by Allem that underlies his piece gives us two pieces of insight:

  • Social bots (“automated Twitter accounts designed to produce content and engage with legitimate human accounts on Twitter”) are posting to Twitter about cannabis-related terms and products with increasing frequency (although they added up to fewer than one in every seven tweets considered in his study).
  • Social bots are posting nearly twice as much as nonbot Twitter accounts about the health and medical benefits of cannabis, and many of those posts’ claims are unsubstantiated.

Since social bots have previously carried other messaging — namely, promoting electronic cigarettes and spreading anti-vaccine claims — Allem argues that these bots are a potential public health threat, both because the health benefit claims for cannabis are wildly oversold and because cannabis can have detrimental impacts on your health.

Allem’s Conversation opinion piece, however, doesn’t address questions such as:

  • Whether those previous misinformation campaigns actually worked, and what would constitute “working”;
  • Whether bot misinformation is more effective than what nonbot accounts (i.e., real people) are saying to each other about the same subjects;
  • Whether people (including teenagers) can readily identify bot messaging on Twitter;
  • Whether they trust bot messaging, and at what levels and about which issues;
  • Whether people change their minds about issues based on information they encounter on Twitter, and what might be a tipping point of exposure to misinformation about cannabis for a Twitter user;
  • Whether bot misinformation reinforces or has any impact on the cannabis use of Twitter users who are already using cannabis.

He instead makes the simple argument: Bots are spewing misinformation on Twitter; people use the Internet and Twitter to research the health benefits and risks of cannabis; therefore we need to police this misinformation and develop elaborate health education campaigns to correct it.

We might need a public health campaign. But is it because social bots have entered the fray? Or because people hear what they want to hear to motivate and justify their use of cannabis? What kind of public health campaign might work against that kind of mindset?

Allem doesn’t address these questions in The Conversation piece. His insights (and research) thus seem wildly overframed, rather pointless and suspiciously mindless. The gestural solution (we need a public health campaign) is tacked on and lacks credibility.

Too much science communication — most of it overframed, some of it also pointless — feeds fear. Fear is great for getting headlines. It sucks for lighting a path forward.

Takeaway: The Conversation has editors — but they’re trying to generate headlines as well. If your organization or institution doesn’t have an editor who can tell your researchers which communications will build their authority and which are just overframed, mindless or pointless…you need to get one.