How researchers get heard

The Problems with Training

There are two kinds of research organization leaders: those who want communications training for their organization’s researchers, and those who don’t believe training works.

Here’s the weird thing: More often than you’d think, they’re the same person.

The typical communications workshop for researchers can yield non-communication benefits (increasing cohort cohesion, making researchers feel more valued, or improving a specific talk or presentation). But (in my experience) training rarely makes researchers durably better communicators, even if it makes them feel like they’re better at it. Training cannot be the answer to the structural problems in research communication, first among them the lack of ongoing professional communications support. In my more cynical moods, I would say that training is the distraction that keeps us from confronting the attitudes and practices behind those structural problems.

A recent study in Science Communication, which its authors wrote about earlier this month in Nieman Lab, proves my point, both intentionally and unintentionally. The authors (two journalists and a scientist, all veteran sci comms instructors) taught a 15-week, three-credit sci comms class at the University of Connecticut for three years running to cohorts of STEM graduate students. The course included four weeks of grounding in basic concepts and approaches to improving science communication skills, followed by 11 weeks of practice and analysis centered on videos of mock interviews the students did with journalism students. They were at least introduced to all the usual things: jargon identification and removal; identifying and identifying with audiences; messaging; metaphor and analogy; storytelling vs. explanation; and media training.

Let’s pause to note: That’s far more training than most researchers get in their careers.

At the beginning and again at the end of the course, the STEM grad students were recorded giving a short (under three minutes) explanation of the scientific method, and each recording was scored for various comms skills and qualities by undergraduates who weren’t in the course. A control group of graduate students, mostly in the same STEM fields, was also recorded twice at the same interval giving their explanations of the scientific method, and those recordings were also scored by the undergraduates. This was all repeated for each cohort in the three years the course ran.

Unsurprisingly, the undergraduates found no improvement between the control groups’ first and end-of-course explanations of the scientific method.

Perhaps shockingly, there was also no improvement for the STEM graduate students who had taken the course. Nor was there any difference between the scores given to the STEM graduate students and those given to the control groups. However, the STEM graduate students rated themselves as significantly improved in science communication skills and efficacy because of the course.

The results, the authors write with admirable understatement in Nieman Lab, “surprised us, and convinced us that it’s time to rethink how we assess whether science communication training works.”

There are a number of things to question about this study, starting with using an explanation of the scientific method as a baseline. The authors wanted a neutral topic to standardize the study’s results — one that wouldn’t trigger a polarized response in any of the undergraduates, and one that the STEM grad students should all be able to offer. But (and I’ve written about this before) motivation is so much of the game in improving research communications. If you give the grad students an abstract topic that they don’t feel an immediate stake in — and one that’s about as potentially engaging to the undergraduates as infrastructure used to be to the rest of us — you’ve automatically flattened the potential impact and the range of responses.

Second problem: Like still too much of research communications today, the study evaluated communications as a performance — the delivery of “clarity, engagement and credibility” to an unseen audience. This paradigm (all parts of it) has painted research comms into a tight corner, and it needs to be closely examined if not bagged and binned.

Maybe the biggest problem with the study — but maybe also its biggest point — is that it’s built around training, not around culture, and that training is too often used as a substitute for culture in research comms.

As a 20-year practitioner and now a consultant, I know that the only way researchers steadily improve as communicators throughout their careers is by being in or finding a culture that values communications and creates an ecosystem to nurture it.

Training researchers to be “better communicators” on the assumption that they will walk out of training into situations with no professional communicators to support them (or with a one- or two-person communications team they must share with an entire division) is both insane and utterly common.

(Let me be clear: I am very much in favor of short, sharp training for specific purposes, such as media and messaging training before launching a new research product. The motivation is, shall we say, usually at its highest during those windows. But I would never be hopeful that the lessons learned will stick unless there’s a culture in which they can be repeatedly used.)

I don’t expect research communications to change quickly, or much at all, in reaction to this study. But it’s good to have a study that explicitly points up some of the inner contradictions of training — how it usually just makes us feel better about ourselves while making the real problems around research communication slightly worse.