The 2021 season of Science+Story: The Podcast launches next week, starting with Hugh Possingham — the chief scientist of Australia’s Queensland state and one of the world’s foremost conservation scientists. Hugh’s brilliant and slyly hilarious. He’s also made communicating science and maximizing research impact for policy some of the highest priorities of his long and distinguished career.
In going over the interview yesterday, I came across one point I’d forgotten Hugh made but that pulled me up short when I heard it again. In essence: The real problem with science and impact isn’t a lack of science. It’s that the relevant science has already been done, years ago, and no one outside science knows about it.
Hugh recounted being asked again and again by politicians and other decision makers in Australia: Why haven’t you done a study on X or Y? And his response, invariably, would be: We did. That was published 17 years ago. We did that one 13 years ago. And so on.
Why is that? I asked. Why don’t academic scientists bother communicating more widely what they find and what they know? He said:
Scientists in some senses — although the general public think we sit in this ivory tower, protected from the realities of the world — they are evaluated quantitatively in a ruthless fashion. So every single person at the University of Queensland — or any university — their boss knows their papers, their citations, their PhD completions, their grant incomes. And they’re all numbers, and they can all be ranked. Who else is subjected to that quantification of performance other than athletes? And these are the things they will then pursue. Those four things. The things we’re talking about — which is ‘did you change how the world works?’ — we haven’t worked out how to turn that into a number.
What about altmetrics? In a literature review of what altmetrics indicate, Thelwall (2020) found strong support that they provide early evidence a piece of research will have impact in the research world — but limited support that altmetrics can assess that research’s wider, societal impacts. You might have also seen this study on how faculty demonstrate impact, which is not encouraging about the acceptance of altmetrics among faculty, especially established faculty. So, no.
What about telling stories of impact? Hugh quickly sketched the problems with that approach, at least for academics in Australia:
We have stories. It works a bit. Many of the most highly regarded scientists in universities … are well-known for having impact. But then that also means that they have to have the people around them to sell and spruik those stories and make sure people know those stories exist. Australia is a country of Tall Poppy Syndrome — people get cut down if they run around telling everyone what they’ve achieved. So there’s not a lot of incentive to do that either. It would be nice if we could get some idea of how to quantify and disseminate impact so it was part of the incentive structure and system … that would drive scientists to disseminate their information much more quickly.
Takeaway: For-profits don’t have this problem (at least in theory). As Peter Drucker said, businesses exist to create customers — so the business has only two basic functions: marketing and innovation. And marketing is “the distinguishing, unique function of the business.”
If your research-driven organization exists to create change in the world — however indirectly — marketing and communicating your research is also a distinguishing, unique function of your business model.
Quantifying the results of marketing and communications — in terms of increased collaboration, policy relevance, public awareness of issues that bear on public health and well-being, etc. — is very difficult, because attribution is hard. It’s a mix of story-gathering, digital analytics and assiduous CRM tallying.
But if you don’t do it, you’ll need a staff full of Hughs — researchers who also feel an intrinsic pull to communications, and who will do comms on top of work that’s already ruthlessly quantified.