How Research-Driven Organizations Become Thought Leaders

What is Research ‘Authority’?

It isn’t just citations, or where you’ve been published, or being fresh and accurate and right. Those might feed into it. But first: it’s who you can talk with and why they listen to you.

Being heard today means being heard consistently — it means having authority with an audience, being granted that authority. It means being seen and accepted into a community — from Energy Twitter to the dozen ranchers you’re talking with on a rainy March night about soil carbon sequestration. As recent times keep teaching us, data and evidence alone won’t separate you from all the other “thought leaders.”

“Authority = expertise + trust,” in the elegant definition of the consultant Philip Morgan. Trust, that is, of a wider audience (exact width negotiable) the expert wants to reach.

But how do you build trust? It seems like grace — a state that’s deeply felt and highly beneficial, but not easy to attain (and easy to lose).

Here was my answer until this week: “An authority is an expert who has become a primary and ongoing source of insights that an audience trusts as relevant and valuable.” I thought that captured the process, the growth into authority nicely — the steady feed of insight from the expert, the valuing and acceptance of those insights opening the door to a certain status with the community.

Until, that is, I read science and society researcher Gail Cardew’s new piece in Nature headlined “People will not trust unkind science.”

The subhead for Cardew’s article makes her hypothesis plain: “A mean and aggressive research working culture threatens the public’s respect for scientists and their expertise.” (Translation: Research is full of assholes, and people don’t trust assholes. I’d argue the recent history of American political culture has taught us otherwise. More on that in a minute.)

Cardew first cites a grim new report from Wellcome on the hazards of research culture. Of the more than 4,000 researchers Wellcome interviewed, surveyed and workshopped with (mostly in the UK, mostly academics), it found that only 33% viewed research culture as “positive,” while 55% found it “negative.” Go deeper and things get David Fincher-dark:

  • 75% said creativity in research is being stifled by an emphasis on generating research impact;
  • 78% said “high levels of competition have created unkind and aggressive research conditions”;
  • 61% had witnessed bullying or harassment in the workplace, while 43% had experienced bullying or harassment;
  • 47% had had another researcher take credit for their work;
  • 31% had received professional help for depression or anxiety;
  • 61% had felt pressured by a supervisor to produce a particular result;
  • Only 14% agreed that “current metrics have had a positive impact on research culture.”

Not many in research (outside of management) would raise a single eyebrow hair of surprise at these findings, or the sentiments behind them, or that research culture might retard research creativity in ways both obvious and perniciously obscure. As Wellcome’s director Jeremy Farrar wrote, “If we aren’t working in a culture that’s creative, inclusive and honest, then we’re not getting the best research.”

But Cardew (who was until recently director of the science and education program at the Royal Institution of Great Britain in London) goes even further: All this bad behavior at the lab bench, she argues, is bleeding out into public engagement that turns the public off.

As evidence, she first talks about scientists whom she’s watched condescend to or hector the public during Q&As. I’m not trivializing such bad behavior; but I’ve also seen plenty of murderously uncollegial researchers invest their talks and presentations with enough Public Empathy to turn audiences into puddles of warm pink goo — only to come off stage and immediately reinsert their fangs.

But then Cardew quotes a 2009 study that looked at people who owned houses on two brownfield sites in the UK and whom they trusted for accurate information on the possible risks of the industrial pollutants potentially contaminating their properties. Tom Stafford, one of the study’s authors, wrote about it in a 2016 blog post headlined “Why don’t we trust the experts?”

This is where Cardew’s argument gets interesting. On a scale of 1–5 (5 being the highest), Stafford said the householders had a mean trust level of 4.06 in the scientists’ communication of the pollution risks, with almost 80% responding “5.” (Property developers, by contrast, scored a mean of 1.47.)

But then Stafford drops a bomb from the paper — public trust does not flow primarily from expertise:

“The results…show that expertise is not the strongest determinate of who is trusted. Instead, people trust those who they believe have their best interests at heart. This is three to four times more important than perception of expertise.”

For instance, another group, termed “friends and family” by the authors, scored a mean of 3.62 on trust in communicating the risks, even though they scored much lower on the scale of expertise about the pollution.

To drive this point home, look at these two graphics — first, the difference between how the “high trust” group perceived the expertise of scientists vs the “low trust” group:

and below, the difference between the high trust and low trust groups in whether the group of scientists “has my interests at heart”:

“So here we have a partial answer to why experts aren’t trusted,” Stafford writes in his blog post:

They aren’t trusted by people who feel alienated from them. My reading of this study would be that it isn’t that we live in a ‘post-fact’ political climate. Rather it is that attempts to take facts out of their social context won’t work. For me and my friends it seems incomprehensible to ignore the facts, whether about the science of vaccination, or the law and economics of leaving the EU. But me and my friends do very well from the status quo- the Treasury, the Bar, the University work well for us. We know who these people are, we know how they work, and we trust them because we feel they are working for us, in some wider sense. People who voted Leave do suffer from a lack of trust, and my best guess is that this is a reflection of a belief that most authorities aren’t on their side, not because they necessarily reject their status as experts.

These YouGov survey results of Brexit referendum Remain and Leave supporters confirm Stafford’s guess — Leave supporters had deeply negative net trust levels of many groups conventionally cited as pillars of society:

So: expertise doesn’t automatically lead to trust, and (of course) neither do relevant and valuable insights if your audience gets the sense you don’t have their interests at heart.

For some demographics and circles such as Stafford’s, delivering your expertise is enough to code as “I care about you and what you care about.” But other demographics and circles need more than your insights. They need a signal of whether you’re on their side. They’ll warm to what you have to tell them only if they have the sense that you are warm to them.

I gently chided Philip Morgan in a private email for sending his email subscribers a survey asking them “Who is an authority for you?” without defining “authority.”

But there is a certain wisdom in allowing the term to have a bit of flex. For the researcher who wants to make an impact with non-specialists, authority is, indeed, “expertise + trust.” “Trust,” though, doesn’t flow from expertise — not for everyone. If we want to become authorities, our starting point needs to be — even before having strong arguments and white space and a hunger to contribute insight — a good understanding of the markers of trustworthiness our intended audiences will recognize and embrace.