I’m going to guess your answer would be “no” if you’re a research communicator or science writer/journalist. Which would make the new piece “The Myth of the Impartial Machine” from the Urban Institute’s Data@Urban team required reading, followed by recurrent night frights as you realize you have no idea how to detect ML or AI bias, or whether you’ve promoted or covered studies tainted by it. Given the spread of machine learning and predictive algorithms in research today, that probably means most of the studies you deal with.
“The Myth of the Impartial Machine” makes painfully clear that you can’t just take researchers’ word that their data and algorithms are bias-free; they must take active steps to ensure it. Some of those steps, say authors Alice Feng of Urban and Shuyan Wu of the State of Rhode Island, include diversifying the teams working on ML problems, providing bias training, and being transparent about how machine learning models work and which accuracy metrics they optimize.
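To make that last point about accuracy metrics concrete, here’s a minimal sketch of what a per-group audit could look like. This is not Feng and Wu’s method, just one illustration of why the metric you report matters: a model’s overall accuracy can look respectable while its error rates differ sharply across groups. The data and the “group”, “label”, and “pred” column names are hypothetical.

```python
# A toy audit of per-group error rates. The data and column names are
# illustrative assumptions, not anything prescribed by Feng and Wu.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],   # demographic subgroup
    "label": [1, 0, 1, 1, 0, 0],               # ground truth
    "pred":  [1, 0, 1, 0, 1, 0],               # model predictions
})

def rates(g: pd.DataFrame) -> pd.Series:
    """Accuracy and false-positive rate for one subgroup."""
    acc = (g["pred"] == g["label"]).mean()
    negatives = g[g["label"] == 0]
    fpr = (negatives["pred"] == 1).mean() if len(negatives) else float("nan")
    return pd.Series({"accuracy": acc, "false_positive_rate": fpr})

# The single overall number hides what the per-group table reveals.
print("overall accuracy:", (df["pred"] == df["label"]).mean())
print(df.groupby("group")[["label", "pred"]].apply(rates))
```

In this toy example the model is right two-thirds of the time overall, yet it never errs on group A and wrongly flags half of group B’s true negatives. That is exactly the kind of disparity a single headline accuracy figure can bury, and why transparency about which metric a model optimizes matters.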
My two cents: As with fact-checking, being clear about the fairness of your organization’s use of machine learning in its research — say, through a certification process — could well be a competitive marketing advantage. As well as just fair, period.