How do you justify that the study you’ve just published (or that a staff researcher has just published and that you’re being asked to promote) is sound?
Well, it was published in a peer-reviewed journal, wasn’t it? Isn’t that enough justification?
If it’s health research, probably not, according to a new article in this morning’s New York Times by health and health policy researcher Aaron E. Carroll.
It’s a bracing read. Carroll’s piece asks why, after decades of research, childhood obesity remains a major public health problem in the United States. “Part of the problem may be flawed research,” he writes, drawing on a new paper in the journal Pediatric Obesity that lays out 10 scientific errors commonly found in obesity research.
We read “scientific error” and we think: oh, honest mistakes, easily correctable.
Carroll has written about this before, and so have I: these mistakes are more often than not caused by structural (read: financial) forces in science (as well as human nature) that drive the overframing of results, the pursuit of positive findings at all costs, and experimental designs biased toward producing positive results. “These are certainly true with respect to obesity [research],” Carroll writes, “but they’re true for almost all health research.”
The authors of the Pediatric Obesity paper write in their abstract: “We hope that compiling these errors in one article will serve as the beginning of a checklist to support fidelity in conducting, analyzing, and reporting childhood obesity research.”
One must ask: don’t we need such checklists for all research?
Carroll’s larger point: all these positive study results have fooled us into thinking childhood obesity will readily yield to simple interventions, even as obesity rates steadily climb:
Major problems like poverty can’t be overcome with a couple of workshops in a school or a doctor’s visit. Obesity is a major societal problem that probably requires a major societal response. We can’t allow our desire to make things better to lead us to accept lower-quality research that might convince us otherwise.
Do you automatically look to promote every paper as a breakthrough of some sort?
I often debate with researchers the merits of applying their expertise to social problems through content that’s accessible to non-specialists. Such content oversimplifies, these researchers argue; we need to stick to the research itself.
The debate I think we should be having more often in research communication: does your research mean what you say it means, and how can the rest of us come to trust that assertion?