Charles Yang, author of the excellent Substack ML4Sci (covering AI and machine learning applications to science and engineering), has spent the past year writing there about the “missing abstraction level” between “low-level (peer-reviewed articles) and high-level (pop-sci articles) communications channels.”
Let’s call this level “the translation level,” to begin to emphasize how useful such a function would be.
The translation level would render research findings and insights into usable and findable forms — usable and findable for all of us, not just for scientists or university researchers with paywall access and no allergies to jargon.
The translation level would also add the wider context to findings that “pop-sci” content (journalism, etc.) should add, avoiding the distorting “discovery” hype so prevalent in the single-study promotion machine of news media and big science.
For Yang, the translation level would excel in a crisis like COVID-19. It would rapidly and expertly review preprints; quickly aggregate useful papers and identify research trends; and synthesize and communicate “what the science is saying” to the media and public. (In a Medium post, Yang details the case for this function and how it might begin to grow organically in the absence of institutional support.)
Yang himself is a natural translator. You see this immediately in his Substack — for instance, his recent issue on how the latest DeepMind advance in game-playing algorithms (which can now master games without knowing the rules) could have applications across a range of scientific and engineering fields with unknown dynamics. Translation: This is big. The better an algorithm’s ability to adapt to situations with complex, shifting or unclear rules, the better its ability to adapt to the real world.
But as Yang does his translation work, the translation of science he needs to draw upon doesn’t exist — at least, outside of deks and blog posts. (He relies upon an exceptionally lucid blog post by the DeepMind team as much as he does the Nature paper in which they announced their finding to science.) Such translation is fast becoming the norm, Yang suggests, at least in machine learning: “Just publish for prestige/credentialing, and then write blog posts that people will actually read”:
I’ve found that most of the world’s technical yet accessible knowledge is hidden in blog posts, slide-decks, and youtube lectures (but rarely in scientific papers, at least not in any usefully comprehensible format that makes it easy to find what you’re looking for).
To long-time science communicators, that statement might feel depressing, even damning. But is it? The infrastructure of science was never designed to, as Yang puts it, “build infrastructure that can help improve people’s situational awareness of published literature.” The slew of preprints is just making it worse. And Twitter review, he adds, is too prone to mob rule and gaming.
So we need that translation level. Without translation, your work is cut off from the world — including experts in other fields who could benefit from your insights and findings and collaborate with you.
But right now, unless you have angels like Yang in your field or someone like me to help you tell your story, no one does it for you.
Takeaway: The first step to building the translation level is to provide your own translation.
For your (or your organization’s) last paper, new paper and next paper, write a Twitter thread, blog post or Medium post translating each one.
In clear language a non-scientist older relative could understand, explain why the problem your research addresses is important, why previous attempts have failed to address it, why your approach is different, what you learned and what possibilities your findings open up — and for whom.
If you run into trouble, read the DeepMind MuZero blog post for a model.
And send me what you come up with.