How researchers get heard

ChatGPT & Public Expertise: Just Get Started

ChatGPT from OpenAI debuted more than a month ago, and you’re probably already sick of hearing about it. That’s a mistake, because it can really help with something so many public experts struggle with: getting started with writing.

I blame Twitter for this. The conversation about the tool never moved past a) ChatGPT is just another AI gimmick that makes fatal errors, b) ChatGPT is the thin edge of AI world domination, or c) ChatGPT proves we are on the cusp of a glorious world in which things we never much liked doing anyway (e.g., writing) are fully automated for us.

What too few people have been talking about: How to use this tool now, for your work as an expert. Wharton’s Ethan Mollick has been one of the few — he has been writing about ChatGPT’s utility at his Substack. I especially recommend these short pieces:

For a day or two, I was a skeptic about ChatGPT. I changed my mind after I asked it to draft testimony one of my clients might give to a Congressional committee holding hearings on the idea of a new national lab for the Mississippi River Basin, and ChatGPT gave back to me a perfectly serviceable first draft — in less than 10 seconds. (I’d already asked it to write 1) a syllabus for a course on the history and practice of public expertise, and 2) an opinion piece on why public expertise is more than just science communications. ChatGPT gave me first drafts of both in under 30 seconds — with the syllabus containing readings, assignments, summaries for each of the 12 sessions and the lecture for the concluding session.)

ICYMI: ChatGPT is a very advanced language processing AI program that uses machine-learning algorithms to process vast amounts of text data written before this year (everything from books to webpages to Wikipedia) that inform not just the content but also the format of its answers. So when you ask it a question in writing, it responds (in writing) with answers that are often astonishing in their speed, fluency, and detail — and very funny.

It’s far from perfect — it still often makes fundamental mistakes in accuracy or devolves into what some have termed “hallucinations,” confident-sounding narratives whose facts are, in fact, totally made up. (It can also rather easily be jailbroken into advising you on how to do antisocial things, despite programming guardrails meant to prevent that.) So you’d be a fool to use it for research. In addition, its first answers to a query often hew to the mainstream thinking on that question.

But, as Mollick demonstrates in “The Mechanical Professor,” ChatGPT can already do parts of a college teacher’s job — construct a syllabus, write a lecture, and argue both sides of an opinion piece — at levels of competency that give experts the one thing they often have the most trouble with: getting started.

As Mollick points out, we often approach AI tools “in a way that highlights their weaknesses, not their strengths.” We dwell on ChatGPT’s howler inaccuracies, which, as experts, we can easily correct for, while ignoring the things it does better than a human being. One approach Mollick suggests is to “treat it like a magic intern with a tendency to lie but a huge desire to make you happy.” That desire means you can and should experiment with the prompts you give ChatGPT to get the best starting draft possible, and then continue to ask it for tweaks and variations until the product reaches a starting point that’s fit for you to work with.

After I saw its first draft of the opinion essay on public expertise vs. science communications, I realized it didn’t contain any examples. When I asked it to include two examples of scientists or researchers who were effective public experts, it quickly added Jennifer Doudna as an example of a public expert who shapes the ethical and social dimensions of technology (CRISPR-Cas9 gene editing) and Naomi Oreskes as an example of a public expert great at promoting public engagement with science. The draft was still fairly rudimentary — but those were two really good examples whose careers I might have taken an hour or more on my own to settle on and then summarize.

Mollick advises public experts to go even further with it in his new piece, headlined “How to…use ChatGPT to boost your writing.” Too many people, he argues, are treating the bot like a human interlocutor, instead of giving it elaborate and specific prompts and drilling down on those until we get what we want:

Don’t ask it to write an essay about how human error causes catastrophes. The AI will come up with a boring and straightforward piece that does the minimum possible to satisfy your simple demand. Instead, remember you are the expert and the AI is a tool to help you write. You should push it in the direction you want. For example, provide clear bullet points to your argument:

write an essay with the following points:
-Humans are prone to error
-Most errors are not that important
-In complex systems, some errors are catastrophic
-Catastrophes cannot be avoided

But even these results are much less interesting than a more complicated prompt:

write an essay with the following points. use an academic tone. use at least one clear example. make it concise. write for a well-informed audience. use a style like the New Yorker. make it at least 7 paragraphs. vary the language in each one. end with an ominous note.
-Humans are prone to error
-Most errors are not that important
-In complex systems, some errors are catastrophic
-Catastrophes cannot be avoided
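If you reuse this kind of structured prompt often, it can help to assemble it programmatically. Here is a minimal sketch; `build_prompt` is a hypothetical helper of my own (not anything from Mollick or OpenAI), and the style directives shown are just a subset of his example:

```python
# Sketch: assembling a Mollick-style structured prompt from parts.
# build_prompt is an illustrative helper, not a library function.

def build_prompt(points, directives):
    """Combine style directives and bullet-point arguments into one prompt string."""
    header = "write an essay with the following points. "
    # Normalize each directive to end with exactly one period.
    style = " ".join(d.rstrip(".") + "." for d in directives)
    # Render each argument as a leading-hyphen bullet, one per line.
    bullets = "\n".join("-" + p for p in points)
    return f"{header}{style}\n{bullets}"

prompt = build_prompt(
    points=[
        "Humans are prone to error",
        "Most errors are not that important",
        "In complex systems, some errors are catastrophic",
        "Catastrophes cannot be avoided",
    ],
    directives=[
        "use an academic tone",
        "use at least one clear example",
        "make it concise",
        "end with an ominous note",
    ],
)
print(prompt)
```

The resulting string is what you would paste into ChatGPT (or send through its API); keeping the directives as a list makes it easy to experiment with tweaks and variations, as Mollick recommends.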

As Mollick points out, ChatGPT’s still-occasional mistakes mean that you have to know when it’s making a mistake, which means experts have an edge in using it. And since ChatGPT can help you get over the hump of generating what Anne Lamott immortally termed your “shitty first draft,” it would be a mistake not to explore how it might help you generate public expertise much faster than the rest of the field.