This blog is provided by the IPR Behavioral Insights Research Center
We’ve started to get glimpses into what AI means for the comms and PR professions, and not all the news is good…
As AI has burst onto the scene, no one has missed the fact that it will have huge ramifications — not only in our work lives, but for society at large. This is especially true in terms of privacy, regulation, and information access — not to mention the very definitions of originality, authenticity, and art. There is talk of a “Cambrian explosion” in terms of AI’s impact on our world, due to its exponential enablement of new technology and generative outputs. But what specifically does it mean for professional communicators?
AI’s newer generative language models, such as ChatGPT, are transforming the business landscape in ways that have huge knock-on effects for the profession. The push of a button is all it takes to create original content, mimic narrative styles, paraphrase texts, and in some cases, craft compelling rhetoric that can sway and motivate. It’s patently clear that this is going to be a game changer, relieving communicators of at least some of the profession’s more tedious tasks. Not to mention the fact that it’s really fun to use. Why, then, does it make us feel so uneasy?
In my book, published last year (a lifetime ago in AI terms), I emphasized that generative AI poses a significant threat to the communication and PR professions. In terms of day-to-day content production — such as press releases, corporate announcements, and basic journalism — ChatGPT and its variants can produce convincing, accurate copy in a fraction of the time it takes a human being. This is especially threatening to entry-level and junior communications roles. But the fact is that this type of technological encroachment is as inevitable as death and taxes. Just as robotic chefs, robotic surgeons, and AI-powered legal contracts will soon become commonplace, many communication tasks will be outsourced to machines. This will undoubtedly create crises of employment and financial security, and it will also create crises around the purpose and meaning of apprenticeships and on-the-job learning — not just for comms, but for all impacted professions.
The good news is that communicators have an ace up their sleeve. We are still a long way from surrendering organizational and leadership communication to a machine, at least for the time being. This is true for two reasons:
1.) There is a dark side to these technologies
Even in the earliest days of GPT-2 and GPT-3, researchers found that generative AI excels at generating disinformation — a phenomenon that Politico’s AI reporter Melissa Heikkilä cleverly described as “filling the swamp.” Generative AI is alarmingly effective at crafting slick-sounding messages — from QAnon conspiracy theories and climate change denial to extremist narratives and radical ideologies.
Gloomy as this research is, it yields useful insights. If AI is so effective at “filling the swamp,” then the structure of disinformation itself must be formulaic, even algorithmic. IPR has published a valuable primer on how to detect disinformation, and those clues, along with academic research on AI, may help us learn to reverse engineer it.
2.) Generative AI’s output is decontextualized
ChatGPT is a generative technology, but that doesn’t mean the content it generates is meaningful or even relevant. In Alan Turing’s imitation game, an interrogator poses a series of questions to determine whether they are interacting with a human or a machine. Success depends not only on the machine’s ability to give correct answers, but on how closely its answers resemble those an actual human would give. In other words, the machine eventually reveals itself. For any issue of real consequence, at least for now, ChatGPT is an imitation and not the real McCoy.
For these reasons, I believe that generative AI presents an opportunity for the communication profession to grow, even as it increasingly threatens many livelihoods. It may, in fact, be precisely the threat that forces change, providing an impetus to up our game and focus our time and energy on developing more perceptive, sincere, thoughtful, and yes, human communication — the kind that can only be done by people. This is easier said than done, but it will mark the next era of the profession, and it will give corporate communicators an even stronger claim to a seat at the table in the organizations they serve.
Buchanan, B., Lohn, A., Musser, M., & Sedova, K. (2021, May). Truth, lies, and automation: How language models could change disinformation. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/truth-lies-and-automation/
McGuffie, K., & Newhouse, A. (2020). The radicalization risks of GPT-3 and advanced neural language models. Center on Terrorism, Extremism, and Counterterrorism, Middlebury Institute of International Studies at Monterey. https://www.middlebury.edu/institute/sites/www.middlebury.edu.institute/files/2020-09/gpt3-article.pdf
Laura McHale is the Managing Director and Leadership Psychologist Expert at Conduit Consultants Limited. McHale is an expert in assessments, leadership, and team effectiveness. She is the author of Neuroscience for Organizational Communication. She currently serves on the IPR Behavioral Insights Research Center Board of Advisors.