Generative AI tools like ChatGPT and Claude AI have caused a flutter of cautious optimism amongst communications professionals. Could the days of drafting tedious copy, counting clips and performing media analysis be over? The prospect of each PR pro having a robot underling is a tempting one. But research suggests it’s wise to proceed with caution.

While generative AI holds great promise, it also comes with risks around accuracy, security, plagiarism, and trust. A recent Salesforce survey suggests a third of IT leaders view generative AI with skepticism, citing security risks and bias as their main concerns.

But it’s easy to see how rewards could offset those risks. A survey by Sprout Social and The Harris Poll found that 97% of leaders expect AI to let their teams analyze social media data more efficiently than ever, while 87% expect increased investment in the technology over the next three years.

It’s a balancing act. Generative AI can make us all smarter, faster, and more focused on high-value work. But without thoughtful policies and safeguards in place, this autonomous helper could end up doing more harm than good. So how can communicators explore the upsides of AI, while avoiding the pitfalls? Here are some best practices to integrate generative AI safely and effectively:

Have a Clear AI Strategy and Policies
We created an agency-wide Generative AI policy at Highwire, which puts us in the minority. According to The Conference Board, as of August 2023, less than 50% of organizations have even begun to work on company-wide guidance, and only 26% have published a policy on Generative AI use.

For that reason, we’re often called on to help our clients shape their own guidelines. In many cases, the golden rule for communication professionals is to safeguard confidential or proprietary information from misuse by AI platforms.

We are privileged to have access to this sort of material on a daily basis, but dropping it into ChatGPT could have disastrous consequences and serious legal implications. And that’s just one area of concern; there are dozens more.

Take the time to conduct an AI readiness assessment across your team. Identify appropriate vs. prohibited uses for AI based on your strategies and needs. Develop clear guidelines and guardrails around AI usage, get leadership buy-in, and train your staff. Setting expectations from the start prevents problems down the road. We’ve released guidance that may help, as we believe we all need to learn together.

Maintain High Standards of Quality and Accuracy
Human judgment, research skills, and critical thinking remain vital in our industry, even with AI in the mix. Thoroughly fact-check anything produced by AI before disseminating it externally. Watch for fabricated or inaccurate claims (often referred to as ‘hallucinations’), internal inconsistencies, and plagiarism. AI cannot replace diligent communication professionals who adhere to high standards of quality and accuracy. AI also lacks the context, tacit knowledge, and nuanced understanding that humans bring. Be wary of anyone trading quality for speed.

Practice Transparency with Stakeholders
If you leverage AI to create content for a client – whether internal or external – it’s vital to disclose this openly rather than passing off machine-made work as human-created. Make sure you have explicit permission and that all parties understand the technology’s capabilities and limitations. Clarify any legal implications around ownership of AI-generated intellectual property. Transparency builds trust.

Start Small and Stay Supervised
Limit initial testing of AI to low-risk scenarios and non-critical draft materials. Closely oversee the AI system during this pilot phase and provide frequent feedback to refine its performance. Humans should never give up control.

As confidence grows, gradually expand the types of tasks and scenarios where use of AI makes sense. But keep a human supervisor involved rather than handing over the reins completely. We’ve prepared handy Risk Maps to help understand the range and impact of AI on common communication tasks.

With the right balance of caution and curiosity, communicators can explore how AI can augment their capabilities while avoiding potential downsides. Treat generative AI as an exciting new toolkit, but don’t relinquish the human creativity, strategic thinking and quality control that remain essential to high-impact communications.

James Holland co-leads Highwire’s Digital practice – awarded Digital Agency of the Year in the 2023 SABRE Awards from PRovoke Media. His journalistic background spans a decade in editorial. Since moving agency-side, James has created digital strategies and creative campaigns for Walmart, Vanguard, Microsoft, Cisco, IBM, Intuit, Mercedes-Benz, Vodafone, Telefonica, Lenovo and more.

Heidy Modarelli handles Growth & Marketing for IPR. She has previously written for Entrepreneur, TechCrunch, The Next Web, and VentureBeat.