When Should AI Use Be Disclosed?

This blog is provided by the IPR Digital Media Research Center

When should artificial intelligence (AI) use be disclosed? Recently, the journalism world was rocked when Sports Illustrated, once heralded as one of the best examples of journalistic excellence, was revealed to have published articles and product reviews written by AI while passing them off as human-created content. The public and industry backlash against the magazine's dishonest and deceptive practices demonstrates the importance of ethics in AI use and how poor decisions in content production can lead to long-term consequences.

The ethical issues surrounding artificial intelligence are not entirely new. They are rooted in longstanding questions about transparency, trust, and fear of technology itself. There is a palpable concern in the public relations industry that AI will eliminate jobs or, at best, radically transform them. There is also a sense of professional uncertainty around AI use itself. When is it appropriate to generate content using AI? What are the consequences of using AI for activities such as brainstorming, editing, or fact-checking? How do practitioners ensure that information generated on AI platforms is accurate? What should clients and readers know about how content is made? Is there a threshold for disclosure? Is disclosure even necessary as AI use becomes an industry norm?

In public relations practice, transparent communication has been an ethical mantra for decades. The transparency literature certainly advocates for all sorts of disclosures. However, practical issues make transparency sometimes easier to talk about than to implement. For instance, does a PR practitioner have to disclose that AI was used for brainstorming? What if an AI platform was used for editing content? What if AI was tangentially used to gather background information? Determining when that threshold of appropriateness is met creates an area of debate for the PR profession. As a field, we have yet to arrive at a definitive answer.

The Argument for Disclosure
Disclosure is frequently identified as a best practice because people ought to know who is writing and creating content. It reveals bias, demonstrates honesty, and upholds the tenets of transparency. In a field that values authentic communication, purely AI-created content loses the humanity that good communicators provide. Like the byline in an article or the contact information in a press release, disclosure of AI use provides a level of accountability for the PR practitioner or firm. As disinformation becomes a continuing issue in communication, disclosure of AI use also positions public relations practitioners as good-faith actors in combatting fake news. It shows that the field values its audiences, clients, and society, and honors their trust in public relations as a profession.

Disclosure of AI use also provides a starting point for client conversations. Some clients may not want their work completed using AI for a variety of legitimate reasons. Generative AI systems operate on a continuum of openness, and clients may not want their proprietary information entered into a platform that could expose private data. Generative AI use in healthcare communication presents unique issues, particularly in a field governed by detailed privacy regulations such as HIPAA. Clients may also be reluctant to accept AI-generated work given the intellectual property issues it can pose, including innocent copyright infringement and the lack of copyright protection afforded purely AI-generated work. Disclosure helps the audience, clients, and practitioners know where they stand regarding content creation. It facilitates these discussions while maintaining the ethical transparency expected of the field.

The Argument *Against* Disclosure
This heading appears with asterisks because almost no one argues that AI use should never be disclosed. The nefarious use of AI to create deceptive and misleading content is never acceptable, and no professional public relations practitioner would publicly argue otherwise. However, there are questions about when revealing AI use is necessary, especially in situations where the use is tangential or minimal relative to the content created.

Consider the following examples:
— A PR practitioner writes a news announcement for a client and then runs that announcement through ChatGPT to determine whether there are logical inconsistencies in the piece.
— A PR firm working on a new campaign uses generative AI to brainstorm ideas for logo creation and infographics but does not use the actual produced work product.
— A designer uses AI to edit a photograph, enhancing the colors and slightly adjusting the image for clarity.

All these examples present scenarios about the degree to which AI use warrants disclosure. There is a difference between using AI to build complex content with proprietary information and using AI to brainstorm potential campaign ideas. After all, there are many tools in the PR practitioner's daily work, such as spell check, photo-editing software, and the internet, whose use isn't disclosed because it is assumed. Perhaps these scenarios raise more questions than answers, but they are something PR practitioners and the industry writ large will have to grapple with.

Will the Law Require Disclosure? 
Much of the contemporary conversation around AI has been rooted in fear of its power. AI is a tool that can facilitate deception, discrimination, defamation, and infringement at a faster rate than ever before. Deception is particularly troublesome because AI's ease of use lowers the barrier to entry: anyone with a smartphone or computer and a free subscription can produce a lot of bad content. Because of this, the government has entered the discussion on AI disclosure. For example, AI is regularly used in hiring decisions. Currently, New York City, Maryland, and Illinois have laws that mandate the disclosure of AI use in employment screening processes. NYC's law requires annual bias audits of the AI system.[i] Illinois' law regulates the use of AI in job interviews where the AI platform evaluates facial expressions and answers to score the job candidate.[ii] Maryland's law, passed in 2020, mandates that employers seek permission to cross-check applicants' faces against facial recognition databases.[iii] At the federal level, the Algorithmic Accountability Act of 2023 is an attempt by Congress to address some of the same issues addressed by states and cities. While not law, it signals how lawmakers view the excesses of AI use in hiring decisions and reflects concern about the privacy violations and discrimination applicants may face in a truly AI-driven process.

Lawmakers are also concerned about AI platforms disclosing when content is AI-generated. There is currently a bipartisan bill in the U.S. Senate called the AI Labeling Act,[iv] which would require disclosure of AI chatbots and AI-generated content; a House version of the bill exists as well. If passed, it would require AI platforms to label their outputs with disclosures, embedded in the content's metadata, that they are AI-generated. Industry is also addressing when to disclose certain content. For example, Google, and later Meta, will require disclosures for AI-generated political ads in 2024. The requirement is global, and Meta's policy specifically mandates disclosure when AI is used to create synthetic people or events. Similarly, YouTube is requiring disclosures and content labels for realistic AI-generated videos. Its blog post announcing the new policy specifically noted that AI-generated content disclosure is particularly important for “sensitive topics,” such as elections, public health, and conflict.

The 2024 U.S. presidential election also raises concerns about AI-generated disinformation, but this is a global problem. In October 2023, the impact of AI-created disinformation was on full display in Slovakia, where Michal Šimečka, a political candidate, was shown on Facebook making comments about rigging the election. The image and voice were AI-generated deepfakes published within 48 hours of the polls opening, during the window when Slovakian law mandates a quiet period for candidates and the press. Because of that timing, combatting the disinformation was extremely difficult, demonstrating the type of crisis disinformation can produce within the democratic process. In the U.S., the Federal Election Commission (FEC) recognizes the problem of AI, specifically deepfakes, within political campaigns. However, concrete federal legislation on combatting deepfakes is still under development, and only three states, Washington, Minnesota, and Michigan, have laws that directly address the use of AI in elections. These laws vary in their approach to AI use in campaign content. Washington requires disclosure of AI use, while Minnesota bans deepfakes within 90 days of an election.[v] Michigan bans “materially deceptive media” within 90 days of an election and mandates disclosure of media “manipulated by technical means.”[vi] The effectiveness of these laws and of potential federal regulation remains to be seen. AI and deepfake technology's insidious impact will likely materialize globally in 2024 because the actors creating this disinformation are unlikely to operate within the boundaries of any law.

So… Should PR Practitioners Disclose AI Use?
The answer likely depends on the content, but overall it is best to disclose when in doubt. Disclosure is an issue of integrity for the PR profession and its work. As the Sports Illustrated example shows, failing to disclose opens organizations and practitioners up to severe criticism. PR practitioners should welcome the opportunity this technological issue provides. Given the protracted partisanship in Washington, D.C., and the slowness of federal agency regulation, it is likely the public relations industry, and not the government, that will shape the practice of AI disclosure. PR practitioners are well positioned to implement these types of disclosures given their expertise in digital communication and the field's ethical commitment to transparency and trust.

Disclosure presents an opportunity for public relations practice to lead positively in the era of disinformation through its commitment to transparency. It is important to note that laws provide a baseline for behavior, not a ceiling. Proper disclosure of AI use sets a heightened standard for the industry and profession. It seems that even ChatGPT agrees. When prompted with “should PR practitioners disclose their use of AI-generated content,” it replied, “Overall, disclosure of AI use in PR is not only a best practice for maintaining ethical standards and trust but also important for navigating the evolving landscape of digital communication and AI technology.”[vii]

[i] New York City, “Local Law 144 of 2021: Automated Employment Decision Tools (AEDT),” 2021, https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page.
[ii] Illinois General Assembly, “Artificial Intelligence Video Interview Act,” 820 ILCS 42, accessed December 5, 2023, https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015&ChapterID=68.
[iii] Maryland General Assembly, “Labor and Employment – Use of Facial Recognition Services – Prohibition,” Chapter 446, 2020, enacted under Article II, Section 17(c) of the Maryland Constitution.
[iv] U.S. Congress, AI Labeling Act, S. 2691, 118th Congress (2023), accessed December 20, 2023, https://www.congress.gov/118/bills/s2691/BILLS-118s2691is.xml.
[v] Minn. Stat. § 604.32, Cause of Action for Nonconsensual Dissemination of a Deep Fake Depicting Intimate Parts or Sexual Acts; Washington SB 5152, 2023-24 Session, Defining Synthetic Media in Campaigns for Elective Office, and Providing Relief for Candidates and Campaigns.
[vi] Michigan Legislature, Public Act 265 of 2023, 102nd Legislature, Regular Session, approved November 30, 2023, effective February 13, 2024.
[vii] ChatGPT. “Should PR Practitioners Disclose Their Use of AI-generated Content?” OpenAI ChatGPT, December 5, 2023.

Cayce Myers, Ph.D., LL.M., J.D., APR is a professor and director of graduate studies at the Virginia Tech School of Communication. He is the Legal Research Editor for the Institute for Public Relations. He can be reached at mcmyers@vt.edu.

In full disclosure, AI was not used in writing this piece except for the final quote.
