ChatGPT has created awe and concern in the communication industry since its introduction in November 2022. Part of the professional worry is that generative artificial intelligence (AI), such as ChatGPT, could diminish human writing, automate content, and, perhaps most concerning, eliminate professionals. The awe of this new reality is that applications such as ChatGPT are smart… well, smart enough to have a real impact on the PR profession. Jobs may be eliminated or permanently modified to accommodate this new technological reality.

Beyond its operational and philosophical implications, generative AI raises some real questions, particularly legal ones. This blog addresses three major areas PR practitioners need to be aware of when using generative AI: privacy, intellectual property, and bias. From this analysis, this article attempts to anticipate the evolving future of non-human content production in the always-prescient field of public relations.

Does AI create privacy risks? Yes, especially when users disclose proprietary information.
Generative AI applications like ChatGPT use user inquiries to help craft content. There is growing concern over the privacy of users who provide large quantities of data to facilitate a better user experience, and this has implications for both the user and the user’s employer. A user could disclose proprietary information to a generative AI application, including legally protected information such as trade secrets. Because generative AI can review content as well as create it, professionals have the opportunity to input large amounts of proprietary content that is then saved into the AI application. As a result, client confidences may be breached, and proprietary content may suddenly be available to the public, free of charge.

What are the IP issues of AI-generated content?  Potentially…a lot.
Content generated by generative AI is culled from the innumerable sources that are input into the AI system. That poses two scenarios of potential infringement. The first is unintentional infringement of other owners’ content that is used to generate the new AI-produced text. The second is infringement created by separate, duplicative requests to the language generation model. This issue is compounded by other generative AI applications that can produce artistic content and images. While lack of intent may mitigate damages in a copyright claim, it does not absolve the infringer of legal responsibility. Because of that, unintentional infringement may proliferate in industries that widely use AI for content creation.

A more complex issue concerns the copyright protection afforded to AI. Because AI-produced content is not human-generated, it arguably does not have protection under U.S. copyright law. Copyright in the U.S. has two basic requirements: 1) originality of the work and 2) fixation in a tangible medium, and originality has long been understood to require human authorship. Because AI produces the work without human input, that basic requirement of copyright is absent, making AI content either an unprotected public domain work or a derivative work of protected copyrighted content. This is complicated by the issue of citation, or the lack of citations, in AI-generated content. While proper citation is an issue in the arena of plagiarism, the lack of citations can also create scenarios where otherwise conscientious practitioners unwittingly engage in copyright infringement.

The U.S. Copyright Office (USCO) is reluctant to award copyright protection to AI-generated content. However, there is the potential for AI-created content to receive copyright protection if the level of human input reaches a certain threshold. That issue is a matter for the USCO and the courts to continue to examine over the coming years, and its answer may lie in a case-by-case analysis. What is certain is that the USCO made great strides in 2022 toward streamlining the copyright claims process with the Copyright Claims Board, a voluntary board that hears copyright infringement claims seeking damages of less than $30,000. 2023 promises to be a year in which the USCO may develop more specific approaches to AI-generated content and to the threshold of human contribution that results in legal protection.

Can AI be biased?  In short, yes, but it depends on how you use it.
The algorithm that ChatGPT uses is based on a massive quantity of data input into the system. Ultimately, the content that is input into the system is subject to the potential biases of those who input the data. The Equal Employment Opportunity Commission (EEOC) has concerns about bias in AI-driven employment practices, releasing an agency-wide initiative promoting “algorithmic fairness” in 2021. In January 2023, the EEOC held a panel presentation on AI and discrimination, calling the use of AI and automated systems a “new civil rights frontier.” Outside of employment bias, there is also the concern that bias can occur unintentionally in content. Those using tools like ChatGPT should verify and proofread content carefully to ensure that the material complies with ethical and legal standards regarding discrimination and with diversity, equity, and inclusion (DEI) initiatives.

How will legal AI issues affect PR practice?  It depends on the courts and PR practitioners.
As AI continues to evolve and the use of generative AI grows, novel legal questions may arise. For instance, if AI creates two identical texts, can one user sue the other for copyright infringement? Could AI-produced content contain defamatory statements, and, if so, who would be held legally responsible: the AI or the individual who used it? Right now, the answers to those questions are murky at best, and they are the types of questions courts may be grappling with for the next few years.

For public relations practitioners, generative AI applications hold great promise because they can improve communication content and serve as a personal editor, proofreader, and sounding board. However, these applications also present challenges not seen before in the communication and legal spheres. Because of the uniqueness of this new technology, PR practitioners should be more deliberate and reflective in their use of AI. There are so many potential unintentional pitfalls, particularly around IP infringement, that PR practitioners need to make informed decisions rooted in verifying content ownership and accuracy. Because the legal landscape for AI is still evolving, practitioners should also be vigilant in keeping up with legal trends in this quickly changing field.

One thing that is fairly certain in the uncertain world AI creates is that generative AI is here to stay.  Just try logging on to ChatGPT and you may get a notice to come back later because it’s at capacity.  Technological innovation doesn’t go backward, and public relations practice has already acknowledged that ChatGPT and other AI programs like it can be transformative to the industry.


Cayce Myers, Ph.D., LL.M., J.D., APR is the Legal Research Editor for the Institute for Public Relations.  He is the Director of Graduate Studies and Associate Professor at the Virginia Tech School of Communication.

Heidy Modarelli handles Growth & Marketing for IPR. She has previously written for Entrepreneur, TechCrunch, The Next Web, and VentureBeat.