Download Full Article (PDF): Deconstructing: Artificial Intelligence Regulation

This research brief is provided by the IPR Digital Media Research Center

Introduction

Artificial Intelligence (AI) has been a disruptive force within the communication industry. Regulation of this new technology has yet to keep pace with the development of generative AI. However, within the United States, the President, Congress, federal agencies, state legislatures, and municipal governments have attempted to provide a framework to regulate AI. These regulations attempt to strike a balance between allowing the technology to grow and guarding against disinformation, discrimination, and privacy violations. This article examines current trends in U.S. AI regulation, pointing out the legal and regulatory philosophies guiding early attempts to manage generative AI platforms. It concludes with suggestions for how PR practitioners can navigate the evolving parameters of AI regulation.

ARTIFICIAL INTELLIGENCE: THE COMMUNICATION ISSUE OF THE 2020s

The power of generative artificial intelligence has inspired both awe and fear among those in knowledge-based careers such as public relations. In the trade press and at industry seminars, the same questions recur: how do we use artificial intelligence (AI), how can AI help us with communication strategy, and will AI eventually make public relations practitioners obsolete? Generative AI's disruption of communication is analogous to the creation of the internet. When the internet was opened to public use in 1993, some organizations were hesitant to join the online revolution, while others rapidly adopted the new technology. By the late 1990s the proliferation of the internet had fueled the dot-com bubble, and many of those companies crashed in the early 2000s. Following that crash, internet regulation expanded throughout the 2000s, producing the regulatory environment we operate in today.

The internet’s evolution is illustrative of how AI regulation is likely to develop. The technology is evolving rapidly, and there is uncertainty about how it will be implemented. Managers and communicators share both an interest in AI and skepticism about its real benefits. Adoption is also accelerated by the democratization of AI tools. Using machine learning and generative AI does not necessarily require custom software, and the resources that once posed barriers to AI use, such as hardware, software, machine learning models, data, and data science expertise, are increasingly available, with costs trending downward for organizations. That means AI is gaining traction as a tool in a variety of work settings, large and small.

This situation puts lawmakers and industry organizations seeking to regulate generative AI in this early phase in a difficult position. Too much regulation can stifle the growth of an important new technology. No regulation could facilitate a free-for-all in generative AI development, with unintended harms to user privacy, increased discrimination, and the loss of intellectual property. This article examines existing and proposed U.S. laws and regulations on AI and provides suggestions for how professional communicators practicing in the U.S. can navigate this fast-moving and evolving technology.

What Does This Mean for U.S.-Based PR Practitioners?

Giving public relations practitioners precise guidance for navigating their communication work is difficult given the state of flux in AI regulation. At this stage the legal system is still sorting out where the problem points in AI lie, with privacy, discrimination, and disinformation being major areas of concern. Going forward, PR practitioners should be aware of three major issues.

1. EXPECT REGULATORY CHANGE FROM MULTIPLE LEVELS OF GOVERNMENT.

U.S. law is in a state of flux, which means that as AI technology evolves, so will the law. Federal agency rules are likely to address the particular issues of AI in communication, so practitioners should pay close attention to FTC regulations in this area. That agency is concerned with many of the topical issues in communication, namely disinformation. However, U.S.-based practitioners increasingly communicate in a global marketplace whose laws may differ from those in the U.S. For instance, the European Union’s GDPR regulates data privacy, which has major implications for the construction of AI platforms. Understanding the evolving landscape of AI regulation means looking at U.S. federal, state, and local law, but it also requires a global perspective.

2. COMBATING DISCRIMINATION AND FAKE NEWS ARE MAJOR DRIVERS OF REGULATION.

AI regulation has increasingly focused on discrimination and false information. At the foundation of artificial intelligence is human knowledge, which has been developed over thousands of years and contains inaccuracies, biases, and disinformation that AI can replicate. The bottom line is that AI is only as good as the data it uses to generate content, so professional communicators should be wary of the accuracy of any content generated exclusively by AI. Public relations firms and in-house functions have a unique opportunity to discuss bias and information accuracy with clients and employers, because so much of the law is rooted in transparency. PR professionals have worked on issues of organizational transparency since the dawn of corporate PR, so regulations like New York City’s, which mandates disclosure of algorithm use and potential bias, lend themselves well to the transparent practices of communication.

3. PR PROFESSIONALS NEED TO DEVELOP AN ORGANIZATIONAL OR INDUSTRY STANDARD TO DEAL WITH EVOLVING AI.

AI technology will evolve faster than the laws that regulate it. Because of that, public relations professionals will need to establish professional standards and norms for AI use. Those conversations need to happen now and need to continue as AI’s place in the field becomes more solidified. They should include frank discussions of ethics, organizational reputation, transparency, and business goals. Industry ethics guides provide a framework for difficult discussions about implementing AI, but these discussions must consider both the deliberate and unintended consequences of AI use. These conversations may also draw on industry standards in niche subfields. For example, AI guidelines have already been established in some sectors, such as engineering and healthcare. If a professional is practicing in one of these areas, those standards can serve as a guidepost for communications as well.

For more information, download the full report HERE.

Cayce Myers, Ph.D., LL.M., J.D., APR, is the Legal Research Editor for the Institute for Public Relations. He is the Director of Graduate Studies and an Associate Professor at the Virginia Tech School of Communication.
