On May 29, 2020, President Donald Trump tweeted “Revoke 230!” to his more than 80 million Twitter followers. The tweet is a curious one because, rather than delivering a general political message, it speaks directly to a section of the Communications Decency Act (CDA), specifically Title 47, United States Code, section 230(c). How could a U.S. Code subsection passed in 1996 become a political rallying cry in the presidential election of 2020? After all, until a few weeks ago section 230 of the CDA was relatively unknown outside the circles of policy wonks, media lawyers, and communication law professors. The answer lies in the political matrix of fake news, social media removal policies, Twitter tags, and growing criticism on the right that conservative messaging is being censored.

Of course, Trump’s tweet was referring to his executive order issued the day before, which called for a shakeup of how social media is regulated in light of what Trump described as an assault on conservative voices online. While this is absolutely a political issue, it is much more complex than some may think. For anyone in the communications profession, section 230 is extremely important, and calls for its revocation are not likely to go away anytime soon. PR practitioners should take note, because the debate over section 230 is going to shape how the internet works in the future and how PR is practiced.

What is Section 230 and Why is it Controversial Now?
At issue in the new executive order is section 230(c) of the CDA, which protects content providers, such as websites and social media sites, from liability for what third parties post. Passed in 1996, the CDA is credited with allowing the internet to grow and take its current form. In fact, the purpose behind the statute is stated in 47 U.S.C. §230(a)(3): “The internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.” This was to be achieved with a “minimum of government regulation.” Specifically, section 230 states that an “interactive computer service,” e.g., a website or social media site, is not a “publisher or speaker” of content posted by third parties, meaning it is largely immune from liability for that content (Communications Decency Act, 1996).

These protections for websites and social media are viewed as essential to their success because they shield them from litigation that would make the management, delivery, and regulation of content extremely difficult. Of course, the CDA was passed at a time when the internet was very different from today (no smartphones, no real social media, no high-speed internet access). However, its purpose was clear: to let the internet grow organically without the threat or fear of litigation slowing and atrophying the process of innovation. A classic example of the power of the CDA is found in its immunity from lawsuits arising from third-party posts, such as a defamatory comment on a social media site. While the author of that defamatory content would be legally responsible and could potentially be sued, the social media site itself would be immune from litigation. This is not to say that the CDA asks nothing of websites. Section 230(c) provides for “Good Samaritan” blocking and screening of offensive material, defined under 47 U.S.C. §230(c)(2)(A) as “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content. This provision was largely about protecting children from harmful online content (Communications Decency Act, 1996).

While the creation and growth of the internet are closely tied to section 230, the statute has not been without criticism from the left, right, and center. There were early legal challenges to section 230 from groups that wanted to hold websites accountable for the removal of content. Such was the case in Zeran v. AOL, decided by the U.S. Court of Appeals for the Fourth Circuit in 1997. There, the Fourth Circuit held that section 230 of the CDA did provide immunity to internet service providers for content posted by third parties (the lawsuit against AOL was barred by section 230 even though AOL did not take down inflammatory content posted by an anonymous user). Zeran v. AOL is a watershed case because it solidified section 230’s power to provide immunity for third-party content.

Criticism of the CDA, however, centers on the broad immunity that websites and social media enjoy. Since Zeran v. AOL, the amount of third-party content posted on websites has proliferated because of social media. One issue is that section 230’s immunity takes away the ability to hold social media sites and websites responsible for the content they maintain. Removing content from these sites is sometimes extremely difficult: oftentimes a complaint has to be made, and the site then runs an internal process to evaluate the removal request (and may still deny it). These denials are within the rights of the social media or website platform, and the processes are largely left to their discretion. Critics claim that this immunity fosters content such as “fake news,” conspiracy theories, and deep fakes that have decreased trust in news in general and contributed to a decline in public discourse.

Of course, the internet has changed dramatically since the CDA’s passage in 1996. The advent of social media has changed the process and substance of how people communicate. It has given a platform to many groups previously marginalized in society. However, social media has also given rise to a new “public forum” for speech. Social media platforms now host many hot-button political communications, not just among citizens but among elites. Compounding this issue is that social media sites are private businesses, not the government, and are therefore not bound by the First Amendment constraints that apply to government restriction of speech. This is why social media sites can develop policies that regulate content and, in some cases, even remove it. The regulation of offensive content or “hate speech,” for instance, is not an area governed by American law but is largely regulated by the social media sites themselves. Critics claim that these content rules are frequently arbitrary and politically biased. This raises the question of whether social media sites have stepped into a realm where they are now the arbiters of public discourse, choosing sides and determining who gets to be heard and who does not.

What Does Trump’s Executive Order Say About Social Media?
President Trump issued his executive order concerning social media on May 28, 2020. Rooted in criticism of social media and its alleged censorship of conservative voices, the executive order specifically addresses section 230 of the CDA. Social media immunity and the alleged suppression of conservative content, however, had received high-level attention in the months before the order. On February 19, 2020, the U.S. Department of Justice (DOJ) convened a workshop on CDA section 230. At that workshop, Attorney General William Barr stated that the protection given to companies under the CDA was based on an internet reality of the mid-1990s. He argued that the “substance” of websites and message boards had changed and that the protections of the CDA, while important during the early days of the internet, may be inappropriate in an era where social media companies are large, wealthy institutions (Department of Justice, 2020). Then, on May 26, 2020, Twitter added a tag to one of Trump’s tweets claiming that mail-in ballots would result in voter fraud. The following day, May 27, 2020, the U.S. Court of Appeals for the D.C. Circuit upheld a district court’s dismissal of a case in which Freedom Watch and conservative commentator Laura Loomer had sued Google, Facebook, Twitter, and Apple. Freedom Watch and Loomer accused Google and the social media platforms of suppressing conservative speech, but both the district court and the Court of Appeals held that the plaintiffs had failed to make colorable legal claims.

Trump’s Executive Order on Preventing Online Censorship is viewed as both a legal and a political statement. Its substance is now a political topic and no doubt will be raised by the President before the election in November. In it, he argues that social media platforms suppress conservative communications and that the increased role of content removal, suspension of accounts, and the new approach of tagging content makes these sites “content creators” who engage in “selective censorship” of conservative voices. The result of this change in approach, according to the President, is that section 230 should not shield them, because the statute does not protect a “publisher” of content (White House, 2020).

The order outlines four actions the administration will take concerning social media and websites. First, it instructs the Secretary of Commerce, in concert with the U.S. Attorney General and acting through the National Telecommunications and Information Administration (NTIA), to petition the Federal Communications Commission (FCC) to clarify section 230(c), specifically which actions, such as suppression or removal of online content, can cause a site to lose its immunity protection under section 230. Second, federal departments and agencies must examine their own spending on social media sites and other online platforms. The DOJ will examine the “viewpoint-based speech restrictions” of each site and determine the level of harm caused by each, presumably to evaluate whether the federal government should continue using these platforms for paid promotions. Third, complaints about online censorship collected by the Tech Bias Reporting tool will be sent to the DOJ and the Federal Trade Commission (FTC). The FTC then will be able to take action against deceptive commercial practices, even where companies are immunized under section 230 of the CDA. Last, the U.S. Attorney General will establish a working group to enforce state laws regarding “unfair or deceptive acts or practices.” This working group will gather information about users who are specifically scrutinized by online platforms, algorithms that foster content suppression, the application of host sites’ content policies, the use of biased third-party contractors, and the comparative ability of users to earn income from these sites (White House, 2020).

The impact of this executive order is somewhat unknown. The order does not revoke section 230’s immunity for social media and websites, and neither an executive order nor, for that matter, the FCC is likely to change the protection afforded by a federal statute like section 230. Enforcement and investigations of bias by federal agencies will likely take place, but their effects are unknown because the order is still recent. What we do know is that the Executive Order on Preventing Online Censorship places social media squarely in the middle of a larger national political discussion. While some commentators argue that the legal impact of the order is muted, the political impact may be very consequential. It signals a change in approach to social media regulation and is one of the first, in what may become many, calls to reform social media companies and the immunity they currently enjoy.

How Does This Affect Public Relations?
Any regulatory change to social media will affect anyone in communication practice, and this executive order will likely be challenged in court by a variety of organizations. Public relations practitioners in the immediate future will likely see no change in how they practice digital communication. Platforms, however, may start to carry political connotations (whether they like it or not). The decision to fact-check and comment on user-generated content is controversial. After all, users select whom to interact with based on their own preferences. Having a social media company provide unsolicited editorial comments on content presents a host of problems, including alienating users, raising questions about the validity and biases of the fact-checker, and perhaps introducing a level of apprehension among users who fear their posts may be fact-checked by an unknown site coordinator. Conversely, the lack of controls on online content has given rise to the problems associated with fake news and conspiracy theories, which have a detrimental impact on American discourse and society in multiple spheres (even spheres outside politics). Twitter’s decision to place tags on tweets, combined with Facebook’s less aggressive approach to regulating content (at least at the time of this writing), could create a fractured social media landscape where speaking on certain platforms automatically sends a political statement.

Support for social media platforms’ immunity is eroding in some sectors and may eventually change. It is difficult to think about Trump’s executive order outside a political context, especially during an election year. However, there is growing awareness of the policies and approaches social media sites take toward user content. Users likely have experienced, or know someone who has experienced, the takedown of certain content. There is heightened criticism of social media privacy policies and the use of personal data. There is real concern about the insidious nature of fake news and conspiracy theory websites that promote disinformation. Meanwhile, there is increased acknowledgment across the political spectrum that social media is a space where much of society’s discussion takes place. Given the criticism of immunity from the Trump administration and conservatives, the issue will likely become part of a larger policy debate. While the issue is now packaged in a left-right dichotomy, as time progresses there may be broader support for reform, and section 230, passed in a technological reality of nearly 25 years ago, may be one of the first areas addressed. One thing is certain: technology is changing, and the power of social and digital media is increasing. As with anything, as prominence grows, so does criticism. Section 230, or more simply “230,” has become sloganized, and with that comes a new debate about the power of social media sites to regulate content, the impact of social media content on political discourse, and the limits of protection given to any “interactive computer service.”

Cayce Myers, Ph.D., LL.M., J.D., APR is the Legal Research Editor for the Institute for Public Relations. He is an associate professor and Director of Graduate Studies at Virginia Tech Department of Communication where he teaches public relations. Email him at mcmyers@vt.edu or follow him on Twitter @CayceMyers.

References:
Department of Justice. (2020). Attorney General William P. Barr delivers opening remarks at the DOJ workshop on Section 230: Nurturing innovation or fostering unaccountability? Retrieved from https://www.justice.gov/opa/speech/attorney-general-william-p-barr-delivers-opening-remarks-doj-workshop-section-230

Communications Decency Act, 47 U.S.C. §230 (1996).

White House. (2020). Executive Order on Preventing Online Censorship. Retrieved from https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/

Freedom Watch, Inc. v. Google, Inc., No. 19-7030 (D.C. Cir. May 27, 2020).

Zeran v. AOL, 129 F.3d 327 (4th Cir. 1997).
