Elon Musk’s purchase of Twitter has set the world abuzz with talk about what free speech means on social media. Musk’s acquisition and his philosophical stance on social media content have made him a lightning rod in the global discussion of freedom of speech, disinformation, community standards, fake news, opinions, and alternative facts. The controversy has prompted celebrities and rank-and-file users alike to debate whether they will stay on the platform. On April 25th Musk himself addressed the controversy, tweeting that he hoped his “worst critics” stayed on the platform “because that’s what free speech means.”
Regardless of how Twitter evolves under Musk’s ownership, a larger issue has been playing out on social media concerning disinformation and fake news. As IPR’s 2020 Disinformation in Society Report shows, disinformation plays an insidious role in society, undermining democracy and the political process and eroding trust in the government and institutions meant to safeguard citizens’ rights. All of these issues are bound up with social media and digital content management: What should social media companies do? What are community standards? What rights does a user have online? And what obligation, if any, does a Big Tech corporation owe society?
One way these issues have been addressed is through the increasingly robust social media regulations enacted by the European Union. On April 25, 2022 the European Parliament and E.U. Member States reached an agreement on a proposal entitled the Digital Services Act (DSA), which was proposed by the European Commission in December 2020. The DSA specifically targets disinformation by mandating that Big Tech companies with more than 45 million users, roughly 10 percent of the E.U. consumer population, maintain internal processes to flag and take down speech that is considered illegal in the European Union (such as hate speech).
The DSA is part of a series of regulations meant to increase the accountability of Big Tech and protect users from deceptive and manipulative information practices. Its provisions include mechanisms for flagging suspicious content, the creation of “trusted flaggers” to identify problematic content, the ability for users to challenge content-management decisions in judicial proceedings, access for “vetted researchers” to key data used by the largest platforms, and increased transparency for the algorithms that manage content. The law also provides for “crisis protocols,” which govern the display of crisis-related content during emergency situations such as wars or pandemics, and requires transparency around algorithms and the use of sensitive data, such as race or sexual orientation, in targeted advertising.
The implementation of the DSA is not final. It must still be formally approved by the E.U. co-legislators. If approved, it would go into effect on January 1, 2024, or 15 months after final adoption, whichever is later.
The development of the DSA is emblematic of the larger Big Tech issues the E.U. has tackled in recent years. The European Union has also shown an interest in regulating the use of “dark patterns” in social media: tactics, such as information overload during registration, that limit or prevent a user’s autonomous and informed decision making. In March 2022 the E.U. even issued guidance on these “dark patterns” in social media and how to avoid them.
Criticism of the DSA centers on its implications for free speech. While the regulation stops short of mandating filters for certain content, it gives regulators real teeth: a powerful new enforcement scheme for violations, with fines capped at 6 percent of a company’s global turnover in the previous fiscal year. Enforcement of the new regulation falls to the European Commission. Still, the full implications for Big Tech are not yet clear. The E.U. has been highly active in regulating social media, as with the General Data Protection Regulation (GDPR) implemented in 2018 and the Digital Markets Act (DMA) agreed upon by the European Parliament and E.U. Member States in March 2022.
What are the implications of this new regulation for public relations and communication practice? Time will tell, but it is certain that social media platforms will have to become more transparent in their use of data and more vigilant in their regulation of speech within the E.U. In the U.S., the implications of the DSA may be more subtle. The First Amendment of the U.S. Constitution and federal statutes, such as Section 230 of the Communications Decency Act, provide more protection for social media platforms; these laws, especially the First Amendment, make it difficult to enact an American version of the DSA. However, the so-called “Brussels Effect,” in which E.U. regulations become de facto global practices because of international commerce, may provide a mechanism for normalizing E.U. legal standards within social media management in the U.S.
For public relations practitioners and communicators, this new law demonstrates the legal space in which the debate over disinformation is taking place. It also shows that while the U.S. political movement to combat the excesses of social media and fake news has stalled, the E.U. has created a new legal reality aimed at solving what it perceives as a global problem: the misuse of powerful communication tools. As communicators predict the new realities of Musk’s ownership of Twitter, it is perhaps more important to watch the ways governments are stepping in to solve the communication problems of the 2020s.
Cayce Myers, Ph.D., LL.M., J.D., APR, is the Legal Research Editor for the Institute for Public Relations. He is the Director of Graduate Studies and Associate Professor at Virginia Tech’s School of Communication, where he researches and teaches public relations and communication law.