Citation
Jason A. Gallo & Clare Y. Cho, Social Media: Misinformation and Content Moderation Issues for Congress (CRS Report R46662) (Jan. 27, 2021) (full-text).
Overview
Some Members of Congress are concerned about the spread of misinformation (i.e., incorrect or inaccurate information) on social media platforms and are exploring how it can be addressed by companies that operate social media sites. Other Members are concerned that social media operators' content moderation practices may suppress speech. Both perspectives have focused on Section 230 of the Communications Act of 1934 (47 U.S.C. §230), enacted as part of the Communications Decency Act of 1996, which broadly protects operators of "interactive computer services" from liability for publishing, removing, or restricting access to another's content.
Social media platforms enable users to create individual profiles, form networks, produce content by posting text, images, or videos, and interact with content by commenting on and sharing it with others. Social media operators may moderate the content posted on their sites by allowing certain posts and not others. They prohibit users from posting content that violates copyright law or solicits illegal activity, and some maintain policies that prohibit objectionable content (e.g., certain sexual or violent content) or content that does not contribute to the community or service that they wish to provide. As private companies, social media operators can determine what content is allowed on their sites, and content moderation decisions could be protected under the First Amendment. However, operators' content moderation practices have created unease that these companies play an outsized role in determining what speech is allowed on their sites, with some commentators stating that operators are infringing on users' First Amendment rights by censoring speech.
Two features of social media platforms — the user networks and the algorithmic filtering used to manage content — can contribute to the spread of misinformation. Users can build their own social networks, which affect the content that they see, including the types of misinformation they may be exposed to. Most social media operators use algorithms to sort and prioritize the content placed on their sites. These algorithms are generally built to increase user engagement, such as clicking links or commenting on posts. In particular, social media operators that rely on advertising placed next to user-generated content as their primary source of revenue have incentives to increase user engagement. These operators may be able to increase their revenue by serving more ads to users and potentially charging higher fees to advertisers. Thus, algorithms may amplify certain content, which can include misinformation, if it captures users' attention.
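The report describes engagement-driven ranking only at a conceptual level. The following is a purely illustrative sketch, not drawn from any platform's actual systems, of how a feed that scores posts solely on predicted engagement can end up surfacing attention-grabbing misinformation; the field names and scoring weights are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    clicks: int    # times users clicked through
    comments: int  # times users commented
    shares: int    # times users shared the post

def engagement_score(post: Post) -> float:
    """Score a post by engagement signals.

    The weights below are illustrative assumptions; a real platform would
    learn such weights from user-interaction data.
    """
    return 1.0 * post.clicks + 2.0 * post.comments + 3.0 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a user's feed so the most engaging posts appear first.

    Nothing in this objective considers whether a post is accurate, so
    content that captures attention, including misinformation, can rank
    above more accurate but less engaging content.
    """
    return sorted(posts, key=engagement_score, reverse=True)

# Example: a heavily shared sensational post is surfaced first regardless of accuracy.
feed = [
    Post("accurate-report", clicks=40, comments=5, shares=2),
    Post("sensational-claim", clicks=60, comments=30, shares=50),
]
print([p.post_id for p in rank_feed(feed)])  # ['sensational-claim', 'accurate-report']
```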
Congress has held hearings to examine the role social media platforms play in the dissemination of misinformation. Members of Congress have introduced legislation, much of it amending Section 230, that could affect the content moderation practices of interactive computer services, including social media operators. In 2020, the Department of Justice also sent draft legislation amending Section 230 to Congress. Some commentators identify potential benefits of amending Section 230, while others have identified potential adverse consequences.
Congress may wish to consider the roles of the public and private sector in addressing misinformation, including who defines what constitutes misinformation. If Congress determines that action to address the spread of misinformation through social media is necessary, its options may be limited by the reality that regulation, policies, or incentives to affect one category of information may affect others. Congress may consider the First Amendment implications of potential legislative actions. Any effort to address this issue may have unintended legal, social, and economic consequences that may be difficult to foresee.