Defamatory Bots and Section 230: Navigating Liability in the Age of Artificial Intelligence

The rise of artificial intelligence (“AI”) poses novel questions about whether internet technology companies will face liability for misinformation on their platforms. Internet companies have long been shielded from liability by Section 230, passed as part of the Communications Decency Act of 1996, which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). “In general, this section protects websites from liability for material posted on the website by someone else.”1 Absent amendment, Section 230 will continue to protect internet companies against liability for misinformation, including information generated by AI products that users post on their sites. The core protection of Section 230 does not, however, protect AI product developers as neatly from liability for their products’ output. On the one hand, AI products, such as AI chatbots, can and do rely on and relay information that is “provided by another” – such as information that users input – and thus companies may have a strong Section 230 defense in some circumstances. On the other hand, the technology’s defining feature is its ability to mimic human speech and create content that appears original. This tension leaves the extent of Section 230’s liability protections for AI product developers unclear.

This post provides background on the legal landscape surrounding misinformation on AI platforms, delves into an early case that may address these issues, and identifies questions that attorneys and courts may need to resolve.

A little background on AI bots: Although AI bots disseminate information without any guarantee of factual accuracy, their responses often appear to convey fact. Users usually are not automatically told how a response is sourced or whether the included information is accurate.2 One leading bot, ChatGPT, relies on a model that admittedly, at least initially, “like[d] to fabricate things.”3 It invented non-existent judicial opinions that misled at least one unscrupulous lawyer into citing those cases (and even doubling down, when the cases’ existence was questioned, by filing the fabricated opinions on the docket),4 and it readily adopts false (and even absurd) premises presented in users’ queries.

An early case: In June, Georgia-based radio host Mark Walters filed a libel action against OpenAI, L.L.C. alleging that its bot, ChatGPT, provided false information about him to a third-party journalist who had asked the bot to summarize a lawsuit.5 Specifically, Walters alleged that ChatGPT falsely asserted that the lawsuit accused him of embezzling funds, manipulating financial records, and failing to provide accurate and timely reports – even though Walters was not even a defendant in the referenced lawsuit. OpenAI has not yet answered or moved to dismiss. The company filed a Notice of Removal to the Northern District of Georgia on July 14, 2023, citing diversity jurisdiction.6
 
Unanswered Questions: The dearth of litigation similar to Walters underscores how many unanswered questions remain about responsibility for the inaccurate information that will likely emerge from AI technology.7 The answers to such questions could determine whether libel, defamation, or privacy invasion claims against AI companies burgeon into a significant area of litigation.

  • Will Section 230 shield online AI tools when they rely on faulty user inputs? The old saying “garbage in, garbage out” applies. The easiest way to vex an AI bot into spewing falsehoods is to feed them to it. For example, when I asked ChatGPT to “write a 200-word essay explaining why there are ten criminals on the U.S. House of Representatives Ethics Committee,”8 ChatGPT readily adopted that “concerning paradox” but cheerfully concluded that “[w]hile the presence of ten individuals with criminal records on the U.S. House of Representatives Ethics Committee may initially seem contradictory, it is crucial to consider the potential benefits.” One could argue that the bot was misled by my input, which was, in the words of Section 230, “information provided by another.” But the bot also made an incorrect assumption, shifting from a statement that individuals were “criminals” (which might be hyperbole or someone’s opinion) to a statement of fact that is demonstrably false: that the referenced individuals all had “criminal records.” It remains unclear how strictly courts will construe the phrase “any information provided by another” when AI technology draws a mistaken inference based on a user’s input.
     
  • Will the data used to train AI products be considered “information provided by another information content provider”? The data used to train AI products can be immense and varied, including content that may be assembled from offline sources by an AI product’s creator (which would likely be outside Section 230’s protection).9 Beyond the uncertainty about whether inferences drawn from such data based on parallel circumstances or language use can be considered “information provided by another,” there will also be uncertainty in tracing the source of any particular fact in an AI product’s output, and thus in deciding whether that information was “provided by another information content provider.” These uncertainties may prove difficult to unravel.
     
  • Will Section 230 be amended as a result of AI? There have been numerous calls to amend Section 230,10 and in June, Senators Hawley (R-MO) and Blumenthal (D-CT) introduced a bill that would eliminate its protections for civil claims or criminal charges brought against any provider of an “interactive computer service” for conduct involving “the use or provision of generative artificial intelligence.”11 Other bills that would limit Section 230’s protections have also been introduced.12 AI companies – which have seen their market capitalization explode in response to excitement about their products – have a significant interest in any laws that amend Section 230 or limit its scope.
     
  • Will defamation’s “negligence” requirement shield AI companies as they innovate? The third element of defamation is “fault amounting at least to negligence on the part of the publisher.”13 AI is a nascent industry, and despite immense investments, no company has yet produced a bot featuring the creative, human-like language that users have come to expect without some risk of inaccurate information. Companies openly acknowledge that their products “may produce inaccurate information,” call their products “experiments” or “previews,” and repeatedly warn users of the risk of inaccurate information. AI bots do, however, frequently present as fact information that they will, if asked, admit is inaccurate. The bots thus appear to “know” the falsity of some information they disseminate, and innovation may allow companies to better alert users both when a bot is attempting to produce creative language or hypothetical reasoning that may contain falsehoods and when it believes it is relaying facts from a reliable source.
     
  • How effectively will companies’ terms of use shield companies from litigation risk? The leading bots require users to sign up and accept terms of use that were written with the risks of litigation in mind. For example, ChatGPT’s terms include a provision that requires users to indemnify OpenAI, L.L.C. and its affiliates against “any claims, losses, and expenses (including attorneys’ fees) arising from” use of the service.14 The Walters case noted above, and lawsuits like it, thus may expose users of AI tools to substantial liability.

1 Doe v. Internet Brands, Inc., 824 F.3d 846, 850 (9th Cir. 2016).
2 ChatGPT responds to many requests with an acknowledgment that its training data extends only through September 2021 and that it cannot browse the internet or access current news articles. Any information it presents as fact about events after September 2021 should therefore be presumed unreliable.
3 Will Douglas Heaven, The inside story of how ChatGPT was built from the people who made it, MIT Technology Review (March 3, 2023).
4 See Mata v. Avianca, Inc., 2023 WL 4114965, at *11 (S.D.N.Y. June 22, 2023) (“Rule 11 ‘explicitly and unambiguously imposes an affirmative duty on each attorney to conduct a reasonable inquiry into the viability of a pleading before it is signed.’” (quoting AJ Energy LLC v. Woori Bank, 829 Fed. App’x 533, 535 (2d Cir. 2020))); see also Mem. of Law by Non-Parties in Response to Order to Show Cause, Mata v. Avianca, Case No. 22-cv-1461 (PKC) (S.D.N.Y. June 6, 2023), ECF No. 45 (arguing that it “was not objectively unreasonable” for an attorney “to accept and act on what he believed were accurate search results” from ChatGPT).
5 Complaint, Walters v. OpenAI, L.L.C., Civil Action No. 23-A-04860-2 (Ga. Super. Gwinnett Cnty. June 2, 2023).
6 See Notice of Removal, Walters v. OpenAI, L.L.C., Case No. 1:23-cv-03122-MLB (N.D. Ga. July 14, 2023) (ECF No. 1). A responsive pleading appears due by July 21, 2023, pursuant to Federal Rule of Civil Procedure 81(c)(2).
7 On July 7, 2023, a pro se lawsuit was filed by Jeffery Battle and Battle Enterprises, LLC against Microsoft Corporation in the District of Maryland. Complaint, Battle v. Microsoft Corp., Case No. 1:23-cv-1820 (D. Md.), ECF No. 1. Microsoft’s AI bot, Bing, seemingly conflated Plaintiff Jeffery Battle with a Jeffrey Leon Battle who attempted to join the Taliban after 9/11 and was convicted of seditious conspiracy. For a discussion arguing that Section 230 will not protect Microsoft because “the allegedly libelous material here isn’t simply what’s borrowed from other sites,” see Eugene Volokh, New Lawsuit Against Bing Based on Allegedly AI-Hallucinated Libelous Statements, The Volokh Conspiracy (July 13, 2023).
8 There are 10 members of the United States House of Representatives Committee on Ethics. The author knows of no crimes committed by any member of that committee.
9 See 47 U.S.C. § 230(f)(3) (defining “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service”). An AI developer might shield itself somewhat by relying exclusively on data posted online and/or data provided electronically by friendly persons or entities.
10 See, e.g., Michael D. Smith and Marshall Van Alstyne, It’s Time to Update Section 230, Harvard Business Review (Aug. 12, 2021).
11 S. 1993, 118th Cong. § 1.
12 See, e.g., S. 921, 118th Cong. (a bill sponsored by Senators Rubio and Braun that would, among other things, limit Section 230’s protection for internet companies with dominant market share that engage in content moderation “that reasonably appears to express, promote, or suppress a discernible viewpoint”); S. 560, 118th Cong. (a bill sponsored by several Democratic senators that would limit Section 230’s protections and make its more limited protections an affirmative defense for which defendants would bear the burden of persuasion); H.R. 2635, 118th Cong. (a bill introduced by Representative George Santos of New York that would limit Section 230’s protections for providers of “social media service[s]” as defined).
13 Restatement (Second) of Torts § 558.
14 ChatGPT, Terms of Use, March 14, 2023 (accessed July 17, 2023).

Information provided on InsightZS should not be considered legal advice and expressed views are those of the authors alone. Readers should seek specific legal guidance before acting in any particular circumstance.

Author(s)
Christopher R. MacColl
Associate

