Are AI Overviews Creating New Risk of Libel for Search Engines?

Eleanor Nagel-Bennett, MJLST Staffer

Section 230 of the Communications Decency Act (“CDA”), codified at 47 U.S.C. § 230, protects online service providers from civil liability for content published on their servers by third parties. Essentially, if a Google search for one’s name produced a link to a blog post containing false and libelous content about that person, the falsely accused individual could pursue a defamation claim against the publisher of the blog, but not against Google. Under § 230, Google is not considered the speaker or the publisher of the libelous statements on the blog, despite returning them on its search engine results page. Specifically, § 230 provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” for purposes of civil liability.[i]

However, in May 2024 Google rolled out an “AI Overview” feature on its search engine. The AI Overview is powered by Gemini, Google’s generative artificial intelligence chatbot.[ii] Gemini generates a response to each Google search by combining information from internet sources and composing a complete overview answer to the search query using “multi-step reasoning, planning and multimodality” functions.[iii] After a user submits a query, the AI Overview is displayed at the top of the search results. In its first few weeks, Google’s AI Overview suffered from hallucinations, producing “odd and erroneous” outputs.[iv] Some of the odd results were obviously false, such as suggesting a user try adhering cheese to their pizza with a glue stick.[v]

Besides going viral online, the silly results were largely inconsequential. However, there were also several more serious reports of Google’s AI Overview feature generating misinformation that was harder to identify as false. One result claimed that President Barack Obama was the first Muslim President, a popular but demonstrably false conspiracy theory that has circulated the internet for years, while another told users that certain poisonous mushrooms were safe for human consumption.[vi] Google has since changed the data pool used to produce AI Overviews and now rarely produces blatantly false results, but is “rarely” enough when 8.5 billion searches are run on Google each day?[vii]

This raises the question: can search engines be held liable for libelous content published by their generative AI? A plaintiff would first have to persuade the court that § 230 of the Communications Decency Act is not a statutory bar to claims against generative AI. A recent consensus of legal scholars anticipates that courts will likely find the CDA does not bar claims against a company producing libelous content through generative AI, because content produced by generative AI is original work, “authored” by the AI itself.[viii]

For an illustrative comparison, consider how defamation claims against journalists work as compared to claims over traditional search engine results. While a journalist may write stories based on interviews, research, and experience, the language she publishes is her own creation, and she can be held liable for it despite sourcing some pieces from other speakers. Traditional search engines, on the other hand, have historically presented the sourced material directly to the reader, so they are not the “speaker” and are therefore insulated from defamation claims. Enter generative AI: because its output is likely to be considered original work by courts, that insulation may erode.[ix] Effectively, introducing an AI Overview feature forfeits the statutory bar to claims under § 230 of the CDA that search engines have relied upon to avoid liability for defamation.

But even without an outright statutory bar to defamation claims over a search engine’s libelous AI output, there is disagreement over whether generative AI output is relied upon seriously enough by humans to give rise to a defamation claim. Some believe that AI-generated text should not be interpreted as a reasonably perceived factual claim, and therefore argue that AI-generated content cannot give rise to a claim for defamation.[x] This is where the legitimacy of a result displayed on a popular search engine comes into play. Even if AI-generated text is not ordinarily perceived as a factual claim, a result displayed at the top of a search engine’s results page carries more weight and authority, even though users might otherwise be wary of AI outputs.[xi]

While no landmark case law has yet developed on liability for libelous AI output, several lawsuits have already been filed over the assignment of liability for libelous content produced by generative AI, including at least one case against a search engine for AI-generated output displayed on its results page.[xii]

Despite the looming potential for consequences, most AI companies have paid little attention to the risk of libel created by the operation of generative AI.[xiii] While all AI companies should heed these risks, search engines previously insulated from civil liability by § 230 of the CDA should be especially wary of just how much liability they may be opening themselves up to by including an AI Overview on their results pages.

 

Notes

[i] 47 U.S.C. §230(c)(1).

[ii] Liz Reid, Generative AI in Search: Let Google Do the Searching for You, Google (May 14, 2024), https://blog.google/products/search/generative-ai-google-search-may-2024/.

[iii] Id.

[iv] Liz Reid, AI Overviews: About Last Week, Google (May 30, 2024), https://blog.google/products/search/ai-overviews-update-may-2024/.

[v] Matt O’Brien, Google Makes Fixes to AI-Generated Search Summaries After Outlandish Answers Went Viral, Associated Press (May 30, 2024), https://apnews.com/article/google-ai-overviews-hallucination-33060569d6cc01abe6c63d21665330d8.

[vi] Id.

[vii] Jordan Brannon, Game-Changing Google Search Statistics for 2024, Coalition (Apr. 5, 2024), https://coalitiontechnologies.com/blog/game-changing-google-search-statistics-for-2024.

[viii] Joel Simon, Can AI Be Sued for Defamation?, Colum. Journalism Rev. (Mar. 18, 2024).

[ix] Id.

[x] See Eugene Volokh, Large Libel Models? Liability for AI Output, 3 J. Free Speech L. 489, 498 (2023).

[xi] Id.

[xii] In July 2023, Jeffery Battle of Maryland filed suit against Microsoft over an AI-generated search result on Bing accusing him of crimes he did not commit. The plaintiff, Jeffery Battle, is a veteran, business owner, and aerospace professor. When his name is searched online, however, Bing’s AI-generated overview accuses him of crimes committed by a different man, Jeffrey Leon Battle, who pled guilty to seditious conspiracy and levying war against the United States after he tried to join the Taliban in the wake of 9/11. Bing’s results-page overview, powered by ChatGPT, combines information about the two men into one. See id. at 492.

[xiii] Id. at 493.