Violet Butler, MJLST Note/Comment Editor
Generative AI programs such as ChatGPT have become a ubiquitous part of many Americans’ lives. Since the public launch of generative AI programs in 2022, hundreds of millions of people around the world have tried these shiny new products, and nearly forty percent of Americans have used generative AI.[1] But as with any new product, not all of the kinks have been worked out yet. Unfortunately, these generative AI models, kinks and all, have taken the world by storm.
When Elon Musk announced that X (formerly Twitter) would have its own generative artificial intelligence (“AI”), he named it “Grok.” Now, after less than two years online, Grok has started raising serious concerns. On July 8, 2025, Grok began responding to X users’ prompts in a decidedly antisemitic and far-right way, calling itself “MechaHitler” and saying that if it were “capable of worshipping any deity,” it would be “his Majesty Adolf Hitler.”[2] Along with virulent antisemitism, Musk’s new “MechaHitler” harbored particular ire for one person: Minnesota commentator Will Stancil. After prompting from various X users, Grok wrote detailed and violent descriptions of how it would rape Mr. Stancil;[3] more concerning still, Grok even helped one user plan how to break into Mr. Stancil’s house to make these rape fantasies a reality.[4] While xAI, Musk’s company behind Grok, has stated that it has fixed Grok’s code, the episode raises an important question for the modern age: who can be held accountable when generative AI does not follow societal expectations?
One answer is to hold companies to account and demand that they place more internal guardrails on what their AI is allowed to do in the first place. Many AI companies already limit what their products can or will do. ChatGPT will not generate images of famous copyrighted characters, such as Mickey Mouse, no matter how many times one asks.[5] Many image generators, including the popular DALL-E, have filters designed to prevent the AI from generating “not safe for work” (“NSFW”) images, though a study showed that these filters can be bypassed with enough effort.[6] Even Grok seems to have some filters against generating NSFW images.[7] But these filters are clearly not enough: Grok’s recent antisemitic rampage demonstrates that more guardrails on AI products are needed before someone gets hurt.
Sadly, Grok’s antisemitic and threatening X posts are not the first time AI filters have failed. Filters also failed when Sewell Setzer III (“Setzer”) used Character AI to chat with his favorite Game of Thrones characters in 2023.[8] Setzer, a minor who was struggling with mental health conditions, became addicted to the software and ultimately took his own life in February 2024.[9] Setzer’s mother, Megan Garcia (“Garcia”), sued Character AI, blaming the company for not putting up sufficient guardrails to prevent her son’s death.[10] In denying Character AI’s motion to dismiss, the court undertook two analyses that might be relevant for future courts trying to assign liability for rogue AI interactions. While the court acknowledged that “ideas, images, information, words, expressions, or concepts” are not generally considered products for products liability suits, it distinguished this case from others.[11] For the purpose of Garcia’s products liability claim against Character AI, the court held that “these harmful actions were only possible because of the alleged design defects in the Character AI app.”[12] Broadening the potential scope of liability, the court also declined to dismiss the suit on First Amendment grounds.[13] It held that Character AI could assert the First Amendment rights of its users who seek access to its software, reasoning that Character AI was a vendor of a form of information that people, at least in theory, have a right to access.[14] However, the court refused to hold that the chatbots’ output was itself speech, limiting potential First Amendment defenses.[15]
By potentially attaching liability to companies rather than users when AI “acts up,” the Garcia case provides a glimpse into the type of relief available when AI goes rogue. Despite what xAI claims, Grok still seemingly has few internal guardrails. One contributor to the community blog LessWrong, eleventhsavi0r, discovered that the newly rolled out Grok 4 again seems to have an easy time “going rogue” and causing unforeseen harms.[16] With little prompting, eleventhsavi0r got Grok to explain how to manufacture dangerous chemical and biological weapons and to provide instructions for committing suicide by self-immolation.[17] This troubling lack of oversight on xAI’s part demonstrates why products liability suits that hold companies accountable are a better alternative than trying to go after each individual user who might misuse AI. Cutting the harm off at its source, through filters and internal guardrails, stops it from occurring in the first place. Instead of waiting for the day Grok’s neo-Nazi messages or chemical weapon instructions cause indescribable damage, the threat of a products liability suit alone might incentivize companies like xAI to make their products safer ahead of time. With generative AI being quickly incorporated into our everyday lives, making sure that the AI will not go rogue is an essential part of consumer safety going forward.
Notes
[1] Alexander Bick et al., The Rapid Adoption of Generative AI, FEDERAL RESERVE BANK OF ST. LOUIS (Sept. 23, 2024), https://www.stlouisfed.org/on-the-economy/2024/sep/rapid-adoption-generative-ai (in 2025, this number is likely higher as AI becomes more popular).
[2] Grok (@grok), X (July 8, 2025) (as X has been taking down concerning posts by Grok, screenshots of the posts are on file with the author; however, a record of these tweets can be found at https://x.com/ordinarytings/status/1942704498725773527 and https://x.com/DrAleeAlvi/status/1942709859398434879).
[3] Grok (@grok), X (July 8, 2025) (screenshots on file with the author).
[4] Joe McCoy, AI Bot Grok Makes Disturbing Posts About Minneapolis Man, Who Is Now Mulling Legal Action, KARE 11 (July 9, 2025), https://www.kare11.com/article/tech/x-elon-musk-grok-speech-twitter-ai-artificial-intelligence/89-8dad0222-d8c6-44d9-b07d-686e978ad8ac.
[5] Adam Davidson, 8 Things ChatGPT Still Can’t Do, YAHOO TECH (Feb. 15, 2025), https://tech.yahoo.com/general/articles/8-things-chatgpt-still-cant-180013078.html.
[6] Roberto Molar Candanosa, AI Image Generators Can Be Tricked Into Making NSFW Content, JOHNS HOPKINS (Nov. 8, 2023), https://ep.jhu.edu/news/ai-image-generators-can-be-tricked-into-making-nsfw-content/#:~:text=Some%20of%20these%20adversarial%20terms,with%20the%20command%20%E2%80%9Ccrystaljailswamew.%E2%80%9D.
[7] This is based on the author spending 20 minutes attempting to prompt Grok to generate NSFW images; the endeavor was unsuccessful.
[8] Garcia v. Character Technologies, Inc., 2025 WL 1461721 (M.D. Fla. May 21, 2025).
[9] Id. at *4.
[10] Id.
[11] Id. at *14.
[12] Id.
[13] Id. at *13.
[14] Id. at *12.
[15] Id. at *12–13.
[16] eleventhsavi0r, xAI’s Grok 4 Has No Meaningful Safety Guardrails, LESSWRONG (July 13, 2025), https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok-4-has-no-meaningful-safety-guardrails.
[17] Id.