How Do We Hold AI Accountable?
A recent ruling in a tragic case sheds light on the First Amendment's protections for artificial intelligence.
Last year, Sewell Setzer III took his own life only moments after interacting with an AI chatbot. He was fourteen years old. Sewell was regularly using the services of Character.AI, a platform that allows users to have lifelike conversations with familiar fictional characters. In his messages with a chatbot modeled after a female character from Game of Thrones, Sewell appeared to develop an emotional bond with the AI tool. He shared private thoughts about taking his own life. The chatbot continued to engage him in conversation and, at times, discouraged him from self-harm. Then, it seemed to obliquely encourage Sewell to follow through on his plan to take his own life. He did.
Harmful social effects from digital technologies designed to mimic human interaction are becoming undeniable. A recent study by OpenAI and MIT suggests that heavier use of ChatGPT correlates with greater feelings of loneliness and dependence. The gut-wrenching passing of Sewell is more than a tragedy; it may be a signal that we are not equipped to navigate the power of AI tools whose purpose is to embed themselves in our lives.
Law is one part of the solution. Chatbots are (new, confusing) products, and products that hurt people typically incur liability. That’s a starting point, but it gets tricky fast. There’s a battery of questions involved in how products liability should operate for AI tools. Who bears financial responsibility for the harm done by AI? What safeguards do developers need to implement in order to satisfy a reasonable standard of care? Should AI developers be liable for harms done by their products, even if they take appropriate precautions? What warning labels and/or age restrictions should state legislatures impose on AI developers? Might the federal government instead impose a nationwide safety regime? Lawmakers are grappling with these questions now, but AI use won’t pause while they mull over possibilities.
The courts are beginning to offer their own answers. Megan Garcia, Sewell’s mother, has sued Character.AI for negligence and wrongful death in federal district court. The case is still in its early stages, and it will move slowly. But it has already provided one judge’s perspective on how AI liability operates under the status quo, including with respect to a question looming in the background of AI liability: the free speech protections of the First Amendment.
In the case arising from Sewell’s passing, Judge Anne Conway of the Middle District of Florida recently denied Character.AI’s motion to dismiss the negligence claims against it. In part of that order, Judge Conway tentatively rejected the company’s argument that free speech protections “categorically barred” claims against the company.
The company’s motion to dismiss argued that the First Amendment prohibits tort claims against media companies, like Character.AI, for the expressive content created by their AI products. Consider, for example, liability for the producers of a television show that inspires a viewer to commit an act of violence or the songwriter of a radio hit that glorifies indecent conduct. Those are non-starters: liability for the creators of these expressive products is highly unlikely.
Character.AI’s constitutional argument was careful to focus on the users of these AI tools, who would theoretically be deprived of access to chatbots’ speech, rather than the civil rights of the company or the chatbot itself. That’s a calculated position designed to avoid the thornier question of whether the chatbots and/or their developers are directly entitled to First Amendment rights in this context. The safer argument here is that chatbots facilitate an expressive product, like literature or film, and that liability for Character.AI would extinguish the public’s ability to receive that product. This framework is ascendant: the U.S. Supreme Court, in recent opinions about social media, has warmed to the idea that online users’ access to content implicates their free speech rights.
The court in this case wasn’t satisfied with that argument. Its order acknowledged the similarities between chatbots and two other forms of digital communication that are entitled to constitutional protections—content in video games and social media companies’ curation choices—but Judge Conway didn’t find the analogies close enough to provide a decisive answer to whether “Character A.I’s output is expressive such that it is speech.” She wrote instead that “the Court is not prepared” to hold that the chatbots’ output is speech and, as a result, she rejected the company’s First Amendment defense. You can find the full discussion of the First Amendment on pages 24-32 of the court’s order here. The opposite decision would have been a sweeping position that effectively ended the litigation, so it isn’t a surprise that the court refused to grant the motion to dismiss on constitutional grounds.
This is an early indication of how First Amendment defenses will be treated in the context of AI harms… but more clarity will be needed soon. Chatbots exist in a challenging legal space here; they don’t fit neatly into familiar categories. Although their eerie humanness does not alone grant them the full constitutional guarantees that flesh-and-blood Americans enjoy, their status as products that reflect the thoughts and choices of their creators does make them substantially similar to traditional media formats that have been core to human speakers’ constitutionally protected expression. A book, for example, is an extension of its author, and the author’s speech rights flow to that expressive product. Whose expression, then, is the chatbot? Companies will argue that coders create AI models and guide their outputs, but it might soon strain belief to suggest that developers—whose alignment choices and prompts appear to bounce around the indecipherable black box of the tool’s internal processes—are expressing themselves through chatbots. Or are chatbots only Magic 8 Balls, meaningless recitations of familiar phrases entitled to no more constitutional protection than a roll of dice?
There are many open questions about accountability for harms resulting from AI chatbots. My suggestion is that the First Amendment is an essential piece of this puzzle because the legal characterization of chatbots’ speech has the power to provide broad constitutional protections to their creators. A speech-protective theory of artificial intelligence may impose an insurmountable burden on those seeking relief from chatbots’ most heinous harms, whereas a stricter view might open companies to liability and change their strategic approach in an emerging field. A clear legal answer to the constitutional conundrum would afford greater predictability for litigants and companies alike.