
Artificial Intelligence: The Rapidly Evolving Landscape of AI Class Action Litigation Has Become a Wild, Wild World

Publication | 01.15.25

Artificial intelligence (AI) has been finding its way into business for some time, but that trend was dramatically accelerated with the arrival of generative AI (Gen AI), which can create new content on its own. The release of a relatively easy-to-use version of Gen AI in late 2022 was followed by the rapid adoption of the technology—and not long after, by the arrival of class action lawsuits centered on AI.

To date, these AI-related class actions have primarily involved various content creators suing companies that create and sell Gen AI tools for copyright infringement. “These lawsuits cover the input and output sides of AI,” says Warrington Parker, managing partner of Crowell & Moring’s San Francisco office. “On the input side, visual artists, musicians, and authors are alleging that the use of their works to train AI is infringing on their copyrights. On the output side, they are saying that AI can essentially recreate their original work.” For example, instructing AI to create a portrait in the style of a certain artist could lead the technology to produce an exact or near-exact replica of one of the artist’s works.

One of the first of these copyright infringement cases was Andersen et al. v. Stability AI, filed in early 2023. Here, a group of visual artists alleged that the training of AI tools offered by Stability and other companies not only infringed on their copyrights, but also created right of publicity and Digital Millennium Copyright Act violations. Dozens of other similar class actions soon emerged, including lawsuits involving writers such as Michael Chabon, Laura Lippman, Sarah Silverman, and Ta-Nehisi Coates. In several cases, courts have dismissed some of the broader claims, such as the DMCA violation in Andersen, but left the copyright infringement claims in place.

This type of litigation is still in the early stages, and courts will likely need to keep grappling with the issue for some time. As case law evolves to define the issue more clearly, additional guidance may come from the Federal Trade Commission as well. The commission, which has already been pursuing cases of fraud involving AI, has signaled that it is also interested in the issue of whether using creator content to train AI could be an unfair business practice. When an artist’s work is used to develop AI tools, the FTC has noted, “not only may creators’ ability to compete be unfairly harmed, but consumers may be deceived when authorship does not align with consumer expectations. A consumer may think a work has been created by a particular musician or other artist when it is an AI-created product.” The FTC may well put out guidelines on the issue in the coming year.

“The landscape around AI and copyright is still uncertain,” says Parker. But in this environment, companies that are training AI “should budget out the risks of what they are doing and determine whether they are using someone else’s copyrighted materials and how to address the issue around that going forward.”

Up Next: Consumer Class Actions

These copyright class actions brought by creators have developed quickly, but they are just the tip of the iceberg. Looking ahead, we are likely going to see a growing focus not only on AI and copyright infringement, but also on the broad impact that AI has on consumers as the technology shows up in more and more aspects of their daily lives. “We have yet to see consumer class actions in the AI space, but they can be expected to emerge soon,” says Parker. When they do, he says, they are likely to fall into two categories: AI-based selections and bad AI decisions.

“Selection cases are those involving things like hiring decisions and decisions to extend credit and loans,” says Parker. “These could arise any time a company has AI picking one person over another based on a number of variables, where plaintiffs could make a claim on the basis of age, race, or gender.” Such lawsuits could be based on AI being biased because it reflects the biases or inaccuracies of the data it ingests. Companies should therefore assess their training content and their AI tools for such biases. “Some of this is already being imposed in some jurisdictions,” he says. “In New York, for example, you have to certify that the AI system you use is not biased.”


Selection-related class actions could also arise from a company not really understanding why or how its AI tool is making the decisions it makes, and therefore being unable to explain it to consumers. That’s not uncommon with the technology, since it is essentially a “black box” that can more or less train itself, typically relies on very complicated calculations, and, once trained, operates with little or no human intervention.

Meanwhile, companies’ increasing reliance on AI to handle customer-facing interactions could also lead to consumer class actions stemming from the technology’s fallibility. “There is an assumption today that AI is rational, reasonable, and makes great decisions, but that’s a falsity,” says Parker. In activities ranging from chatbot responses to product returns, customer refunds, and dynamic pricing, AI—which has been known to “hallucinate”—could come up with “answers that just don’t make sense,” he says. And when those answers determine things such as who gets a refund or who gets charged what with dynamic pricing, it could lead to the unfair and possibly illegal treatment of customers.

It’s still unclear exactly how AI-related consumer class actions will emerge and evolve. But companies should “at least be attempting to put their arms around issues with a risk analysis. What are the risks associated with AI? What are the risks we don’t know about—and how can we learn about those?” asks Parker. “A company’s customers may number in the millions and be located across 50 state jurisdictions—and with a bunch of class action law firms ready to go, I expect it to be a wild, wild world out there for companies using AI to serve consumers.”


To read more from Litigation Forecast 2025: What Corporate Counsel Need to Know for the Coming Year, visit here.