States Are Taking Action on Artificial Intelligence. It Is a Trend That Is Likely to Continue.
What You Need to Know
Key takeaway #1
State Attorneys General are taking significant action on A.I. Paying close attention to AG activity in this space is vital to all sectors.
Key takeaway #2
The meaning of the law in this area is not settled, and companies would do well to understand the different kinds of risks they may face.
Key takeaway #3
Privacy, intellectual property, and cybersecurity are the first areas in which A.I. concerns have been raised, but they won’t be the last.
Client Alert | 6 min read | 01.22.25
Artificial intelligence is now a mainstay in our daily lives. It’s in our phones and computers. It helps us draft emails and learn math. It recommends purchases and guides our online searches. It’s everywhere—and every sign suggests that it’s here to stay.
Unsurprisingly, the Federal Government has shown considerable interest in artificial intelligence as well. Congress has held hearings on various A.I. topics. The White House Office of Science and Technology Policy issued a Blueprint for an A.I. Bill of Rights, and former President Biden issued Executive Orders to guide agency policy. Just days before leaving office, former President Biden issued an Executive Order aimed at building large-scale data centers and clean power infrastructure to safely develop A.I. On his first day in office, President Trump rescinded Executive Order 14110, which provided for the safe, secure, and trustworthy development of A.I. Members of Congress have introduced A.I. bills addressing topics as varied and critical as national security, intellectual property, online personal safety, and education. But federal action directed at consumers has been minimal.
The States are a different story. Over two dozen states have enacted laws addressing nonconsensual sexual deepfakes. Nearly as many have enacted laws regulating the use of A.I. in political advertisements. Tennessee has an “ELVIS Act,” which regulates the use of technology that can reproduce an artist’s voice or likeness. And California has passed an A.I. Transparency Act, which will require certain A.I. tools to include watermarks in their outputs so that people know how the results were created. In short, the States have taken a more active role in regulating how artificial intelligence can be used in everyday life. But the State laws regulating A.I. vary nearly as much as the States themselves, and for businesses, complying with that patchwork of requirements can be difficult.
State legislatures are not the only branch of government taking action. Executives, State Attorneys General in particular, have also begun regulating A.I. through consumer-facing laws. No example makes this clearer than the case against Clearview AI, a facial recognition startup that has allegedly been collecting photographs of individuals from the internet without their consent. After years of hard-fought litigation, the parties reached a settlement in principle in June 2024. Then, in mid-December 2024, a bipartisan coalition of Attorneys General from twenty-two States and the District of Columbia requested leave to oppose the settlement agreement, on the theory that it would provide consumers with no meaningful relief from the wrongs that had been committed. The creative settlement solution (giving plaintiffs a stake in the ongoing business) is now uncertain, with a hearing on final approval of the settlement set for January 30, 2025.
State Attorneys General Action on A.I.
Democratic and Republican State Attorneys General alike have not been shy about weighing in on the rise of A.I. and other advanced technologies. Indeed, State Attorneys General have taken direct action to regulate A.I. by issuing guidance on how A.I. interacts with various state laws and by enforcing those laws when A.I. companies violate them. For example, Massachusetts Attorney General Andrea Campbell was one of the first AGs to issue an advisory on how Massachusetts consumer protection laws apply to A.I. The advisory provides specific guidance to developers, suppliers, and users of A.I. regarding their obligations under Massachusetts consumer protection, anti-discrimination, and data privacy laws. In short, the advisory makes clear that these laws apply to emerging technologies, including A.I. Similarly, late last year, former Oregon Attorney General Ellen Rosenblum issued guidance on A.I. to companies doing business in Oregon. Like the Massachusetts advisory, the Oregon guidance explained that the state’s Unlawful Trade Practices Act, Consumer Privacy Act, and Equality Act each apply to A.I. platforms. In a sign that new state laws governing A.I. may emerge, former AG Rosenblum noted that her guidance was merely a starting place and would likely need to be updated, “depending on what relevant legislation is passed in the 2025 Oregon legislative session.”
The latest State Attorney General to take steps to regulate A.I. is New Jersey’s. Earlier this month, New Jersey Attorney General Matthew Platkin, in coordination with the state’s Division on Civil Rights, launched a new Civil Rights and Technology Initiative aimed at addressing the risks of discrimination and bias stemming from the use of A.I. and other emerging technologies. As part of the Initiative, AG Platkin issued guidance to members of the public and businesses on how New Jersey’s Law Against Discrimination, the nation’s oldest anti-discrimination law, applies to A.I. AG Platkin also announced the creation of a Civil Rights Innovation Lab, designed to “leverage technology responsibly to prevent, address, and remedy discrimination.” The Attorney General’s announcement was an outgrowth of New Jersey Governor Phil Murphy’s Artificial Intelligence Task Force and its 2024 Report to the Governor on Artificial Intelligence.
State Attorneys General are doing more than issuing guidance. They are also suing A.I. companies for alleged violations of state law. In September 2024, Texas Attorney General Ken Paxton reached what has been described as a “first-of-its-kind” settlement with an A.I. healthcare technology company to resolve allegations that the company made false and misleading statements about the accuracy and safety of its products. The AG’s investigation revealed that the company’s deceptive claims about its products put the public at risk when major Texas hospitals provided the company with patient healthcare data so that an A.I. product could summarize patients’ conditions and treatment for hospital staff. The settlement requires the company to “accurately disclose the extent of its products’ accuracy” and to ensure that hospital staff using its products “understand the extent to which they should or should not rely on its products.”
Key Takeaways
A.I. appears to be here to stay, and federal and state legislation and policy regulating it are almost certain to follow. How it will be regulated, which issues will prove most pressing, and who will do the regulating all remain open questions. For now, the only certainty is that the courts will have a say in the validity of any attempt to regulate A.I. With that in mind, it is important to remember that:
- The law is changing, sometimes rapidly, and close monitoring of the changes is essential.
- State Attorneys General are taking significant action on A.I. Paying close attention to AG activity in this space is vital to all sectors.
- The meaning of the law in this area is not settled, and companies would do well to understand the different kinds of risks they may face.
- Privacy, intellectual property, and cybersecurity are the first areas in which A.I. concerns have been raised, but they won’t be the last.
Crowell & Moring, especially its State Attorneys General practice, will continue to monitor congressional and State executive branch efforts to regulate AI. Our lawyers and public policy professionals are available to advise clients who want to play an active role in the policy debates taking place right now or who are seeking to navigate AI-related concerns.