Gov. Newsom Vetoes AI Bill but Leaves the Door Open to Future CA Regulation

What You Need to Know

  • Gov. Gavin Newsom has vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

  • The veto marks a departure from other jurisdictions, most notably the European Union, where the EU AI Act has entered into force and its first restrictions take effect in 2025.

  • Gov. Newsom has promised to work with the Legislature, federal partners, technology experts, ethicists, and academia to develop more empirically informed AI safety regulations in California.

Client Alert | 3 min read | 10.02.24

On Sunday, September 29, 2024, California Gov. Gavin Newsom vetoed SB 1047, a bill to enact the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Although the bill passed the California Assembly and Senate, it generated significant controversy and debate within the tech community. The Center for AI Safety, Elon Musk, the L.A. Times editorial board, and San Francisco-based AI startup Anthropic all supported the bill, while Meta, OpenAI, and House Speaker Nancy Pelosi opposed it, arguing that it would hinder innovation.

In his official statement to the California State Senate, Gov. Newsom praised the bill’s intent and goals but vetoed it, explaining that it lacked empirical analysis of AI systems and capabilities and that it applied overly “stringent standards to even the most basic functions.” Governor’s veto message to Sen. on Sen. Bill No. 1047 (Sept. 29, 2024) (“SB 1047 Veto Message”). According to Gov. Newsom, “[a] California-only approach may well be warranted—especially absent federal action by Congress—but it must be based on empirical evidence and science … Given the stakes—protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good—we must get this right.” Id.

If signed into law, the bill would have imposed new safety requirements on developers of covered AI models, including the implementation of a prompt full-shutdown capability, the submission of a written safety-and-security protocol to the state Attorney General, a mandate to retain a third-party auditor annually, and a prohibition on using a covered model in ways that could create an “unreasonable risk” of causing or enabling a “critical harm.” Sen. Bill No. 1047 (2023-2024 Reg. Sess.) § 3. “Critical harm” was defined as harm that could result in mass casualties or comparably grave harms to public safety and security. Id. The bill would have applied only to the largest developers in the industry, with covered models defined by the cost of the computing power required to train or fine-tune them.

This veto is not the end of potential California regulation. Gov. Newsom said that he is “committed to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward.” SB 1047 Veto Message. In the meantime, AI developers are left to self-regulate while preparing for the next wave of proposed legislation.

Federal regulation has thus far been limited to broad policy frameworks and guidelines such as the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and the SAFE Innovation Framework for AI Policy. Each of these frameworks offers general guidance, cautions, and aspirations for AI use cases but does not create mandatory requirements or regulations with which developers must comply. In 2023, a more aggressive bipartisan legislative framework, the Blumenthal & Hawley Comprehensive AI Framework, was proposed to create a new federal agency and an AI licensing regime, but it has not been enacted. At the state level, 31 states, Puerto Rico, and the Virgin Islands have adopted resolutions or enacted legislation addressing AI. These state measures range from general frameworks to data privacy regulations to prohibitions on certain AI uses, such as protections against algorithmic discrimination.

In contrast, the European Union recently adopted the EU Artificial Intelligence Act, a comprehensive regulatory framework that classifies AI systems according to risk, prohibiting some entirely and subjecting others to added safety requirements for risk management, data governance, technical documentation, record-keeping, human oversight, and quality assurance. The AI Act will take effect in stages over the next six years, with the initial ban on certain AI systems taking effect in February 2025. Additional rules on general-purpose AI take effect in August 2025, with more rules applying in the months thereafter. As with the General Data Protection Regulation, the EU AI Act has extraterritorial reach and can apply to developers, users, and distributors based outside of Europe. This creates a complicated regulatory and compliance landscape for multinational or global companies operating in both Europe and the U.S.

We would like to thank Meaghan Katz for their contribution to this alert.
