The Future is Here: Senate Judiciary Committee’s Oversight of AI and Principles for Regulation
Client Alert | 8 min read | 08.04.23
On July 25, 2023, the Senate Judiciary Committee held its fifth hearing this year on artificial intelligence (AI). It was the second hearing held by the Subcommittee on Privacy, Technology, and the Law, and it highlighted the “bipartisan unanimity” on regulating AI technology.
Overview
Chairman Richard Blumenthal (D-CT) opened the hearing by recognizing “the future is not science fiction or fantasy. It’s not even the future. It’s here and now.”
Last week, the Biden administration secured voluntary commitments from leading AI companies focused on managing the risks posed by artificial intelligence. Blumenthal commended the Administration for recognizing the need to act, but noted that the commitments were unspecific and unenforceable, hence the need for further action by Congress. While recognizing the “good” in AI, Chairman Blumenthal stressed in his opening statement the need to address the “fear” in the public domain. Throughout the hearing, Blumenthal was adamant about the need for a proactive regulatory agency that invests in research to create countermeasures against autonomous AI systems while ensuring that innovation continues to prosper. Specifically, the Senator welcomed ideas for establishing a licensing regime, a testing and auditing regime, legal limits on usage in scenarios such as elections or nuclear warfare, and transparency requirements for the limits and use of AI systems.
Ranking Member Josh Hawley (R-MO) gave a shorter statement, identifying his main priorities as workers, children, consumers, and national security. He confidently stated that big tech companies would benefit from the rise and development of AI, as they did in the rise of major social media platforms, but marked his concern for how its development would affect the American people.
Together, Chairman Blumenthal and Ranking Member Hawley promoted their bipartisan bill, the “No Section 230 Immunity for AI Act” (S. 1993), which was introduced as the first bipartisan bill in the Senate to put safeguards around AI development.
Senator Amy Klobuchar (D-MN), Chairwoman of the Subcommittee on Competition Policy, Antitrust, and Consumer Rights, also made a short opening statement. She urged quick action, mentioned bipartisan work from Senators Chuck Schumer (D-NY) and Todd Young (R-IN), and warned that “if we don't act soon, we could decay into not just partisanship but inaction.”
Expert Witness Testimony
Dario Amodei, Chief Executive Officer, Anthropic, San Francisco, CA
Amodei is the CEO of Anthropic, a “public benefit corporation” that is developing techniques to make AI safer and more controllable. Amodei testified on Anthropic’s work in constitutional AI, a method of training AI to behave according to specific principles, as well as early work on adversarial testing of AI to uncover bad behavior, and foundational work on AI interpretability. Amodei warned of short and long-term risks, ranging from bias, misinformation, and privacy to more existential threats to humanity, such as autonomous AI. He also argued that “medium-term” risks, such as misuse of AI for bioweapon production, are a “grave” threat to national security, and that private action is not a sufficient mitigation technique. Accordingly, Amodei suggested three regulatory steps:
- Secure the AI supply chain to maintain a technology lead while keeping technologies out of the hands of bad actors.
- Implement a testing and auditing regime for new and more powerful systems being released to the public.
- Fund agencies such as the National Institute of Standards and Technology (NIST) and the National AI Research Resource, which are crucial for measurement.
Yoshua Bengio, Founder and Scientific Director of Mila – Québec AI Institute, Professor in the Department of Computer Science and Operations Research at Université de Montréal
In his opening statement, Bengio quoted former expert witness Sam Altman: “if this technology goes wrong, it could go terribly wrong.” He testified that the AI revolution has “the potential to enable tremendous progress and innovation,” but also entails a wide range of risks, including discrimination, disinformation, and loss of control of superhuman AI systems. Bengio noted that estimates for when human-level intelligence could be achieved in AI systems are now within a few years or decades. Bengio defined four factors on which the government can base its efforts—access, alignment, raw intellectual power, and scope of actions—before recommending that the following actions be taken “in the coming months” to protect democracy, national security, and the collective future:
- Coordinate agile national and international frameworks and liability incentives to bolster safety, including licenses, standards, and independent audits.
- Accelerate global research endeavors focused on AI safety to form essential regulations, protocols, and government structures.
- Research countermeasures to protect society from rogue AI systems.
Stuart Russell, Professor of Computer Science, The University of California, Berkeley
Russell’s testimony centered on a core tenet of his research: artificial general intelligence (AGI) and how to “control” AI systems. He questioned how humans can maintain power over entities “more powerful than ourselves.” He explained that the field of AI has reached a point at which an AI’s internal operations are a mystery, even to computer scientists and those who train the systems. While underscoring the importance of predictability for AI, he argued that there is no trade-off between safety and innovation. Russell made the following recommendations:
- There should be an absolute right to know if someone is interacting with a person or with a machine.
- Algorithms should not be able to decide to kill human beings, especially in nuclear warfare.
- A kill switch, or “safety brakes,” must be designed into AI systems and activated if systems break into other computers or replicate themselves.
- Systems that break regulatory rules should be recalled from the market.
Social Media
All members who spoke during the hearing, including Senators Hawley, Blumenthal, Klobuchar, and Marsha Blackburn (R-TN), mentioned the unintended harm caused by social media, particularly to children. They suggested that lawmakers must chart a different course from the congressional delay, dismissal, and inaction that marked the development of social media platforms in order to get ahead of the potential threats of AI.
Election Threats
While noting that lawmakers do not want censorship, Senator Blumenthal directly asked witnesses about the immediate threat AI poses to the integrity of the electoral system, given the upcoming 2024 Presidential election. The witnesses identified misinformation, external influence campaigns, propaganda, and deepfakes as immediate dangers. Bengio also recommended against releasing pre-trained large AI systems. All three witnesses recommended implementing watermarks or labeling on audio and visual campaign content, including requiring social media companies to restrict account use to human beings who have affirmatively identified themselves.
Labor Exploitation
Senator Hawley entered into the record a Wall Street Journal article chronicling the “traumatizing” work contractors in Kenya were required to perform for a generative AI company, which included screening out descriptions of violence and abuse. The Senator maintained the need for labor reform in the industry and pushed for high-paying jobs for American workers, structures for training, and incentives that enable them. Senator Blumenthal agreed, advocating that the industry focus on “made in America when we’re talking about AI.”
Securing the Supply Chain
Prompted by Amodei’s testimony, lawmakers emphasized the critical nature of securing supply chains, particularly in the event of a Chinese invasion of Taiwan, where a large portion of AI components are manufactured. When asked if Congress should consider limitations or full prohibitions of components manufactured in China, Amodei redirected the question, suggesting that Congress should examine the components produced in the United States that end up in the hands of adversaries. However, Amodei also argued that chip fabrication production capabilities should be developed in the United States quickly to secure the supply chain for AI components.
Watermarking, Labeling, and Ethical Use
Senators Blackburn and Klobuchar questioned panelists on the ethical use of AI, bringing attention to AI scams and the use of an individual’s name, image, and voice, as well as watermarking election materials produced by AI. Senator Klobuchar highlighted that only about half of states have laws giving individuals control over the use of their name, image, and voice. When Klobuchar asked whether panelists would support a federal law giving individuals this type of control, Amodei said yes and argued that “counterfeiting humans” should have the same level of penalty as counterfeiting money.
Senator Blackburn asked whether industry is “mature” enough to self-regulate, to which Mr. Russell explicitly replied “no.” When Senator Blackburn asked whether a federal privacy standard would help, Mr. Russell explained that there should be a requirement to disclose if a system is harvesting data from individual conversations.
International Cooperation
Panelists agreed that an international and multilateral approach would be critical to AI regulation, particularly in mitigating an AI arms race. Specifically, Bengio testified that “we have a moral responsibility” to mount an internationally coordinated effort that can fully retain the economic and social benefits of AI while protecting our shared future. Russell made clear that the UK, not China, is the closest competitor for AI development, claiming that lawmakers have “slightly overstated” the level of threat that China presents. He claimed that China mainly produces “copycat” systems that are not as sophisticated as the original systems. However, he noted China’s “intent” to be a global leader and flagged that China is investing larger sums of public money in AI than is the U.S.
At the same time, Bengio recognized that allies, such as the UK, Canada, Australia, and New Zealand, are important to an international and multilateral approach, which would work together with a national oversight body doing licensing and registration in the U.S.
Testing and Evaluating Structures
Building on Amodei’s opening testimony around the “control” of AI systems, Senator Blumenthal asked Amodei if he would recommend that Congress impose testing, auditing, and evaluation requirements focused on risk, including the implementation of “safety brakes.” Amodei answered affirmatively, while also recommending a mechanism for recalling products that have shown dangerous behavior. All three witnesses also expressed support for a reporting requirement for product failure, as is regular practice within the medication and transportation industries.
Conclusion
Crowell & Moring, LLP will continue to monitor congressional and executive branch efforts to regulate AI. Our lawyers and public policy professionals are available to advise any clients who want to play an active role in the policy debates taking place right now or who are seeking to navigate AI-related concerns in government contracts, employment law, intellectual property, privacy, healthcare, antitrust, or other areas.