Overview of the first U.S. Senate hearing on “Oversight of A.I.: Rules for Artificial Intelligence”
Client Alert | 7 min read | 05.19.23
On May 16, 2023, the U.S. Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of A.I.: Rules for Artificial Intelligence.” This hearing is the first in a series intended to provide a forum for industry leaders to discuss the implications of A.I., with an eye toward facilitating the development of appropriate guidelines and regulations.
Senator Richard Blumenthal, Chair of the Subcommittee, opened the hearing by playing a clip of his own voice generated by A.I. voice cloning software trained on his floor speeches, with remarks drafted by ChatGPT. He then highlighted the opportunities and risks attendant to A.I. Testimony was provided by Samuel Altman, CEO of OpenAI, Christina Montgomery, Chief Privacy and Trust Officer at IBM, and Gary Marcus, Professor Emeritus at New York University. The main points and salient remarks from the hearing are highlighted below.
Overview
Mr. Altman proposed that Congress establish a new licensing agency to oversee compliance and safety standards, create a set list of these safety standards, and require companies to engage in independent audits. Professor Marcus advocated for an FDA-like safety review process, as well as a monitoring agency, and for funding A.I. safety research. Ms. Montgomery emphasized transparency and advocated that regulation define the highest-risk uses of A.I. and focus on A.I. uses in specific contexts. She also explained that regulation should require impact assessments and transparency, and that companies should disclose the data used to train their A.I. systems and models.
Additional details on the types of regulation considered are highlighted below:
Licensing
Mr. Altman suggested that the U.S. government consider “a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities.” He proposed that companies like OpenAI “partner with governments” in order to ensure “that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination.” Professor Marcus agreed with this approach, adding that companies should “say why the benefits outweigh the harms in order to get that license.”
Risk-Based Regulation
Ms. Montgomery stated that regulation should focus on clearly defining risks. She suggested “establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself,” and proposed “different rules for different risks,” calling for “clear guidance on AI uses or categories of AI supported activity that are inherently high risk.”
Transparency and Trust
Transparency and trust were key themes among those who testified. Ms. Montgomery stated that IBM believes “technology needs to be deployed in a responsible and clear way” and that principles of trust and transparency should be built into practice. She noted that “with AI, the stakes are simply too high. We must build, not undermine the public trust.” For IBM, transparency means that individuals should “know what the algorithm was trained on” and how to “manage and monitor continuously over the life cycle of an AI model, the behavior and the performance of that model.” She noted that transparency is also important to combating misinformation, explaining that “knowing what content was generated by AI is going to be a really critical area that we need to address.”
Professor Marcus agreed, adding that “transparency is absolutely critical here to understand the political ramifications, the bias ramifications, and so forth.” He advocated for “greater transparency about what the models are and what the data are,” explaining that transparency “doesn’t necessarily mean everybody in the general public has to know exactly what’s in one of these systems, but … there needs to be some enforcement arm that can look at these systems, can look at the data, can perform tests and so forth.”
In his written testimony, Mr. Altman noted that OpenAI aims to make its “security program as transparent as possible” and explained that “OpenAI takes the privacy of its users seriously and has taken a number of steps to facilitate transparent and responsible use of data.” This includes a “Trust Portal” that “allows customers and other stakeholders to review … security controls and audit reports.”
Disclosure Requirements
Ms. Montgomery emphasized that “consumers should know when they’re interacting with an AI system and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system.” For IBM, this means “disclosure of the data that’s used to train AI, disclosure of the model and how it performs and making sure that there’s continuous governance over these models.”
Mr. Altman agreed that disclosure is important to the public, and added that there should be guidelines around “what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities that you talk about.” He also acknowledged that “we think that people should be able to say, I don’t want my personal data trained on.”
Company AI Ethics Board
Ms. Montgomery described IBM’s A.I. Ethics Board, which “plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner. It provides centralized governance and accountability while still being flexible enough to support decentralized initiatives across IBM’s global operations.” She explained that the A.I. Ethics Board helps build public trust, and that “guardrails should be matched with meaningful steps by the business community to do their part.” Mr. Altman explained that OpenAI is governed by a nonprofit organization, which is driven by its mission and charter.
Intellectual Property
Mr. Altman declared that “creators deserve control over how their creations are used and what happens sort of beyond the point of, of them releasing it into the world.” He hoped to “figure out new ways with this new technology that creators can win, succeed, have a vibrant life.” He did not offer a solution for how A.I. can compensate artists, but made clear that OpenAI is working with visual and musical artists to learn more and believes this is a critical issue. Mr. Altman explained that OpenAI believes “content creators, content owners, need to benefit from this technology. Exactly what the economic model is, we’re still talking to artists and content owners about what they want. I think there’s a lot of ways this can happen, but very clearly, no matter what the law is, the right thing to do is to make sure people get significant upside benefit from this new technology.” He reiterated that “content owners, likenesses—people totally deserve control over how that’s used and to benefit from it.”
Independent Audits
Mr. Altman made clear that independent audits by “experts who can say the model is or is not in compliance with these stated safety thresholds and these percentages of performance on question X or Y” should be required. He believes that independent audits are very important for measuring the strengths and weaknesses of these systems. Mr. Altman explained that “as the models get better and better the users can have sort of less and less of their own discriminating thought process around it.” However, because the technology still “makes mistakes,” companies should be responsible for “verifying what the models say.” He explained that “OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model’s behavior, and implements robust safety and monitoring systems.”
National and Global Coordination
Mr. Altman explained that both national and global approaches to A.I. regulation are important. While “it is important to have the global view on this because this technology will impact Americans and all of us wherever it’s developed,” he “want[s] America to lead.”
Professor Marcus was hesitant about the future in light of A.I. innovations, calling it “destabilizing.” However, he agreed with Mr. Altman’s approach and emphasized that there should be regulation at both the federal and international levels. Professor Marcus advocated against a state-by-state approach, as it would impose different rules in different states, require “more training of these expensive models,” and make it “very difficult for the companies to operate if there was no global coordination.” Rather, Professor Marcus explained that in addition to federal oversight, intergovernmental organizations such as the United Nations (UN) and the Organisation for Economic Co-operation and Development (OECD) should be involved.
Section 230
There was bipartisan reluctance to apply Section 230 to generative AI, and Mr. Altman and Ms. Montgomery agreed. In his opening remarks, Senator Blumenthal stated that “[w]e should not repeat our past mistakes,” citing Section 230 as an example, and noted that “forcing companies to think ahead and be responsible for the ramification[s] of their business decisions can be the most powerful tool of all.” When Senator Lindsey Graham asked Mr. Altman whether he considered OpenAI to be covered by Section 230, Mr. Altman replied that he didn’t think “Section 230 [was] … the right framework” for analyzing this issue. Ms. Montgomery agreed and explained that IBM is not a platform company, so Section 230 would not apply.
Conclusion
While specific guidelines or regulations have not yet been put forward, a framework is beginning to take shape. Crowell & Moring LLP will continue to monitor these developments given the high stakes of this issue for our clients.