
Artificial Intelligence in the U.S.: Reactions from the Public and Private Sectors

Publication | 05.14.24

It’s fair to call 2023 the year of artificial intelligence. The 2023 AI boom was largely driven by the widespread adoption of new technologies like ChatGPT, Google Bard, and Microsoft Copilot. Alongside the excitement surrounding these tools, alarm over the possible consequences of increased AI use without appropriate guardrails prompted both government and private entities to act.

The U.S. government has taken several steps to proactively address the possible impact of AI. The Biden Administration demonstrated its commitment to staying ahead of the quickly changing landscape when it released an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in October. The executive order provides guiding principles and priorities that account for perspectives from agencies, industry, academia, the public, and foreign partners to advance and govern the use of AI. It also aims to maximize the potential benefits of AI while addressing rising concerns about its potential harms, seeking to balance innovation with protective action.

In response, the Office of Management and Budget released for public comment its draft implementation policy on AI use within government agencies, entitled “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.” The Department of Defense also began to address AI, releasing its Data, Analytics and AI Adoption Strategy, which focused heavily on AI topics including responsible AI and AI-enabled capabilities.

In the private sector, 2023 saw an escalation of litigation as private entities moved to protect their copyrighted assets from the encroachment of AI technology. Several intellectual property suits regarding the alleged use of copyrighted works to train AI models are currently moving through the courts. One of the earliest such cases, Thomson Reuters v. ROSS Intelligence, is set for trial in 2024, with the defense helmed by Crowell & Moring.

New lawsuits also targeted the technologies that emerged in 2023. Authors Guild v. OpenAI Inc. and Tremblay/Silverman v. OpenAI Inc. involve claims against the makers of ChatGPT for allegedly using copyrighted novels to train ChatGPT AI models. In Doe v. GitHub Inc., developers allege that AI coding tools used the developers’ copyrighted code published on the web to develop their models, which the coding platform failed to prevent. Closing out the year in the last week of December, The New York Times sued both OpenAI and Microsoft over allegations of copyright infringement related to the training of their AI models.

With government intervention and litigation likely to continue, there is little doubt the legal landscape of AI will develop throughout 2024.


*Former Crowell attorney Christiana State contributed to this article.