Artificial Intelligence

Overview

Taming the risks and maximizing the rewards of Artificial Intelligence

Artificial intelligence (AI) is subject to the same forces that have affected every world-changing innovation: the rapid evolution of the technology significantly outpaces the laws and regulations governing its development and implementation. Working at the convergence of law, technology, and business, Crowell & Moring’s AI team helps clients operating in this complex environment by establishing smart, future-focused strategies that maximize competitive advantage while minimizing potential exposure.

Real-world experience drives creative solutions

Crowell’s AI group comprises lawyers and professionals across our global offices, including from Crowell Global Advisors (CGA), our international public policy entity, with decades of sector-specific experience in machine learning, predictive analytics, facial and voice recognition, cognitive computing, defense, and related technologies. Our lawyers serve as primary outside counsel to AI-focused technology, robotics, machine-learning, asset-trading, and related companies, as well as the businesses and organizations that benefit from these sophisticated solutions. We counsel clients who are establishing internal policies for the use of generative AI and broader AI governance frameworks.

To help businesses make the most of AI, we draw on the best practices we have developed while guiding clients through similar game- and economy-changing developments, including blockchain/distributed-ledger technology, autonomous vehicles, 3D printing, and the Internet of Things (IoT). We also have deep industry knowledge in market sectors that are rapidly deploying AI, such as app and software development, biomedical devices, consumer products, digital health, energy, entertainment and sports, financial services, law enforcement, manufacturing, retail, and transportation.

As lawmakers, policymakers, and consumer groups strengthen their focus on AI, our clients benefit from Crowell’s deep domestic and international regulatory experience. Our team includes a number of lawyers with direct experience in agencies leading the charge toward effective AI policy:

  • A former general counsel of the Consumer Product Safety Commission (CPSC)
  • A former chief counsel at the Financial Crimes Enforcement Network (FinCEN)
  • The founding director of the Office of Policy in the Office of the National Coordinator for Health Information Technology (ONC), U.S. Department of Health and Human Services (HHS)
  • A former senior advisor to the leadership of the U.S. Department of Homeland Security and current member of the Sandia National Laboratories External Advisory Board
  • A former chief of staff at the U.S. Department of Homeland Security

Collaboration: a step beyond full service

Successful technology and business solutions are driven by cooperation between every member of the AI ecosystem, including government and regulatory experts, researchers, academics, product developers, customers, and investors. Our goal — whether negotiating an early stage private equity investment for companies in the AI ecosystem or resolving a commercial dispute involving use of AI tools — is to pursue and protect our clients’ interests while maintaining mutually beneficial relationships with vendors, contractors, consumers, and government agencies.

As our clients face new challenges and opportunities, our approach is to identify firm attorneys with experience in the appropriate discipline, including former government experience where applicable, and integrate them into the team to provide carefully coordinated counsel and informed solutions.

We advise clients in the following areas:

  • Government contracts, including procurement processes and the use of AI in bid evaluation, contract management, compliance monitoring, evaluation criteria, and vendor selection.
  • Health care, including working with organizations that are developing digital health technology and those that are using AI to improve health outcomes and management, improve operations, reduce administrative burden, and address issues related to use of health data, fraud and abuse, practice of medicine, and liability.
  • National security, including leading high-profile investigations and developing compliance and regulatory strategies for testing and implementing AI/ML tools in support of law enforcement entities.
  • White collar crime and regulatory enforcement, including investigating and prosecuting white collar crimes involving AI, such as fraud, insider trading, or cyberattacks facilitated by AI tools.
  • Privacy and cybersecurity, including global risk assessment, prevention, and compliance, crisis management, collection, storage, protection, and use of health and other personal data, and the implications of predictive analytics on individuals and protected groups.
  • Legislative and regulatory advocacy, including working with leading companies, industry coalitions, and appointed and elected officials to develop and implement effective laws, regulations, and trade policies that promote technology development and deployment while mitigating potential risks.
  • Product liability and related litigation, including consumer safety and security standards related to robotics equipment, AI-based software, and interconnected products. 
  • Intellectual property prosecution, licensing, and enforcement, including patents, trademarks, copyrights, and trade secrets related to deep machine learning and other AI innovations.
  • Corporate, securities, and finance, including debt and equity financing, angel and seed-stage investments, mergers, acquisitions, joint ventures, supply chain issues, governance, digital advertising, and more.
  • Labor and employment, including use of AI in connection with global talent recruitment, retention, and rewards.


Insights

Client Alert | 4 min read | 10.29.24

AI’s Cybersecurity Risks: New York Provides Guidance on Developing Cybersecurity Programs to Address Emerging AI Concerns

On Wednesday, October 16, 2024, New York’s Department of Financial Services (DFS) announced new guidance aimed at identifying, and providing a blueprint for protecting against, AI-specific cybersecurity risks. Motivated primarily by advancements in AI that substantially impact cybersecurity—including facilitating new ways to commit cybercrime—DFS’s guidance specifically aims to protect New York businesses but applies to all companies concerned with strengthening their cybersecurity and managing risks posed by emerging technologies. The guidance addresses the “most significant” AI-related threats to cybersecurity that organizations should consider when developing a cybersecurity program or internal protocols, or when implementing cybersecurity controls—as well as recommendations for those cybersecurity programs....
