Everyone’s Talking AI, Including the FTC: Key Takeaways from the FTC’s 2023 AI Guidance
Client Alert | 6 min read | 03.13.23
On February 27, 2023, the Federal Trade Commission (“FTC”) Division of Advertising Practices updated its business guidance on the use of Artificial Intelligence (“AI”) for 2023. In its post titled “Keep your AI claims in check,” the FTC advises marketers on how to use AI in advertising lawfully and effectively and how to avoid “AI washing.” Building upon the FTC’s previous AI guidance from 2020 and 2021, this year’s iteration emphasizes that false or unsubstantiated claims about a product’s efficacy, including promises about what AI can do, run afoul of the FTC Act. Specifically, the FTC reminds marketers of the following questions to consider as the use of AI in products increases:
- Are you promising that your AI product does something better than a non-AI product? Companies need adequate proof for any comparative claim that AI improves a product.
- Are you aware of the risks? Companies must consider the reasonably foreseeable risks of using AI in their products, and they can be held liable even if they believe a third-party developer is to blame.
- Does the product actually use AI at all? Baseless claims that a product is AI-enabled can result in an FTC enforcement action. The FTC also notes that merely using an AI tool in the development process does not make the product itself “AI-powered.”
Prior FTC AI Guidance
This update builds on the Commission’s 2020 and 2021 AI guidance. In 2020, the FTC focused on the ethical use of AI and algorithms and on transparency with consumers, including compliance with the Fair Credit Reporting Act (“FCRA”) and the Equal Credit Opportunity Act (“ECOA”). The 2021 guidance revolved around harnessing the benefits of AI without “inadvertently introducing bias or other unfair outcomes.” Accordingly, the FTC recommended practices for businesses to avoid violations of Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices, including “the sale or use of racially biased algorithms.”
Looking to 2023 and Beyond
This year’s FTC guidance appears to respond to the recent acceleration of AI research and development, as well as the exploding markets for generative AI products and tools, such as image, text, and audio content generation, prediction tools, and chatbots. Examples of generative AI products include ChatGPT, DALL-E, Uberduck AI, and Stable Diffusion.
In light of the FTC’s collective AI guidance and other privacy-related regulations and draft regulations, the following considerations are of particular importance when designing products and services that use AI or machine learning.
Training Data Sets
Training machine learning models requires the collection and preparation of vast amounts of data. This data is collected from different sources and then prepared and pruned for use as input to a machine learning model. Investigating the sources of this data is crucial to ensuring that it can be used for training and that its use complies with applicable privacy laws. Depending on how the data is obtained (for example, from third-party vendors or data brokers), it is important to investigate whether proper consents have been obtained from individuals for the collection and processing of their personal data.
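These provenance and consent checks can also be enforced programmatically before any record reaches the model. The Python sketch below is a minimal illustration; the Record schema, source labels, and consent flag are hypothetical assumptions rather than a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One training example with provenance metadata (hypothetical schema)."""
    features: list[float]
    source: str             # e.g., "first_party", "vendor_x", "broker_y"
    consent_obtained: bool  # documented consent to collect and process

def filter_training_data(records: list[Record],
                         approved_sources: set[str]) -> list[Record]:
    """Keep only records from vetted sources with documented consent."""
    return [r for r in records
            if r.source in approved_sources and r.consent_obtained]

# Usage: a broker record without documented consent never reaches training.
records = [
    Record([0.1, 0.2], "vendor_x", True),
    Record([0.3, 0.4], "broker_y", False),
]
clean = filter_training_data(records, {"first_party", "vendor_x"})
print(len(clean))  # 1
```

Gating ingestion on provenance and consent metadata also keeps the compliance question answerable later, for example when a deletion request arrives.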
Notification to Consumers
To meet the FTC’s transparency guidelines, companies should properly inform consumers about the use of AI in the products or services they offer. For example, a privacy policy should describe the data collection practices for the data used in the machine learning algorithm, as well as other practices regarding automated processing of data used to profile an individual. Consumer terms of use and user-facing product documentation should also explain what decisions the AI-based product makes, how it works, and how it reaches those decisions.
Responding to Privacy Requests
In the age of “Big Data,” some companies have liberally sourced and used individuals’ personal data to train and develop their AI models and algorithms. With the advent of new privacy laws, however, companies now face the question of how to honor requests to delete or correct an individual’s personal information without also deleting or retraining the algorithms and models developed using that data.
Read broadly, current privacy laws would require just that: honoring an individual’s deletion request may also implicate deleting the data that served as the foundation for training the model. The open question is whether state privacy laws will be interpreted to require either deleting the models trained on that personal information or retraining them without it.
Some commentators suggest replacing full deletion (i.e., algorithm deletion) with “approximate deletion.” Under approximate deletion, most of the individual’s personal information is deleted, but enough data is retained to allow the algorithm to continue operating.
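One way to make the trade-off concrete is the related “machine unlearning” idea from the research literature: instead of retraining from scratch after every deletion request, apply an approximate corrective update to the existing model. The Python sketch below assumes a simple least-squares model; the synthetic data, the deleted rows, and the influence-style update are all illustrative assumptions, not the specific proposal of any commentator or regulator.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # synthetic training features
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000)

# Original model trained on all data (ordinary least squares).
w_full = np.linalg.lstsq(X, y, rcond=None)[0]

# A deletion request arrives covering these (hypothetical) rows.
deleted = np.arange(10)
kept = np.setdiff1d(np.arange(len(X)), deleted)

# Exact deletion: retrain from scratch without the requester's data.
w_exact = np.linalg.lstsq(X[kept], y[kept], rcond=None)[0]

# Approximate deletion: one corrective step on the existing weights,
# using the full-data curvature (X^T X) so no retraining is needed.
Xd, yd = X[deleted], y[deleted]
grad_deleted = Xd.T @ (Xd @ w_full - yd)            # deleted rows' pull on the fit
w_approx = w_full + np.linalg.solve(X.T @ X, grad_deleted)

print(np.linalg.norm(w_exact - w_approx))           # small gap for small deletions
```

When the deleted rows are a small fraction of the training set, the approximately updated weights track the fully retrained ones closely, which is the practical appeal: the model “forgets” the individual’s influence without the cost of full retraining.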
Automated Processing or Decision-Making
In addition to the FTC guidance advising companies to be thoughtful about their AI practices, four states (California, Virginia, Colorado, and Connecticut) have privacy laws that give consumers either the right to opt out of automated decision-making or profiling altogether, or the right to substantially limit an entity’s use of such automated decision-making tools. Operationally, this means companies must be prepared to evaluate and process consumer data without the use of AI, as sketched below. In addition, draft state privacy regulations have expanded disclosure requirements regarding profiling activities and the automated processing of personal data.
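Here is a minimal Python sketch of how such a routing decision might look in practice; the opt-out flag and the placeholder review and scoring functions are assumptions for illustration, not requirements drawn from any statute.

```python
from dataclasses import dataclass

@dataclass
class Consumer:
    id: str
    opted_out_of_adm: bool  # hypothetical automated decision-making opt-out flag

def enqueue_for_human_review(consumer: Consumer, application: dict) -> str:
    # Placeholder: hand off to a staffed manual-review queue.
    return f"manual-review:{consumer.id}"

def score_with_model(consumer: Consumer, application: dict) -> str:
    # Placeholder: call the AI/ML scoring pipeline.
    return f"automated:{consumer.id}"

def route_decision(consumer: Consumer, application: dict) -> str:
    """Honor the opt-out by routing around the automated model."""
    if consumer.opted_out_of_adm:
        return enqueue_for_human_review(consumer, application)
    return score_with_model(consumer, application)

print(route_decision(Consumer("c-123", True), {"amount": 5000}))
# -> manual-review:c-123
```

The operational point is that the non-automated path must genuinely exist and be staffed; the code branch is the easy part.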
Data Protection Assessments
State privacy laws may also require a company to conduct privacy risk assessments in connection with processing personal information for AI purposes. Though the specific requirements vary by jurisdiction, such assessments are most often required where processing activities present either a reasonably foreseeable risk of harm or a heightened risk of harm to the consumer. In each jurisdiction where the data protection assessment requirements are currently known, this risk-of-harm analysis involves reviewing a company’s AI and profiling practices and examining the underlying model in detail.
Marketing and Advertising Claims
As the FTC warned this year, marketing and advertising claims are rife with mentions of AI, the hot technology of the moment. Importantly, all AI claims must comply with the FTC’s advertising substantiation guidelines: the advertiser must have a reasonable basis for all express or implied claims. Companies should be aware that they are obligated to substantiate all reasonable interpretations of their advertising claims, and must possess that substantiation before the claims are communicated. Higher standards may apply to AI used in certain contexts; health-related and dietary claims, for example, require competent and reliable scientific evidence.
Validate Models
Validating AI models is crucial to maintaining their accuracy and ensuring that they do not illegally discriminate. State privacy laws and draft privacy regulations include requirements to evaluate and validate AI models for fairness, accuracy, non-discrimination, bias, and risk to individual consumers. The FTC has exercised its enforcement powers against consumer lending models for decades, where algorithms automate underwriting for credit approval. AI models must be validated, and frequently revalidated, to prevent discriminatory outcomes. The FTC suggests that compliant model validation can be supported by empirical comparisons between sample groups, using accepted statistical principles and methodology; a sketch of one such check follows.
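One accepted statistic for comparing sample groups is the adverse impact ratio, with the “four-fifths rule” often used as a screening threshold in credit and employment contexts. The Python sketch below is illustrative; the data, grouping, and 0.8 threshold are assumptions for the example, and a low ratio is a flag for closer review, not a legal conclusion.

```python
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, group: np.ndarray,
                         protected: str, reference: str) -> float:
    """Ratio of approval rates for the protected vs. reference group."""
    rate_protected = approved[group == protected].mean()
    rate_reference = approved[group == reference].mean()
    return rate_protected / rate_reference

# Hypothetical validation sample: model approval decisions by group.
approved = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 1])
group = np.array(["A"] * 5 + ["B"] * 5)

ratio = adverse_impact_ratio(approved, group, protected="A", reference="B")
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("flag: potential disparate impact; investigate before deployment")
```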
* * *
Taken together, the three FTC guidance documents serve as a reminder that the FTC is focused on the potential for AI to violate the FTC Act, the FCRA, and the ECOA. The FTC is clearly prepared to hold accountable companies that violate the law. As AI technologies continue to develop rapidly, companies are strongly encouraged to have rigorous privacy policies and practices in place and to review AI-related advertising claims carefully.