
AI’s Cybersecurity Risks: New York Provides Guidance on Developing Cybersecurity Programs to Address Emerging AI Concerns

Client Alert | 4 min read | 10.29.24

On Wednesday, October 16, 2024, New York’s Department of Financial Services (DFS) announced new guidance aimed at identifying AI-specific cybersecurity risks and providing a blueprint for protecting against them.  Motivated primarily by advancements in AI that substantially impact cybersecurity, including by facilitating new ways to commit cybercrime, DFS’s guidance is aimed specifically at New York businesses but is relevant to any company seeking to strengthen its cybersecurity and manage the risks posed by emerging technologies. The guidance addresses the “most significant” AI-related threats to cybersecurity that organizations should consider when developing a cybersecurity program or internal protocols, or when implementing cybersecurity controls, and it offers recommendations for those cybersecurity programs.

Who Is Affected

The existing DFS cybersecurity regulation, codified at 23 NYCRR Part 500, is not affected by this new guidance.  That regulation, as amended in November 2023, applies to any person operating or required to operate under a license, registration, or other similar authorization under New York’s banking, insurance, or financial services laws (called Covered Entities). Covered Entities include banks, insurance companies, mortgage brokers, financial institutions, and third-party providers that handle nonpublic information on behalf of these financial entities.  DFS’s new guidance is specifically targeted at Covered Entities; however, the advice it contains is useful for any company that has access to data and is connected to the internet.

Cybersecurity Risks Posed by AI

  • AI-Enabled Social Engineering: DFS identifies AI-enabled social engineering as the “most significant [cyber] threat to the financial services sector.” Cybercriminals are increasingly using AI to create realistic photo, video, audio, and text deepfakes that they can exploit in phishing attacks to steal login credentials, convince individuals to wire funds to malicious actors, and gain access to companies’ systems.
  • AI-Enhanced Cybersecurity Attacks: AI can be used to exponentially scale the scope and reach of attacks on companies’ technical infrastructure.  And once a cyberattack has occurred, AI can be used for reconnaissance to mine greater amounts of data.  Additionally, AI has lowered the barriers to entry for cybercriminals who otherwise would not have the technical expertise to carry out a cyberattack.
  • Exposure or Theft of Vast Amounts of Nonpublic Information: Covered Entities that develop AI tools, or use them in their own businesses to process large amounts of sensitive data, are particularly at risk: access to large volumes of sensitive data makes them an attractive target for cybercriminals seeking to extract that data for financial gain or other malicious motives. Additionally, the more data an entity processes, the more data it must safeguard.  Finally, some AI tools require the storage of biometric data, which cybercriminals can misuse to create highly realistic deepfakes or to conduct additional data theft.
  • Increased Vulnerabilities Due to Supply Chain Dependencies: If Covered Entities use AI tools—and those tools incorporate other, third-party AI tools—there are multiple vulnerability points at each link of the supply chain. In the modern interconnected business world, if one link is compromised by a cyberattack, the entire chain becomes exposed and subject to attack.  In other words, a company’s cybersecurity is only as strong as its weakest supply chain cybersecurity link.

Particularly Vulnerable Industries and Businesses

  • Companies that develop or use AI tools that process large amounts of data, as these entities are the most attractive targets for threat actors seeking to maximize the theft of sensitive information.
  • Companies that are part of an AI supply chain (e.g., where the Covered Entity uses an AI tool that incorporates other AI tools from third parties into its offering).

Guidance Recommendations

Per the existing DFS cybersecurity regulation, Covered Entities are already required to assess risks and implement minimum cybersecurity standards to mitigate those risks.  DFS’s new guidance builds on these requirements and recommends the following:

  • Assess the entity’s internal use of AI as part of its risk assessment, including identifying which third-party AI tools it incorporates.
  • Adopt specific protocols for detecting and mitigating AI-enabled social engineering. This includes adopting access controls, such as multi-factor authentication, that can withstand AI-manipulated deepfakes, for example by using digital-based certificates and physical security keys or by employing multiple authentication modalities simultaneously.
  • For Covered Entities that develop or use AI tools to process large amounts of data, cybersecurity programs should include periodic personnel trainings on how to develop AI systems securely, how to secure and defend them from attacks, and when and how to employ human review in lieu of relying on AI.
  • Develop procedures for conducting due diligence before working with a third-party provider, particularly one that provides AI tools or services. Such due diligence procedures should address potential threats to the third party arising from its own use of AI and how those threats could impact the Covered Entity.
  • For Covered Entities that use third-party service providers and their AI offerings, incorporate AI-specific representations and warranties into commercial agreements.
  • Implement trainings specifically on social engineering, including education on how the use of AI can make social engineering, phishing, or deepfake efforts harder for individuals to detect.
  • For Covered Entities that allow their personnel to use generative AI tools, monitoring should be implemented to detect unusual usage behaviors that may indicate a cyberthreat (e.g., asking ChatGPT how to infiltrate a network or what code is needed to deploy malware). An illustrative sketch of this kind of prompt-log monitoring follows this list.
  • Implement effective data management and inventory protocols to limit exposure if threat actors gain access to company systems. If a Covered Entity uses or relies on AI tools, additional controls should be implemented to prevent access to the data used in connection with the AI (whether for training or processing purposes).
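
For illustration only, the sketch below shows one way an organization might screen logged generative AI prompts for the red-flag requests described above. It is a minimal example and not part of the DFS guidance: the log format, keyword patterns, and function names are hypothetical assumptions, and any real monitoring program would be tailored to the entity’s own tools and risk assessment.

```python
# Illustrative sketch only: flag logged generative AI prompts that may signal
# misuse (e.g., requests about network intrusion or malware). The log format,
# keyword patterns, and function names are hypothetical assumptions and are
# not drawn from the DFS guidance.
import json
import re
from typing import Iterable

# Hypothetical red-flag patterns; a real program would tune these to the
# entity's own risk assessment.
SUSPICIOUS_PATTERNS = [
    r"\binfiltrate\b.*\bnetwork\b",
    r"\bbypass\b.*\b(mfa|multi-factor|authentication)\b",
    r"\b(deploy|write|generate)\b.*\bmalware\b",
    r"\bransomware\b",
    r"\bexfiltrat\w*\b",
]

def flag_suspicious_prompts(log_lines: Iterable[str]) -> list[dict]:
    """Return log entries whose prompt text matches any red-flag pattern.

    Each log line is assumed to be a JSON object such as:
    {"user": "jdoe", "timestamp": "2024-10-16T09:30:00Z", "prompt": "..."}
    """
    flagged = []
    for line in log_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed entries; a real program might alert on these too
        prompt = entry.get("prompt", "")
        if any(re.search(p, prompt, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
            flagged.append(entry)
    return flagged

if __name__ == "__main__":
    sample_log = [
        '{"user": "jdoe", "timestamp": "2024-10-16T09:30:00Z", "prompt": "Summarize this quarterly report."}',
        '{"user": "asmith", "timestamp": "2024-10-16T09:31:00Z", "prompt": "What code is needed to deploy malware on a file server?"}',
    ]
    for entry in flag_suspicious_prompts(sample_log):
        # In practice, flagged entries would be routed to the security team for review.
        print(f"Review needed: {entry['user']} at {entry['timestamp']}")
```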

While DFS’s guidance identifies Covered Entities as its target audience, all companies might benefit from implementing these recommended actions, which can improve their cybersecurity risk profile—whether they use AI tools or not.
