
Key Developments in Artificial Intelligence and Digital Health Signal Growing Federal Activity (Q4 2023)

Client Alert | 22 min read | 12.26.23

Digital health companies, investors, and other health care organizations should follow these policy developments with a strategic lens toward market opportunities, potential growth, and risk mitigation.


ONC HTI-1 Final Rule Modifies EHR Certification and Information Blocking Rules and Creates New AI Transparency Provisions

    • On December 13, the Office of the National Coordinator for Health Information Technology (ONC) issued the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing Final Rule (HTI-1 Final Rule), which implements provisions of the 21st Century Cures Act; makes policy changes to the ONC Health IT Certification Program (Certification Program), including new and updated standards, certification criteria, and implementation specifications; and provides additional updates to the information blocking regulations. This announcement follows the release of the HTI-1 Proposed Rule in April 2023.
    • Why it matters for you: The HTI-1 Final Rule makes changes to the Certification Program, which includes standards, implementation specifications, and certification criteria for electronic health record (EHR) software companies and health IT developers. It also establishes new transparency and risk management expectations for artificial intelligence (AI) and machine learning (ML) technology that supports clinical decision making. The HTI-1 Final Rule also will accelerate health information exchange through the Trusted Exchange Framework and Common Agreement (TEFCA) by creating a new information blocking exception to support this method of exchange. In the coming months, ONC will hold a series of information sessions explaining the provisions of the HTI-1 Final Rule (register here). ONC also plans to issue another interoperability proposed rule in spring 2024.


ONC Announces Major Milestone for Nationwide Health Data Exchange through TEFCA

    • On December 12, the Department of Health and Human Services (HHS), through the ONC, announced that nationwide health data exchange governed by TEFCA is operational. This means that multiple Qualified Health Information Networks (QHINs) are able to securely exchange information according to common, nationwide, technical and policy standards. Required by the 21st Century Cures Act, the primary goal of TEFCA is to establish a universal governance, policy, and technical floor for nationwide network-to-network health information interoperability.
    • Why it matters for you: Collectively, QHINs have networks that cover most U.S. hospitals and tens of thousands of providers and process billions of transactions annually across the nation. Under TEFCA, participants will now be able to connect with each other, regardless of which network they’re in. In a Health Affairs article announcing the launch, National Coordinator Micky Tripathi stated that TEFCA will address the more vexing gaps in interoperability beyond treatment exchange that have been too difficult for the private sector to tackle without public sector participation. This is a voluntary program that builds on existing health information exchanges and networks; however, HHS has stated it will tie TEFCA participation to other programs.


HHS Proposes Rule to Establish Disincentives for Health Care Providers that have Committed Information Blocking

    • On November 1, HHS released a proposed rule for public comment that would establish disincentives for health care providers found by the HHS Office of Inspector General (OIG) to have committed information blocking. The HHS proposed rule implements the HHS Secretary’s authority under section 4004 of the 21st Century Cures Act by establishing a department-wide regulatory framework for managing disincentives and proposing an initial set of appropriate disincentives in Centers for Medicare & Medicaid Services (CMS) programs. If finalized, the initial set of disincentives would apply to certain health care providers that have been found by the OIG to have committed information blocking and for which the OIG refers its determination to CMS. HHS is accepting comments on the proposed rule through January 2, 2024.
    • Why it matters for you: Currently, there is no enforcement mechanism for health care providers that violate the information blocking rules. If finalized, this proposed rule would establish consequences for health care providers by impacting reimbursement through other CMS programs.


President Biden Issues Executive Order on Safe, Secure, and Trustworthy AI

    • On October 30, President Biden signed an Executive Order (EO) entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which establishes a policy framework to manage the risks of AI; to direct agency action to regulate the use of health AI systems and tools; and to guide AI innovation across all sectors, including the health and human services sectors. The EO requires federal agency action on AI safety and security, protects Americans’ privacy, advances equity and civil rights, promotes innovation and competition, and advances American leadership in AI. Notably, the EO included provisions specific to the health care sector, advanced responsible use of AI in health care, and included programs to support individuals’ privacy protections. Specifically, the EO directed the Secretary of HHS as follows:
      • Establish an HHS AI Task Force and develop a strategic plan on responsible deployment of AI in the health and human services sector by January 27, 2025;
      • Develop a quality strategy by April 27, 2024 to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality;
      • Advance nondiscrimination compliance by April 27, 2024 by considering appropriate actions to advance the prompt understanding of and compliance with federal nondiscrimination laws by health and human service providers that receive federal financial assistance, as well as how those laws relate to AI;
      • Establish an AI Safety Program by October 29, 2024 that, in partnership with voluntary federally listed patient safety organizations (PSOs), establishes a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in health care settings, as well as specifications for a central tracking repository, and analyzes data and evidence to develop and disseminate guidelines aimed at avoiding these harms; and
      • Develop a strategy for regulating the use of AI in drug development by October 29, 2024.
    • Why it matters for you: Please see our blog for additional details on the EO. The federal government is acting across all agencies to advance safe use of AI, including addressing cybersecurity, misinformation, and safety. The EO’s health care provisions will shape health care organizations’ operations and HHS oversight of the use of AI-enabled technology in health care.

In the coming months, stakeholders should expect further activity from federal agencies regarding health AI/ML. Specifically, Micky Tripathi testified at a recent House Energy and Commerce Committee hearing on AI that HHS is working on the AI Safety Program: 1) by April 2024, HHS will develop an assurance strategy, establish infrastructure, and make resources available to allow it to test safety; and 2) by October 2024, HHS will complete the patient safety reporting component of the program by leveraging patient safety organizations to determine how it can use established patient reporting portals and by examining those portals to issue informal guidance.


WHO Issues Regulatory Considerations for Health Care AI

    • On October 19, the World Health Organization (WHO) released a new publication recognizing the potential of AI to improve health outcomes but cautioning that AI systems and technologies must be deployed responsibly and ethically. The publication aims to deliver an overview of regulatory considerations on AI for health covering six general topic areas: documentation and transparency; the total product lifecycle approach and risk management; intended use and analytical and clinical validation; data quality; privacy and data protection; and engagement and collaboration. The publication’s goal is to outline key regulatory considerations and to serve as a resource for all relevant stakeholders, including AI developers and manufacturers, regulators, and health practitioners.
    • Why it matters for you: Organizations may want to consider how the recommendations included in the WHO publication impact organizational goals and regulatory compliance programs. For companies with international operations, the recommendations may be particularly helpful in informing policies and government engagement strategies as global adoption of innovative health technologies continues to evolve.


FDA Focuses on Digital Health Technologies—Establishing New Advisory Committee and Publishing Guidance

    • On October 11, the U.S. Food and Drug Administration (FDA) announced the creation of a new Digital Health Advisory Committee to help FDA explore the complex scientific and technical issues related to digital health technologies (DHTs)—such as AI and ML, augmented reality, virtual reality, digital therapeutics, wearables, remote patient monitoring, and software—and improve FDA’s understanding of the benefits, risks, and clinical outcomes associated with the use of DHTs. The FDA has stated that the committee will be operational in 2024.

    • On December 22, FDA published guidance on DHTs for remote data acquisition in clinical investigations. This guidance outlines recommendations intended to facilitate the use of DHTs in a clinical investigation as appropriate for the evaluation of medical products. The guidance provides recommendations on, among other things: 1) selection of DHTs that are suitable for use in clinical investigations; 2) the description of DHTs in regulatory submissions; 3) verification and validation of DHTs for use in clinical investigations; 4) use of DHTs to collect data for trial endpoints; 5) identification and management of risks associated with the use of DHTs during clinical investigations; 6) retention and protection of data collected by DHTs; and 7) the roles of sponsors and investigators related to the use of DHTs in clinical investigations.

    • Why it matters for you: The FDA is focusing on the use of DHTs, including AI/ML technology. This announcement and the guidance suggest that the FDA intends to lean on experts to support its review and approval of DHTs, and they create new expectations for those using DHTs in clinical trials. Entities that are developing and/or using DHTs should pay attention to this guidance and to policy recommendations that may come from the new advisory committee.


OSTP Hosts a Roundtable Discussion on AI in Health Care

    • On October 6, the White House Office of Science and Technology Policy (OSTP) issued a press release recapping a roundtable discussion on AI in health care. OSTP stated that the discussion focused on how AI can be safely deployed to improve health outcomes for patients and on understanding the benefits and risks of AI. Roundtable participants included the Bay Area Global Health Alliance, OpenAI, USAID, Johns Hopkins, and Microsoft. Specifically, working sessions focused on the following use cases:
      • Clinical settings: where AI can be harnessed to improve individual patient care (e.g., helping radiologists diagnose breast cancer, cutting their workload in half and getting diagnoses back to patients faster);
      • Drug development: where AI can be used to streamline the discovery and testing of new drugs, helping researchers design more effective treatments; and
      • Public health: where AI can be used to mitigate public health challenges and improve access by supporting equitable health decision-making in resource-limited settings.
    • Why it matters for you: The OSTP discussions provide additional insight into the Administration’s plans to regulate AI. Organizations should pay attention to the specific use cases referenced during the discussions and how operations may be impacted by potential regulation.


Senate HELP Committee Ranking Member Requests Stakeholder Feedback on AI and Health Data Privacy and Security Policies

    • In September 2023, Senate Committee on Health, Education, Labor and Pensions (HELP) Ranking Member Bill Cassidy (R-LA) released the following publications, which both include requests for information (RFIs) and a list of questions for stakeholder consideration:
      • The Artificial Intelligence (AI) White Paper outlines the potential benefits and risks of AI in health care settings and addresses numerous AI topics, including the role of AI in medical innovation (i.e., drug and biological products and medical device development); patient care and clinical decision support tools; HIPAA and patients’ health data privacy; and liability issues. It asks for stakeholder feedback on potential AI legislation and regulation.
      • The Health Data Privacy Letter raises concern about companies’ use of health care data and information that is not covered under HIPAA and requests stakeholder recommendations to protect health data privacy. Specifically, it includes questions related to HIPAA, collection and sharing of consumer health data, different types of sensitive data, AI, state and international privacy frameworks, and federal agency enforcement efforts, among others.
    • Why it matters for you: While the comment period for both RFIs has closed, we expect additional updates from Ranking Member Cassidy on AI and digital health policy issues, as well as additional hearings, publications, regulations, and even potential legislation from Congress in the coming months. Organizations should consider engaging with the government on how potential legislation and regulations could impact their business.

Crowell Health Solutions is a strategic consulting firm focused on helping clients to pursue and deliver innovative alternatives to the traditional approaches of providing and paying for health care, including through digital health, health equity, and value-based care. We provide this monthly update on artificial intelligence and digital health policy issues for health care stakeholders and innovators. Follow Crowell Health Solutions’ Trends in Transformation blog for the latest updates and in-depth analysis.
