The purpose of this tracker is to identify key federal and state health AI policy activity since July 2024 and to summarize laws relevant to the use of AI in health care that were passed between January and October 2024. If you are interested in an overview of federal AI activity to date and health AI policy bills introduced (but not passed) by states throughout 2024, please see here. If you are interested in an overview of health AI laws passed prior to 2024, please see here.
Artificial intelligence has been used in health care since the 1950s, but recent technological advances in generative AI have expanded the potential for health AI to enable improvements in clinical quality and access, patient and provider experience, and overall value.
The AI legal and regulatory landscape is rapidly evolving as federal and state policymakers work to determine how AI should be regulated to balance its transformative potential with concerns regarding safety, security, privacy, accuracy and bias. Initial efforts have focused on improving transparency among the developers, deployers and users of AI technology.
While there is currently no federal law specifically governing AI, the White House and several federal agencies have begun to propose, or are expected to propose, laws and regulations to govern AI. Most recently, the White House announced key accomplishments from the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” including that the Department of Health and Human Services (HHS) established an AI Safety Program to track harmful incidents involving AI’s use in health care settings and to evaluate mitigations for those harms. HHS has also developed objectives, goals and high-level principles for the use of AI or AI-enabled tools in drug development processes and AI-enabled devices. In addition, the Department of Veterans Affairs (VA) and the Food and Drug Administration (FDA) launched an AI testing ground to evaluate health care AI tools for safety and effectiveness. Finally, the FDA’s Digital Health Advisory Committee will meet on November 20-21 to provide advice and recommendations to FDA on regulatory issues and to discuss total product lifecycle considerations for generative AI-enabled devices.
Notably, states are not waiting for federal guidance; many have introduced and passed legislation that would implicate the use of AI across the health care landscape. The majority of state legislative sessions have now ended, and very few bills were introduced in Q3. Given that bills introduced this year – if not already passed – are unlikely to pass unless re-introduced next year, the below includes a summary of the most relevant bills that have passed this year. If you are interested in reading a summary of health AI bills introduced by states throughout 2024 and federal AI activity to date, please see here. If you are interested in an overview of health AI bills passed prior to 2024, please see here.
From January through September 2024, over 108 bills were introduced that address AI and health care stakeholders. Of those bills, 18 that are particularly relevant to health care stakeholders and AI use passed, including:
- CA SB 1120 requires that a licensed physician or licensed health care professional retain ultimate responsibility for making individualized medical necessity determinations for each member of a health care service plan or health insurer. One of the major requirements of the bill is that health care service plans and health insurers that use AI[1] tools cannot use those tools to “deny, delay, or modify health care services” based upon medical necessity. In other words, determinations of medical necessity may only be made by a licensed physician or health care professional.
- CA AB 3030 requires that health care providers disclose, via a disclaimer, to a patient receiving clinical information produced by generative AI[2] that the information was generated by AI. In addition, the disclaimer must tell the patient how to contact a “human health care provider” or employee of the health facility. This disclaimer must be included in traditional written communications, such as letters and emails, as well as in chat-based technology. Disclaimers are not required if the communications generated by generative AI are “read and reviewed by a human licensed or certified health care provider.” This bill is similar to a Utah law (SB 149).
- CA AB 2013 requires developers of generative artificial intelligence systems to publicly post information on the data used to train the system, including the source or owners of the datasets, the number of data points in the datasets, a description of the types of data points in the dataset and whether the datasets include personal information. Developers are defined as those who make AI tools for “members of the public” and specifically exclude “hospital’s medical staff member[s],” though the intention of the exclusion is not entirely clear.
- CA SB 942 requires “covered providers,” i.e., developers of AI systems, to create and make freely available AI detection tools that can identify whether AI content was created or altered by the developer's generative AI system. “Covered Providers” refers to individuals that create, code, or otherwise produce a generative AI system that has over one million monthly visitors or users and is publicly accessible within California. Further, AI-generated content must include embedded metadata (called a “latent disclosure”) that identifies it as being AI-created.
- CO SB 205 governs developers and deployers of high-risk AI systems. High-risk AI systems are defined as those that make, or are a substantial factor in making, “consequential decision[s],” which are decisions that have a “material legal or similarly significant effect on the provision or denial to any consumer” of health care services or insurance, or on their costs or terms (among other areas). Developers must mitigate algorithmic discrimination and ensure transparency between themselves and deployers, the public, and the Attorney General through information disclosures. Additionally, the law requires deployers to mitigate algorithmic discrimination, implement a risk management program, and complete impact assessments. For more detailed information, please see "CO Enacts 'High Risk' AI Law Regulating Deployers and Developers, Including Health Care Stakeholders" on Manatt on Health here.
- UT SB 149 (“AI Policy Act”) implements disclosure requirements between a deployer and an end user. This consumer protection law requires generative AI to comply with basic marketing and advertising regulations overseen by the Division of Consumer Protection of the Utah Department of Commerce. The law requires “regulated occupations” to prominently disclose that they are using computer-driven responses before they begin using generative AI for any oral or electronic messaging with an end user. Regulated occupations encompass over 30 different health care professions in Utah, ranging from physicians, surgeons, dentists, nurses and pharmacists to midwives, dieticians, radiology techs, physical therapists, genetic counselors and health facility managers. The disclosure requirement likely means that disclosures about generative AI cannot reside solely in an entity’s terms of use or privacy notice. For more detailed information, please see “Utah Enacts First AI Law – A Potential Blueprint for Other States, Significant Impact on Health Care” on Manatt on Health here.
As states seek to understand and regulate emerging AI technology, state legislatures have focused on passing laws that establish working groups or study committees on AI (including its current use in state government) to assess AI use and provide recommendations for future AI-focused policymaking. In 2024, the majority of bills that were introduced and passed fell into this category. These bills typically directed one of several actions: (1) establishing a study committee focused on AI to make recommendations to the legislature on areas for future legislation and regulation; (2) conducting current state assessments and yearly inventories of AI tools already used in state government; and (3) establishing government offices and leadership positions within the state for the purpose of ongoing monitoring. Over 70 such bills were introduced, and the following passed: CA SB 896, CO HB 1468, DE HB 333, FL SB 7018, FL SB 1680, IN SB 150, MD SB 818, OR HB 4153, TN HB 2325, UT SB 149, VA SB 487, WA SB 5838, and WV HB 5690.
For questions on the above, please reach out to RSeigel@manatt.com, JAugenstein@manatt.com, or AFox@manatt.com. A full list of the tracked bills and their relevant classifications is available to Manatt on Health subscribers; for more information on how to subscribe to Manatt on Health, please reach out to BJefferds@manatt.com.
[1] “Artificial intelligence” is defined as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
[2] Defined broadly as “artificial intelligence that can generate derived synthetic content, including images, videos, audio, text, and other digital content.”