Manatt Health: Health AI Policy Tracker
Purpose: The purpose of this tracker is to identify key federal and state health AI policy activity and summarize laws relevant to the use of AI in health care. The tracker is published quarterly.
In 2024, state legislatures actively worked to determine how to legislate AI use, introducing over 100 health AI-related bills, although only a few received enough support to be enacted. While some of these laws are now in effect, legislatures may amend their language, and states may offer additional guidance on compliance as the AI landscape evolves. State legislatures are already off to a busy start this year, introducing several sweeping AI bills in just the first month of the year.
Congress was active in hosting hearings, but no significant legislation was introduced. Several federal agencies (FDA, CMS, ONC, OCR) issued AI-related regulations that are beginning to take effect and are laying the initial groundwork for how AI will be regulated at the federal level. The House Bipartisan Task Force on AI released a report late last year that dedicates many pages to the benefits and risks of AI use in health care. The Assistant Secretary for Technology Policy/Office of the National Coordinator for Health IT released the HHS Artificial Intelligence Strategic Plan in early January 2025, and the FDA issued new draft guidance on the use of AI in medical product development.
What we do know is that AI governance appears to be a bipartisan issue, although how the new Administration (known for its deregulatory stance) and the new Congress choose to govern it will almost certainly differ from the Biden Administration's approach; on his first day in office, President Trump revoked President Biden’s executive order on addressing AI risks. Additional federal activity, including the regulations discussed below, may be repealed or rescinded; however, the Administration’s focus may be less on AI use in health care and more on its use in other industries, such as cryptocurrency.
To help keep track of where we are and where we may be going, below is a summary of key health-related AI activity at the federal and state levels through 2024 and what to expect heading into the 2025 legislative session.
Current Health AI Landscape Through 2024: State Laws
2024 was an immensely active session across states related to health AI: over 100 bills were introduced across 34 states. Legislatures tackled a variety of issues, most commonly focusing on increased transparency regarding AI algorithms’ use, purpose, and risk mitigation, and on protecting consumers (or patients or health plan members) against the risks of AI bias and discrimination. A few states enacted laws focused specifically on health care AI used by insurers and by providers.
In anticipation of another active legislative session, Manatt Health conducted an analysis of all state AI laws impacting the health care industry (“health AI laws”)[1] to provide a comprehensive “current state” of health AI laws in the United States. These laws are categorized into the following: (1) Key State Laws; (2) Additional State Laws; and (3) Other: State Activity Laws.
“Key state laws” are those that, based on our review, are of greatest significance to the delivery and use of AI in health care, either because they are broad in scope and directly touch on how health care is delivered or paid for, or because they impose significant requirements on those developing or deploying AI for health care use. “Additional state laws” are those identified as relevant to AI in the provision of health care or health care services, but smaller in scope or significance than “key” laws.
In addition, below we have included “state activity laws,” which mandate state activity and are typically related to states learning more about AI (e.g., mandating that states study AI or create an AI group or team). State activity laws are summarized below but are not included in the landscape map, as they are often more temporary and less impactful on immediate health AI-related activity in a state.
Key State Laws.
State | Summary |
---|---|
California | Requires that a licensed physician or licensed health care professional retain ultimate responsibility for making individualized medical necessity determinations for each member of a health care service plan or health insurer. One of the major requirements of the bill is that health care service plans and health insurers that use AI[2] tools cannot use those tools to “deny, delay, or modify health care services” based upon medical necessity. Said another way, determinations of medical necessity may be made only by a licensed physician or health care professional. Date Enacted: 9/28/2024 |
California | Requires that health care providers disclose, via a disclaimer, to a patient receiving clinical information produced by generative AI[3] that the information was generated by AI. In addition, the disclaimer must tell the patient how to contact a “human health care provider” or employee of the health facility. This disclaimer must be included in traditional written communications, such as letters and emails, as well as in chat-based technology. Disclaimers are not required if the communications generated by generative AI are “read and reviewed by a human licensed or certified health care provider.” This bill is similar to Utah’s AI Policy Act, below; an illustrative sketch of the disclaimer logic appears after this table. Date Enacted: 9/28/2024 |
California | Requires developers of generative artificial intelligence systems to publicly post information on the data used to train the AI system, including the source or owners of the datasets, the number of data points in the datasets, a description of the types of data points in the datasets, and whether the datasets include personal information. Developers are defined as those who make AI tools for “members of the public” and specifically exclude “hospital’s medical staff member[s],” though the intention of the exclusion is not entirely clear. Date Enacted: 9/28/2024 |
California | Requires “covered providers,” i.e., developers of AI systems, to create and make freely available AI detection tools that can identify whether content was created or altered by the developer’s generative AI system. “Covered providers” refers to individuals that create, code, or otherwise produce a generative AI system that has over one million monthly visitors or users and is publicly accessible within California. Further, AI-generated content must include embedded metadata (called a “latent disclosure”) that identifies it as being AI-created. Date Enacted: 9/19/2024 |
Colorado | Governs developers and deployers of high-risk AI systems. High-risk AI systems are defined as those that make, or are a substantial factor in making, “consequential decision[s],” which are decisions that have a “material legal or similarly significant effect” on the provision or denial to any consumer of health care services or insurance, or on their costs or terms (among other areas). Developers must mitigate algorithmic discrimination and ensure transparency between themselves and deployers, the public, and the Attorney General through information disclosures. Additionally, the law requires deployers to mitigate algorithmic discrimination, implement a risk management program, and complete impact assessments. For more detailed information, please see “CO Enacts ‘High Risk’ AI Law Regulating Deployers and Developers, Including Health Care Stakeholders” on Manatt on Health. Date Enacted: 5/17/2024 |
Colorado | Prohibits insurers from using algorithms that rely on external consumer data sources in a way that unfairly discriminates. After a stakeholder process, the commissioner shall adopt rules that: establish when an insurer may use such algorithms; detail how to demonstrate that an algorithm has been tested for unfair discrimination; outline what information insurers must submit to the commissioner regarding the use of AI models and external consumer data; and mandate that insurers establish and maintain a risk management framework. The Colorado Department of Regulatory Agencies, Division of Insurance has proposed regulations and is beginning the stakeholder process for health insurance in January 2025.[4] Date Enacted: 7/6/2021 |
Utah | The “AI Policy Act” implements disclosure requirements between a deployer and an end user. This consumer protection law requires generative AI to comply with basic marketing and advertising regulations overseen by the Division of Consumer Protection of the Utah Department of Commerce. The law requires members of “regulated occupations,” which encompass over 30 different health care professions in Utah (ranging from physicians, surgeons, dentists, nurses, and pharmacists to midwives, dieticians, radiology techs, physical therapists, genetic counselors, and health facility managers), to prominently disclose that they are using computer-driven responses before they begin using generative AI for any oral or electronic messaging with an end user. Disclosures about generative AI likely cannot reside solely in an entity’s terms of use or privacy notice. For more detailed information, please see “Utah Enacts First AI Law – A Potential Blueprint for Other States, Significant Impact on Health Care” on Manatt on Health. Date Enacted: 3/13/2024 |
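To make the structure of disclosure rules like California’s concrete, below is a minimal, hypothetical sketch of the disclaimer logic described above. The `Communication` fields and the decision rule are illustrative assumptions for this tracker, not statutory text or any plan’s or provider’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Communication:
    """A hypothetical patient-facing clinical message (illustrative only)."""
    generated_by_ai: bool   # produced by a generative AI tool
    human_reviewed: bool    # read and reviewed by a licensed/certified provider
    channel: str            # e.g., "letter", "email", "chat"

def disclaimer_required(comm: Communication) -> bool:
    """Rough reading of the California rule sketched above: AI-generated
    clinical communications need a disclaimer (including how to reach a
    human health care provider) unless a licensed or certified human
    provider reviewed them before they reached the patient."""
    return comm.generated_by_ai and not comm.human_reviewed

# An unreviewed AI-drafted chat message would need the disclaimer; the same
# message, once reviewed by a licensed provider, would not.
msg = Communication(generated_by_ai=True, human_reviewed=False, channel="chat")
print(disclaimer_required(msg))  # True
```

The key design point is that human review by a licensed or certified provider, not the communication channel, is what removes the disclaimer obligation.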
Additional State Laws.
State | Summary |
---|---|
Arkansas | Requires the Arkansas Department of Health to develop algorithms within the controlled substance database that would alert a practitioner if their patient is being prescribed opioids by more than three physicians within any thirty-day period. The bill includes a caveat that this is required only if funding is available. An illustrative sketch of this alert logic appears after this table. Date Enacted: 4/8/2015 |
California | Requires the California State Department of Health Care Services to, in partnership with managed care plans and in consultation with stakeholders, implement a mechanism or algorithm to identify persons with higher risk and more complex care needs. Date Enacted: 10/19/2010 |
California | Prohibits individuals from using undisclosed bots (“automated online account where all or substantially all of the actions or posts of that account are not the result of a person”) to communicate with another person in California with the intent to mislead or knowingly deceive the other person in order to influence purchases or votes. Stipulates that disclosures of bots must be “clear, conspicuous, and reasonably designed,” and makes exemptions for online platform service providers. Date Enacted: 9/28/2018 |
California | Requires that a clinical laboratory director or authorized designee establish, validate, and document criteria by which any clinical laboratory test or examination result is auto-verified (“autoverification” means the use of a computer algorithm, in conjunction with automated clinical laboratory instrumentation, to review and verify the results of a clinical laboratory test or examination for accuracy and reliability). These criteria must be re-evaluated annually. Requires an authorized person to be responsible for the accuracy and reliability of all test and examination results. Date Enacted: 9/18/2006 |
Illinois | Mandates that the Illinois Department of Healthcare and Family Services solicit stakeholder input about, and subsequently implement, an algorithm to facilitate automatic assignment of eligible Medicaid enrollees into managed care entities based on quality scores and other operational proficiency criteria. It also dictates that the algorithm preserve provider-beneficiary relationships and be used only to assign enrollees who have not voluntarily selected a primary care physician and a managed care entity or care coordination entity; the algorithm cannot be used to reassign an individual currently enrolled in a managed care entity. Enrollees are granted a 90-day period after the algorithm’s automatic assignment to select a different managed care entity. Date Enacted: 8/26/2016 |
Kentucky | Outlines requirements related to the use of “assessment mechanisms” (including AI devices) to conduct eye exams or generate prescriptions for contact lenses. Select requirements include: ensuring assessment mechanisms allow for synchronous or asynchronous interaction between the patient and the KY-licensed optometrist, osteopath, or physician; patient age minimums; and pre-visit requirements and patient disclosures, among others. Similar to Rhode Island’s law below. Date Enacted: 3/30/2018 |
New Jersey | Prohibits individuals from using undisclosed bots (“automated online account where all or substantially all of the actions or posts of that account are not the direct result of a person”) to communicate with another person in the state with the intent to mislead or knowingly deceive the other person in order to influence purchases or votes. Stipulates that disclosures of bots must be “clear, conspicuous, and reasonably designed” and imposes escalating civil penalties for multiple violations. Date Enacted: 1/21/2020 |
New York | Prohibits state agencies or entities acting on behalf of an agency from using or procuring automated decision-making systems in relation to the delivery of any public assistance benefit, or in circumstances that impact the rights, civil liberties, safety, or welfare of an individual, unless such use is subject to ongoing human review or authorized by law. Requires state agencies to submit to the governor and legislature every two years an impact assessment that includes a description of the technology’s objectives, the data used to train the system, and testing for accuracy, fairness, and potential bias. It also prohibits agency use of tools that alter the rights or benefits of existing state employees or demonstrate bias, and requires disclosure of information about automated decision-making tools, including a description of the system, software vendors, the data used, and the purpose, among others. Date Enacted: 12/21/2024 |
Oklahoma | Allows physician-approved protocols to utilize or reference “medical algorithms” (note: “medical algorithm” is undefined). Physician-approved protocols are protocols “such as standing orders that describe the parameters of specified situations under which a registered nurse may act to deliver public health services for a client who is presenting with symptoms or needs addressed in the protocol.” Date Enacted: 5/1/2012 |
Oregon | Bans covered organizations from collecting, using, or disclosing personal health data related to exposure to or infection by SARS-CoV-2 for training machine learning algorithms that are related to, or may be used in, commercial advertising or commerce, unless certain parameters are met. “Covered organizations” include persons or sites/applications that collect, use, or disclose personal health data, and do not include certain government officials, health care providers, or HIPAA-covered entities. Provisions repealed 270 days after the end of the COVID-19 public health emergency. Date Enacted: 9/25/2021 |
Rhode Island | Outlines requirements related to the use of “assessment mechanisms” (including AI devices) to conduct eye exams or generate prescriptions for contact lenses. Select requirements include: ensuring assessment mechanisms allow for synchronous or asynchronous interaction between the patient and the RI-licensed optometrist, osteopath, or physician; patient age minimums; and pre-visit requirements and patient disclosures, among others. Similar to Kentucky’s law above. Date Enacted: 6/29/2022 |
Utah | Mandates that Utah’s Medicaid agency apply for a Medicaid and CHIP waiver from CMS to, among other initiatives, develop an algorithm to assign new recipients to accountable care plans based upon each plan’s performance in relation to quality measures. Date Enacted: 3/26/2013 |
Virginia | Requires hospitals, nursing homes, and certified nursing facilities to establish and implement policies on access to, and use of, an intelligent personal assistant at their facility. Date Enacted: 3/18/2021 |
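As referenced in the Arkansas entry above, that law describes a simple windowed-count alert. Purely as an illustration (the record format, field names, and data below are hypothetical assumptions, not the state’s actual implementation), the core logic might look like:

```python
from datetime import date, timedelta

# Hypothetical opioid prescription records: (patient_id, prescriber_id, fill_date).
prescriptions = [
    ("p1", "drA", date(2024, 3, 1)),
    ("p1", "drB", date(2024, 3, 10)),
    ("p1", "drC", date(2024, 3, 15)),
    ("p1", "drD", date(2024, 3, 20)),
]

def alert_needed(records, patient_id, window_days=30, prescriber_threshold=3):
    """Return True if the patient was prescribed opioids by more than
    `prescriber_threshold` distinct prescribers within any rolling
    `window_days`-day window."""
    fills = sorted((d, doc) for pid, doc, d in records if pid == patient_id)
    for start, _ in fills:
        window_end = start + timedelta(days=window_days)
        # Count distinct prescribers whose fills land in this window.
        prescribers = {doc for d, doc in fills if start <= d < window_end}
        if len(prescribers) > prescriber_threshold:
            return True
    return False

print(alert_needed(prescriptions, "p1"))  # True: four prescribers within 30 days
```

In practice, such a rule would run inside the state’s controlled substance database rather than as standalone code.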
Other: State Activity Laws. Over the past several years, states have sought to understand AI technology before regulating it, for example by creating councils to study AI and/or creating AI policy positions within government charged with establishing AI governance and policy. These bills reflect states’ interest in the potential role of AI across industries, and potentially in health care specifically. Note, as well, that some of these laws may no longer be applicable (e.g., if an AI research task force was disbanded after a set number of years, it may no longer be active), but they are included here to provide a more exhaustive list.
Federal Activity
While there is currently no federal law specifically governing AI, the White House and federal agencies were very active throughout 2024. The table below provides an overview of key activity.
However, with the change in administration, it is not assured that current executive orders, draft policies, or guidance will be continued or finalized. As noted above, it is anticipated that Trump will pursue a deregulatory AI policy agenda (e.g., on his first day in office, President Trump repealed Biden’s executive order on AI), yet it is unclear what influence Trump’s advisors will have on his approach, as some of them, including Elon Musk, have previously been pro-regulation and wary of AI’s potential to cause harm.
Federal Health AI Regulatory Landscape
Federal Organization | Impacted Stakeholders | Implications | Relevant Policies |
---|---|---|---|
ONC | Certified health information technology (HIT) products[5] | Certified HIT vendors must provide users (hospitals and physicians) with information regarding AI clinical decision tools[6]; they must also establish an intervention risk management program. | HTI-1 final rule (December 2023); see Manatt Health’s summary of the rule. The HTI-2 final rule (December 2024) focuses primarily on provisions related to the Trusted Exchange Framework and Common Agreement (TEFCA) and, by and large, does not address the role of AI. The large and wide-sweeping HTI-2 proposed rule (which proposed standards and requirements related to decision-support tools) is anticipated to be split into several final rules. |
OCR | Many providers and health plans that are “covered entities” | Prohibits covered entities, including providers, clinics, pharmacies, and health plans, from using AI to discriminate (e.g., racial bias in the use of photo-based AI clinical diagnosis tools). | Final Section 1557 rule (April 2024); see Manatt Health’s summary of the proposed rule. This rule is subject to ongoing litigation, and the first Trump Administration reversed a prior version of this rule. Proposed rule (December 2024); see Manatt Health’s summary. |
FDA | Developers of FDA-regulated products (e.g., software, hardware, drugs and biologics) | Issued an Action Plan outlining steps FDA will take to oversee AI/ML in software as a medical device (SaMD); provides an overview of current and future uses for AI/ML in drug and biological development. | For instance: non-binding guidance on CDS software (September 2022); review/approval of AI-enabled devices (ongoing); final guidance for predetermined change control plans (PCCPs) tailored to artificial intelligence (AI)-enabled devices (December 2024). |
CMS | Medicare Advantage (MA) Plans | Prohibits MA plans from relying solely on AI outputs to make coverage determinations or terminate a service. Requires MA organizations to ensure that services are delivered equitably, regardless of whether they are delivered through human or automated systems. | Regulatory guidance (April 2023; February 2024); see Manatt Health’s summary of the guidance. Proposed rule for CY 2026 (December 2024); see Manatt Health’s summary. |
Legislative & Executive Activity: Notably, in December 2024, the House Bipartisan Task Force on AI (established in early 2024) delivered its final report. Findings related to health care noted that AI’s use in health care can potentially reduce administrative burdens and accelerate drug development and clinical diagnosis, and that the lack of ubiquitous, uniform standards for medical data and algorithms impedes system interoperability and data sharing. The report outlined key challenges, including data availability, transparency, bias, cybersecurity, and liability, and detailed key recommendations, such as encouraging practices that support the safe use of AI, maintaining support for health care-related AI research, creating incentives to encourage risk management of AI technologies and the development of standards for liability for AI uses, and supporting appropriate payment mechanisms.
Other Activity of Note:
- VA & FDA: The Department of Veterans Affairs (VA) and the Food and Drug Administration (FDA) announced an AI testing ground to evaluate health care AI tools for safety and effectiveness.
- FDA: The FDA’s Digital Health Advisory Committee hosted a meeting in November to make recommendations on AI regulatory issues and discuss product lifecycle considerations for generative AI-enabled devices; the group noted that “there remain open questions on the approach to regulating GenAI-enabled products that may fall under the purview of FDA’s regulatory jurisdiction and that it is of public health importance for that Agency [to] work with experts to address these questions in a timely manner.”
- FDA: The FDA also finalized guidance on the types of information that must be included in a Predetermined Change Control Plan (PCCP) as part of a marketing submission for an AI-enabled device software function.
- Self-Regulation: There has been significant industry activity seeking to self-regulate health AI; there are more than 50 active consortia, such as the Health AI Partnership, CHAI, VALID AI, and others, and we expect significant activity in this space in 2025.
What We Expect From States in 2025
This year, we anticipate continued interest from states in transparency, anti-discrimination, and insurance coverage determinations. These areas saw the greatest volume of introduced legislation in 2024 and will likely continue to see new language introduced and an uptick in bills passing. Notably, bills already introduced in Texas, Virginia, and New York are focused on transparency, and a bill in Illinois is focused on insurance coverage determinations. States with existing AI-focused legislation may begin to offer additional guidance on implementation, such as the California Attorney General’s “Legal Advisory on the Application of Existing California Law to Artificial Intelligence in Healthcare.” Some states, although likely in lower volume, may pioneer language regarding the use of AI in clinical decision making and/or deepen the highly anticipated discussion on liability for AI tools used in clinical settings.
Manatt Health is closely tracking all state- and federal-level activity and will provide relevant updates to this page as appropriate.
For questions on the above, please reach out to the Manatt Health team. A full list of tracked bills (introduced and passed) from 2024 and 2025, classified by topic category and stakeholder impacted, is available to Manatt on Health subscribers; for more information on how to subscribe to Manatt on Health, please contact us.
[1] Laws for analysis were identified based on certain parameters and/or key words. Almost every bill that fit the search criteria was introduced in or after 1990.
[2] “Artificial intelligence” is defined as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
[3] Defined broadly, as “artificial intelligence that can generate derived synthetic content, including images, videos, audio, text, and other digital content.”
[4] Proposed amended Regulation at 3 CCR 702-4.
[5] Health IT Certification Program, under which developers of health information technology (HIT) can seek to have their software certified as meeting certain criteria.
[6] The HTI-1 final rule defines predictive decision support interventions (predictive DSIs) as “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produces an output that results in prediction, classification, recommendation, evaluation, or analysis.”