Manatt Health: Health AI Policy Tracker

Health Highlights

The purpose of this tracker is to identify key federal and state health AI policy activity. The content below reflects federal legislative and regulatory activity to date related to AI, as well as state legislative activity introduced between January 1 and June 30, 2024. This summary is current as of June 30, 2024, and will next be updated in January 2025 to reflect full 2024 activity.1 Relevant updates between now and then will be published individually.


Artificial intelligence has been used in health care since the 1950s, but recent technological advances in generative AI have expanded the potential for health AI to enable improvements in clinical quality and access, patient and provider experience, and overall value.

The AI legal and regulatory landscape is rapidly evolving as federal and state policymakers work to determine how AI should be regulated to balance its transformative potential with concerns regarding safety, security, privacy, accuracy, and bias. Initial efforts have focused on improving transparency among the developers, deployers, and users of AI technology. While there is currently no federal law specifically governing AI, the White House and several federal agencies have begun to propose, or are expected to propose, laws and regulations governing AI. We expect continued activity in the second half of 2024 and beyond as deadlines included in President Biden’s Executive Order on responsible AI approach and pass.2 For a summary of key federal activities to date, please see the table below. States, however, are not waiting for federal guidance, and many have begun to introduce legislation that would implicate the use of AI across the health care landscape.

State Health AI Legislative Activity Prior to 2024

Given the high volume of activity in 2024, we were eager to understand the preexisting landscape of AI legislation—that is, what bills had passed prior to 2024. To do so, we evaluated all passed bills containing specific key words and phrases related to AI, algorithms, and predictive models. Using these key words, we identified thirty (30) bills that regulate a health care stakeholder or impact health care.3
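To make the screening step concrete, below is a minimal sketch of a keyword filter of the kind described above. The keyword list, bill records, and field names are illustrative assumptions, not the actual search terms or data sources used for this tracker.

```python
# Minimal sketch of a keyword screen over bill text. The keyword list,
# bill records, and field names are illustrative assumptions, not the
# tracker's actual search terms or data sources.

KEYWORDS = [
    "artificial intelligence",
    "machine learning",
    "algorithm",
    "predictive model",
]

def matches_keywords(bill_text: str) -> bool:
    """Return True if the bill text contains any tracked keyword (case-insensitive)."""
    text = bill_text.lower()
    return any(keyword in text for keyword in KEYWORDS)

# Hypothetical bill records for demonstration.
bills = [
    {"id": "XX HB 100", "text": "Requires review of any predictive model used in utilization review."},
    {"id": "XX SB 200", "text": "Appropriates funds for highway maintenance."},
]

relevant = [bill["id"] for bill in bills if matches_keywords(bill["text"])]
print(relevant)  # -> ['XX HB 100']
```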

Overall, there was little consistency in the types of activities addressed by passed legislation: the laws govern an array of provider specialties and a range of AI uses within the Medicaid program, and they provide several different ways for states to evaluate or study potential AI uses and effects.

Eight (8) bills focused on specific clinical and health care use cases rather than regulating AI in health care generally. For example, two states set parameters on the use of AI-enabled tools in eye exams4,5 and two others initiated state studies on the use of clinical algorithms for treatment of sickle cell disease.6,7 Notably, several bills directed resources toward the development and use of tools to aid in the management of prescription drugs and/or psychiatric drug treatments, including developing and implementing evidence-based algorithms for clinically effective and cost-effective utilization of mental health medications,8,9 creating algorithms to alert practitioners to potential opioid overprescribing,10 and identifying possible violations of law and breaches of professional standards related to prescription drugs.11

California and Oklahoma were the only two states identified that passed more general provisions on the use of algorithms in clinical practice. In 2006, California required laboratory directors (or an authorized designee) to (i) annually revalidate the criteria by which algorithms review and verify clinical laboratory test results and (ii) annually reapprove the relevant algorithm(s).12 In 2012, Oklahoma allowed providers to utilize and/or reference medical algorithms when developing protocols to assist in the delivery of public health services.13

Given the myriad activities that Medicaid state agencies conduct to support beneficiary access to Medicaid services and oversee Medicaid managed care plans (including ensuring adequate payments to plans), it is unsurprising that initial AI-related activity focused on pushing the agencies to leverage technology to improve efficiency and stretch often-limited resources. California was amongst the first movers in this space, requiring its Department of Health Care Services, in coordination with Medicaid managed care plans, to develop and implement an algorithm for risk stratification (2010).14 California later required the Department to consult with stakeholders when adding factors to managed care plan assignment algorithms.15 Soon after, in 2013, Utah mandated that the Department of Health apply for a Medicaid and Children’s Health Insurance Program (CHIP) waiver to develop an auto-assignment algorithm based on quality performance,16 and in 2016, Illinois authorized the implementation of a Medicaid managed care auto-assignment algorithm “preserving existing provider-beneficiary relationships” and taking into account quality scores and other proficiency criteria.17 Please note that many other states implemented auto-assignment policies over the past decade (which may or may not have addressed algorithms); they may not be represented here because the states passed bills that did not include key words relevant to this search, implemented the policies in sub-regulatory guidance, or for some other reason.

States also explored how to effectively use algorithms for claims processing, issuing RFPs/RFIs to obtain insight into how to effectively leverage predictive modeling in claims management and/or identification of fraudulent billing practices.18,19,20,21 Arkansas appropriated funds to develop an algorithm to calculate savings in a care management pilot program in effect through 2016.22  

States also authorized funds to support algorithm development for more specific activities: in 2010, Florida required the Agency for Health Care Administration to use an algorithm to develop an individual expenditure budget for home- and community-based Medicaid waiver program services based on variables (e.g., level of need, individual characteristics).23 In 2021, Washington allocated funds for the Department of Social and Health Services’ mental health program to develop and implement an algorithm to identify individuals “who are at high risk of future involvement with the criminal justice system and [to] estimate demand for civil and forensic state hospital bed needs.”24

States also began to study the use of AI at the state level, though with much less fervor than in 2023 and 2024. Amongst the few states (AL,25 MA,26 VT) that created task forces and commissions on AI through legislation, Vermont is notable: in 2019, the state created an “AI Task Force”27 and, in 2022, passed a bill28 to implement multiple recommendations outlined in the AI Task Force’s 2020 final report, including establishing a Division of AI and an AI Advisory Council and maintaining an annually updated inventory of all AI systems developed, used, or procured by the state. Other states focused on niche studies (e.g., the potential use of algorithms to assess the possibility of obtaining insurance to meet the state’s medical pension liabilities for retired state employees29) or specific diseases (e.g., studies on sickle cell disease,30,31 as described above).

Finally, very few (only two) states passed anti-discrimination measures, and no states passed AI transparency requirements. Colorado barred insurers from using algorithms or predictive models that unfairly discriminate,32 and Oregon mandated that entities collecting personal health data related to exposure to or infection by COVID-19 establish policies to prevent the use of such data for any discriminatory purpose.33

Two bills governing the use of chatbots were not captured by our key word and phrase search but are often cited as AI laws and may be relevant for tracking the evolution of AI policy. California passed a law prohibiting the use of undisclosed bots to interact online with the intent to deceive a person about the bot’s artificial identity in order to influence a purchase or a vote, requiring clear disclosure of bot usage, with exemptions for online platform service providers.34 New Jersey passed a nearly identical law prohibiting the use of online bots to mislead individuals about the bot’s identity with the intent to influence commercial transactions or elections.35 Neither bill is specific to health care.

2024 State Health AI Legislative Activity (January–June 2024)

In the first half of 2024, states introduced legislation focused on a wide range of issues that implicate health care stakeholders. As shown below, proposed legislation regulates states, payers, providers, deployers, and developers. “Deployer” describes entities that use an AI tool or service and—depending on the precise use or definition within a bill—could include states, providers, payers, or individuals. “Developer” describes entities that make or build AI tools, which—again, depending on the precise use or definition within a bill—may include anyone developing AI, such as technology companies, states, providers, or payers. Additionally, a bill that regulates state agencies could potentially impact other stakeholders, for example, if an entity is a contractor or agent of the state or the requirements have downstream effects on developers or deployers.

Note: As of the start of July, the majority of state sessions have ended (see legislative schedule here); thus, the introduced bills discussed below, if not already passed, will not pass during this legislative cycle. However, the bills are summarized and categorized with the anticipation that several will be reintroduced at the start of the new year’s session.

[Figure: state-by-state overview map of 2024 state health AI legislative activity]

Bills were identified as relevant if they regulated activity that fell into one of the following categories:

• Clinician Use and Oversight of AI Tools in Care Delivery: Regulation of clinicians’ use of AI tools and/or oversight of AI outputs in clinical care.

• Provider Legal Protections: Regulations protecting clinicians from prosecution or disciplinary action related to the use of AI tools.

• Determinations of Insurance Eligibility or Medical Necessity/Authorization: Regulation of AI in insurance eligibility, medical necessity, or coverage determinations.

• Anti-Discrimination: Regulations focused on ensuring AI tools do not discriminate.

• Transparency between Developer and Deployer: Regulations outlining disclosure or information requirements between those who develop AI tools and those who deploy them.

• Transparency between Deployer and End User: Regulations outlining disclosure or consent requirements between those who deploy AI tools and those who use or may be impacted by their output.

• Transparency between Developer or Deployer36 and State: Regulations outlining disclosure, submission, registration, or other requirements imposed on the developer or deployer with regard to the state (e.g., deployer/developer impact assessment submissions, data submissions, etc.).

• State Aligns with National Standards / Administration’s AI Blueprint: Regulations aligning a state’s AI policies with national benchmarks and/or the Biden Administration’s AI blueprint.

• State Activities (State-Mandated Study of AI, State Evaluation of Tools, AI Task Force, etc.): Regulations mandating specific state activity related to the study, oversight, or evaluation of state agencies’ use of AI or AI use within the state.

[Figure: introduced bills by regulated stakeholder]37

[Figure: introduced bills by regulated activity]38

As shown above, key trends from health AI bills introduced between January–June 2024 include:

1. The majority of legislative activity relates to states mandating studies, working groups, or reports on AI to inform future policymaking (56 bills). More than half of the bills tracked in this period fell into this category:

  • More than 30 bills would create AI task forces/committees (e.g., CT SB2 would establish an “AI Advisory Council” to make recommendations on the development of ethical and equitable use of AI in state government; AK SB262 would establish a “State Artificial Intelligence Task Force” to study artificial intelligence and make recommendations for the use of AI in state government) and/or require completion of a one-time or routine study or report on AI (e.g., IL HB4705 would require state agencies to submit annual reports on algorithms in use by each agency; NJ S3357 would establish the New Jersey Artificial Intelligence Advisory Council to study the opportunities and risks for state agencies in leveraging AI and to submit a report with the Council’s findings to the governor).
  • More than ten bills would require states to conduct inventories of AI systems used in state government (e.g., NC H1036 would require state agencies to submit inventories of high-risk39 automated decision systems to a newly established AI Task Force; CA SB896 would require state agencies to conduct an inventory of all high-risk uses of generative AI specifically), require states to complete impact assessments of AI systems used by state actors (e.g., OK HB3828 would prohibit state agencies from deploying AI systems without first performing an impact assessment), or affect public procurement of AI systems (e.g., NM HB184 would require government AI procurement contracts to include a vendor transparency requirement; NJ A4399 would require the Commission on Science, Innovation and Technology to study the impact of state agencies procuring and operating AI technology).
  • Eight bills would create new AI leadership positions to guide policy or align AI procedures across state government (e.g., NJ HB1438 would require the appointment of an AI Officer to develop procedures regulating the use of automated systems by state agencies making “critical decisions”40 [including those implicating health care] and to organize an inventory of automated systems used by the state, as well as the appointment of an AI Implementation Officer who would approve or deny state agency use of automated systems based on established state procedures; NY A10231 would establish an office of artificial intelligence and the position of Chief Artificial Intelligence Officer to develop statewide AI policies and governance).

The majority of these bills focused on the general use of AI rather than AI in health care specifically, although findings from these studies/reports may shape the future use or regulation of AI in health care. Several bills require participation from health care stakeholders (e.g., MD HB1174 would require the Secretary of Health (or designee) and a representative from the Office of Minority Health and Health Disparities to serve on the “Technology Advisory Commission”; HI HB2176 would require “a representative of the health care industry” to serve on the AI working group; NY AB8195 would include the Commissioner of Health as a member of the Advisory Council for AI; see also RI HB7158, WV HB5690, FL SB1680, among others). There were also a handful of bills more specific to AI in health care (e.g., FL SB7018 creates the “Health Care Innovation Council” to regularly convene subject matter experts to work toward improved quality and delivery of health care, including convening AI experts as necessary; NJ A4594 would require the Department of Health to study the use of technology, including artificial intelligence, in long-term care settings).

2. States are introducing bills focused on transparency between those who develop AI tools and those who deploy them, between those who deploy them and end users, and/or between those who develop or deploy them and the state. Both Utah and Colorado passed bills with relevant transparency requirements; see section #5 below for more detail.

  • Transparency between developers and deployers (15 bills). Bills were included in this category if they specified communication requirements between those who build AI tools (“developers”) and those who deploy them (“deployers”). A large majority of bills in this category also included transparency requirements between deployers and end users, as well as between developers/deployers and the state.

    Specific transparency requirements vary but generally focus on ensuring that developers provide background information on a tool’s training data, best use cases, and potential limitations to the entities purchasing or deploying the tool. For example, Virginia and Vermont introduced similar bills (VA HB747, VT HB710) that each would require developers to provide deployers—prior to the selling, leasing, etc. of AI tools—documentation that describes the AI’s intended uses, training data types, data collection practices, and steps the developer took to mitigate risks of discrimination, among other requirements. Other states proposed similar bills (e.g., IL HB5322, OK HB3835, RI HB7521, CT SB2, and CA AB2930).

    California introduced two more distinctive bills: CA AB3211 would require generative AI developers to add difficult-to-remove watermarks, containing the developer’s name and information about the AI system, among other identification markers, to content produced by generative AI systems. CA AB2013 would require developers to publicly post a high-level summary of the datasets used in the development of an AI system.

    These bills would apply to health care stakeholders who are developers or deployers.

  • Transparency between deployers and end users (29 bills). Bills were included in this category if they specified disclosure or transparency requirements between deployers and those who are impacted by AI tools (i.e., end users). Illinois HB5116 would require deployers that use AI tools to make “consequential decisions”41 (which include decisions relevant to health care or health insurance) to notify individuals at or before the use of the AI tool that AI is being used to make, or is a factor in making, the consequential decision (similar to VT HB710, VA HB747, CA AB2930). Illinois has another proposed bill, IL HB5649, that would make it unlawful for a licensed mental health professional to provide mental health services to a patient through the use of AI without first disclosing that an AI tool is being used and obtaining the patient’s informed consent.

    Several bills were not specific to the provision of health care or health insurance but would apply to health care. For example, New York introduced language specific to generative AI: NY SB9450 would require the “operator of a generative artificial intelligence system [to] conspicuously display a warning on the system’s user interface [to] consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate and/or inappropriate.” Another New York bill (NY SB9381) would require chatbots to provide clear and explicit notice to users that they are interacting with an AI chatbot, would establish deployer responsibility for misleading, incorrect, or harmful chatbot responses that result in financial loss or user harm, and makes clear that the proprietor of a chatbot “may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system.” California AB3211 would require end users’ “affirmative consent” prior to their interacting with a conversational AI system. If passed, these bills would require providers, health administrators, payers, and others that use chatbots to communicate with patients—e.g., to schedule an appointment or answer questions on coverage or eligibility—to include a disclaimer that the information provided originated from an AI tool.

  • Transparency between developer or deployer and state (23 bills). These bills require developers/deployers of AI tools to submit specific information or impact assessments to the state and/or to register AI tools with the state.

    Two notable bills originated in Oklahoma and New York. Oklahoma HB3577 would require payers to submit to the state the AI algorithms and training data used for utilization review. New York SB8206 would require “every operator of a generative” AI system to (1) obtain an affirmation from users, prior to the tool’s use, that the user agrees to certain terms and conditions (expressly proposed in the bill), including, without limitation, that the user will not use the AI tool to promote illegal activity, and (2) submit each “oath” (the term used in the bill) to the attorney general within 30 days of the user making such oath.

    States also proposed a variety of actions to give them insight into AI development and implementation. Louisiana (LA SB118) introduced a bill that would require "any person who makes publicly available within the state a foundation model or the use of a foundation model” to register with the secretary of state; this is similar to NY SB8214, which would require AI deployers to register biennially with the state. California SB1047 would require developers of large and complex AI models to determine whether their models have a “hazardous capability” and submit a certification to the state with the basis for their conclusion. Hawaii’s HB1607 would require certain deployers to conduct annual audits to determine whether their tools discriminate in any prohibited manner.

    Although only one bill in this category passed (UT SB149; see below), we anticipate that states will continue to introduce bills with similar approaches and goals. Notably, these types of bills have the potential to impact a range of health stakeholders: payers and providers may need to submit specific information to states—operational lifts they will need to take into account when evaluating the potential benefits and risks of implementing AI tools in their systems. In addition, state health departments will need to determine how to absorb required audits and the review of submitted data—a significant lift for state health departments, which are chronically under-resourced.

3. Eleven states introduced legislation (22 bills) that included requirements to prohibit or address discrimination by AI tools. Most bills in this category would prohibit the use of AI tools that result in discrimination, require deployers/developers to develop processes to avoid discrimination or bias, and/or mandate that deployers/developers summarize how they are managing the risk of discrimination (e.g., OK HB3835, RI HB7521, VA HB747, VT HB710, WA HB1951, IL HB5116, CA AB3211, CA AB2930). A few states introduced language that would prohibit states from using discriminatory AI tools and/or require states to ensure tools are not discriminatory (e.g., NH HB1688, NY AB9149, OK HB3828). Oklahoma introduced language (OK HB3577) that would require payers to attest that training datasets minimized the risk of bias.

4. Only a small number of states introduced legislation on specific health care use cases, including provisions that impact insurance coverage determinations and access to services or the use of AI in clinical decision-making. Bills introduced in the first quarter of the year tended to specify that such determinations could not be based solely on an AI tool’s algorithm. For example, OK SB1975 states that “government, business, or any agent representing such shall not use AI and biotechnology applications to: […] determine who shall or shall not receive medical care or the level of such care; determine who shall or shall not receive insurance coverage or the amount of coverage”. CA SB1120 would require that a “health care service plan shall ensure that a licensed physician supervises the use of artificial intelligence decisionmaking tools when those tools are used to inform decisions to approve, modify, or deny requests by providers for authorization prior to, or concurrent with, the provision of health care services to enrollees”. Other bills seemingly allow AI tools to make positive coverage and eligibility determinations but require a physician to review any decision that would negatively impact coverage or access to services (e.g., OK HB3577, NY AB9149). No identified bills related to the determination of insurance eligibility or medical necessity/prior authorization were introduced in the second quarter of 2024.

Notably, a few bills implicate the use of AI in clinical decision-making. As Manatt Health has previously summarized, Georgia’s HB887 proposes to require that AI-generated health care decisions be reviewed by an individual with “authority to override” the tool’s decision; it would also require the Medical Board to establish related policies, including, but not limited to, policies for disciplining physicians.

Illinois’ SB2795, which echoes a few bills introduced throughout 2023, states that health care facilities may not substitute recommendations, decisions, or outputs made by AI for a nurse’s judgment, and that nurses may not be penalized for overriding an AI’s recommendations if, in the nurse’s judgment, doing so is in the patient’s best interest. In April, Louisiana introduced language (HB916) that would require a health care professional to review any health care decision “made by or with the use of artificial intelligence” and would prohibit health care entities from making “any decision regarding the care of patients based solely on the results derived from the use or application of artificial intelligence”.

Passed Bills

5. Of the approximately 100 relevant bills identified, 13 passed. The majority of passed bills establish task forces or mandate the study of AI, although a few set precedent for language regarding transparency and consumer protections. Many health care stakeholders logged concerns regarding the breadth of the Colorado law, the feasibility of compliance, and the risk that it may stifle innovation in the state. Colorado Governor Polis acknowledged that he shared some of those concerns when he signed the law and urged the legislature to reconsider some of the law’s provisions. He also noted his desire for a federal law that may preempt state law in this area. The Utah law, which is much more narrowly focused, appears to have received far less stakeholder attention and criticism, although there are open questions as to how beneficial the disclosure is to patients and why such disclosure is necessary given that professionals rely on many types of technology, such as complex imaging machines, to diagnose and treat patients. See more detail on each bill below:

Utah passed a law (SB 149) focused on disclosures between the deployer and end user. Utah’s AI Policy Act places generative AI under the state’s consumer protection authority, requiring that generative AI comply with basic marketing and advertising regulations, as overseen by the Division of Consumer Protection of the Utah Department of Commerce.

The law requires “regulated occupations” (a category encompassing over 30 different health care professions in Utah, ranging from physicians, surgeons, dentists, nurses, and pharmacists to midwives, dieticians, radiology techs, physical therapists, genetic counselors, and health facility managers) to prominently disclose that they are using computer-driven responses before they begin using generative AI for any oral or electronic messaging with an end user. This likely means disclosures about generative AI cannot reside solely in the regulated entity’s terms of use or privacy notice. For more information on this Act, please see Manatt Health’s full summary here.

In May, Colorado Governor Jared Polis, with noted reservations, signed into law SB205, a consumer protection law that imposes significant requirements on developers and deployers of “high-risk” AI systems, requires consumer transparency, and arms the Attorney General with oversight authority. Developers and deployers are defined broadly and would include health care stakeholders, such as hospitals, insurers, and digital health companies, that develop or deploy high-risk AI systems, if they are not otherwise exempt. Developers are required to mitigate algorithmic discrimination (by using “reasonable care” to protect consumers) and to ensure transparency with deployers (by making a variety of specific information available to them), the public (by posting a website statement summarizing the types of high-risk systems the developer has developed), and the Attorney General (by sharing foreseeable risks in a format yet to be determined). The bill also requires deployers to mitigate algorithmic discrimination, create and implement a risk management policy and program, complete impact assessments, and ensure consumer transparency. There are several exemptions; most relevant to health care, developers and deployers are exempt if they develop or deploy a high-risk system that has been approved by a federal agency (e.g., FDA) or if they are a HIPAA-covered entity providing health care recommendations. The law’s provisions require developers/deployers to take certain actions by February 1, 2026. However, given Governor Polis’ stated reservations, SB205’s provisions in their current form may not be what is ultimately implemented. For a more detailed breakdown of this bill, please see the Manatt on Health analysis here.

Eleven other bills passed:

  • Florida SB7018: Establishes the "Health Care Innovation Council" to regularly convene subject matter experts to improve the quality and delivery of health care, including by leveraging artificial intelligence. Council representatives include members from across the health care industry and ecosystem, and Council activities include: developing and updating a set of best practice recommendations to lead and innovate in health care; identifying focus areas to advance the delivery of health care, including through the use of innovative technologies; and recommending changes, including changes to law, to innovate and strengthen health care quality, among other duties.
  • Washington SB5838: Establishes an AI task force to assess current uses of AI and make recommendations to the legislature on possible guidelines and legislation. Health care and accessibility is one of several topics included in the task force’s scope.
  • West Virginia HB5690: Establishes the "West Virginia Task Force on Artificial Intelligence" to develop best practices for public sector use of AI, recommend legislative protections for individual rights as they relate to AI, and take an inventory of current or proposed use of AI by state agencies, among other duties. Task force membership must include the Secretary of Health or their designee and a member representing either the WV University Health System or Marshall Health Network.
  • Colorado HB1468: Establishes the “Artificial Intelligence Impact Task Force” to "conside[r] issues and propos[e] recommendations regarding protections for consumers and workers from artificial intelligence systems and automated decision systems." Membership includes over 25 individuals, one of whom is “a technology expert from an organization that represents healthcare, bioscience, or medical practitioners, to be appointed by the governor”.
  • Florida SB1680: Establishes the "Government Technology Modernization Council" to "study and monitor the development and deployment of new technologies and provide reports on recommendations for procurement and regulation of such systems". Membership includes the Secretary of Health Care Administration or their designee.
  • Delaware HB333: Establishes the “Delaware AI Commission” to make recommendations on legislative, executive, and judicial actions regarding AI; develop and recommend statewide processes, principles, and guidelines regarding use of AI; encourage agencies to leverage AI to improve service delivery; and conduct an inventory of generative AI use in Delaware state government. Membership includes 19 individuals, one of whom is the Secretary of the Department of Health and Social Services or their designee.
  • Indiana SB150, Oregon HB4153, Tennessee HB2325, Maryland SB818, and Virginia SB487 each establish task forces or councils to study AI that may implicate the use of AI in health care in the future but do not expressly reference health care or health care stakeholders. We will be watching to see what the outputs of such task forces and councils are and how they may shape AI policy going forward.

Federal Health AI Regulatory Landscape

ONC
• Impacted Stakeholder: Certified health information technology (HIT) products42
• Implications: Certified HIT vendors must provide users (hospitals and physicians) with information regarding AI clinical decision tools43; they must also establish an intervention risk management program.
• Relevant Policies: HTI-1 Rule (December 2023); Manatt Health summary of the rule here. HTI-2 Rule (proposed July 2024), which proposes standards and requirements related to decision-support tools.

OCR
• Impacted Stakeholder: Many providers and health plans that are “covered entities”
• Implications: Prohibits covered entities, including providers, clinics, pharmacies, and health plans, from using AI to discriminate (e.g., racial bias in the use of photo-based AI clinical diagnosis tools).
• Relevant Policies: Final 1557 Rule (April 2024); Manatt Health summary of the proposed rule here.

FDA
• Impacted Stakeholder: Developers of FDA-regulated products (e.g., software, hardware, drugs, and biologics)
• Implications: Issued an Action Plan describing the steps FDA will take to oversee AI/ML in software as a medical device (SaMD); provides an overview of current and future uses for AI/ML in drug and biological development.
• Relevant Policies: Non-binding guidance on CDS software (September 2022); review/approval of AI/ML devices (ongoing). Manatt Health summary of FDA AI activity here.

CMS
• Impacted Stakeholder: Medicare Advantage (MA) plans
• Implications: Prohibits MA plans from relying solely on AI outputs to make coverage determinations or terminate a service.
• Relevant Policies: Regulatory guidance (April 2023; February 2024); Manatt Health summary of the guidance here.

 

For questions on the above, please reach out to RSeigel@manatt.com, JAugenstein@manatt.com, or AFox@manatt.com. A full list of the tracked bills and their relevant classifications is available to Manatt on Health subscribers; for more information on how to subscribe to Manatt on Health, please reach out to BJefferds@manatt.com.


1 The majority of state sessions have ended as of July 2024, and thus it is unlikely that there will be significant amendments to the themes presented below. Relevant updates between now and the end of the year will be published individually on Manatt on Health.

2 For a summary of key takeaways from the Executive Order, please see here.

3 Note: Bills were identified based on the language that was first passed. Thus, bills in this section are described in the past tense, as bill language may have since been amended or repealed.

4 KY HB 191 passed in 2018.

5 RI H 6654 passed in 2022.

6 MA HB 5050 passed in 2022.

7 OK SB 1467 passed in 2022.

8 OR HB 2300 passed in 2017.

9 WA SB 6387 passed in 2002.

10 AR SB 717 passed in 2015.

11 MD HB 25 passed in 2019.

12 CA AB 2156 passed in 2006.

13 OK HB 2266 passed in 2012.

14 CA SB 208 passed in 2010.

15 CA AB 1468 passed in 2012.

16 UT HB 141 passed in 2013.

17 IL SB 2306 passed in 2016.

18 IL HB 2994 passed in 2013.

19 OR SB 1577 passed in 2014.

20 WA SHB 2571 passed in 2012.

21 MN HF 25 passed in 2011.

22 AR SB 101 passed in 2015.

23 FL H 5303 was passed in 2010; language was updated in 2016 to an “allocation methodology” that includes an algorithm and any additional authorized funding.

24 WA SB 5092 passed in 2021.

25 AL SJR 71 passed in 2019.

26 MA HB 5250 passed in 2021.

27 VT H16 passed in 2019.

28 VT H410 passed in 2022.

29 MI HB 5816 passed in 2008.

30 MA HB 5050 passed in 2022.

31 OK SB 1467 passed in 2022.

32 CO SB 169 passed in 2021.

33 OR HB 3284 passed in 2021; note that provisions were repealed 270 days after the end of the COVID-19 Public Health Emergency.

34 CA SB 1001 passed in 2018.

35 NJ A4563 passed in 2020.

36 Note: A developer or deployer could include a state agency.

37 Note: Introduced bills may regulate more than one stakeholder, so the sum of these categories is greater than the total number of identified bills introduced. Additionally, “deployer” and “developer” are more general categories that could also include states, payers, providers, individuals, or other entities.

38 Note: Introduced bills may regulate more than one activity. The sum of these categories is greater than the total number of identified bills introduced.

39 Bill defines “high-risk” automated decision systems as those that are “used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, […] health care.”

40 “‘Critical decision’ means any decision or judgment that has any legal, material, or similarly significant effect on an individual's life concerning access to, or the cost, terms, or availability of: […] family planning services, including, but not limited to, adoption services or reproductive services; […] health care, including, but not limited to, mental health care, dental care, or vision care; […] government benefits; or […] public services”

41 “Consequential decision” is defined as a “decision or judgement that has a legal, material, or similarly significant effect on an individual’s life relating to the impact of, access to, or the cost, terms, or availability of, any of the following: […] (5) family planning, including… reproductive services, … (6) healthcare or health insurance, including mental health care, dental, or vision”

42 Health IT Certification Program, under which developers of health information technology (HIT) can seek to have their software certified as meeting certain criteria.

43 HTI-1 final rule defines predictive decision support interventions (Predictive DSI) as “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produces an output that results in prediction, classification, recommendation, evaluation, or analysis.”


ATTORNEY ADVERTISING pursuant to New York DR 2-101(f)

© 2024 Manatt, Phelps & Phillips, LLP. All rights reserved.