Last week, the Department of Health and Human Services (HHS) finalized antidiscrimination regulations implementing Section 1557 of the Affordable Care Act (ACA Section 1557). The final rule expands upon the proposed rule’s prohibition of discrimination through the use of clinical algorithms. In the final rule, these algorithms are referred to as “patient care decision support tools” to better capture the range of tools subject to the regulations. HHS also clarified that the rule covers artificial intelligence (AI) used to support clinical decision-making “given covered entities’ widespread use of automated decision systems and AI, and the scale by which AI can influence covered entities’ decision-making.” In finalizing the regulations, HHS noted the Biden Administration’s intense focus on AI, including Executive Order 14110, which directed HHS to “ensure the safe, responsible deployment and use of AI in the healthcare, public-health, and human-services sectors.”1 The final rule will be published in the Federal Register on May 6.
In light of the modifications to the proposed rule and the enhanced compliance requirements, HHS delayed the applicability date for the regulations regarding patient care decision support tools: covered entities must comply with those provisions no later than 300 days after the final rule’s effective date, giving them approximately one year to do so.
Executive Summary
- Regulations finalized by HHS last week address the use of patient care decision support tools, including clinical algorithms and AI, to ensure that the tools are used responsibly to avoid discrimination.
- The final rule broadly defines “patient care decision support tools” to encompass all tools aiding clinical decision-making, from simple flowcharts to advanced AI technologies. This includes predictive decision support interventions that derive insights from data to inform clinical outcomes.
- Covered entities, including health programs receiving federal financial assistance, must:
  - Identify risks of discrimination within decision support tools; and
  - Mitigate risks by making reasonable efforts to prevent discriminatory outcomes.
- Although the final rule does not specify how covered entities must identify and mitigate risks, HHS encourages covered entities to mitigate discrimination by establishing written policies and procedures governing how patient care decision support tools will be used in decision-making, including adopting governance measures, monitoring any potential impacts and developing ways to address complaints, and training staff on the proper use of such systems in decision-making.
- The HHS Office for Civil Rights (OCR) will enforce compliance with the final rule on a case-by-case basis considering each entity’s resources and the complexity of the tools used. Covered entities will have approximately one year to comply.
- The final rule aims to integrate ethical use of AI and other technologies in healthcare, emphasizing non-discrimination and responsibility in patient care decisions.
Background Regarding the Final Rule
ACA Section 1557 prohibits covered entities from discriminating on the basis of race, color, national origin, sex, age, and disability in health programs and activities. HHS contends that covered entities include all health programs and activities that receive Federal financial assistance from HHS. Examples of covered entities include hospitals, health clinics, physicians’ practices, community health centers, nursing homes, rehabilitation centers, health insurance issuers, and State Medicaid agencies. Federal financial assistance includes grants, property, Medicaid, Medicare Parts A, C, and D payments, and tax credits and cost-sharing subsidies under Title I of the ACA.
In 2022, HHS published proposed regulations implementing ACA Section 1557. Prior versions of the rule and the proposed regulations have been controversial for reasons unrelated to the use of algorithms and AI by covered entities because, among other things, the regulations address how the antidiscrimination rules apply to faith-based covered entities. HHS included clinical algorithms in the proposed regulations, but that provision was relatively undeveloped. HHS also sought comment from stakeholders on a wide range of issues related to the potential for discrimination through the use of algorithms and AI in health care. In light of those comments and the development of the technologies, HHS has modified the proposed rules and provided a detailed discussion of enforcement.
Patient Care Decision Support Tools
The finalized regulations refer to the tools regulated by ACA Section 1557 as “patient care decision support tools.” This terminology was chosen to encompass the broad range of tools referenced in the proposed rules, which included “tools used to guide health care decision-making that could range in form from flowcharts and clinical guidelines to complex computer algorithms, decision support interventions, and models.” In the final rule, these tools are defined as “any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities.”
In the preamble, HHS describes the types of patient care decision support tools that must comply with the rule:
- Tools used at the individual patient level to assess patient risks, such as the risk of a severe cardiac event;
- Tools used at a group or population level with respect to health care administration decisions such as a hospital system treatment protocol that varies by geographic area based upon risk adjustment modeling; and
- Tools used for prior authorization and medical necessity analysis.
In addition to clarifying that patient care decision support tools include AI, HHS noted that the definition includes “predictive decision support interventions” as defined in the Office of the National Coordinator for Health Information Technology’s final rule for “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing.” The term “predictive decision support interventions” means “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation, or analysis,” which includes tools that use generative AI.
Importantly, the final rule also encompasses “non-automated and evidence-based tools that rely on rules, assumptions, constraints, or thresholds,” such as Crisis Standards of Care, a flowchart for triage guidance that has been the subject of HHS enforcement actions. According to HHS, patient care decision support tools also include flowcharts, formulas, equations, calculators, algorithms, utilization management applications, software as medical devices, software in medical devices, screening, risk assessment, and eligibility tools, and diagnostic and treatment guidance tools.
Covered Entities’ Obligations Under the Final Rule
In response to comments, including comments regarding the difficulties covered entities face in determining whether the tools include discriminatory features, HHS has added Section 92.210(b) and (c) to the final rule to clarify covered entities’ affirmative obligations under Section 92.210:
- General Prohibition. Section 92.210(a) restates the “general prohibition” against discrimination in the use of patient care decision support tools.
- Identification of Risk. “Section 92.210(b) requires a covered entity to make reasonable efforts to identify patient care decision support tools used in its health programs and activities that employ input variables or factors that measure race, color, national origin, sex, age, or disability.”
- Mitigation of Risk. “Section 92.210(c) requires that for each patient care decision support tool identified in paragraph (b), a covered entity must make reasonable efforts to mitigate the risk of discrimination resulting from the tool’s use in its health programs or activities.”
In the preamble, HHS states that “covered entities must exercise due diligence when acquiring and using [patient care decision support] tools to ensure compliance with § 92.210.” According to HHS, a covered entity therefore has affirmative obligations when it has “reason to believe” that variables such as race, color, national origin, sex, age or disability are “being used,” or when it knows or should know that the tool “could result in discrimination.” If a covered entity has “reason to believe” that any such variable is or could be used in a discriminatory way, it should investigate further, including by reference to publicly available sources or by requesting information from the developer of the tool.
HHS also indicates that covered entities’ due diligence obligations include identifying possible discrimination from sources such as the examples used by HHS in the proposed and final rules; other information published by HHS; published peer-reviewed medical journals; research studies and media stories that report on reliable studies; health care professional and hospital associations; health insurance-related associations; and other government agencies.
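By way of illustration only, the sketch below shows one way a covered entity’s compliance or informatics staff might begin the Section 92.210(b) identification step: maintaining an internal inventory of patient care decision support tools and flagging, for further due diligence, any tool whose documented input variables include a protected characteristic. The rule does not prescribe any particular method, and the data structures, names, and inventory entries below are hypothetical assumptions, not requirements drawn from the final rule.

```python
# Hypothetical illustration only: a minimal inventory check a covered entity
# might run over its own records of patient care decision support tools.
# Nothing here is mandated by Section 1557 or Section 92.210.

from dataclasses import dataclass, field

# Characteristics protected under ACA Section 1557
PROTECTED_CHARACTERISTICS = {"race", "color", "national origin", "sex", "age", "disability"}


@dataclass
class DecisionSupportTool:
    """One entry in a covered entity's internal inventory (assumed structure)."""
    name: str
    developer: str
    input_variables: set[str] = field(default_factory=set)


def flag_tools_for_review(tools: list[DecisionSupportTool]) -> list[tuple[str, set[str]]]:
    """Return tools whose documented input variables include a protected characteristic.

    A flagged tool is only a candidate for the follow-up HHS describes
    (e.g., requesting information from the developer or reviewing published
    literature); flagging alone does not establish discrimination.
    """
    flagged = []
    for tool in tools:
        overlap = {v.lower() for v in tool.input_variables} & PROTECTED_CHARACTERISTICS
        if overlap:
            flagged.append((tool.name, overlap))
    return flagged


if __name__ == "__main__":
    # Hypothetical inventory entries for illustration.
    inventory = [
        DecisionSupportTool("cardiac risk calculator", "Vendor A", {"age", "blood pressure", "cholesterol"}),
        DecisionSupportTool("triage flowchart", "internal", {"chief complaint", "vital signs"}),
    ]
    for name, variables in flag_tools_for_review(inventory):
        print(f"Review needed: {name} uses protected characteristic(s): {sorted(variables)}")
```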
Enforcement by the Office for Civil Rights
Section 92.210 will be enforced by OCR on a case-by-case basis. In assessing whether a covered entity has complied with the requirement under Section 92.210(b) to make reasonable efforts to identify patient care decision support tools that use protected characteristics as input variables, OCR will consider, among other factors:
- The covered entity’s size and resources;
- Whether the covered entity used the tool in the manner or under the conditions intended by the developer and approved by regulators, if applicable, or whether the covered entity has adapted or customized the tool;
- Whether the covered entity received product information from the developer of the tool regarding the potential for discrimination or identified that the tool’s input variables include race, color, national origin, sex, age or disability; and
- Whether the covered entity has a methodology or process in place for evaluating the patient care decision support tools it adopts or uses.
For example, according to HHS, a large hospital with an IT department and a health equity officer would be expected to make greater efforts to identify tools than a smaller provider without such resources. In addition, HHS suggests processes for evaluating patient care decision support tools, such as seeking information from the developer, reviewing relevant medical journals and literature, obtaining information from membership in relevant medical associations or analyzing comments or complaints received about patient care decision support tools.
In regard to mitigation, HHS acknowledges that it is not always possible to completely eliminate the risk of discriminatory bias because patient care decision support tools serve important health care functions. Accordingly, Section 92.210(c) requires covered entities to “make reasonable efforts” to mitigate the risk of discrimination. HHS also notes that a covered entity could comply with the mitigation requirement by either discontinuing use of the tool or modifying the tool. HHS indicated that specific risks may “generate greater scrutiny” and give rise to different types of mitigation efforts. For example, input variables that include race would be subject to the highest level of scrutiny, whereas using age as an input variable, while also subject to scrutiny, could be justified by showing that “age is clinically indicated as a measure in the particular tool” or “aligns with evidence-based clinical best practices that do not result in discrimination.”
In what is becoming a common principle of AI regulation, HHS rejected proposals that would have allowed covered entities to shift responsibility to “algorithm creators” on the theory that clinicians are not well positioned to detect that an algorithm can result in discrimination, as well as proposals to impose strict liability on developers rather than users. ACA Section 1557 and Section 92.210 of the rule are focused on covered entities’ use of patient care decision support tools, and those entities are required to mitigate the risk of discrimination arising from that use.
Although HHS stopped short of requiring specific mitigation measures, the preamble reinforces HHS’ suggestion that covered entities “mitigate discrimination by establishing written policies and procedures governing how clinical algorithms will be used in decision-making, including adopting governance measures; monitoring any potential impacts and developing ways to address complaints; and training staff on the proper use of such systems in decision-making.” HHS added that it encourages “covered entities to take these and other additional mitigating efforts to comply with § 92.210.”
1 HHS also referenced the Blueprint for an AI Bill of Rights and Executive Order 14091, Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government.