Manatt Health: Health AI Policy Tracker
Introduction:
2026 has been busy on the AI front, with almost all states actively debating AI legislation and the federal government increasingly indicating it has a role to play. Federal and state policymakers are approaching AI regulation from different perspectives: states are focused on establishing guardrails for use and promoting transparency, while the federal government is actively promoting the use of AI in health care with a deregulatory posture. States and the federal government appear aligned regarding the risks of minors engaging with AI chatbots, as both are poised to regulate such activity.
At the end of 2025, the White House took direct aim at states legislating AI development and deployment through Executive Order (EO) 14365, Ensuring a National Policy Framework for Artificial Intelligence, which directed the Department of Justice (DOJ) to establish an AI Litigation Task Force to challenge “onerous” state AI laws and instructed the Secretary of Commerce to publish an evaluation of such laws by March 11, 2026 (as of the publication of this newsletter, no such list has materialized).
Initial speculation that EO 14365, issued in December, could slow down state activity proved unfounded in the first months of 2026, as 43 states have introduced over 240 bills (almost as many as were introduced in all of 2025). Emerging themes from state legislatures in 2026 are largely consistent with the focus of 2025 activity: use of AI chatbots (particularly in mental health contexts or by minors), clinical oversight of AI tools and patient disclosure and consent requirements, targeted transparency mandates, AI regulatory mitigation and AI sandbox programs, and AI use by payors in determinations of medical necessity and prior authorization. In 2026, there has also been a focus on payor use of AI to downcode claims. Almost every state has gotten involved over the past 2.5 years—only Wyoming and North Dakota have not introduced AI-focused legislation impacting health care stakeholders.
After a quiet two months from the federal government at the start of the year, March saw a wave of action from the White House and Congress:
- On March 20, 2026, the White House released its AI policy framework. The framework focuses on federal control over the regulation (if at all) of AI development and protecting First Amendment rights, while preserving meaningful room for states to regulate AI use in other areas where states have reserved policy powers under the 10th Amendment. The most immediate pressure on Congress as it relates to health care stakeholders appears to center on protecting minors interacting with AI systems, including safeguards addressing privacy, self-harm and exploitation risks. The framework also signals support for sector-specific regulation by federal agencies—potentially spurring increased activity by the Food and Drug Administration (FDA), Centers for Medicare & Medicaid Services (CMS), Office of the National Coordinator for Health Information Technology (ONC) and Federal Trade Commission (FTC)—and new AI sandbox programs. Nothing in the framework suggests that state momentum will slow in areas considered traditional “state police powers,” like protecting public safety and health.
- Congress has shown greater movement toward legislating AI. Republican members of the House Energy and Commerce Committee advanced a children’s online safety package, including the KIDS Act and the SAFE BOTs Act, which set guardrails for AI chatbots, and Senator Marsha Blackburn (R-TN) released a discussion draft of the “TRUMP AMERICA AI Act” seeking to codify EO 14365.
- Recent reporting indicates White House advocacy is knocking state AI-focused bills off course through pressure on state lawmakers. In Utah, the state lawmaker sponsoring one such bill received a communication from the White House expressing opposition to the bill and describing it as “going against the Administration’s AI agenda,” without offering a legal rationale; the bill failed on March 6. Florida’s House Speaker, Daniel Perez (R), reportedly stated that he will not bring the Governor Ron DeSantis-backed AI Bill of Rights—which passed the state’s Senate—to the floor and that he prefers a federal solution to state legislation, following reported outreach from the White House. Over 50 Republican state lawmakers across 24 states sent a letter to President Trump on March 3, 2026, urging the Administration to “discontinue efforts to block state AI laws.”
The sections below provide a deep dive into emerging themes and notable actions from states and federal government and what we’re watching as the year progresses.
Emerging AI Health Policy Themes in 2026:
Many bills introduced in 2026 are thematically aligned, often mirroring the text of laws enacted in 2025. Specifically, we have seen a number of bills mirroring three laws enacted last year: (1) the provisions on mental health clinicians’ use of AI and the chatbot provisions prohibiting chatbots from representing themselves as licensed providers set forth in Illinois (effective 8/1/25); (2) the chatbot provisions requiring disclosures to end users, requiring detection of mental health crises and suicidal ideation, and establishing guardrails for users under 18 set forth in California (effective 1/1/26); and (3) the clinician disclosure requirements included in Texas (effective 1/1/2026).
1. Transparency:
While 2025 saw the passage of AI transparency bills targeting large developers and frontier AI models (including one with an implementing chapter finalized in early 2026, and another in California), in 2026 more states have focused on introducing bills mandating use-case-specific transparency provisions. These proposed laws generally center on ensuring transparency between AI developers or deployers and the end user or patient, with some bills additionally requiring consent from the patient or their legal representative.
Colorado continues to be the state to watch with regard to broad transparency bills; the Colorado AI Policy Work Group (convened by Governor Jared Polis in October 2025 after two attempts to revise Colorado SB 205 failed during Colorado’s 2025 regular session) unanimously approved a draft bill to revise SB 205 before it goes into effect on June 30, 2026. Most notably for health care stakeholders, this proposed bill broadens the narrow HIPAA carve-out for non-high-risk AI recommendations requiring provider action to exempt all HIPAA-covered entities and business associates from most obligations, requiring them only to provide patients with general notice that they use advanced technologies. This notice does not need to be a standalone notice. However, if health care entities use AI to determine a patient’s eligibility for financial assistance, additional disclosure requirements apply. Also exempt are pharmaceutical manufacturers’ research and development activities subject to FDA oversight. For health care stakeholders not subject to the exemption, such as direct-to-consumer products, the definitions of consequential decisions and automated decision-making technology were streamlined, though the draft also adds some significant new provisions. Additional proposed modifications include: (1) eliminating anti-discrimination duties; (2) removing the requirements for comprehensive risk management policies and programs and an annual impact assessment in favor of “reasonably necessary” recordkeeping; (3) removing the requirements for public website disclosures and pre-decision consumer notices in favor of simpler point-of-interaction notices, which may be satisfied by a public link or posting; and (4) modifying the consumer rights to appeal adverse decisions and the liability/fault allocation provisions. It remains to be seen whether this draft bill will advance in the weeks to come.
2. AI Chatbots:
States and the federal government are bringing a particular focus to AI chatbots in response to ongoing public attention on the negative impacts of these tools—particularly harmful or inaccurate responses generated by AI chatbots, failure to detect mental health crises, and impacts on minors.
In the first quarter of 2026, 36 states introduced over 70 bills regulating AI chatbots, the majority of which include requirements to disclose to the chatbot user that they are interacting with an AI chatbot, not a human. Requirements for chatbot end-user transparency were common in 2025; in 2026, states have brought greater specificity to the responsibilities of chatbots, leveraging language from California (effective 1/1/2026) requiring detection of mental health crises and suicidal ideation and establishing guardrails for users under 18. Specifically, numerous bills (e.g., in Washington, Iowa and Oregon) stipulate that chatbot operators must ensure that their AI chatbot is capable of detecting mental health crises (including self-harm and suicidal ideation) and of implementing an appropriate response, including referring users to crisis resources or suicide hotlines.
Over 37 AI chatbot bills include provisions requiring chatbot operators to ensure all users create an account and submit age verification information prior to accessing the AI chatbot (e.g., in Alabama, New York, Louisiana and Virginia). Two bills with chatbot provisions specific to minors have been signed into law in 2026 to date (in Idaho, effective 7/1/2027, and Oregon, effective 1/1/2027). For users found to be minors, numerous bills require more frequent disclosures that the user is interacting with AI (and not a human being), additional restrictions on content that can be generated or displayed by the chatbot, and tools that allow parents or guardians to monitor minors’ usage of AI chatbots and control their privacy.
Over 30 bills (e.g., in Louisiana, Kansas and Missouri) include prohibitions on chatbots representing themselves as licensed professionals, including medical professionals, similar to provisions set forth in Illinois (effective 8/1/2025) and California (effective 1/1/2026). To date, one bill with these provisions has been signed into law, in Tennessee (effective 7/1/2026).
Action on AI chatbots has not been confined to the states. The White House framework similarly advocates for tools that provide parents with the ability to protect children’s privacy and manage interactions with AI; for commercially reasonable age-assurance requirements for AI platforms likely to be accessed by minors; and for requirements that AI platforms likely to be accessed by minors implement safety features to reduce the risk of sexual exploitation or self-harm.
Following numerous agency and Congressional committee hearings and investigations into AI chatbots in 2025, Congress is also signaling that AI chatbots are a legislative priority. On March 5, 2026, Republican members of the House Energy and Commerce Committee advanced a children’s online safety package, including the KIDS Act and the SAFE BOTs Act, which set forth requirements for AI chatbots. These bills would require chatbots to disclose that they are AI chatbots at a prescribed cadence, prohibit chatbots from representing themselves as licensed professionals, include minor-specific protections and mandate that chatbots refer users to mental health crisis resources. However, bipartisan negotiations broke down over the bills’ preemption language, which would have limited states’ ability to pass stronger laws, and the omission of “duty of care” requirements. The recently released discussion draft of Senator Marsha Blackburn’s (R-TN) “TRUMP AMERICA AI Act” incorporates provisions establishing a minimum duty of care for chatbot developers, requiring disclosures to end users and establishing protections for minors (e.g., requiring reasonable age verification measures). Section 504 of the discussion draft imposes an overall prohibition on minors’ use of AI. These federal actions are largely aligned with state action.
We expect this to be an area of continued state and federal interest and consider AI chatbots to be the most likely area where we may see federal legislation in 2026, given significant interest from Congress and the Trump Administration.
3. Liability:
As health care stakeholders rapidly adopt AI, both state and federal lawmakers have turned their attention to defining who is liable for harms caused by AI tools. In the fall of 2025, Senators Josh Hawley (R-MO) and Dick Durbin (D-IL) introduced the “AI LEAD Act,” which, if enacted, would classify AI systems as products, allowing consumers to bring civil product liability claims against developers and deployers that cause harm. Notably, if enacted, the bill would “supersede State law only where State law conflicts with” its provisions, which would allow states to enact or enforce protections stronger than those of the AI LEAD Act so long as those protections are aligned with its principles. This year, select states have introduced bills defining AI tools as products governed by product liability standards (e.g., in Illinois, Maryland, Vermont, Missouri and Louisiana (specific to chatbots)).
While no states have passed AI liability bills to date, a few notable examples moving through state legislatures include:
- New York, which would establish strict liability standards for developers of large-scale AI models when those models cause harm to people who are not direct users of the model. The bill mandates that, except for causes of action for defamation, developers of covered models are strictly liable for “all injuries to a non-user of the covered model that satisfy the actual harm element of an ordinary negligence claim” if, among other provisions: (1) the injuries are proximately caused by an AI tool whose actions would be considered negligence or any intentional tort or crime if conducted by a human; and (2) that conduct could not have been anticipated by the user of the model or any intermediary who modified the model.
- California, which would prohibit deployers or developers of AI that are alleged to have caused harm from asserting as a defense that a licensed health care professional’s failure to override the AI output severs the developer’s or deployer’s liability as a superseding cause.
- Illinois, which would set legal standards for liability for developers and deployers of high-impact AI systems (including systems used as a medical device).
- Illinois, which would prohibit chatbot proprietors from disclaiming liability if the chatbot provides materially misleading, incorrect, contradictory or harmful information that results in financial loss, demonstrable harm or bodily injury to the user, unless the chatbot proprietor corrects the information and cures the harm within 30 days of notification. The bill would also prohibit chatbot proprietors from disclaiming liability if the chatbot provides information to a user that results in bodily harm, including but not limited to self-harm.
4. Payor Use of AI for Utilization Management and Prior Authorization:
States have shown a steady interest in regulating payor use of AI since we began tracking AI legislation in 2024. 2026 is no different: over 25 states have introduced over 35 bills regulating payor use of AI since the start of the year. Historically, legislation has focused on prohibiting the sole use of AI in prior authorization denials; requiring human review of algorithm-driven decisions; and mandating clear disclosure when AI systems are used in claim or coverage decisions. While states continue to introduce bills with these themes in 2026, states are turning their focus to prohibiting a different use case for AI: the sole use of AI in downcoding claims without oversight by a licensed physician. In 2026, bills with downcoding provisions were introduced across seven states: California, Connecticut, Illinois, Indiana, Maryland, Missouri and Oregon.
Of those downcoding-focused bills, one has been signed into law: Indiana (enacted 3/4/2026, effective 7/1/2026), which:
- Prohibits health insurers—with the exception of the state's Medicaid program and Medicaid managed care organizations (MCOs)—from using AI as the sole basis to downcode a claim without review of the covered individual's medical record;
- Requires insurers to disclose when AI is used in a downcoding decision or adverse prior authorization determination, and notify providers when a claim is downcoded;
- Prohibits providers from using AI to submit a health benefits claim without review by a provider or other person involved in its development; and
- Prohibits targeted or discriminatory downcoding against providers treating patients with complex or chronic conditions.
Utah passed a law (enacted 3/19/26 and effective 1/1/2027) requiring insurers to publicly disclose if AI is used to review authorization requests and to issue a disclosure notifying the Department of Insurance, providers and enrollees of the use of AI to review authorization requests.
5. Use of AI in Clinical Care:
Building on a theme we saw in 2025, state legislatures sustained interest in regulating the use of AI tools in clinical contexts, with over 40 bills introduced across 25 states thus far in 2026. Bills generally focus on requiring clinical oversight of AI tools and ensuring patients are aware that such tools are being used, including provisions requiring consent from the patient.
States are taking inspiration from laws passed in 2025, most notably in Illinois (enacted 8/1/2025) and Texas (enacted 6/22/2025). Multiple states, including Vermont, Virginia and Kentucky, have introduced bills with provisions aligned with the Illinois law. These bills allow providers to use AI for administrative or supplementary support as long as the provider maintains responsibility for the AI’s output and obtains patient consent before using AI for tasks like preparing notes from therapy sessions; these bills also prohibit providers from using AI for certain tasks, including making independent therapeutic decisions, interacting directly with clients or generating treatment plans without oversight from a licensed clinician.
Given the growing prevalence and adoption of AI transcription services by health care providers, it is notable how many bills introduced this year require mental health providers to affirmatively obtain consent before using AI tools in a clinical context; that is, notification of the patient or their legal representative alone is not sufficient, and affirmative (sometimes standalone) consent is required (e.g., in Virginia, Florida, South Carolina and Maine).
At the federal level, the Advanced Research Projects Agency for Health (ARPA-H) launched the ADVOCATE model, a 39-month, two-phase initiative to develop and deploy the first FDA-authorized agentic AI system for clinical care. ADVOCATE will fund two systems: (1) a patient-facing AI agent capable of autonomously adjusting appointments, medications, diet and exercise; and (2) a supervisory AI “overseer” to monitor deployed agents for continued safety and efficacy. Phase 2 includes large-scale scalability studies evaluating clinical outcomes, safety, cost-efficiency and reimbursement implications. Given the program’s explicit goal of FDA authorization, early and frequent FDA engagement on device authorization pathways is built into the program structure. ARPA-H anticipates selecting award teams by June 2026.
6. AI Sandboxes and Regulatory Relief Programs:
In 2026 to date, four states (Arizona, Illinois, New Hampshire and Virginia) have introduced bills establishing AI sandboxes or regulatory relief programs. State action to create AI sandboxes and regulatory relief programs began in earnest in 2024 with the passage of Utah’s law and continued with the enactment of Texas’s law in 2025.
New Hampshire and Virginia both include general provisions that allow companies to temporarily test innovative AI health care products or services and waive certain regulatory requirements. However, Arizona and Illinois introduced more targeted AI sandbox legislation:
- Arizona would establish a pilot program testing innovative AI that performs nursing-adjacent tasks or workflows. Applicants must have an agreement with an accredited nursing school for evaluation/oversight.
- Illinois would exempt AI-assisted therapy or psychotherapy services provided exclusively within a “qualified research program” from existing regulations of AI therapy services, with required protections for research participants (including informed consent and ensured access to licensed mental health professionals).
In alignment with Delaware legislation enacted in 2025, Delaware is actively drafting AI sandbox legislation in partnership with the state’s secretary of state.
In March 2026, Utah’s Office of Artificial Intelligence Policy (OAIP) announced its fourth regulatory relief agreement, with Legion Health, an “AI-native” digital health/psychiatry company. Under its agreement with OAIP, Legion Health will provide prescription refills for non-controlled, maintenance psychiatric medications (e.g., SSRIs for depression and anxiety). Similar to OAIP’s 2025 agreement with prescription renewal company Doctronic, Legion Health’s tool is being launched in a three-phase approach. Legion Health’s workflow additionally includes psychiatry-specific safety screens and hard stops/escalations if red flags are raised.
Interestingly, in light of the agreement between Doctronic and Utah’s OAIP that allows Doctronic to re-prescribe “low-risk” medications to Utah patients (see Manatt Health analysis for more details), a Missouri bill would preempt the creation of AI sandboxes or regulatory waivers related to the distribution of medication. As written, the bill prohibits certain state-level regulatory agencies (including the Board of Registration for the Healing Arts, the Dental Board, the Board of Nursing and the Board of Pharmacy) from granting regulatory mitigation, or from waiving or modifying any rules related to dispensing, prescribing (including prescription renewals), administering or otherwise distributing medications or controlled substances, for any entity developing AI for such uses.
At the federal level, the FDA’s Technology-Enabled Meaningful Patient Outcomes (TEMPO) Pilot, released late last year in connection with the Center for Medicare and Medicaid Innovation (CMMI) Advancing Chronic Care with Effective, Scalable Solutions (ACCESS) Model, creates a de facto regulatory relief program for participants. United States-based manufacturers selected for participation will be able to offer their devices, intended to improve patient outcomes, to ACCESS participants without FDA premarket authorization while the device used to deliver care is covered by the model.
Additional Notable Federal Activity
Early 2026 saw additional action at the federal agency level. 2026 will see the launch of numerous models and initiatives first announced by the federal government in 2025, including the ACCESS Model and the TEMPO Pilot, with the first ACCESS cohort beginning July 1 and TEMPO Pilot statements of interest due on a rolling basis beginning in January of this year. States are actively launching Rural Health Transformation Program initiatives, balancing rapid spending targets with thoughtful, impactful and sustainable investments. States are also making investments in rural health care transformation with a significant focus on technology, which can help fund the adoption of AI tools (see Manatt on Health analyses).
CMS released two requests for information (RFIs) in February 2026 related to AI use by CMS. The first, on the Comprehensive Regulation to Uncover Suspicious Healthcare (CRUSH) initiative, solicits stakeholder feedback on regulatory changes, including provisions related to AI-powered fraud detection tools and beneficiary reporting mechanisms, as part of a broader federal focus on fraud, waste and abuse. The second seeks feedback on AI and machine learning tools to improve Medicare beneficiary plan selection, including improvements to Medicare.gov, the Plan Finder tool and the 1-800-MEDICARE call center.
The Department of Health and Human Services (HHS) and the Office of the Deputy Secretary, in collaboration with the Assistant Secretary for Technology Policy and ONC (ASTP/ONC), published an RFI seeking public comment on steps HHS can take to accelerate the adoption and use of AI in clinical care—one of the first formal HHS solicitations focused specifically on AI in care delivery.
Specifically, HHS is seeking feedback on the key barriers to private sector AI innovation and adoption in clinical care, including regulatory, reimbursement, governance, liability, privacy and administrative challenges, and on what specific regulatory, payment or programmatic changes HHS should prioritize. HHS also asks for input on how AI should be evaluated and governed across its lifecycle, particularly for non-medical devices, including promising evaluation methods, metrics, robustness testing, interoperability needs, decision making structures within health care organizations and the role HHS could play in supporting standards, accreditation, certification and industry-led validation efforts. Finally, HHS requests evidence on where AI has succeeded or fallen short in improving quality and reducing costs, what use cases hold the greatest future promise, how patients and caregivers perceive both benefits and risks, and which AI research areas and evidence gaps HHS should prioritize to accelerate responsible adoption in clinical care. Comments were due at the end of February.
FDA published revised final guidance expanding the categories of AI-enabled clinical decision support (CDS) tools and consumer wearables that are not regulated by the FDA. Specifically, the CDS guidance replaces the prior interpretation that software providing risk scores or differential diagnoses automatically constitutes a regulated device, and asserts that such software may fall outside of the FDA’s jurisdiction if a clinician independently reviews the basis for a recommendation. FDA Commissioner Dr. Martin Makary announced the changes on January 6, 2026 and indicated FDA is developing a new risk-based AI framework emphasizing post-market monitoring over premarket approval.
Additional activity is described in the federal activity table below.
Conclusions and What to Watch:
After a busy start to the year, we’ll be watching to see how the federal government may begin to shape AI policy and carve out what it will own and where it will allow states to continue to legislate. We’ll be closely watching Congress to see if it can gather bipartisan support to pass an AI bill addressing areas of interest to both parties, as well as to state governors, such as AI chatbots. We think a bill setting forth liability standards for AI developers may face more of an uphill battle. We expect that states will continue to introduce legislation and advance bills through the end of their general sessions, with more states adopting laws regulating AI chatbots and patient consent and disclosure requirements when AI is used in clinical contexts. As regulatory relief programs are getting significant attention and may draw innovation to states where they are enacted, we anticipate several states will create such programs, and we’ll be watching to see how AI developers attempt to take advantage of them. We are also specifically watching Colorado’s attempts to revise SB 205, and whether other states will introduce legislation mirroring whatever Colorado ultimately adopts.
We’ll also be looking at early signals from the various models launched across agencies—ACCESS and TEMPO, WiSER, and ADVOCATE—as well as additional action from state sandboxes/regulatory relief programs in Utah and Texas.
Deep Dive: State Activity
For a full list of all laws prior to and including 2025, please see .
Deep Dive: Federal Activity:
| Agency | 2026 Activity to Date |
|---|---|
| White House | |
| Congress | |
| FTC | |
| HHS | |
| OCR | |
| ONC | |
| CMS | |
| FDA | |
| NIH | |
| DOJ | |
| OMB | |
| ARPA-H | |
For questions on the above, please reach out to or . A full list of tracked bills (introduced and passed) from 2024 through Q1 2026—classified by topic category and stakeholder impacted—is available to Manatt on Health subscribers. For more information on how to subscribe to Manatt on Health, please reach out to BJefferds@manatt.com.
Colorado was explicitly called out in the EO and is thus a likely target for any list of “onerous” state laws, as are laws that compel AI developers or deployers to disclose or report information, such as those in California and New York (the RAISE Act).
Wyoming and North Dakota have explored non-health-specific regulation of AI (e.g., Wyoming’s (signed by the state’s governor on 3/7/2026) focused on AI generated deepfakes; North Dakota’s (signed by the state’s governor on 4/11/2025) focused on AI-generated political deepfakes).
Hospital financial assistance eligibility decisions (e.g., Hospital Discounted Care) are carved back in with specific disclosure requirements. Employment decisions are not covered by this HIPAA-covered entity and business associate exception.
In September 2025, the FTC announced the launch of an inquiry into AI chatbots acting as companions, with particular attention to the impact of these chatbots on children and teenagers. In November 2025, the House Energy & Commerce Oversight and Investigations Subcommittee held a hearing on the risks and benefits of AI chatbots, with significant focus on mental health support and health information uses. Also in November 2025, the FDA’s Digital Health Advisory Committee discussed the line between AI companions, mental health chatbots and CDS tools.
An October 2025 report by Menlo Ventures found that in 2025, 22% of health care businesses had implemented domain-specific AI tools. That implementation level is a 10x increase over 2023 and more than twice the rate of the broader U.S. economy.
Downcoding provider claims refers to the practice of changing a submitted medical claim to a billing code that reflects a lower level of service than what the provider documented and billed, resulting in a lower reimbursement for providers.
AI Sandboxes are structured, time-limited programs that allow developers of AI systems to test or deploy those systems in a controlled environment with defined oversight and temporary relief from select state laws or regulatory requirements, for the purpose of evaluating system performance, risks, and appropriate regulatory treatment. AI regulatory mitigation programs are similarly structured but allow testing to occur in the real world, not a controlled environment.
A number of states have also enacted broader innovation or fintech sandbox programs that are not AI-specific and predate the current wave of AI sandbox activity, including Arizona, Wyoming, North Carolina, Ohio, Kentucky, West Virginia, Florida, Missouri, Kansas and Utah. Non-AI-specific regulatory sandbox programs by state include: Arizona (enacted 2018; expanded by in 2022); Wyoming (enacted 2019); Kentucky (enacted 2019; expanded to a universal sandbox by in 2023); West Virginia (enacted 2020); Utah (enacted 2021; general multi-sector sandbox, separate from Utah's AI-specific ); Florida (enacted 2021); North Carolina (enacted 2021; administered by the ); Ohio (enacted 2022); Missouri (enacted July 2024); and Kansas (enacted July 2025).
Phase 1: The first 250 patients processed through the system will have their AI-generated renewal decision reviewed by licensed physicians prior to the renewal submission to the pharmacy. Phase 2: The next 1,000 patients processed through the system will have their AI-generated renewal decision retrospectively reviewed by licensed physicians. Phase 3: Following these two review periods, a structured sampling approach will be taken to quality oversight, including a monthly review of 5–10% of processed renewals, comprehensive quarterly analysis of escalated cases, and annual systematic review of performance metrics and clinical outcomes.
Frames U.S. AI dominance as a national economic imperative and outlines Administration deregulatory and investment priorities (Jan. 21, 2026).
A six-pillar strategy governing government-private sector coordination on cyber threats that labels AI security as a priority (Mar. 6, 2026).