UPDATE: Since this was published, Governor Newsom acted on all four bills below. On September 28th, he signed AB 3030, SB 1120, and AB 2013. On September 29th, he vetoed SB 1047.
The Big Picture:
California is at the forefront of a group of states focusing legislation on increasing transparency and oversight of the development and deployment of AI systems, and it may be the first state to pass legislation specifically addressing AI use in health care by providers and payors.
The Details:
Two bills focus specifically on health care and would impose requirements on the use of AI by health care providers—including health facilities, clinics, physicians’ offices, and group practices—and health plans, respectively:
- Assembly Bill 3030 requires health care providers to disclose to a patient, via a disclaimer, that clinical information the patient receives was generated by generative AI. “Generative artificial intelligence” is defined broadly as “artificial intelligence that can generate derived synthetic content, including images, videos, audio, text, and other digital content.” In addition, the disclaimer must tell the patient how to contact a “human health care provider” or employee of the health facility. The disclaimer must appear in traditional written communications, such as letters and emails, as well as in chat-based technology. Unlike prior versions, the final bill does not require a disclaimer if the communications generated by generative AI are “read and reviewed by a human licensed or certified health care provider”; providers that implement review processes involving licensed professionals are therefore not obligated to include it (illustrated in the first sketch after this list). The bill seeks to balance the burden on health care providers against a patient’s interest in knowing when information was not prepared by a licensed professional and whom to contact with any questions about it. It is similar to a Utah law passed in early 2024.
- Senate Bill 1120 essentially requires that a licensed physician or licensed health care professional retain ultimate responsibility for making individualized medical necessity determinations for each member of a health care service plan or health insurer. Among its requirements, one of the most significant is that health care service plans and health insurers that use AI1 tools not use those tools to “deny, delay, or modify health care services” based upon medical necessity. Said another way, determinations of medical necessity may be made only by a licensed physician or health care professional (illustrated in the second sketch after this list). This proposed law is consistent with the guidance CMS published regarding Medicare Advantage plans’ use of AI to render clinical coverage determinations, which requires that decisions be based on an individual patient’s medical history and clinical conditions and that a physician or other appropriate provider review coverage denials. Like CMS’s guidance, this bill seeks to strike a fair balance between allowing a plan to use AI to improve efficiency in utilization review and ensuring that a human reviews (and agrees with) decisions that would have an adverse impact on the plan’s members.
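To make AB 3030's disclaimer requirement concrete, here is a minimal sketch of the logic a provider's messaging system might apply, assuming a hypothetical pipeline; the names `PatientMessage` and `prepare_for_delivery` and the disclaimer wording are illustrative, not statutory text.

```python
# Hypothetical sketch of AB 3030's disclaimer logic; field and function
# names are illustrative assumptions, not taken from the bill.
from dataclasses import dataclass

@dataclass
class PatientMessage:
    body: str            # clinical information to be sent to the patient
    ai_generated: bool   # produced by a generative AI tool
    human_reviewed: bool # read and reviewed by a licensed/certified provider

DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "To reach a human health care provider, contact {contact}."
)

def prepare_for_delivery(msg: PatientMessage, contact: str) -> str:
    """Append the disclaimer unless the bill's review exemption applies."""
    if msg.ai_generated and not msg.human_reviewed:
        # No licensed professional reviewed this AI output, so the
        # patient must be told it was AI-generated and whom to contact.
        return f"{msg.body}\n\n{DISCLAIMER.format(contact=contact)}"
    return msg.body
```

The key branch mirrors the bill's exemption: a communication read and reviewed by a licensed or certified provider goes out without the disclaimer.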
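Similarly, a minimal sketch of an SB 1120-style gate in a utilization-review pipeline, again with assumed names and routing strings; the bill prescribes the rule, not any particular implementation.

```python
# Hypothetical sketch of SB 1120's constraint: AI output alone may not
# deny, delay, or modify care based on medical necessity.
from enum import Enum

class AIRecommendation(Enum):
    APPROVE = "approve"
    DENY = "deny"
    MODIFY = "modify"
    DELAY = "delay"

def route_determination(ai_rec: AIRecommendation) -> str:
    """Route adverse AI recommendations to a licensed clinician."""
    if ai_rec is AIRecommendation.APPROVE:
        # Approvals may be automated; adverse determinations may not.
        return "auto-approve"
    # Deny/modify/delay recommendations require an individualized medical
    # necessity determination by a licensed physician or professional.
    return "queue for individualized review by a licensed physician"
```

Automating approvals while routing every adverse recommendation to a licensed clinician is one straightforward way a plan could reconcile AI-driven efficiency with the human accountability the bill requires.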
The other two bills relate to AI use more broadly, and would impact health care stakeholders to the extent they are developers of AI systems:
- Assembly Bill 2013 is a transparency bill: it requires developers of generative artificial intelligence2 systems (defined slightly differently than in the other bills) to publicly post information about the data used to train the system, including the source or owners of the datasets, the number of data points in the datasets, a description of the types of data points in the datasets, and whether the datasets include personal information (a disclosure of this kind is sketched after this list). Developers are defined as those who make AI tools for “members of the public,” and the definition specifically excludes a “hospital’s medical staff member[s],” though the intent of the exclusion is not entirely clear.
- Senate Bill 1047 sets forth a broad set of requirements focused on mitigating the safety and security risks believed to be posed by very large AI models. It is unlikely that this bill will apply to health care stakeholders. Covered models include models that cost over one hundred million dollars to develop or over ten million dollars to fine-tune. Requirements on developers3 of these models include the capability for a full system shutdown; an annually updated safety protocol and statement of compliance; annual third-party audits; and reporting of any safety incident to the state Attorney General within 72 hours of its identification (the cost thresholds and reporting window are sketched in the final example below). The state Attorney General may pursue civil action against developers for violations of this law.
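A minimal sketch of an AB 2013-style training-data disclosure follows, assuming a simple record-and-publish format; the bill does not prescribe a schema, so the field names below simply mirror the items this summary says developers must post, and the sample values are invented.

```python
# Hypothetical AB 2013-style training-data disclosure record; the schema
# and all sample values are illustrative assumptions, not bill text.
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetDisclosure:
    source_or_owner: str          # source or owner of the dataset
    num_data_points: int          # number of data points in the dataset
    data_point_types: list[str]   # description of the types of data points
    contains_personal_info: bool  # whether the dataset includes personal info

disclosure = DatasetDisclosure(
    source_or_owner="Example Clinical Notes Corpus (hypothetical)",
    num_data_points=1_200_000,
    data_point_types=["de-identified clinical notes", "radiology reports"],
    contains_personal_info=False,
)

# The bill requires this information to be publicly posted; JSON is one
# plausible publication format among many.
print(json.dumps(asdict(disclosure), indent=2))
```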
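Finally, a sketch of the two concrete numbers in SB 1047 discussed above: the cost thresholds for covered models and the 72-hour incident-reporting window. The function names are hypothetical; only the dollar figures and the deadline come from the bill as summarized here.

```python
# Hypothetical helpers encoding SB 1047's thresholds and reporting window.
from datetime import datetime, timedelta, timezone

def is_covered_model(development_cost_usd: float,
                     finetune_cost_usd: float = 0.0) -> bool:
    """Covered: over $100M to develop, or over $10M to fine-tune."""
    return development_cost_usd > 100_000_000 or finetune_cost_usd > 10_000_000

def report_deadline(identified_at: datetime) -> datetime:
    """Safety incidents must reach the state Attorney General within
    72 hours of identification."""
    return identified_at + timedelta(hours=72)

print(is_covered_model(150_000_000))  # True: over the $100M threshold
print(report_deadline(datetime(2024, 9, 1, 9, 0, tzinfo=timezone.utc)))
# 2024-09-04 09:00:00+00:00
```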
1 “Artificial intelligence” is defined as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
2 “Generative artificial intelligence” is defined as “artificial intelligence that can generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence’s training data.”
3 “Developer” is defined as the person that performs the initial training of a covered model, either by training a model using a sufficient quantity of computing power and cost or by fine-tuning an existing covered model or covered model derivative using a quantity of computing power and cost greater than the amount specified in SB 1047’s subdivision (e).