Colorado Enacts Law Regulating “High-Risk” AI: Here’s What You Should Know

Client Alert

On May 17, 2024, Colorado Governor Jared Polis signed into law Colorado Senate Bill 24-205 (SB 24-205), also known as the Colorado Artificial Intelligence Act, making Colorado the second state to enact comprehensive legislation on artificial intelligence (AI) and the first to specifically address the use of high-risk AI systems.

Highlights

Unlike Utah’s recently enacted AI Policy Act, which generally addresses commercial communications using generative AI, Colorado’s law covers AI more broadly and takes a risk-based approach, distinguishing “high-risk systems” from “general-purpose models” and focusing the most stringent regulations on systems with significant potential to impact consumer rights.

The law assigns responsibilities for ensuring trustworthy AI along the development chain, from developers to deployers. Developers and deployers of high-risk AI systems have duties of care to protect consumers from known or foreseeable risks of algorithmic discrimination arising from intended and contracted uses of the system. Compliance with specific requirements, such as providing adequate disclosures and documentation, affords developers and deployers a rebuttable presumption that they exercised reasonable care in managing AI risks.

The law also provides an affirmative defense, intended to help parties avoid litigation, for developers and deployers that proactively discover and cure violations or that otherwise demonstrate adherence to recognized AI risk management frameworks, such as NIST’s AI Risk Management Framework.

Applicability to “High-Risk” AI

The law applies to “developers” and “deployers” of high-risk AI systems doing business in Colorado. High-risk AI systems are machine-based systems characterized by their capacity to influence “consequential decisions” about consumers, meaning decisions that produce legal or similarly significant effects, such as those concerning the provision, denial, or cost of:

  • education
  • employment
  • financial or lending services
  • housing
  • healthcare
  • insurance, or
  • legal services. 

In adopting a risk-based approach, the law echoes the European Union’s AI Act, as well as recent consumer privacy laws in Virginia, Colorado, and Connecticut that seek to regulate automated profiling and that likewise turn on uses of profiling to produce decisions with “legal or similarly significant effects.”

High-risk AI systems do not include certain enumerated technologies, such as anti-fraud technology that does not use facial recognition, anti-malware, data storage, databases, and AI-enabled video games and chat features, to the extent that they are not a substantial factor in making consequential decisions.

The law contains no revenue or data volume thresholds, no broad entity-level exemptions, and no exemptions for employee or business-to-business data. Instead, its exemptions largely hinge on whether the developer’s or deployer’s use of the high-risk AI system is subject to other regulatory oversight, such as where the system has been approved by, or follows standards established by, a federal agency, or where the system is used to conduct research to support an application for approval or certification from a federal agency.

This means that the law will apply even to businesses that fall outside the scope of the Colorado Privacy Act (CPA)—Colorado’s comprehensive consumer privacy law—including financial institutions, health care organizations, businesses processing lower volumes of personal data and businesses processing only employee or business contact data.

Certain small businesses with fewer than 50 employees may be exempt from some specific obligations, which are discussed in more detail below. 

Specific obligations

The law imposes rigorous compliance measures on developers and deployers of high-risk AI systems to mitigate risks of algorithmic discrimination. These duties flow down the development chain from developers to deployers, and then from deployers to consumers. “Algorithmic discrimination” is defined as “any condition in which the use of an [AI] system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”

Developers

Developers of high-risk AI systems have transparency responsibilities regarding risk management, along with obligations to comprehensively document the AI system’s training data and performance evaluations.

For example, developers must make available to deployers or other developers of high-risk AI systems information regarding the system’s reasonably foreseeable uses and known harmful or inappropriate uses. Developers must also provide documentation that allows deployers to understand the system’s outputs, monitor its performance for risks of algorithmic discrimination, and complete impact assessments.

Developers must summarize on their websites the types of high-risk AI systems they have developed or intentionally and substantially modified and currently make available. Developers must also disclose how they manage known or reasonably foreseeable risks of algorithmic discrimination that may arise from those systems.

Where a developer discovers that a high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, it must disclose that to the Colorado Attorney General’s office—as well as to all known deployers of the system—within ninety (90) days of discovery. 

Deployers

Deployers (companies that use high-risk AI systems), in turn, must implement risk management policies and conduct impact assessments. Deployers must also notify consumers about the use of high-risk AI in consequential decisions and, where applicable, provide consumers with information regarding the right to opt out of profiling under the CPA. For example, deployers must make available on their websites information regarding the types of high-risk AI systems the deployer currently implements and how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination.

In the event that a company uses a high-risk AI system to make an adverse consequential decision about a consumer (e.g., rejecting a loan application), the deployer must provide the consumer with information regarding that decision, as well as an opportunity to correct inaccurate information or appeal the decision.

Deployers must also notify the state attorney general, within ninety (90) days of discovery, where a high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination.

Although the law is predominantly focused on high-risk systems, it also imposes a basic disclosure rule on general-purpose AI systems that are intended to interact with consumers. Specifically, developers and deployers of such systems must disclose to consumers that they are interacting with an AI system. These transparency requirements align with California and New Jersey “chatbot” laws and Utah’s AI Policy Act.

Enforcement

The law provides no private right of action. Instead, enforcement authority lies exclusively with the Colorado Attorney General’s office, which also has discretionary rulemaking authority.

Violations of the law will be deemed unfair and deceptive trade practices under Colorado’s Consumer Protection Act.  

What’s next?

The law is slated to take effect on February 1, 2026. As acknowledged by Governor Polis in his signing statement, the delay allows time for further legislative refinement and stakeholder engagement through an AI impact task force set up by a separate law passed by the legislature earlier this month, HB24-1468.

Businesses in and outside of Colorado should remain vigilant about changing guidelines and regulations and apply recommended best practices, including developing robust AI policies, governance committees, and risk assessment procedures. As organizations continue to seek out industry standards to inform their AI governance efforts, the Colorado AI Act is yet another valuable resource, even though it is not yet enforceable. Manatt will continue monitoring developments related to AI and offering guidance in the evolving regulatory landscape. 

For more information and resources, please visit our dedicated Artificial Intelligence webpage.


ATTORNEY ADVERTISING pursuant to New York DR 2-101(f)

© 2024 Manatt, Phelps & Phillips, LLP. All rights reserved.