Consumer Protections for Artificial Intelligence Act
Colorado AI Act
Data last verified: March 23, 2026
- Effective Date: June 30, 2026
- Enforcement Date: Not specified in statute
Summary
The Colorado AI Act regulates high-risk AI systems that make consequential decisions in education, employment, financial services, government services, healthcare, housing, insurance, or legal services. It requires impact assessments, consumer notification, a public website statement (§ 6-1-1703(5)(a)), and disclosure of algorithmic discrimination risks to the Attorney General (§ 6-1-1702(5), § 6-1-1703(7)).
Who It Applies To
Developers and deployers of high-risk AI systems doing business in Colorado
Penalties
- Penalty Range: $0 – $20,000 per violation
- Cure Period: Not specified in statute
- Private Right of Action: None
- Enforcement Body: Colorado Attorney General (exclusive)
- Notes: Violations are treated as unfair trade practices under the Colorado Consumer Protection Act. There is no mandatory cure period, but § 6-1-1706 provides an affirmative defense for deployers and developers who discover and cure violations through internal processes (red teaming, internal reviews) while otherwise complying with a recognized risk management framework.
Requirements (9)
- Non-Discrimination (§ 6-1-1703(1))
This law requires deployers to use reasonable care to protect consumers from algorithmic discrimination in high-risk AI systems.
- Impact Assessment (§ 6-1-1703(3))
This law requires deployers to complete an impact assessment for each high-risk AI system before deployment.
- Disclosure (§ 6-1-1703(4)(a))
This law requires deployers to notify consumers when a high-risk AI system makes or substantially factors into a consequential decision about them.
- Correction Right (§ 6-1-1703(4)(b)(II))
This law provides consumers the right to correct inaccurate personal data used by high-risk AI systems.
- Appeal Right (§ 6-1-1703(4)(b)(III))
This law provides consumers the right to appeal adverse consequential decisions made by high-risk AI systems.
- Record-Keeping (§ 6-1-1703(2))
This law requires deployers to implement a risk management policy and program for high-risk AI systems.
- Disclosure (§ 6-1-1703(5)(a))
This law requires deployers to make a public statement on their website describing the types of high-risk AI systems they deploy, how they manage algorithmic discrimination risks, and the nature, source, and extent of information collected and used.
- Disclosure (§ 6-1-1702(5), § 6-1-1703(7))
This law requires developers to disclose known or reasonably foreseeable risks of algorithmic discrimination to the Attorney General and all known deployers within 90 days of discovery.
- Disclosure (§ 6-1-1704)
This law requires deployers or developers to disclose to consumers that they are interacting with an artificial intelligence system.
Claire tracks 31 state and local AI laws across 23 US states. No prescriptive federal AI compliance statutes have been enacted. The EU AI Act and sector-specific regulations are not covered.