Colorado is consistently at the forefront of state-level technology and data governance regulation and is poised to maintain that position in 2026. Current legal trends include amendments to the Colorado Privacy Act covering biometric data and children’s online privacy, along with the ongoing debate surrounding the implementation of SB 24-205 (the Colorado AI Act).
The Colorado AI Act, signed into law in 2024, is the first comprehensive risk-based AI governance law in the United States. It seeks to prevent algorithmic discrimination by regulating high-risk AI systems. Although the law remains subject to debate during the current legislative session, its structural framework will likely remain intact, with enforcement beginning in 2026.
Updates to the Colorado Privacy Act
The Colorado Privacy Act (CPA), which took effect on 1 July 2023, was one of the first comprehensive state-level privacy laws in the United States. The CPA grants Colorado consumers the right to access, correct, and delete their personal data, and to opt out of its processing for targeted advertising, sale, and certain forms of profiling. The CPA applies to controllers that process the personal data of 100,000 or more Colorado consumers annually, or that derive revenue from the sale of personal data while processing the data of 25,000 or more Colorado consumers.
The CPA imposes heightened obligations on the processing of sensitive data, which it defines to include racial or ethnic origin, religious beliefs, mental or physical health conditions, sexual orientation or gender identity, citizenship or immigration status, genetic or biometric data, personal data from a known child, and precise geolocation data. Controllers may not process sensitive data without first obtaining the consumer’s consent.
Controllers must also conduct and document data protection assessments for processing activities that present heightened risk, including the processing of sensitive data, the sale of personal data, and processing for targeted advertising or certain forms of profiling. These data protection assessments must be provided to the Colorado Attorney General upon request, and robust documentation serves as the best defence against a regulatory inquiry. In the wake of a data breach, the absence of a well-documented assessment can be a significant liability, both in regulatory proceedings and in civil litigation.
Building on the foundation established in the Colorado Privacy Act, HB 24-1058 expanded the CPA’s definition of “sensitive data” to include biological data and neural data. As amended, “sensitive data” includes genetic, biometric, or biological data processed for the purpose of identifying an individual, as well as personal data from a known child.
HB 24-1130 further amended the Colorado Privacy Act to add protections for the collection of biometric data, including requirements to establish a retention schedule and protocols for responding to a data security incident. The law prohibits the collection of biometric identifiers unless the controller obtains the consumer’s consent, and prohibits the sale, lease, or trade of any biometric data.
Additionally, SB 24-041 amends the Colorado Privacy Act to provide specific protections for children’s online privacy. The law limits the collection of precise geolocation data and requires online services to obtain consent before processing a minor’s personal data for targeted advertising, sale, or profiling.
From a compliance standpoint, the ongoing evolution of the Colorado Privacy Act will present challenges for companies that collect, process, or transfer sensitive data. Colorado continues to aggressively expand the scope of consumer rights under the CPA and to create new compliance obligations for businesses. Any company that handles biometric, neural, or children’s data should consult with privacy experts to better understand its obligations and mitigate potential risk exposure.
The Colorado AI Act (SB 24-205)
On 17 May 2024, Colorado Governor Jared Polis signed into law SB 24-205, which became the first comprehensive risk-based AI governance law in the United States. Although other states have passed AI governance and transparency laws, the Colorado AI Act creates many unique burdens for developers and deployers of high-risk AI systems, including documentation requirements, risk assessments, self-reporting requirements, data subject rights, and impact assessments. The law continues to be the subject of much debate, and the governor has convened a working group of diverse stakeholders to continue negotiating refinements to the law in the current legislative session.
The law “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” is also known as the “Colorado Anti-Discrimination in AI Law” (ADAI) or the Colorado AI Act. The Colorado Attorney General is charged with rulemaking and enforcement authority, and there is no private right of action. The law is currently scheduled to take effect on 30 June 2026.
While the law focuses on developers and deployers of high-risk AI systems, it also requires disclosure to consumers when they are interacting with any AI system in situations where that would not be obvious. This disclosure obligation applies broadly, not just to deployers of high-risk systems but to any company using consumer-facing AI. The notification requirements also include disclosure of the consumer’s right to opt out of automated decision-making.
Consequential decisions, algorithmic discrimination and high-risk AI systems
The Colorado AI Act defines “algorithmic discrimination” as any circumstance where the use of artificial intelligence creates a condition involving “an unlawful or differential treatment that disfavours an individual or group”. Unlike traditional anti-discrimination laws, the Colorado AI Act is proactive in nature by seeking to prevent algorithmic discrimination before it occurs. The law places the burden on businesses using high-risk AI systems to prove non-discrimination through governance, oversight, and safety testing.
A consequential decision is defined as one that “has a material legal or similarly significant effect on the provision or denial to any consumer, or the cost or terms of” opportunities, goods, or services. The statute further defines eight presumptive categories of consequential decisions: those affecting education, employment, banking, public benefits or services, healthcare, housing, insurance, and legal rights.
Notably, these categories track closely with the high-risk categories listed in Annex III of the European Union’s AI Act, but the risk-based approach is unique among US states.
A high-risk artificial intelligence system is one that either makes a consequential decision or is a substantial factor in making a consequential decision. Consequential decisions made by AI systems will be subject to heightened obligations if they affect a consumer’s education, employment, banking, public benefits or services, healthcare, housing, insurance, or legal rights. Importantly, the law prohibits disparate treatment related to access to goods or services, as well as the cost and terms of those services.
From a compliance standpoint, the broad range of services in which AI systems can affect consequential decisions means the law will reach many businesses in Colorado, some of which are using AI but lack the policies and procedures to withstand regulatory scrutiny. The law applies to both developers and deployers of high-risk AI systems across this wide range of services, and any company doing business in one of the above-referenced sectors should consult with AI governance experts to better understand its obligations and mitigate potential risk exposure.
Developers and deployers
The Colorado AI Act regulates developers and deployers of high-risk AI systems when used as a substantial factor in making a consequential decision. A deployer is anyone doing business in the State of Colorado that uses a high-risk AI system as a substantial factor in making a consequential decision, while a developer is anyone doing business in Colorado that develops or intentionally and substantially modifies an AI system.
Examples of AI developers include providers such as OpenAI (ChatGPT), Anthropic (Claude), Microsoft (Copilot), and Google (Gemini). However, smaller companies that would otherwise be considered deployers can become developers if they intentionally and substantially modify an AI system; in that case, the company assumes the same obligations as the larger developers.
Under the law, both developers and deployers of high-risk AI systems have a duty to avoid algorithmic discrimination. Developers are required to provide documentation about the risks, benefits, and intended use of each AI system, as well as high-level summaries of training data, documentation regarding governance, safety testing, and impact assessments.
Deployers of high-risk AI systems are obligated to conduct impact assessments and implement a risk-management programme that incorporates policies and practices to mitigate the risk of algorithmic discrimination. Deployers have a duty to disclose the high-risk AI system to consumers before processing and to notify consumers of the right to opt out. Deployers must also explain the nature of the automated decision-making and the basis on which a decision was made, including the extent to which the AI system contributed to it.
Requirements for adverse consequential decisions
Under the Colorado AI Act, adverse consequential decisions trigger additional consumer rights and corporate obligations. The deployer must provide the consumer with a statement of the principal reasons for the decision, an opportunity to correct any personal data the system processed in reaching it, and an opportunity to appeal the decision for human review where technically feasible.
To use one prominent example of a high-risk AI system, most online job portals implement artificial intelligence to scan and sort resumes through an applicant tracking system (ATS). Under the current iteration of the Colorado AI Act, using an ATS to sort resumes would constitute high-risk processing if it is used as a substantial factor in making a consequential decision about an employment opportunity. Because most ATS platforms determine which candidates move forward in the selection process and which do not, resume filtering and stack-ranking systems would be considered high-risk. Under this example, each company that uses an AI system to sort resumes would be subject to the high-risk processing requirements of the Colorado AI Act if that system affects Colorado consumers.
Because the law applies broadly to companies using AI affecting Colorado consumers, the online nature of business will require compliance from companies located outside of the Rocky Mountain Region. Companies using AI systems to make consequential decisions should carefully consider the benefits and obligations associated with continued high-risk processing.
Risk management frameworks, impact assessments, and enforcement
The Colorado AI Act requires each deployer of a high-risk AI system to implement a risk management policy and programme to ensure an iterative process of corporate governance. The risk management framework must be appropriate for the size and complexity of the deployer and the nature and scope of the high-risk processing. The law specifically references the National Institute of Standards and Technology (NIST) AI Risk Management Framework and ISO/IEC 42001 as acceptable AI risk management frameworks, and offers an affirmative defence if a company discovers and cures a violation while otherwise complying with one of those frameworks. Businesses looking to mitigate regulatory risk associated with the use of AI systems would be well served by selecting and implementing one of the approved governance frameworks.
The Colorado Attorney General’s rulemaking authority includes establishing requirements for documentation, risk management programmes, impact assessments, and affirmative defences. The Colorado Attorney General’s enforcement authority also provides the right to inspect any company’s risk management policy, impact assessment, or records to ensure compliance. This broad oversight authority will allow the Colorado Attorney General to identify those companies that have implemented the required governance structures and those that lack appropriate corporate governance.
What next for the Colorado AI Act?
In August 2025, following a special legislative session that resulted in little progress, the legislature agreed to delay enforcement of SB 24-205 until 30 June 2026. However, negotiations between consumer and industry groups are ongoing and may yield minor changes to the law before it takes effect. An additional delay of enforcement until January 2027 is also a possibility.
Proposed modifications to the law include a focus on Automated Decision-Making Technology (ADMT), which aligns with language in the California Consumer Privacy Act (CCPA). Despite the potential for delayed enforcement and further changes to the law, the era of comprehensive state-level AI governance regulation is upon us with the coming implementation of the Colorado AI Act.
For companies using AI systems to make consequential decisions, waiting until the enforcement date to develop a compliance strategy creates increased regulatory risk. Colorado’s law is structurally closer to the EU AI Act than any other US state law and stands out for its heightened risk-based regulatory requirements. Colorado’s AI Act will usher in a new era of comprehensive state-level AI governance, regulation and enforcement in 2026.
1099 18th Street
Suite 1900
Denver
CO 80202-1905
USA
+1 (303) 253 6740
dpietragallo@buchalter.com
www.buchalter.com/