Last updated: 2026-04-09
The United States does not yet have a single, comprehensive federal AI law — but that does not mean the regulatory landscape is quiet. From Colorado's pioneering high-risk AI statute to New York City's employment algorithm law, a patchwork of state and local AI regulations is already in effect or taking shape. For regulated organizations, understanding this mosaic is not optional — it is a compliance imperative.
In my work with 200+ clients across industries, I have watched organizations scramble when a new law drops. The ones that navigate it best are those who understand the architecture of AI regulation, not just the individual rules. This guide is designed to give you exactly that foundation.
Why US AI Regulation Is Accelerating Now
Artificial intelligence adoption in enterprise settings has grown dramatically. According to McKinsey's State of AI 2024 report, 72% of organizations had adopted AI in at least one business function — up from 55% the year prior. Regulators at the state and local level have responded by filling the federal vacuum.
As of early 2026, at least 45 states have introduced AI-related legislation, and more than a dozen have enacted laws with enforceable compliance obligations. The European Union's AI Act, which entered into force in August 2024 with obligations phasing in through 2027, has also created a "Brussels Effect," pushing US regulators and multinationals alike to adopt risk-based AI governance frameworks domestically.
The absence of a federal US AI law does not reduce compliance risk; it multiplies it, because organizations must simultaneously satisfy a growing set of inconsistent state and local obligations.
The practical implication: if your organization deploys AI in hiring, lending, insurance underwriting, healthcare, or consumer-facing services, you are almost certainly subject to at least one enforceable AI regulation today.
The Colorado AI Act (SB 24-205): The First Comprehensive State AI Law
What It Is
Colorado Senate Bill 24-205, signed into law in May 2024 and, after a 2025 legislative delay, effective June 30, 2026, is the first US state law to impose comprehensive obligations on both developers and deployers of high-risk AI systems. It is modeled, in broad strokes, on the EU AI Act's risk-tiered approach.
Who It Covers
The Colorado AI Act applies to:
- Developers who create or substantially modify high-risk AI systems and make them available to Colorado consumers or deployers
- Deployers who use high-risk AI systems in decisions that materially affect Colorado consumers
A "high-risk AI system" under the Act is one that makes, or is a substantial factor in making, consequential decisions in areas including:
- Education enrollment or opportunity
- Employment or employment opportunities
- Financial or lending services
- Essential government services
- Healthcare services
- Housing or insurance
Key Obligations Under the Colorado AI Act
| Obligation | Applies To | Key Detail |
|---|---|---|
| Risk Management Program | Developers & Deployers | Must implement a program to identify, document, and mitigate algorithmic discrimination |
| Impact Assessments | Deployers | Annual or pre-deployment assessment required for high-risk systems |
| Consumer Disclosures | Deployers | Must notify consumers when AI is used in a consequential decision |
| Adverse Action Notices | Deployers | Must disclose AI's role and provide opportunity to appeal or correct data |
| Developer Documentation | Developers | Must provide deployers with technical documentation and usage policies |
| Attorney General Enforcement | State | Civil penalties; no private right of action |
What Makes Colorado Unique
Colorado's law is notable for several reasons. First, it explicitly targets algorithmic discrimination — defined as any condition in which a high-risk AI system contributes to differential treatment or impact based on a protected class. Second, it creates a two-actor compliance chain: developers and deployers share responsibility, which changes how procurement contracts must be written.
Under the Colorado AI Act (SB 24-205), deployers of high-risk AI systems must conduct annual impact assessments and provide consumers with meaningful recourse when AI contributes to an adverse consequential decision.
In my advisory practice, I have seen Colorado's law serve as a de facto national compliance standard for clients operating across multiple states. It is often the most demanding obligation on the list.
NYC Local Law 144: The Employment Algorithm Law
What It Is
New York City Local Law 144 took effect on January 1, 2023, with enforcement beginning July 5, 2023. It was the first law in the United States to directly regulate the use of Automated Employment Decision Tools (AEDTs), and it applies to employers and employment agencies that use AEDTs to screen candidates or employees for positions based in New York City.
Who It Covers
Any employer or employment agency that uses an AEDT to:
- Screen candidates for employment
- Screen current employees for promotion
...where the role is based in New York City, regardless of where the employer is headquartered.
Key Obligations Under NYC Local Law 144
| Requirement | Detail |
|---|---|
| Bias Audit | Must be conducted by an independent auditor before use and annually thereafter |
| Public Posting of Audit Results | Summary statistics must be posted on the employer's public website |
| Candidate/Employee Notice | Must notify candidates or employees at least 10 business days before AEDT use |
| Alternative Process | Must provide candidates with the option to request an alternative selection process |
| Data Retention | Must retain bias audit and notice records |
The Bias Audit Requirement — What It Actually Means
The bias audit under LL144 requires an independent auditor to calculate selection rates and impact ratios broken down by sex, race/ethnicity, and intersectional categories. The impact ratio compares each category's selection rate to that of the most-selected category, and the audit must assess whether the tool produces disparate impact for any of these groups.
This is a significant operational ask. Many organizations discovered — upon attempting to comply — that they did not have the demographic data necessary to perform the audit, or that their AEDT vendor had not retained the data in a usable format.
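Concretely, the audit's central metric is an impact ratio: each category's selection rate divided by the selection rate of the most-selected category. Here is a minimal sketch of that arithmetic; the candidate data and group labels are invented for illustration, and a real audit must follow the DCWP's rules on categories and data sufficiency.

```python
# Sketch of the core bias-audit arithmetic: selection rate per category,
# and each category's impact ratio relative to the most-selected category.
# Group labels and data below are hypothetical.

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (category, was_selected) pairs, where
    was_selected is True if the candidate advanced."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for category, was_selected in records:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    return {c: selected[c] / totals[c] for c in totals}

def impact_ratios(rates):
    """Impact ratio = category's selection rate / highest selection rate."""
    top = max(rates.values())
    return {c: r / top for c, r in rates.items()}

candidates = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(candidates)   # {'group_a': 0.5, 'group_b': 0.25}
ratios = impact_ratios(rates)         # {'group_a': 1.0, 'group_b': 0.5}
```

An impact ratio well below 1.0 for a group is exactly the kind of result that, once posted publicly under LL144, invites scrutiny; it is why data availability (the next point) matters so much.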
NYC Local Law 144 requires employers to post bias audit results for automated hiring tools on their public website, making AI hiring discrimination risk transparently visible to candidates, regulators, and the public.
NYC's Department of Consumer and Worker Protection (DCWP) enforces LL144. Civil penalties start at $500 for a first violation and run up to $1,500 for each subsequent violation, with each day of noncompliant use counting as a separate violation.
Illinois: The Artificial Intelligence Video Interview Act (AIVIA)
Illinois was actually ahead of the curve. The Artificial Intelligence Video Interview Act (820 ILCS 42), effective January 1, 2020, requires employers using AI to analyze video interviews to:
- Notify applicants before the interview that AI may be used
- Explain how the AI works and what characteristics it evaluates
- Obtain the applicant's consent
- Limit sharing of applicant videos
- Destroy video recordings within 30 days of an applicant's request
Illinois also enacted HB 3773 (effective January 1, 2026), which expanded protections to prohibit employers from using AI to discriminate against employees based on protected classes, and requires notice when AI is used in employment decisions.
California: A Regulatory Ecosystem in Motion
California has taken a multi-front approach to AI regulation, with several laws now in effect:
- AB 2013 (2024): Requires developers of generative AI systems to publish documentation about training data used in their systems.
- SB 942 (2024) — California AI Transparency Act: Requires large AI providers to make detection tools available and to watermark or label AI-generated content.
- AB 1008: Clarified that existing California Consumer Privacy Act (CCPA) rights — including the right to access, deletion, and opt-out of sale — apply to personal information used in AI training data.
California's Governor vetoed the more expansive SB 1047 in 2024, which would have imposed safety obligations on frontier AI model developers. However, new bills continue to be introduced, and California's AI regulatory posture remains highly active.
CCPA/CPRA and AI: An Underappreciated Overlap
Many organizations overlook that the California Privacy Rights Act (CPRA) already creates AI-relevant obligations. Specifically:
- The right to opt out of automated decision-making in certain contexts
- The requirement to conduct risk assessments for processing that presents significant risk to consumers
- The obligation to honor data minimization and purpose limitation, which directly constrain AI training practices
Texas, Virginia, and the Growing State-Level Landscape
Texas Responsible AI Governance Act (TRAIGA)
Texas passed the Responsible AI Governance Act (HB 149) in 2025, effective January 1, 2026. Like Colorado, TRAIGA imposes obligations on developers and deployers of high-risk AI systems, including:
- Risk management programs
- Impact assessments for high-risk AI
- Consumer disclosures and adverse action notices
- A safe harbor for organizations that have adopted a recognized AI governance framework (such as NIST AI RMF or ISO 42001:2023)
The safe harbor provision is significant — it is one of the clearest legislative signals that voluntary standards like ISO 42001:2023 carry real regulatory weight in the United States.
Virginia Consumer Data Protection Act (VCDPA)
Virginia's VCDPA includes a right for consumers to opt out of profiling in connection with decisions that produce legal or similarly significant effects. This directly implicates AI systems used in hiring, lending, or insurance decisions affecting Virginia consumers.
Federal AI Activity: What's on the Horizon
While no comprehensive federal AI law is yet in force, several significant federal developments shape the environment:
| Federal Action | Status | Key Relevance |
|---|---|---|
| Executive Order 14110 (Biden, Oct 2023) | Revoked by Trump Administration Jan 2025 | Established federal AI safety and testing standards for federal agencies and contractors |
| NIST AI Risk Management Framework (AI RMF 1.0) | Active, voluntary | Widely referenced in state safe harbor provisions; foundational for ISO 42001 alignment |
| FTC AI Guidance & Enforcement | Active | FTC uses Section 5 authority to pursue deceptive/unfair AI practices |
| EEOC AI Guidance | Active | Applies existing Title VII disparate impact doctrine to AI hiring tools |
| CFPB AI Guidance | Active | Requires "specific reasons" for adverse action even when AI is the decision-maker |
| Comprehensive federal AI legislation (various proposals) | Pending | Multiple competing bills; no clear path to passage as of Q1 2026 |
The Federal Trade Commission has been particularly active, using its existing Section 5 authority to challenge AI-related deception and unfair practices — without waiting for new legislation. Similarly, the Equal Employment Opportunity Commission has made clear that existing civil rights law applies fully to AI hiring tools, regardless of whether a new AI-specific law exists.
How to Map Your Organization's AI Compliance Exposure
Given the complexity of this landscape, I recommend a structured approach that I use with all new clients:
Step 1: Inventory Your AI Systems
Document every AI or automated decision-making tool in use, its purpose, the data it processes, and which geographies it affects.
Step 2: Classify by Risk and Jurisdiction
Map each system against the definition of "high-risk" in each applicable jurisdiction. Colorado, Texas, and EU AI Act definitions overlap but are not identical.
Step 3: Identify the Applicable Obligations
For each system and jurisdiction, identify which specific obligations apply — audit, impact assessment, disclosure, opt-out, etc.
Step 4: Gap Analysis
Compare your current governance practices against each obligation set. This is where ISO 42001:2023 becomes especially valuable — its clause structure (particularly clause 6.1.2 on AI risk assessment and clause 8.4 on documentation of AI system impacts) maps closely to the requirements in Colorado, Texas, and EU law.
Step 5: Prioritize and Remediate
Prioritize gaps by enforcement risk and remediation complexity. Not all gaps are equal.
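The five steps above can be sketched as a small data model. The jurisdiction codes, obligation lists, and the domain-based high-risk test below are illustrative assumptions for a single hypothetical system, not a legal determination; a real inventory would track each statute's actual definitions.

```python
# Sketch of the inventory -> classify -> map workflow (Steps 1-3).
# All jurisdiction codes and obligation names are hypothetical labels.

from dataclasses import dataclass, field

# Step 2 proxy: domains where consequential decisions typically occur.
HIGH_RISK_DOMAINS = {"hiring", "lending", "insurance", "housing",
                     "healthcare", "education", "government_services"}

# Step 3: illustrative obligation map, keyed by jurisdiction code.
OBLIGATIONS = {
    "CO": ["risk_management_program", "impact_assessment",
           "consumer_disclosure", "adverse_action_notice"],
    "NYC": ["bias_audit", "public_audit_posting", "candidate_notice"],
}

@dataclass
class AISystem:
    name: str
    purpose: str                      # e.g. "hiring" (Step 1: inventory)
    jurisdictions: set = field(default_factory=set)

    def is_high_risk(self) -> bool:
        # Step 2: classify by whether the system touches a
        # consequential-decision domain.
        return self.purpose in HIGH_RISK_DOMAINS

def applicable_obligations(system: AISystem) -> dict:
    """Step 3: obligations per jurisdiction for a high-risk system."""
    if not system.is_high_risk():
        return {}
    return {j: OBLIGATIONS[j]
            for j in system.jurisdictions if j in OBLIGATIONS}

resume_screener = AISystem("resume-screener", "hiring", {"CO", "NYC"})
print(applicable_obligations(resume_screener))
```

The output of Step 3 becomes the input to the gap analysis in Step 4: each obligation that lacks a corresponding documented practice is a gap to prioritize.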
Comparing Key US AI Regulations at a Glance
| Law | Jurisdiction | Effective Date | Scope | Key Mechanism | Enforcement |
|---|---|---|---|---|---|
| Colorado SB 24-205 | Colorado | Jun 30, 2026 | High-risk AI, consequential decisions | Impact assessments, disclosures | CO Attorney General |
| NYC Local Law 144 | New York City | Jan 1, 2023 (enforced Jul 5, 2023) | Automated employment decision tools | Bias audit, public posting | NYC DCWP |
| Illinois AIVIA | Illinois | Jan 1, 2020 | AI video interview analysis | Notice, consent, deletion | IL Dept. of Labor |
| Illinois HB 3773 | Illinois | Jan 1, 2026 | AI in employment decisions | Non-discrimination, notice | IL Dept. of Human Rights |
| Texas TRAIGA | Texas | Jan 1, 2026 | High-risk AI, consequential decisions | Risk management, safe harbor | TX AG |
| CA AI Transparency Act | California | Jan 1, 2026 | Generative AI providers | Watermarking, detection tools | CA AG |
| CCPA/CPRA | California | Jan 1, 2023 (CPRA) | Automated decision-making | Opt-out, risk assessments | CA Privacy Protection Agency |
| EU AI Act | EU (affects US orgs) | Aug 1, 2024 | Risk-tiered AI systems | Conformity assessment, CE marking | National market authorities |
What Regulated Organizations Should Do Right Now
The organizations I work with that are ahead of the curve share a common trait: they have stopped treating AI governance as a legal exercise and started treating it as an operational discipline. Here is my practical advice:
- Do not wait for federal law. State and local enforcement is real and accelerating. NYC has already issued civil penalties under LL144.
- Adopt a recognized framework now. ISO 42001:2023 is the only internationally auditable AI management system standard. It satisfies safe harbor provisions in Texas and is aligned with EU AI Act conformity requirements. If your organization is already ISO 9001 or ISO 27001 certified, the structural overlap makes ISO 42001 implementation substantially faster.
- Revise your AI procurement contracts. Colorado and Texas both create obligations that run through the developer-deployer chain. Your vendor contracts must address documentation, audit rights, and indemnification for AI-specific risks.
- Train your people. Regulators expect human oversight. If your employees do not understand how to interpret, override, or escalate AI-driven decisions, your "human-in-the-loop" claims will not hold up in an enforcement action.
- Document everything. The burden of proof in most AI regulations falls on the deployer. If you cannot produce your impact assessments, audit records, and consumer notices on demand, you are already non-compliant.
At Regulated AI Consulting, our AI compliance gap assessment is specifically designed to map your current AI portfolio against the full landscape of applicable US and international obligations — in days, not months.
Frequently Asked Questions
Does my organization need to comply with the Colorado AI Act if we are not based in Colorado?
Yes, if your AI system makes consequential decisions that affect Colorado consumers, the Colorado AI Act applies regardless of where your organization is headquartered. This is a common misconception. The law is triggered by the location of the consumer, not the organization.
What is the difference between a bias audit under NYC Local Law 144 and an impact assessment under the Colorado AI Act?
A bias audit under LL144 is a statistical analysis of disparate impact conducted by an independent auditor, focused specifically on employment selection rates by demographic group. A Colorado impact assessment is broader — it evaluates algorithmic discrimination risk across the full AI system lifecycle and must address mitigation measures. Both are required on an annual basis for applicable systems.
Is ISO 42001:2023 required to comply with US AI laws?
ISO 42001:2023 is not legally mandated by any current US AI law. However, Texas TRAIGA explicitly provides a compliance safe harbor for organizations that have adopted a recognized AI governance framework, and ISO 42001 is the leading international standard in this category. Adoption significantly reduces regulatory risk and audit burden across multiple jurisdictions simultaneously.
What happens if a vendor's AI tool is non-compliant — is my organization still liable?
In most cases, yes. Colorado, Texas, and NYC LL144 all place compliance obligations on the deployer — the organization using the AI tool. Vendor non-compliance is not a defense. This is why contract provisions requiring developer documentation, audit cooperation, and indemnification are essential when procuring AI systems.
How does the EU AI Act affect US-based organizations?
The EU AI Act applies to any organization that places an AI system on the EU market or puts it into service in the EU, regardless of where the provider is established. US companies with EU operations, customers, or data subjects must comply with the relevant provisions. High-risk AI systems require conformity assessments, technical documentation, and registration in the EU AI database before deployment.
Jared Clark, JD, MBA, PMP, CMQ-OE, CQA, CPGP, RAC is the founder of Regulated AI Consulting, where he advises regulated organizations on AI governance, ISO 42001 implementation, and multi-jurisdictional AI compliance. With 8+ years of regulatory experience and a 100% first-time audit pass rate across 200+ clients, Jared helps organizations build AI governance programs that work in the real world.