The US has no federal AI law. That much is true. What is also true — and what too many compliance officers in regulated industries are underestimating — is that your organization already has legal obligations under a growing set of state and municipal AI laws, some of which are in effect right now. Colorado, New York City, Illinois, Texas, and California have all enacted AI-specific legislation. The question is not whether these laws apply to your industry. The question is whether you've mapped your exposure and started building the governance infrastructure to address it.
For regulated organizations — pharma, healthcare, financial services, insurance, manufacturing — the stakes are compounded. The categories that state AI laws define as high-stakes AI use cases map almost exactly onto what regulated industries do: making credit decisions, processing insurance claims, screening job applicants, supporting clinical decisions, managing patient services. Your industry doesn't get a pass; it gets disproportionate exposure.
This article covers the five most significant US AI regulatory frameworks in effect or taking effect through 2026: NYC Local Law 144, the Illinois AI Video Interview Act and the 2026 HB 3773 amendment to the Illinois Human Rights Act, the Colorado AI Act, Texas TRAIGA, and California's 2024-2025 legislative wave. It closes with a practical compliance roadmap: the six things regulated organizations need to do now, regardless of which specific laws apply to them.
The US AI Regulation Landscape at a Glance
The EU gave its member states a single, unified AI Act. The US is building its regulatory framework from the bottom up — state by state, city by city — while federal policy oscillates between two administrations with opposing philosophies on AI governance. The result is a patchwork that will only become more complex before it becomes simpler.
Unlike the EU AI Act, which establishes a tiered risk classification system with centralized enforcement, US state AI laws each define their own scope, their own compliance obligations, and their own enforcement mechanisms. Some require proactive third-party audits. Others prohibit specific practices entirely. Still others create civil rights claims that can be brought by individuals, not just regulators. A company operating in multiple states faces potentially overlapping and sometimes conflicting requirements — and the absence of federal preemption means there is no single compliance posture that satisfies all of them.
The table below summarizes the major laws covered in this article.
| Law | Jurisdiction | Effective Date | Key Requirement | Penalty |
|---|---|---|---|---|
| NYC Local Law 144 | New York City | July 5, 2023 | Annual bias audit of employment AI; candidate notice | $500–$1,500/day per violation |
| Illinois AI Video Interview Act | Illinois | January 1, 2020 | Notice, consent, deletion rights for AI video interviews | No express penalty provision |
| Illinois HB 3773 | Illinois | January 1, 2026 | Civil rights violation to use AI with discriminatory effect | IHRA enforcement; civil claims |
| Colorado AI Act (SB 205) | Colorado | June 30, 2026 | Risk management program; impact assessments; consumer notices | Up to $20,000/violation |
| Texas TRAIGA | Texas | January 1, 2026 | Categorical prohibitions; governmental entity obligations; sandbox | AG enforcement |
| California AI Bundle | California | January 1, 2026 | Training data transparency; AI content provenance; expanded CCPA | CPPA enforcement; civil claims |
NYC Local Law 144: The First AI Employment Law With Teeth
Status: In effect since July 5, 2023
New York City's Local Law 144 was the first US law to impose mandatory third-party auditing on AI systems used in employment decisions. If your organization uses any software that automates or substantially assists with hiring or promotion decisions for positions in New York City, this law applies to you.
What Is an Automated Employment Decision Tool?
The law targets Automated Employment Decision Tools (AEDTs) — any computational process derived from machine learning, statistical modeling, data analytics, or AI that issues a simplified output used to substantially assist or replace discretionary decision-making in hiring or promotion. If you use a resume screening platform, an interview scoring tool, or any software that flags or ranks candidates algorithmically, assume you have an AEDT until you've confirmed otherwise in writing.
The Bias Audit Requirement
The core compliance obligation is an annual independent bias audit. The audit must analyze the AEDT's outputs for disparate impact across race/ethnicity and sex categories. Specifically, it must calculate the selection rate (the proportion of applicants who are advanced or hired) for each category, along with the impact ratio: each category's selection rate divided by the rate of the most-selected category. An impact ratio below 0.8 fails the "four-fifths rule," the threshold long used in federal employment guidance as a marker of disparate impact.
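For readers who want the arithmetic concrete, here is a minimal sketch of the selection-rate and impact-ratio calculation using invented numbers. It is illustrative only: a real LL 144 audit must follow the DCWP's published methodology, including intersectional race/sex categories.

```python
from collections import Counter

# Invented applicant outcomes: (category, advanced?) pairs.
# Category labels and counts are illustrative, not real audit data.
outcomes = [
    ("Group A", True), ("Group A", True), ("Group A", True), ("Group A", False),
    ("Group B", True), ("Group B", False), ("Group B", False), ("Group B", False),
]

applied = Counter(cat for cat, _ in outcomes)
advanced = Counter(cat for cat, ok in outcomes if ok)

# Selection rate: share of applicants in each category who were advanced.
rates = {cat: advanced[cat] / applied[cat] for cat in applied}
highest = max(rates.values())

for cat, rate in sorted(rates.items()):
    impact_ratio = rate / highest
    flag = "FLAG: below four-fifths threshold" if impact_ratio < 0.8 else "ok"
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

Here Group A advances at 0.75 and Group B at 0.25, yielding an impact ratio of 0.33 for Group B, well below the 0.8 threshold the audit must surface.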
The audit must be conducted by an independent auditor — not the AEDT vendor, not internal staff — and the results must be posted publicly on the employer's website before the tool is used. That public posting must remain accessible for the duration of use and for at least six months after the tool is discontinued.
The Candidate Notice Requirement
Employers must notify candidates and employees at least 10 business days before the AEDT is used in an assessment. The notice must state that an AEDT will be used, explain what job qualifications or characteristics it evaluates, and provide information on how to request an alternative selection process if one is available. Candidates may request that alternative; employers are not required to provide one, but they must disclose whether one exists.
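Counting "10 business days" trips up scheduling more often than you would expect. The sketch below shows one way to compute the earliest permissible use date, under the simplifying assumptions that only weekends are skipped and the count starts the day after notice; confirm the actual counting convention (including holidays) with counsel.

```python
from datetime import date, timedelta

def earliest_use_date(notice_date: date, business_days: int = 10) -> date:
    """Walk forward `business_days` weekdays from the notice date.

    Simplified sketch: skips weekends only, not public holidays,
    and assumes the clock starts the day after notice is given.
    """
    d = notice_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# Example: notice sent Friday 2026-01-02 -> earliest use 2026-01-16.
print(earliest_use_date(date(2026, 1, 2)))
```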
Penalties and Enforcement Reality
The New York City Department of Consumer and Worker Protection (DCWP) enforces Local Law 144. Penalties run $500 to $1,500 per day, per violation. Each day of non-compliance is a separate violation — meaning a company using an unaudited AEDT for 90 days faces up to $135,000 in exposure from that single violation alone, before stacking across multiple candidates or multiple tools.
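The exposure math is simple enough to sanity-check in a few lines. The per-day figures come from the statute; the 90-day scenario is hypothetical.

```python
# Hypothetical scenario: one unaudited AEDT used for 90 days.
days_noncompliant = 90
daily_penalty_max = 1_500  # $500 for a first violation, up to $1,500 thereafter
print(f"Maximum single-violation exposure: ${days_noncompliant * daily_penalty_max:,}")
# -> Maximum single-violation exposure: $135,000
```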
Enforcement has been light since 2023, but that is not a compliance strategy. The DCWP has issued guidance, conducted investigations, and the law remains on the books with no sunset provision. Organizations that have not conducted their first bias audit should treat themselves as currently out of compliance if they are using covered tools in New York City hiring.
Regulated Industries Are Disproportionately Exposed
Healthcare systems, financial institutions, and insurers in New York City are among the heaviest users of AI-assisted HR and talent platforms. Applicant tracking systems with automated scoring, video interview platforms that analyze speech and facial patterns, and workforce management tools that flag candidates for advancement all fall within Local Law 144's scope. If your HR technology vendor has not provided a bias audit and you haven't required one in your contract, you, not the vendor, are the party in violation.
Illinois: The Quiet Pioneer (And Now a Model for Employment AI Law)
Status: AI Video Interview Act in effect since January 1, 2020; HB 3773 (IHRA amendment) effective January 1, 2026
Illinois was the first state to regulate AI in employment — years before the current legislative wave. Its approach is notable because it created private rights of action, not just regulatory enforcement, and it keeps expanding.
The AI Video Interview Act
The Illinois AI Video Interview Act requires any employer using AI to analyze video interviews to: (1) notify applicants before the interview that AI will be used; (2) provide a written explanation of how the AI works and what general factors it uses to assess candidates; (3) obtain applicant consent before proceeding; and (4) delete video recordings within 30 days of a request, and delete all copies — including those held by third parties — within that window.
The deletion obligation is the one that routinely surprises employers. "Third party" includes your ATS vendor, your video platform, and any downstream storage. Your contract must give you the ability to direct deletion within 30 days — and if it doesn't, you have a compliance gap today.
HB 3773: AI Discrimination as a Civil Rights Violation
Starting January 1, 2026, Illinois law makes it a civil rights violation under the Illinois Human Rights Act (IHRA) to use AI that has a discriminatory effect on any IHRA-protected basis. The protected bases include race, color, religion, sex, national origin, ancestry, age, order of protection status, marital status, physical and mental disability, military status, sexual orientation, pregnancy, and unfavorable military discharge.
The scope is broad: hiring, promotion, renewal of employment, selection for training, discipline, discharge, and tenure. The "zip code proxy" prohibition is particularly notable: the law expressly bars employers from using zip code as a proxy for any protected characteristic, a practice with obvious parallels for organizations whose models ingest geographic data, such as those in credit and insurance.
Employees and applicants who are notified that AI was used in an adverse decision have the right to request an explanation of the factors considered. That notice requirement applies even when the employer believes no discrimination occurred.
Why Illinois and NYC LL 144 Are Not the Same Compliance Problem
A common misconception is that satisfying one of these frameworks covers the other. It does not. NYC LL 144 requires a formal, published, third-party bias audit using a specific statistical methodology — the four-fifths rule across race/ethnicity and sex. Illinois HB 3773 does not require bias audits or public postings; it creates enforcement-by-complaint through the IHRA and private civil rights claims. The legal theories, remedies, and compliance mechanisms are distinct. An organization operating in both jurisdictions needs to address both, separately.
The Colorado AI Act: The Regulation That Will Shape Everything Else
Status: Signed May 2024; effective June 30, 2026 (delayed from February 1, 2026)
Colorado's SB 205 — the Colorado Artificial Intelligence Act — is the first comprehensive US state AI law modeled on the architecture of the EU AI Act. It is the most significant piece of US domestic AI legislation enacted to date, and its structure will influence how other states build their own frameworks. If you operate in Colorado and use AI in any consequential decision domain, this law requires immediate attention.
Scope: Consequential Decisions
The law applies to high-risk AI systems — those used to make "consequential decisions." A consequential decision is any decision that has a material legal or similarly significant effect on a consumer in the areas of education, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services.
Read that list. Education, employment, financial services, healthcare, housing, insurance, legal services. That is not a fringe set of use cases. That is the core operating domain of every regulated organization in the country. If your organization uses AI in clinical decision support, underwriting, loan origination, benefits administration, or workforce management — and you have any customers or employees in Colorado — you are in scope.
Developer vs. Deployer: Who Owes What
The law distinguishes between developers (entities that create or substantially modify high-risk AI systems) and deployers (entities that use high-risk AI systems in consequential decisions affecting consumers). Most regulated organizations are deployers.
Developer obligations include: disclosure to deployers of known or reasonably foreseeable risks of algorithmic discrimination; performance data across demographic groups; a public summary of high-risk AI systems; and notification to deployers if the developer becomes aware of algorithmic discrimination.
Deployer obligations are more extensive. Deployers must:
- Implement a risk management policy and program reasonably suited to the nature and scope of the AI system
- Conduct and retain an annual impact assessment — including a description of the system's purpose, known risks, steps taken to mitigate those risks, and demographic performance data
- Notify consumers when an AI system makes or substantially assists in a consequential decision, including what data was used and the consumer's right to appeal or seek correction
- Retain impact assessments and related documentation for three years
- Provide consumers an opportunity to correct inaccurate personal data used in the decision and to appeal adverse decisions, with human review where technically feasible
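One way to operationalize the list above is to define a standard record for each assessment. The sketch below is a hypothetical schema of this article's own devising, not a statutory form; the Act specifies required content, not structure, so the field names and types are assumptions to validate against the statute and any Attorney General rules.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """Illustrative record of elements SB 205 asks deployers to document.

    Field names are this sketch's own invention; confirm required
    elements against the Act before relying on this structure.
    """
    system_name: str
    intended_purpose: str
    consequential_decision_domain: str            # e.g. "insurance", "employment"
    known_discrimination_risks: list[str]
    mitigation_steps: list[str]
    data_categories_used: list[str]
    performance_by_demographic: dict[str, float]  # metric per group, from developer docs
    assessment_date: date
    retain_until: date                            # at least three years out

# Hypothetical example for an insurance deployer.
assessment = ImpactAssessment(
    system_name="claims-triage-model-v3",
    intended_purpose="Prioritize incoming insurance claims for adjuster review",
    consequential_decision_domain="insurance",
    known_discrimination_risks=["geographic features may proxy for protected class"],
    mitigation_steps=["removed zip code features", "quarterly disparity monitoring"],
    data_categories_used=["claim metadata", "policy history"],
    performance_by_demographic={"group_a": 0.91, "group_b": 0.89},
    assessment_date=date(2026, 4, 1),
    retain_until=date(2029, 4, 1),
)
```

A standardized record like this also simplifies the three-year retention requirement: every assessment lands in the same repository, in the same shape, with its own retention date attached.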
The "Reasonable Care" Standard
Colorado's law does not establish a bright-line checklist. It requires deployers to exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. That standard will be interpreted in enforcement proceedings, and its content will be shaped by what peer organizations in your industry are doing. The practical implication: organizations that do nothing will face much greater exposure than those that document a good-faith risk management effort, even if that effort is imperfect.
Penalties and Enforcement
Colorado's Attorney General enforces the Act. Penalties reach up to $20,000 per violation, per consumer or transaction. There is no private right of action — only AG enforcement. A small-business limited exemption applies to deployers only, with conditions, and does not cover developers. The exemption is narrow enough that most regulated enterprises will not qualify.
The Federal Preemption Threat
The December 2025 Trump Executive Order establishing a National AI Policy Framework directed the Department of Justice to create an AI Litigation Task Force specifically to challenge state AI laws — and Colorado's Act is widely understood to be a primary target. The EO also directed the FTC to issue policy statements that could be used to preempt state consumer protection-based AI rules, and suggested withholding federal broadband funding from states that enact "AI regulations inconsistent with" federal policy.
These are real pressures. But executive orders cannot preempt state law; only Congress can. Constitutional challenges take years to resolve through the federal courts. Colorado's law takes effect June 30, 2026, and the organizations that wait for legal clarity before starting their impact assessment programs will arrive at that date without the documentation the law requires.
Texas TRAIGA: The Business-Friendly Counter-Model
Status: Signed June 22, 2025; effective January 1, 2026
Texas became the second state to enact comprehensive AI legislation, and its approach is instructive precisely because it is so different from Colorado's. TRAIGA — the Texas Responsible AI Governance Act — reflects a philosophy that AI regulation should protect against genuine harm without burdening private sector development.
Categorical Prohibitions That Apply to Everyone
TRAIGA establishes a set of prohibitions that apply to all entities, private and public, without exception:
- Using AI for behavioral manipulation that exploits psychological vulnerabilities
- Using AI to discriminate on protected bases
- Using AI to generate child sexual abuse material
- Using AI to violate constitutional rights
These prohibitions are categorical and non-negotiable. No sandbox, no exemption, no safe harbor applies to them.
The Narrowing: Private Sector Largely Exempt from Substantive Obligations
Beyond the categorical prohibitions, most of TRAIGA's substantive compliance obligations apply to governmental entities only, not private companies. This is the most significant contrast with Colorado's approach. A private insurer in Texas faces the categorical prohibitions but does not face the risk management program and impact assessment requirements that a similar insurer in Colorado faces.
Texas also established a 36-month regulatory sandbox allowing companies to test AI systems under relaxed enforcement conditions, and created an AI Advisory Council to shape ongoing regulatory development. The sandbox is the most business-friendly feature of any US AI law to date.
What TRAIGA Means for Multi-State Organizations
A regulated organization with operations in both Colorado and Texas faces a compliance problem that neither state's law alone would create: the Colorado framework requires documented risk management and impact assessments; Texas does not — but Texas's categorical prohibitions create a baseline floor that applies regardless of Colorado's requirements. Compliance programs need to address both, and the documentation standards are not interchangeable.
California: Quantity Over Comprehensiveness
Status: 17+ AI bills enacted in the 2024 session; major laws effective January 1, 2026
California did not pass a single comprehensive AI law. What it did pass, across the 2024-2025 legislative sessions, is a stack of targeted obligations that collectively impose significant compliance requirements on organizations operating in California — particularly those developing or deploying generative AI.
The Laws Regulated Organizations Need to Know
AB 2013 (Training Data Transparency) — effective January 1, 2026 — requires any business that makes a generative AI system publicly available to publish documentation of the data used to train it, including a high-level summary of the data types, sources, and the date ranges of the training data. This is a transparency obligation, not a prohibition. But for regulated organizations deploying generative AI tools — clinical decision support, document summarization, coding assistants — it creates due diligence requirements around vendor disclosure.
SB 942 (AI Content Provenance) requires large AI providers to implement content provenance and watermarking mechanisms for AI-generated content, and to offer free tools allowing users to identify AI-generated content. Regulated organizations deploying AI in patient communications, marketing, or public disclosures need to understand what their vendors' watermarking and provenance capabilities are.
AB 1008 (CCPA Expansion) explicitly extends the California Consumer Privacy Act to cover personal information stored in AI systems — including information used to train AI models or stored in AI model weights. This is the most direct impact on regulated industries: if your AI system was trained on or processes patient data, customer financial data, or any other California consumer personal information, your CCPA compliance program must now account for that data's presence in AI systems.
Governor Newsom's 2024 veto of SB 1047, the sweeping frontier AI liability bill, drew a clear line: California's approach favors transparency obligations over liability regimes. The 2025 successor, SB 53, pursues transparency requirements for frontier AI developers without the liability framework. That political line matters: California is shaping a transparency-focused model rather than a damages-based enforcement model, which is a meaningfully different compliance posture from what Colorado has created.
The Federal Landscape: A Wild Card That Changes Everything
The federal government's role in US AI regulation is best understood as a set of pressures rather than a settled framework.
The Biden Executive Order 14110 (October 2023) created meaningful infrastructure: red-teaming requirements, cybersecurity protocols for critical infrastructure AI, and federal agency coordination. It was revoked on January 20, 2025 — the first day of the Trump administration.
The Trump EO 14179 ("Removing Barriers to American Leadership in Artificial Intelligence," January 2025) directed agencies to rescind Biden-era AI regulations and instructed the Office of Science and Technology Policy to develop AI guidance "free from ideological bias." The December 2025 follow-on EO creating the AI Litigation Task Force is the most significant federal development for state AI law since Colorado's Act was signed.
The NIST AI Risk Management Framework 1.0 (January 2023), and its July 2024 GenAI Profile with 400+ recommended actions, remains the most useful voluntary federal reference for regulated organizations building AI governance programs. It is not law, it carries no penalties, and it does not preempt state requirements. But it provides a defensible, federally recognized governance structure that sector regulators in banking, healthcare, and medical devices (including the FDA) have begun to reference in their own guidance.
No comprehensive federal AI legislation has passed as of April 2026. The constitutional question is settled: executive orders cannot preempt state law. Only Congress can do that. Legal battles over the Trump EO's preemption mechanisms are expected to run through 2026 and likely into 2027. Regulated organizations that treat federal preemption as a compliance strategy rather than a long-shot bet are taking on substantial legal risk.
What Regulated Organizations Need to Do Now
1. Map Your AI Use Against Jurisdictional Exposure
You cannot comply with laws you haven't mapped. If you have employees in New York City, operations in Colorado, employees or applicants in Illinois, or customers in any of these states, you have current or near-term legal obligations. Start by listing where you operate and which laws apply. Then work backward from that list to your AI inventory.
2. Inventory Your Consequential and Employment AI
Colorado's "consequential decision" scope — healthcare, financial services, insurance, employment, housing, education, legal services — maps directly onto regulated industry operations. Employment AI tools — screening software, interview analysis platforms, performance management systems — trigger NYC LL 144 and Illinois obligations. Build an inventory that classifies each AI system by use domain and then by which laws' definitions it falls within.
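A first-pass inventory can be as simple as a table of systems, decision domains, and jurisdictions, run through scope rules. The sketch below uses deliberately simplified, assumption-laden rules; real scoping requires legal analysis of each statute's definitions, and the system names are invented.

```python
# Illustrative inventory triage: map each AI system to the laws whose scope
# definitions it may fall within. Domain labels and rules are simplified
# assumptions, not legal determinations.
COLORADO_DOMAINS = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

systems = [
    {"name": "resume-screener", "domain": "employment", "jurisdictions": {"NYC", "IL", "CO"}},
    {"name": "underwriting-model", "domain": "insurance", "jurisdictions": {"CO", "TX"}},
]

for s in systems:
    laws = []
    if s["domain"] in COLORADO_DOMAINS and "CO" in s["jurisdictions"]:
        laws.append("Colorado AI Act (deployer duties)")
    if s["domain"] == "employment" and "NYC" in s["jurisdictions"]:
        laws.append("NYC LL 144 (bias audit + notice)")
    if s["domain"] == "employment" and "IL" in s["jurisdictions"]:
        laws.append("IL HB 3773 / AI Video Interview Act")
    if "TX" in s["jurisdictions"]:
        laws.append("TX TRAIGA (categorical prohibitions)")
    print(s["name"], "->", laws or ["no mapped obligations; re-check scope"])
```

Even a toy pass like this makes one thing visible quickly: most regulated organizations discover that a single system triggers obligations in multiple jurisdictions at once.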
3. Conduct Impact Assessments — Don't Wait for the Deadline
Colorado's June 30, 2026 deadline is the compliance floor, not the starting gun. Impact assessments require documentation, may surface algorithmic discrimination findings that need remediation, and depend on data that vendors may not provide without specific contractual requirements. Organizations that started in Q4 2025 were not early; organizations starting now are not too late, but they need to move immediately.
4. Establish a Developer/Deployer Documentation Chain
Under Colorado's Act, deployers are entitled to specific documentation from developers: training data demographics, known limitations, performance metrics across protected classes, and notification if discrimination is discovered. Audit your current AI vendor contracts. Most standard SaaS agreements do not include these disclosures. You need contract amendments before you can complete your impact assessments — and that negotiation takes time.
5. Get Your Bias Audit Calendar Set
NYC LL 144 requires annual independent audits. If you haven't conducted your first audit, you are already out of compliance on any AEDT you're using in NYC hiring. Find an accredited independent auditor — not your AEDT vendor's recommended audit service — and schedule the audit. The published results must be on your website before the tool is used again.
6. Don't Wait for Federal Preemption
The DOJ AI Litigation Task Force is new and its strategy is unclear. Constitutional challenges to state AI laws take years. Meanwhile, Colorado's Act takes effect on June 30, 2026. NYC LL 144 has been in effect since July 2023. Illinois HB 3773 took effect January 1, 2026. The cost of being wrong on the preemption bet is not just penalties — it's the discrimination claims, consumer protection actions, and reputational exposure that the underlying AI failures create, independent of any AI-specific law.
The Bottom Line
The US AI regulatory patchwork is real, it is growing, and it is not going to consolidate into a single federal standard before several of these state laws become fully enforceable. Regulated industries face disproportionate exposure because the categories state legislatures have identified as high-stakes AI use cases — healthcare, lending, insurance, employment — are the same categories that define your business operations.
What the major US frameworks converge on, despite their differences, is three things: transparency about how AI systems make decisions, non-discrimination as a baseline obligation, and documented accountability through risk management programs, impact assessments, or bias audits. Organizations that build AI governance infrastructure to address these three principles will find that most specific regulatory requirements fall naturally within a program already designed for them. Organizations that wait for a simpler environment will find themselves retrofitting compliance into systems that were never designed for it.
If you are navigating AI compliance for a regulated organization and need a practical roadmap, our AI governance advisory practice helps organizations build the governance infrastructure these laws require — from initial jurisdictional exposure mapping through impact assessment design, vendor contract remediation, and ongoing program management. Schedule a consultation to discuss where your organization stands.
FAQ: US AI Regulations for Regulated Organizations
Q: Does the Colorado AI Act apply to my company if we're not based in Colorado?
A: Yes, if you deploy AI systems that make consequential decisions affecting Colorado consumers — in healthcare, lending, insurance, employment, housing, education, or legal services — you are in scope regardless of where your company is headquartered. The law follows the consumer, not the company's address.
Q: What is an Automated Employment Decision Tool under NYC Local Law 144?
A: Under NYC LL 144, an AEDT is any computational process derived from machine learning, statistical modeling, data analytics, or AI that issues a simplified output — including scores, classifications, or recommendations — used to substantially assist or replace discretionary decision-making in hiring or promotion for positions in New York City.
Q: Will federal preemption eliminate state AI laws?
A: Possibly, but not imminently. Executive orders cannot preempt state law — only Congress can. The December 2025 Trump Executive Order directing the DOJ to create an AI Litigation Task Force creates legal pressure, but constitutional challenges to state AI laws take years to resolve. State laws are in effect and enforceable now.
Q: Do the Colorado AI Act and NYC Local Law 144 cover the same things?
A: They overlap but are not identical. NYC LL 144 is narrowly focused on employment decisions and requires mandatory annual bias audits. The Colorado AI Act covers employment as one of several consequential decision domains and uses a risk management and impact assessment framework rather than mandatory third-party audits. Organizations operating in both jurisdictions must address both frameworks separately.
Q: What should a regulated organization do first?
A: Start with a jurisdictional exposure map — identify where you operate and which laws apply. Then inventory your AI systems against those laws' scope definitions. Colorado's impact assessment process and NYC's bias audit requirement are the two most concrete near-term obligations that require external expertise and advance scheduling.
Last updated: 2026-04-09
Jared Clark
JD, MBA, PMP, CMQ-OE, RAC — AI Governance Advisor
Jared Clark is the founder of Regulated AI Consulting and Certify Consulting. He advises regulated organizations on AI governance, risk management, and compliance frameworks, drawing on a background spanning regulatory law, quality systems, and project management across pharmaceutical, healthcare, and financial services industries.