State of AI Governance 2025: Regulations, Standards, and What's Coming Next
By Jared Clark, JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, RAC | Principal Consultant, Certify Consulting
If you've been waiting for AI governance to "settle down" before taking action, I have bad news: the landscape is moving faster in 2025 than it did in 2024—and it was already moving fast. The organizations I work with across life sciences, financial services, and critical infrastructure are no longer asking whether they need an AI governance program. They're asking which frameworks apply, in what order, and how quickly they need to comply.
This guide answers those questions definitively. Whether you're a compliance officer trying to map your regulatory exposure, a quality leader building an AI management system from scratch, or a board member trying to understand fiduciary risk, this is your current-state briefing.
Why AI Governance Has Become Non-Negotiable in 2025
The numbers tell the story. According to the OECD AI Policy Observatory, over 60 countries have enacted or proposed AI-related legislation as of early 2025, compared to fewer than 10 in 2019. The EU AI Act entered into force in August 2024, with its first binding obligations—on prohibited AI practices—taking effect in February 2025. In the United States, at least 40 states introduced AI-related legislation in 2024, with several bills crossing the finish line.
For regulated industries, the stakes are especially high. The FDA's AI/ML-Based Software as a Medical Device (SaMD) action plan, FINRA's evolving AI surveillance guidance, and the FAA's nascent framework for AI in aviation decision systems all share a common thread: regulators are no longer treating AI as an exotic edge case—they're treating it as a quality and risk management problem that existing compliance infrastructure must absorb.
At Certify Consulting, I've guided over 200 clients through regulatory transitions across quality, compliance, and now AI governance. The pattern I see in 2025 is clear: organizations that built robust quality management systems are adapting faster—because AI governance borrows heavily from established QMS, risk management, and software validation concepts.
The Major AI Governance Frameworks: A Comparative Overview
Before you can build a governance program, you need to understand what's actually in force, what's approaching, and what remains voluntary. Here's how the major frameworks stack up:
| Framework | Jurisdiction | Status (2025) | Mandatory? | Primary Scope |
|---|---|---|---|---|
| EU AI Act | European Union | Phased enforcement (2024–2027) | Yes (for market access) | Risk-tiered AI systems; prohibited uses, high-risk categories |
| ISO 42001:2023 | International | Published; certification available | No (market-driven) | AI management systems; organizational governance |
| NIST AI RMF 1.0 | United States | Active; used in federal procurement | No (but referenced in contracts) | Risk management across AI lifecycle |
| FDA AI/ML SaMD Framework | United States | Evolving guidance | Yes (for regulated medical devices) | AI in medical software, predetermined change control |
| EU AI Liability Directive | European Union | Proposal withdrawn (2025 Commission work programme) | No | Civil liability for AI-caused harm |
| UK AI Principles | United Kingdom | Sector-by-sector guidance | No (pro-innovation stance) | Cross-sector; delegated to sector regulators |
| China AI Regulations | China | Multiple active regulations | Yes (for covered systems) | Generative AI, recommendation algorithms, deepfakes |
| IEEE P2863 / P7000 series | International | Active development | No (technical standards) | Organizational AI governance, ethical design |
As of 2025, the EU AI Act represents the most comprehensive binding AI regulatory framework globally, establishing a four-tier risk classification system that determines compliance obligations for any organization deploying AI in or affecting EU markets.
The EU AI Act: What's Actually Required and When
The EU AI Act is the framework most of my clients in regulated industries are focused on—and with good reason. Its extraterritorial reach means that any organization placing an AI system on the EU market or whose AI outputs are used in the EU must comply, regardless of where the organization is headquartered.
The Risk Tier Structure
The Act's architecture is built around four risk tiers:
1. Unacceptable Risk (Prohibited) — AI systems that manipulate behavior, exploit vulnerabilities, enable social scoring by public authorities, or conduct real-time biometric surveillance in public spaces. These were prohibited as of February 2, 2025.
2. High Risk — AI used in critical infrastructure, education, employment decisions, essential private and public services, law enforcement, migration management, and administration of justice. Also included: AI safety components in regulated products (medical devices, machinery, vehicles). Full obligations apply from August 2, 2026.
3. Limited Risk — Systems with transparency obligations only (e.g., chatbots must identify themselves as AI). Obligations apply from August 2, 2026.
4. Minimal Risk — Spam filters, AI in video games. No specific obligations beyond existing law.
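To make the tier logic concrete, here is a minimal Python sketch of a first-pass screening step. The use-case labels are my own paraphrase of the Act's annex categories, not an official taxonomy, and nothing here substitutes for legal classification:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative use-case labels, paraphrased from the Act's annexes (not official).
PROHIBITED = {"social_scoring", "behavioral_manipulation", "realtime_public_biometrics"}
HIGH_RISK = {"employment_screening", "credit_scoring", "medical_device_component",
             "critical_infrastructure", "education_scoring"}
TRANSPARENCY_ONLY = {"customer_chatbot", "synthetic_media_generation"}

def screen_risk_tier(use_case: str) -> RiskTier:
    """First-pass triage only; final classification requires legal review."""
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(screen_risk_tier("employment_screening").value)  # high-risk
```

Treat any automated screen as triage; the actual determination turns on the Act's detailed annex definitions and exemptions.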
What High-Risk Compliance Actually Requires
For organizations in life sciences, financial services, insurance, and HR technology, the high-risk classification is the one that demands immediate attention. Requirements include:
- Risk management system (Article 9) — documented and integrated with the AI lifecycle
- Data governance (Article 10) — training data documentation, bias evaluation, data quality criteria
- Technical documentation (Article 11) — sufficient for conformity assessment
- Transparency and instructions for use (Article 13)
- Human oversight measures (Article 14) — by design, not as an afterthought
- Accuracy, robustness, and cybersecurity (Article 15)
- Conformity assessment (Article 43) — either self-assessment or third-party, depending on category
- EU Declaration of Conformity and CE marking
- Post-market monitoring (Article 72)
If that list looks familiar, it should. It maps almost directly onto ISO 13485 (medical devices), ISO 9001 (quality management), and IEC 62304 (medical device software) structural requirements. Organizations that already have mature QMS infrastructure have a significant head start.
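To illustrate that mapping, here is a rough crosswalk expressed as a lookup table. The pairings are my own working shorthand, not an official harmonization between the AI Act and these standards:

```python
# Illustrative analogues only -- my working shorthand, not an official mapping.
AI_ACT_QMS_CROSSWALK = {
    "Art. 9 (risk management)":          "ISO 13485 / ISO 9001 risk-based planning",
    "Art. 10 (data governance)":         "QMS document and record controls",
    "Art. 11 (technical documentation)": "IEC 62304 software documentation",
    "Art. 14 (human oversight)":         "QMS process controls and training",
    "Art. 15 (accuracy, robustness)":    "IEC 62304 verification and maintenance",
    "Art. 72 (post-market monitoring)":  "ISO 13485 post-market surveillance",
}

for article, analogue in AI_ACT_QMS_CROSSWALK.items():
    print(f"{article:36} ~ {analogue}")
```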
ISO 42001:2023: The Management System Framework You Can Certify To
While the EU AI Act tells you what outcomes to achieve, ISO 42001:2023 gives you an organizational system for achieving them. It's the world's first certifiable AI management system standard, published in December 2023, and I've been working with clients on implementation and gap assessments since its release.
ISO 42001:2023 is structured identically to ISO 9001 and ISO 27001 using the High-Level Structure (HLS), meaning organizations with existing certified management systems can integrate AI governance without building a parallel bureaucracy.
Key Clauses to Understand
- Clause 4 (Context of the Organization): Establish whom your AI affects: internal stakeholders, affected communities, and society broadly. AI-specific context includes your organization's role (provider, deployer, or both) and your AI policy.
- Clause 6.1 (Actions to address risks and opportunities): ISO 42001 introduces Annex A controls and Annex B implementation guidance for AI-specific risk. Clause 6.1.2 specifically requires identification of AI-related risks and opportunities.
- Clause 8 (Operation): This covers AI system lifecycle—design, development, deployment, and monitoring. It aligns with software development lifecycle (SDLC) frameworks many regulated companies already use.
- Clause 9 (Performance evaluation): Monitoring and measurement of AI systems, internal audits, and management review.
- Clause 10 (Improvement): Nonconformity handling and continual improvement—familiar territory for anyone running a QMS.
The standard also includes three annexes you need to understand:
- Annex A: AI-specific controls (38 controls across 9 control categories)
- Annex B: Implementation guidance for the Annex A controls
- Annex C: Potential AI-related organizational objectives and risk sources
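For planning purposes, it helps to hold the Annex A landscape in a simple structure. The category names below are paraphrased from the standard; consult the published text for exact titles and the controls under each:

```python
# Paraphrased Annex A control categories -- see the published standard for exact titles.
ANNEX_A_CATEGORIES = {
    "A.2": "Policies related to AI",
    "A.3": "Internal organization",
    "A.4": "Resources for AI systems",
    "A.5": "Assessing impacts of AI systems",
    "A.6": "AI system life cycle",
    "A.7": "Data for AI systems",
    "A.8": "Information for interested parties",
    "A.9": "Use of AI systems",
    "A.10": "Third-party and customer relationships",
}

for ref, category in ANNEX_A_CATEGORIES.items():
    print(f"{ref}: {category}")
```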
For regulated organizations asking me where to start, my standard answer is: get certified to ISO 42001 first. It creates the documented management system that regulators—FDA, EMA, financial supervisors—will expect to see when they audit your AI practices.
NIST AI RMF: The U.S. Federal Anchor
The National Institute of Standards and Technology's AI Risk Management Framework (AI RMF 1.0), released in January 2023, has become the de facto governance reference for U.S. federal agencies and their contractors. Its four core functions—Govern, Map, Measure, Manage—provide a practical vocabulary for AI risk that's showing up in federal contracts, state legislation, and sector-specific guidance.
NIST has also published the companion AI RMF Playbook and, in July 2024, the Generative AI Profile (NIST AI 600-1), which addresses risks specific to large language models, including hallucination, data poisoning, and prompt injection. For organizations deploying LLM-based tools internally or in products, NIST AI 600-1 is essential reading.
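To show how the Govern/Map/Measure/Manage vocabulary lands on one of the generative-AI risks NIST AI 600-1 names, here is a sketch of a risk-register entry. The schema and entries are my own illustration, not a NIST-prescribed format:

```python
from dataclasses import dataclass

@dataclass
class RMFRiskEntry:
    """Risk-register record organized by the AI RMF core functions (my own schema)."""
    risk: str
    govern: str   # accountability and policy
    map_: str     # context: where the risk arises
    measure: str  # how the risk is tested and tracked
    manage: str   # response and ongoing monitoring

prompt_injection = RMFRiskEntry(
    risk="Prompt injection in a customer-facing LLM assistant",
    govern="Owner: application security lead; covered by the AI acceptable-use policy",
    map_="Arises wherever untrusted text reaches the model (chat input, retrieved docs)",
    measure="Red-team test suite run each release; track guardrail bypass rate",
    manage="Input filtering, output guardrails, incident playbook, quarterly review",
)
print(prompt_injection.risk)
```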
The RMF is not a certification scheme, but several accreditation bodies and procurement officers are beginning to treat alignment with NIST AI RMF as a prerequisite. In federal healthcare contracting specifically, I expect NIST AI RMF alignment to become a contract requirement within 12–18 months.
Sector-Specific Developments You Cannot Ignore
Life Sciences and Medical Devices
The FDA's approach to AI in medical devices is crystallizing around two concepts: Predetermined Change Control Plans (PCCPs) and Total Product Lifecycle (TPLC) oversight. Draft guidance published in 2023–2024 calls for manufacturers to document in advance how an AI/ML model may change post-market and what controls govern those changes.
For combination products and AI-enabled diagnostics, the intersection of FDA software validation requirements, EU AI Act high-risk classification, and ISO 42001 is where the real compliance complexity lives. Organizations that haven't mapped all three simultaneously are building siloed compliance programs that will create redundant work and audit exposure.
Financial Services
The SEC is also moving: its predictive data analytics proposal (issued 2023, not yet adopted) targets conflicts of interest in AI-driven investment advice by registered investment advisers and broker-dealers, and its 2023 cybersecurity incident disclosure rules can capture material AI system failures at public companies. FINRA's 2024 Report on AI highlighted model risk management, explainability, and supervisory system adequacy as primary exam focus areas.
For banks, the interagency model risk management guidance (Federal Reserve SR 11-7 and OCC Bulletin 2011-12, later adopted by the FDIC) remains the foundational reference, but examiners are now applying it explicitly to AI models. If your model inventory doesn't distinguish between traditional statistical models and AI/ML systems, that gap will surface in your next examination.
Employment and HR Technology
This is the most active U.S. state-level battleground. New York City Local Law 144 (automated employment decision tools) has been in effect since 2023. Illinois, Colorado, and Maryland have passed or advanced similar laws. These laws require bias audits, candidate notifications, and in some cases, opt-out rights. If your organization uses AI for resume screening, interview scoring, or promotion decisions, you are almost certainly subject to at least one of these laws today.
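The bias audits these laws require center on impact ratios: roughly, each category's selection rate divided by the selection rate of the most-selected category. Here is that arithmetic in miniature, with made-up numbers; an actual Local Law 144 audit must follow the law's demographic categories and be performed by an independent auditor:

```python
# Selection rates by category -- made-up numbers for illustration only.
selection_rates = {"group_a": 0.42, "group_b": 0.35, "group_c": 0.28}

highest = max(selection_rates.values())
impact_ratios = {group: rate / highest for group, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    # 0.80 mirrors the traditional four-fifths screening rule of thumb,
    # a review trigger rather than a legal threshold under these laws.
    flag = "  <-- review" if ratio < 0.80 else ""
    print(f"{group}: {ratio:.2f}{flag}")
```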
What's Coming Next: The 2025–2027 Horizon
Between 2025 and 2027, organizations operating in regulated industries face at least four major AI governance compliance deadlines simultaneously—a convergence that makes integrated, standards-based AI management systems not merely advisable but operationally necessary.
Here's the timeline that should be driving your planning:
| Timeline | Event |
|---|---|
| Feb 2025 | EU AI Act prohibited practices ban effective |
| Aug 2025 | EU AI Act GPAI obligations effective; governance requirements for providers of general-purpose AI models |
| 2025 (ongoing) | U.S. state AI legislation wave continues; expect 15+ new state laws |
| Aug 2026 | EU AI Act high-risk AI system obligations fully effective |
| 2026 | Expected FDA final guidance on AI/ML-based SaMD |
| 2026–2027 | EU approach to AI liability revisited, following the Commission's 2025 withdrawal of the AI Liability Directive proposal |
| Aug 2027 | EU AI Act obligations for AI in regulated products (medical devices, machinery) effective |
| 2027+ | ISO 42001 certification expected to become a de facto market requirement in regulated sectors |
Beyond regulatory timelines, watch for these structural shifts:
1. Conformity Assessment Bodies Getting Up to Speed: Third-party audit bodies are actively training assessors on ISO 42001 and EU AI Act conformity assessment. The bottleneck in 2025 is assessor capacity—if you want a 2026 certification or notified body review, start your preparation now.
2. AI Governance in Procurement and M&A Due Diligence: Investment banks and private equity firms are beginning to include AI governance maturity in due diligence questionnaires. Target companies without documented AI management systems are increasingly flagged as compliance risks.
3. Insurance and AI Risk: Cyber insurers are beginning to ask about AI governance controls as part of underwriting. The connection between AI model failures, data breaches, and cyber liability is increasingly well-understood by underwriters.
4. International Regulatory Convergence: The EU AI Act's extraterritorial effect, combined with the OECD AI Principles (adopted by 46+ countries), is creating pressure toward a global floor of AI governance requirements—even in jurisdictions without their own binding AI law.
How to Build Your AI Governance Program: A Practical Starting Point
Across my work with 200+ clients, the organizations that build effective AI governance programs share a common approach. They don't start with policy documents—they start with inventory and risk stratification.
Step 1: AI System Inventory
Document every AI system your organization uses or provides. Include vendor-supplied tools, embedded AI features in enterprise software (ERP, CRM, HRIS), and internally developed models. This inventory underpins ISO 42001 Clause 8 (Operation) and is expected by every regulatory framework covered in this article.
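In practice, an inventory is a structured record per system. Here is a minimal sketch as a Python dataclass; the field set reflects what I typically see requested and is illustrative, not mandated by any framework:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry per AI system; illustrative fields, not a mandated schema."""
    name: str
    owner: str                          # accountable business owner
    source: str                         # "vendor", "embedded", or "internal"
    purpose: str                        # intended use, in plain language
    data_categories: list[str] = field(default_factory=list)
    decision_impact: str = ""           # what the output influences
    eu_ai_act_tier: str | None = None   # filled in at Step 2
    sector_classification: str | None = None

crm_scoring = AISystemRecord(
    name="CRM lead-scoring model",
    owner="VP Sales Operations",
    source="embedded",
    purpose="Rank inbound leads for follow-up",
    data_categories=["contact data", "behavioral data"],
    decision_impact="Sales outreach prioritization",
)
print(crm_scoring.name, "->", crm_scoring.source)
```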
Step 2: Risk Classification
For each system, apply the EU AI Act risk tier classification and your sector-specific regulatory classification (FDA device classification, OCC model risk tier, etc.). Identify which systems are high-risk under multiple frameworks simultaneously—these require immediate prioritization.
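Prioritization can then be as simple as counting the frameworks under which each system draws heightened obligations. A self-contained sketch with hypothetical systems and labels:

```python
# Hypothetical classifications per system; tier and class labels are illustrative.
classified = [
    {"name": "Resume screening tool",  "eu_tier": "high-risk", "sector": "none"},
    {"name": "AI-enabled diagnostic",  "eu_tier": "high-risk", "sector": "FDA Class II"},
    {"name": "Marketing chatbot",      "eu_tier": "limited",   "sector": "none"},
]

def priority(system: dict) -> int:
    """Count frameworks under which the system draws heightened obligations."""
    score = 0
    if system["eu_tier"] == "high-risk":
        score += 1
    if system["sector"] != "none":
        score += 1
    return score

for s in sorted(classified, key=priority, reverse=True):
    print(f"{priority(s)} framework(s): {s['name']}")
```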
Step 3: Gap Assessment Against ISO 42001
A structured gap assessment against ISO 42001:2023 will simultaneously surface your readiness for EU AI Act compliance, NIST AI RMF alignment, and sector-specific AI governance expectations. This is the single highest-leverage investment most organizations can make right now.
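Mechanically, a gap assessment is a status walk across each requirement area, rolled up into a readiness view. A sketch using the paraphrased Annex A categories from earlier; the status values and scoring convention are mine, not the standard's:

```python
# Status per Annex A control category -- statuses and scoring are my own convention.
gap_status = {
    "A.2 Policies related to AI":             "documented",
    "A.3 Internal organization":              "partial",
    "A.4 Resources for AI systems":           "partial",
    "A.5 Assessing impacts of AI systems":    "missing",
    "A.6 AI system life cycle":               "partial",
    "A.7 Data for AI systems":                "missing",
    "A.8 Information for interested parties": "documented",
    "A.9 Use of AI systems":                  "partial",
    "A.10 Third-party relationships":         "missing",
}

weights = {"documented": 1.0, "partial": 0.5, "missing": 0.0}
readiness = sum(weights[s] for s in gap_status.values()) / len(gap_status)
print(f"ISO 42001 readiness (rough): {readiness:.0%}")

gaps = [area for area, status in gap_status.items() if status == "missing"]
print("Priority gaps:", gaps)
```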
Step 4: Integrated Compliance Architecture
Don't build a standalone AI compliance program. Integrate AI governance into your existing QMS, ERM, and information security management systems. ISO 42001's HLS structure is specifically designed to enable this integration.
Step 5: Certification and Regulatory Engagement
Pursue ISO 42001 certification on a timeline that positions you ahead of the August 2026 EU AI Act high-risk obligations. Simultaneously, engage proactively with sector regulators—FDA, OCC, FINRA—about your AI governance approach. Regulators consistently treat proactive engagement as a mitigating factor in enforcement.
For organizations that want structured support through this process, explore our AI governance advisory services at regulatedai.consulting or visit Certify Consulting to learn more about our full compliance practice.
FAQ: AI Governance Regulations and Standards
Does the EU AI Act apply to U.S. companies?
Yes. The EU AI Act applies to any provider that places an AI system on the EU market, any deployer using an AI system in the EU, and any provider or deployer located outside the EU whose AI system outputs are used within the EU. U.S. companies with EU customers, EU employees, or EU operations need to assess their compliance obligations now.
What's the difference between ISO 42001 and the EU AI Act?
ISO 42001:2023 is a voluntary international standard that specifies requirements for an AI management system—it tells you how to organize and govern your AI activities. The EU AI Act is binding law that specifies what outcomes AI systems must achieve and what restrictions apply. ISO 42001 certification can support EU AI Act conformity assessment but does not replace it. They're complementary, not interchangeable.
How long does ISO 42001 certification take?
For organizations with an existing certified management system (ISO 9001, ISO 27001), ISO 42001 certification typically takes 6–12 months from gap assessment to certification audit. For organizations starting from scratch, 12–18 months is a more realistic timeline. Starting now positions most organizations comfortably ahead of the August 2026 EU AI Act high-risk deadline.
Is NIST AI RMF mandatory for U.S. companies?
The NIST AI RMF is currently voluntary for private sector organizations. However, it is increasingly referenced in federal contracts, grant requirements, and state legislation. For federal contractors and healthcare organizations, alignment with NIST AI RMF is effectively becoming a market requirement even in the absence of formal mandate.
What should regulated industries prioritize first?
Start with an AI system inventory and risk classification. Without knowing what AI systems you operate and how they're classified under applicable frameworks, you cannot prioritize correctly. From there, a gap assessment against ISO 42001 is the highest-leverage next step—it creates visibility across EU AI Act, NIST AI RMF, and sector-specific requirements simultaneously. Learn more about our AI governance gap assessment approach.
The Bottom Line
AI governance in 2025 is not a future problem. The EU AI Act's prohibited practice ban is already in effect. High-risk AI obligations arrive in August 2026. State-level employment AI laws are active today. FDA and financial regulators are conducting AI-specific examinations now.
The organizations that will navigate this landscape successfully are those treating AI governance as a management system discipline—structured, documented, integrated with existing quality and compliance infrastructure, and continuously monitored. That's exactly the model ISO 42001 provides and exactly what every major regulatory framework is moving toward.
At Certify Consulting, I've maintained a 100% first-time audit pass rate across 8+ years of complex regulatory work. The disciplines that drive that outcome—thorough gap assessment, integrated compliance architecture, proactive regulatory engagement—apply directly to AI governance. The organizations that engage now will have structured programs when August 2026 arrives. The organizations that wait will be scrambling.
Last updated: 2025-06-10
Jared Clark is the principal consultant at Certify Consulting, specializing in AI governance, quality management systems, and regulatory compliance for life sciences, financial services, and other regulated industries.