
How to Build an AI Risk Management Framework from Scratch


Jared Clark

March 16, 2026

Last updated: 2026-03-16


If you lead quality, compliance, or regulatory affairs at a regulated organization and someone in your executive suite has just asked, "Do we have an AI risk management framework?"—this article is for you. Over the past eight-plus years and across more than 200 client engagements, the question I hear most often isn't whether to manage AI risk; it's where to start when you have no existing structure to build on.

The answer is: you start with a framework—a documented, repeatable system that identifies, evaluates, controls, and monitors the risks specific to artificial intelligence in your operating environment. Below is the step-by-step methodology I use with clients ranging from early-stage medtech companies to large pharmaceutical manufacturers and financial services institutions.


Why a Dedicated AI Risk Management Framework Is No Longer Optional

Regulators and standards bodies have moved decisively. The EU AI Act, which entered into force in August 2024, mandates risk management systems for providers and deployers of high-risk AI systems—with penalties reaching €35 million or 7% of global annual turnover for the most serious violations. ISO 42001:2023, the first international standard for AI management systems, dedicates clause 6.1 entirely to actions that address risks and opportunities. In the United States, the FDA's AI/ML-Based Software as a Medical Device (SaMD) Action Plan and the NIST AI Risk Management Framework (AI RMF 1.0, published January 2023) both reinforce the expectation that organizations operating AI in regulated contexts will maintain formal risk controls.

A 2023 McKinsey survey found that only 21% of organizations had formally embedded AI risk management into their enterprise risk programs, despite 55% reporting active AI deployments. That gap is exactly where regulatory scrutiny is concentrated—and where a well-built framework creates competitive advantage.


The Six Core Components of an AI Risk Management Framework

Before diving into the build steps, it helps to understand the architecture. A mature AI risk management framework is built on six interdependent components:

| Component | Purpose | Reference Standard |
| --- | --- | --- |
| 1. Governance & Accountability | Define ownership, roles, and oversight bodies | ISO 42001:2023 clause 5.1; EU AI Act Art. 9 |
| 2. AI Inventory & Classification | Catalog all AI systems and assign risk tiers | NIST AI RMF – GOVERN 1.1; EU AI Act Art. 6 |
| 3. Risk Identification & Assessment | Identify and evaluate AI-specific hazards | ISO 31000:2018; ISO 42001:2023 clause 6.1.2 |
| 4. Risk Controls & Mitigation | Implement technical and procedural safeguards | EU AI Act Art. 9(2); NIST AI RMF – MANAGE |
| 5. Monitoring & Performance Measurement | Detect drift, degradation, and new risks | ISO 42001:2023 clause 9.1; FDA AI/ML SaMD |
| 6. Documentation & Audit Readiness | Maintain evidence for regulatory review | ISO 42001:2023 clause 7.5; EU AI Act Art. 11 |

Each component must be operational before you can call your framework complete. Let's build it, step by step.


Step 1: Establish Governance Before You Write a Single Policy

The most common mistake I see organizations make is jumping straight to a risk register without first answering a fundamental question: Who owns AI risk?

Without clear ownership, every subsequent step collapses. Under ISO 42001:2023 clause 5.3, top management must assign and communicate responsibility and authority for AI management system roles. In practice, this means designating:

  • An AI Risk Owner (often the Chief Risk Officer, VP of Quality, or Head of Regulatory Affairs) with executive accountability for the overall framework
  • An AI Ethics/Governance Committee with cross-functional representation (Legal, IT, Clinical/Medical, Compliance, Operations)
  • System-Level Owners for each deployed AI application who are responsible for day-to-day risk controls and incident escalation

Action: Draft a one-page RACI matrix that maps each framework component to a named role. Get executive sign-off before proceeding. This single document will become the spine of your governance structure and will be one of the first things an auditor or regulator requests.
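The RACI matrix can start life as simple structured data before it becomes a formal document. Below is a minimal sketch in Python: the component names follow the six-component table above, but the role assignments are illustrative, not prescriptive.

```python
# Illustrative RACI sketch: "A" = Accountable (exactly one per component),
# "R" = Responsible. Role assignments here are examples, not recommendations.
RACI = {
    "Governance & Accountability":   {"A": "Chief Risk Officer", "R": "AI Governance Committee"},
    "AI Inventory & Classification": {"A": "VP of Quality", "R": "IT Asset Management"},
    "Risk Identification & Assessment": {"A": "Chief Risk Officer", "R": "System-Level Owners"},
    "Risk Controls & Mitigation":    {"A": "VP of Quality", "R": "System-Level Owners"},
    "Monitoring & Performance":      {"A": "VP of Quality", "R": "Data Science Lead"},
    "Documentation & Audit Readiness": {"A": "Head of Regulatory Affairs", "R": "Quality Systems"},
}

def validate_raci(raci):
    """Return components missing an Accountable (A) or Responsible (R) role."""
    return [c for c, roles in raci.items() if "A" not in roles or "R" not in roles]

assert validate_raci(RACI) == []  # no governance gaps
```

Running a check like this before sign-off catches the most common governance gap: a component with Responsible parties but no single Accountable executive.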


Step 2: Build Your AI System Inventory

You cannot manage what you have not catalogued. Before assigning risk levels, you need a comprehensive, living inventory of every AI system your organization uses, builds, or procures—including embedded AI in third-party software.

For each system, capture:

  • System name and vendor (if applicable)
  • Intended use and decision context (e.g., clinical decision support, fraud detection, HR screening)
  • Data inputs and outputs
  • Deployment environment (cloud, on-premise, hybrid)
  • User population (internal staff, patients, customers)
  • Integration dependencies
  • Regulatory classification status

A 2024 Gartner report found that 41% of organizations discovered AI systems in production that their IT and compliance teams were unaware of—a phenomenon sometimes called "shadow AI." Your inventory process must include a discovery phase, not just a declaration phase. Shadow AI systems represent unmanaged risk and are a direct audit exposure.

An AI system inventory is the foundational prerequisite for any risk classification or control activity; in my experience, organizations that skip this step routinely underestimate their AI risk surface by a factor of two or more.


Step 3: Classify AI Systems by Risk Tier

Once you have your inventory, classify each system by risk level. The EU AI Act provides the most operationally useful public framework for this, distinguishing between Unacceptable Risk (prohibited), High Risk (regulated), Limited Risk (transparency obligations), and Minimal Risk (no mandatory requirements). ISO 42001:2023 and the NIST AI RMF offer complementary classification logic.

For most regulated organizations, I recommend a four-tier internal model:

| Risk Tier | Characteristics | Control Intensity |
| --- | --- | --- |
| Critical | Directly affects patient/customer safety, regulatory-controlled decisions, or legal outcomes | Full AI RMF controls, validation, independent audit |
| High | Influences significant business decisions, involves sensitive personal data, or is subject to sectoral regulation | Documented risk assessment, controls, periodic review |
| Moderate | Internal operational use, limited external impact, human-in-the-loop oversight | Baseline controls, annual review |
| Low | Productivity tools, no regulated use case, no sensitive data | Registration in inventory, basic usage policy |

Classification drives resource allocation. A Critical-tier clinical decision support algorithm demands a completely different control regime than a Low-tier AI email drafting assistant. Treating them the same wastes resources and leaves real risks unmanaged.
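The tiering logic above can be sketched as a simple screening cascade. The three yes/no questions below are an illustrative simplification of a fuller classification questionnaire, not a complete decision procedure.

```python
# Screening cascade for the four internal risk tiers; the first matching
# condition wins. The three flags are illustrative simplifications.
def classify_tier(affects_safety_or_legal: bool,
                  sensitive_data_or_sector_regulated: bool,
                  internal_operational_use: bool) -> str:
    if affects_safety_or_legal:
        return "Critical"   # full controls, validation, independent audit
    if sensitive_data_or_sector_regulated:
        return "High"       # documented assessment, periodic review
    if internal_operational_use:
        return "Moderate"   # baseline controls, annual review
    return "Low"            # inventory registration, basic usage policy

# A clinical decision support algorithm versus an email drafting assistant:
classify_tier(True, True, True)     # -> "Critical"
classify_tier(False, False, False)  # -> "Low"
```

The cascade structure matters: a system that both affects patient safety and uses sensitive data lands in Critical, never in a lower tier, because the most severe condition is evaluated first.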


Step 4: Conduct Structured Risk Assessments for Each AI System

This is where the framework moves from administrative structure to substantive risk management. For each system in your inventory—prioritizing Critical and High tiers—conduct a structured AI risk assessment that goes beyond a standard IT risk assessment.

AI introduces risk categories that traditional risk tools don't adequately capture:

  • Model risk: Bias, hallucination, distributional shift, and model degradation over time
  • Data risk: Training data quality, representativeness, privacy, and provenance
  • Transparency/explainability risk: Inability to explain or audit a model's decisions to regulators or affected individuals
  • Third-party/supply chain risk: Risks inherited from foundation model providers, data vendors, or AI platform operators
  • Deployment context risk: The same model may carry different risk profiles in different operating environments
  • Emergent behavior risk: Unintended capabilities that surface after deployment, particularly in large language models

ISO 42001:2023 clause 6.1.2 requires organizations to determine risks that could affect the AI management system's ability to achieve its intended outcomes. The NIST AI RMF's MEASURE function provides specific practices for quantifying and prioritizing these AI-specific risk categories.

For each identified risk, document:

  1. Risk description and affected AI system(s)
  2. Likelihood and impact rating (use a consistent 3×3 or 5×5 matrix)
  3. Current controls in place
  4. Residual risk rating after controls
  5. Risk owner and review date
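The likelihood/impact and residual-risk steps can be sketched numerically. The scoring below assumes a 5×5 matrix with illustrative band thresholds; your own matrix dimensions and cut-offs should be defined in your risk procedure.

```python
# 5x5 likelihood x impact scoring; band thresholds are illustrative assumptions.
def risk_score(likelihood: int, impact: int) -> int:
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact  # 1 (negligible) .. 25 (severe)

def risk_band(score: int) -> str:
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"

# Inherent risk: likely (4) x major impact (4) -> score 16, band "High".
inherent = risk_band(risk_score(4, 4))
# Residual risk after controls reduce likelihood to 2: 2 x 4 -> 8, "Medium".
residual = risk_band(risk_score(2, 4))
```

The point of the inherent-versus-residual comparison is auditability: the gap between the two scores is the documented evidence that your controls actually buy down risk.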

Action: Use a risk assessment template that explicitly includes AI-specific hazard categories. Generic enterprise risk templates will miss model risk and data provenance issues every time.


Step 5: Design and Implement Risk Controls

Risk controls for AI systems span three layers—technical, procedural, and organizational:

Technical Controls

  • Input validation and anomaly detection to catch out-of-distribution data
  • Model performance monitoring dashboards with defined alert thresholds
  • Explainability tools (e.g., SHAP, LIME) for high-stakes decision systems
  • Adversarial robustness testing before and after deployment
  • Version control and rollback capability for all production models
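As one concrete example of input validation, a simple out-of-distribution check compares incoming values against a training-data baseline. The z-score approach and the 3-sigma threshold below are illustrative assumptions, not a mandated technique; production systems typically use multivariate methods.

```python
import statistics

# Flag inputs that sit far outside the training-data baseline.
# The 3-sigma threshold is an illustrative convention.
def is_out_of_distribution(value: float, baseline: list,
                           z_threshold: float = 3.0) -> bool:
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:  # degenerate baseline: any deviation is anomalous
        return value != mean
    return abs(value - mean) / stdev > z_threshold

baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]  # invented training values
is_out_of_distribution(10.1, baseline)  # within distribution
is_out_of_distribution(25.0, baseline)  # flagged for review
```

Even a check this simple gives the monitoring dashboard something concrete to alert on, which is the operational bridge between the technical and procedural control layers.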

Procedural Controls

  • Human-in-the-loop review requirements for Critical-tier decisions
  • Defined escalation pathways for AI system incidents and anomalies
  • Periodic model revalidation schedules tied to performance drift thresholds
  • Third-party AI vendor assessment procedures (covering EU AI Act Art. 26 obligations for deployers)

Organizational Controls

  • AI-specific training for users, reviewers, and system owners
  • A clear AI Acceptable Use Policy that addresses generative AI and shadow AI
  • An AI incident response plan integrated with your existing quality event system

Effective AI risk controls must operate simultaneously at the technical, procedural, and organizational layers; controls deployed at only one layer leave exploitable gaps that regulators and auditors will identify.


Step 6: Build Continuous Monitoring Into the Framework Architecture

AI risk management is not a point-in-time event. Unlike a traditional software system, an AI model's risk profile can change materially after deployment—as the operating environment shifts, new data patterns emerge, or the model is retrained. The FDA's 2021 AI/ML SaMD Action Plan specifically calls for a Predetermined Change Control Plan (PCCP) that anticipates and governs these post-market changes.

Key monitoring activities to operationalize:

  • Model performance KPIs tracked against deployment baselines (accuracy, precision, recall, fairness metrics by population subgroup)
  • Data drift detection to identify when input distributions have shifted from training data
  • Incident and near-miss logging tied to your corrective and preventive action (CAPA) system
  • Periodic re-risk-assessment triggered by defined events (major model update, new deployment context, material change in user population, regulatory guidance update)
  • Third-party AI vendor monitoring to track changes to underlying models, APIs, or data sources you depend on
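Data drift detection is commonly implemented with the Population Stability Index (PSI). The sketch below compares a feature's binned distribution in production against its training baseline; the 0.2 alert threshold is a widely used convention, not a regulatory requirement.

```python
import math

# Population Stability Index over pre-binned proportion distributions
# (each list sums to 1). The 0.2 alert threshold is a common convention.
def psi(expected: list, actual: list) -> float:
    eps = 1e-6  # avoid log(0) for empty buckets
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

train_dist = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
live_dist  = [0.40, 0.30, 0.20, 0.10]  # distribution observed in production
drifted = psi(train_dist, live_dist) > 0.2  # trigger re-risk-assessment
```

Tying a threshold breach like this to a defined re-risk-assessment trigger is what turns a monitoring dashboard into an auditable control under clause 9.1.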

Under ISO 42001:2023 clause 9.1, organizations must evaluate the performance of their AI management system and determine what needs to be monitored, measured, analyzed, and evaluated. This is not aspirational language—it is an auditable requirement.


Step 7: Document Everything for Audit Readiness

In regulated industries, if it isn't documented, it didn't happen. Your AI risk management framework must generate a complete, traceable documentation set that survives personnel changes, system updates, and regulatory inspections.

Minimum documentation set:

| Document | Purpose | Retention Trigger |
| --- | --- | --- |
| AI System Inventory | Demonstrates scope and coverage | Updated continuously |
| Risk Classification Records | Shows tier assignment rationale | Per system lifecycle |
| Risk Assessment Reports | Evidence of structured hazard analysis | Per assessment cycle |
| Control Implementation Records | Proves controls are in place and effective | Ongoing |
| Monitoring Logs & KPI Reports | Demonstrates continuous oversight | Per monitoring cycle |
| Training Records | Shows competency of AI system users/owners | Per training event |
| Incident and CAPA Records | Documents response to AI-specific events | Per event |
| Third-Party Assessment Records | Covers supply chain risk | Per vendor relationship |

EU AI Act Article 11 requires technical documentation for high-risk AI systems to be drawn up before the system is placed on the market and kept up to date. ISO 42001:2023 clause 7.5 requires documented information to be controlled, protected, and available. These are not optional documentation elements—they are auditable obligations.

A well-maintained AI documentation set is simultaneously a compliance artifact and a business continuity asset; organizations that treat documentation as an afterthought will fail both regulatory inspections and operational resilience tests.


Common Mistakes That Derail AI Risk Frameworks

After 200+ client engagements with a 100% first-time audit pass rate, I've seen the same failure patterns emerge repeatedly:

  1. Treating AI risk as purely an IT responsibility. AI risk sits at the intersection of data science, clinical/operational context, legal exposure, and quality systems. IT alone cannot own it.
  2. Copying an existing cybersecurity or software risk framework. AI-specific risks—bias, drift, explainability, emergent behavior—are systematically missed by frameworks designed for traditional software.
  3. Building the framework but not maintaining it. A risk register that was last updated 18 months ago is worse than no risk register at all—it creates false assurance and regulatory exposure simultaneously.
  4. Ignoring third-party and embedded AI. Most organizations are deploying far more AI than they realize, much of it embedded in ERP, CRM, and SaaS platforms they already use.
  5. Failing to connect AI risk to the enterprise quality system. AI incidents, near-misses, and CAPAs should flow through your existing quality event infrastructure, not exist in a separate silo.

How Long Does It Take to Build an AI Risk Management Framework?

The timeline varies by organizational size, existing quality maturity, and AI deployment complexity. Based on client engagements at Certify Consulting, typical build timelines are:

| Organization Type | Starting Maturity | Typical Timeline to Operational Framework |
| --- | --- | --- |
| Small regulated company (< 200 staff) | Low (no existing AI governance) | 3–4 months |
| Mid-size regulated company (200–2,000 staff) | Moderate (some IT risk processes) | 4–7 months |
| Large regulated enterprise (2,000+ staff) | Variable | 6–12 months |
| Any size, ISO 42001 certification target | Moderate+ | Add 3–6 months for certification readiness |

These timelines assume dedicated internal resources and external advisory support. Organizations attempting a solo build without prior AI governance experience should add 30–50% to these estimates.


Your Next Step

Building an AI risk management framework from scratch is achievable—but only if you treat it as a formal project with executive sponsorship, cross-functional ownership, and structured methodology. The organizations that struggle most are those that try to bolt AI risk management onto an existing framework without acknowledging that AI creates genuinely new categories of risk that require genuinely new controls.

If you're ready to start the build—or if you've already started and need a gap assessment against ISO 42001:2023 or the EU AI Act—our team at Certify Consulting is ready to help. We've guided organizations through this process across life sciences, financial services, and technology sectors, and we bring both the regulatory depth and the practical implementation experience to get you to an audit-ready state efficiently.

For more on the standards that underpin AI risk management, explore our resources on ISO 42001:2023 compliance for regulated organizations and EU AI Act readiness for high-risk AI deployers.



Jared Clark

Certification Consultant

Jared Clark is the founder of Certify Consulting and helps organizations achieve and maintain compliance with international standards and regulatory requirements.