Last updated: 2026-04-01
A March 2026 article published in The Regulatory Review landed quietly in regulatory circles, but its implications are anything but quiet for organizations operating in heavily regulated industries. The central argument: federal agencies must do a better job of quantifying the uncertain economic effects of proposed regulations. That sounds like an inside-baseball policy debate — until you realize it directly affects how AI-driven processes, drug approvals, financial products, and environmental controls are regulated, and how your compliance costs and benefits get calculated in rulemaking proceedings.
If your organization submits comments on proposed rules, relies on agency cost-benefit findings to plan capital investments, or operates AI systems subject to emerging federal oversight, this is a conversation you need to be part of — now.
Why Benefit-Cost Analysis (BCA) Uncertainty Matters More Than Ever
Benefit-cost analysis has been the backbone of federal rulemaking since President Reagan's Executive Order 12291 in 1981. The framework requires agencies to demonstrate that the benefits of a proposed regulation justify its costs before promulgating a final rule. On paper, it's a rational, evidence-based approach to governance. In practice, it has always wrestled with a fundamental challenge: how do you assign dollar values to things that are deeply uncertain?
The question has grown far more complex in the age of artificial intelligence, precision medicine, algorithmic finance, and climate-linked supply chains. The Congressional Budget Office estimates that regulatory uncertainty accounts for as much as 20–30% of the variance in compliance cost projections across major rulemakings. When agencies underestimate uncertainty — or worse, paper over it with false precision — the downstream consequences fall squarely on regulated entities: over-investment in compliance infrastructure, strategic misalignment, and costly course corrections when rules are revised or challenged in court.
The Dobkin analysis in The Regulatory Review identifies a structural gap: agencies tend to present point estimates rather than ranges, underuse sensitivity analysis, and rarely apply formal probabilistic modeling to their BCA outputs. This isn't just an academic critique. It has real legal, financial, and operational consequences.
The Three Layers of Uncertainty Agencies Routinely Underweight
Understanding where uncertainty enters a benefit-cost analysis helps regulated organizations anticipate regulatory risk and engage more effectively in notice-and-comment proceedings. I break it into three layers based on my work with 200+ regulated clients across FDA, EPA, FTC, and financial services contexts:
1. Parameter Uncertainty
This is the most commonly acknowledged type. Agencies estimate values — the statistical value of a human life, the elasticity of demand for a product, the probability of an adverse event — using historical data that may not reflect current or future conditions. The EPA's Office of Policy estimates that parameter uncertainty alone can cause BCA outputs to vary by a factor of two to five across plausible input assumptions. For AI systems, where baseline performance data is scarce and rapidly evolving, this uncertainty is particularly acute.
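To make parameter uncertainty concrete, the sketch below propagates plausible input ranges through a toy net-benefit calculation via Monte Carlo sampling, turning a single point estimate into a distribution. All parameter values and distributions are hypothetical placeholders chosen for illustration; they are not drawn from any actual Regulatory Impact Analysis.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_net_benefit():
    # Hypothetical inputs; uniform distributions used purely for illustration.
    vsl = random.uniform(7e6, 13e6)            # value of a statistical life ($)
    lives_saved = random.uniform(50, 200)      # annual lives saved attributed to the rule
    cost_per_firm = random.uniform(8_000, 25_000)  # annual compliance cost per firm ($)
    firms = random.uniform(30_000, 60_000)         # number of regulated firms
    return vsl * lives_saved - cost_per_firm * firms

# Draw many plausible worlds instead of reporting one point estimate.
draws = sorted(simulate_net_benefit() for _ in range(10_000))
p5, p50, p95 = (draws[int(len(draws) * q)] for q in (0.05, 0.50, 0.95))

print(f"Median net benefit: ${p50 / 1e9:.2f}B")
print(f"90% interval: ${p5 / 1e9:.2f}B to ${p95 / 1e9:.2f}B")
```

Even this crude sketch typically produces an interval spanning billions of dollars, which is exactly the kind of range-based output an uncertainty-aware BCA would report in place of a single headline number.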
2. Model Uncertainty
Agencies choose analytical models — linear dose-response curves, market equilibrium assumptions, diffusion models for technology adoption — that embed structural assumptions about how the world works. When those models are wrong, the BCA is wrong. The problem is compounded when agencies don't disclose which model they selected or why. ISO 42001:2023, the international standard for AI management systems, explicitly addresses model uncertainty in clause 6.1.2 (AI risk assessment), requiring organizations to document and evaluate the assumptions underlying AI system outputs. Regulators assessing AI-driven processes should — and increasingly will — apply the same discipline to their own analytical models.
3. Scenario Uncertainty (Deep Uncertainty)
The most underappreciated layer. This is uncertainty about which future world we're actually in — will the technology be widely adopted or not? Will a competing regulation preempt this one? Will the underlying market structure change? Traditional sensitivity analysis doesn't capture this well. Robust Decision Making (RDM) and scenario planning frameworks are better tools, but most federal agencies lack the internal capacity or institutional mandate to apply them systematically to BCA.
What This Means for AI Governance Specifically
Here's where this story becomes particularly urgent for organizations navigating AI regulation.
The current wave of AI rulemaking — from the EU AI Act's risk-tiered compliance requirements to anticipated U.S. federal AI accountability rules to sector-specific guidance from FDA on AI/ML-based Software as a Medical Device (SaMD) — is being developed in an environment of profound uncertainty. Agencies writing rules about AI systems are doing so without stable baseline data on failure rates, harm probabilities, or the economic value of AI-driven benefits, because the technology is evolving faster than the evidence base.
This creates a dangerous feedback loop:
- Agencies underestimate uncertainty → BCA outputs look cleaner and more definitive than they are
- Rules are written based on false precision → Compliance requirements are either over- or under-calibrated to actual risk
- Regulated organizations build compliance programs to a standard that may shift significantly post-finalization
- Enforcement gaps or legal challenges force rule revisions, leaving organizations holding stranded compliance investments
I've seen this pattern play out in FDA's SaMD guidance cycles, in EPA's PFAS rulemaking, and in early-stage AI executive orders. The organizations that fared best were those that built adaptive compliance programs — not compliance programs optimized for a single regulatory scenario.
A Practical Comparison: Traditional BCA vs. Uncertainty-Aware BCA
The following table illustrates the operational differences between conventional agency BCA practice and the more rigorous, uncertainty-aware approach that analysts like Dobkin are advocating — and that your organization should expect to see (and push for) in high-stakes rulemakings.
| Dimension | Traditional BCA Practice | Uncertainty-Aware BCA Practice |
|---|---|---|
| Output format | Single point estimate (e.g., "$2.3B net benefit") | Range with confidence intervals (e.g., "$0.8B–$4.1B, 90% CI") |
| Sensitivity analysis | Often limited to 1–2 key variables | Systematic multi-variable sensitivity; tornado diagrams |
| Model selection | Single model, rarely disclosed | Multiple models compared; selection rationale documented |
| Deep uncertainty | Typically ignored | Scenario analysis or RDM methods applied |
| AI-specific factors | Not addressed | Technology adoption curves, failure mode distributions included |
| Stakeholder usability | Difficult to contest or engage | Enables targeted, evidence-based comment submissions |
| Legal defensibility | Vulnerable to "arbitrary and capricious" challenge | Stronger evidentiary record under Motor Vehicle Mfrs. v. State Farm |
| Compliance planning value | Low — false precision misleads planners | High — ranges support adaptive investment strategies |
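The "systematic multi-variable sensitivity" row above can be illustrated with a one-at-a-time swing analysis, which is the computation behind a tornado diagram. The sketch below ranks each input by how much the net-benefit estimate moves when that input alone swings across its plausible range; every figure is a hypothetical placeholder, not data from any real rulemaking.

```python
# One-at-a-time "swing" analysis: vary each input across its plausible range
# while holding the others at baseline, then rank the swings by size.
# These are the bars of a tornado diagram. All figures are hypothetical.

def net_benefit(vsl, lives_saved, cost_per_firm, firms):
    return vsl * lives_saved - cost_per_firm * firms

baseline = {"vsl": 10e6, "lives_saved": 120, "cost_per_firm": 15_000, "firms": 45_000}
ranges = {
    "vsl": (7e6, 13e6),
    "lives_saved": (50, 200),
    "cost_per_firm": (8_000, 25_000),
    "firms": (30_000, 60_000),
}

swings = {}
for name, (lo, hi) in ranges.items():
    swings[name] = abs(
        net_benefit(**{**baseline, name: hi}) - net_benefit(**{**baseline, name: lo})
    )

# Largest swing first: these are the assumptions most worth contesting in a comment.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:14s} swing = ${swing / 1e9:.2f}B")
```

The ranked output tells a commenter exactly which one or two parameters dominate the agency's bottom line, and therefore where alternative data in a comment submission would have the most leverage.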
The Legal Stakes: Arbitrary and Capricious Review
This is not merely a wonkish methodological debate. Courts are paying attention.
Under the Administrative Procedure Act (APA), agencies must demonstrate that their rulemaking is not "arbitrary and capricious." The Supreme Court's Motor Vehicle Manufacturers Association v. State Farm (1983) standard requires agencies to consider relevant factors and explain their reasoning. A growing body of administrative law scholarship — and recent circuit court decisions — suggests that failure to adequately characterize and communicate uncertainty in a BCA can constitute a cognizable APA defect, opening rules to successful challenge.
Regulated organizations that participate in notice-and-comment proceedings have both the right and the strategic interest to demand rigorous uncertainty quantification from agencies. A well-crafted comment that identifies where an agency's BCA uses false precision — and proposes alternative assumptions supported by your organization's operational data — can meaningfully shape a final rule and strengthen your legal record if litigation follows.
This is exactly the kind of regulatory engagement strategy I help clients develop at Regulated AI Consulting.
Five Actions Regulated Organizations Should Take Now
Given the current regulatory environment and the growing scrutiny of agency BCA methodology, here are the five concrete steps I recommend to clients right now:
1. Audit Your Regulatory Exposure to Pending AI and Tech-Adjacent Rules
Map the rules currently in proposed or final form that materially affect your AI-driven operations. For each rule, pull the Regulatory Impact Analysis (RIA) and identify: (a) what the agency's key cost and benefit parameters are, (b) what uncertainty is disclosed, and (c) what assumptions are embedded in their model.
2. Build Internal Scenario-Based Compliance Planning
Stop planning compliance investment around a single regulatory outcome. Use at least three scenarios — baseline, aggressive enforcement, and rule-reversal — to stress-test your compliance roadmap. This is consistent with ISO 42001:2023 clause 6.1 (actions to address risks and opportunities) and will serve you in both operational and board-level governance contexts.
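The three-scenario stress test above reduces to a simple expected-cost and worst-case calculation. In this sketch, the probabilities and cost figures are hypothetical placeholders for illustration; your own scenario weights and cost estimates would come from internal planning data.

```python
# Stress-testing a compliance roadmap across three regulatory scenarios.
# Probabilities and annual cost figures are hypothetical placeholders.

scenarios = {
    "baseline":               {"prob": 0.50, "annual_cost_musd": 2.0},
    "aggressive_enforcement": {"prob": 0.30, "annual_cost_musd": 5.5},
    "rule_reversal":          {"prob": 0.20, "annual_cost_musd": 0.8},
}

# Scenario probabilities must form a valid distribution.
assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

expected = sum(s["prob"] * s["annual_cost_musd"] for s in scenarios.values())
worst = max(s["annual_cost_musd"] for s in scenarios.values())

print(f"Probability-weighted annual cost: ${expected:.2f}M")
print(f"Worst-case budget reserve:        ${worst:.2f}M")
```

Planning to the probability-weighted figure while reserving against the worst case is the budgeting analogue of the adaptive compliance posture this article advocates.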
3. Engage Actively in Notice-and-Comment Proceedings
If you operate in a sector facing active AI rulemaking (healthcare, finance, energy, transportation), do not sit out the comment process. Agencies are required to respond to substantive comments. A technically rigorous comment challenging BCA uncertainty assumptions is substantive. It can change outcomes.
4. Develop Proprietary Data Assets on AI System Performance
Agencies lack good data on AI failure rates, benefit realization, and cost drivers — which is why their BCAs are uncertain in the first place. Organizations that systematically collect and analyze performance data from their own AI deployments are better positioned to submit credible, data-backed comments and to negotiate compliance timelines.
5. Align AI Governance Frameworks to Regulatory Uncertainty
Frameworks like ISO 42001:2023 and NIST AI RMF explicitly require organizations to document uncertainty in AI system behavior and outputs. Aligning your internal AI governance program to these standards not only prepares you for audit (our clients maintain a 100% first-time audit pass rate at Regulated AI Consulting) — it also generates the documentation infrastructure you need to engage regulators credibly. See our AI governance framework implementation guide for a practical starting point.
Expert Analysis: What Agencies Are Likely to Do Next
The Dobkin piece in The Regulatory Review arrives at a moment of genuine institutional pressure on federal agencies to improve their analytical rigor. The Biden administration's updates to OMB Circular A-4 — the foundational guidance document for federal regulatory analysis — already pushed agencies toward better uncertainty characterization, including explicit guidance on the use of probability distributions rather than point estimates. The Trump administration's subsequent regulatory review activities have placed different pressures on agencies, emphasizing cost reduction and deregulatory action.
This creates a paradox: political pressure to reduce regulatory burden may actually decrease agency investment in rigorous uncertainty analysis, even as courts and sophisticated stakeholders increasingly demand it. The organizations most at risk are those that interpret a deregulatory signal as permission to disengage from the regulatory process — only to find themselves blindsided when a rule is finalized or when a predecessor rule is reinstated through litigation.
The most resilient compliance posture in an era of regulatory uncertainty is not minimal compliance — it is adaptive compliance built on strong analytical foundations. That means understanding the BCA underlying every major rule that affects you, knowing where the uncertainty is, and having a governance infrastructure capable of responding when the landscape shifts.
Citation Hooks
The following statements synthesize the key findings of this analysis for reference:
"Agencies that present point estimates rather than uncertainty ranges in benefit-cost analyses produce RIAs that are both legally vulnerable and operationally misleading for regulated entities planning compliance investments."
"Under ISO 42001:2023 clause 6.1.2, organizations managing AI systems are required to document and evaluate the assumptions underlying AI outputs — a discipline that regulators themselves should apply to their own benefit-cost models."
"The most resilient compliance posture in an era of regulatory uncertainty is not minimal compliance — it is adaptive compliance built on strong analytical foundations and active regulatory engagement."
How Regulated AI Consulting Can Help
At Regulated AI Consulting, I work with organizations in FDA-regulated industries, financial services, energy, and other sectors to build AI governance programs that are both audit-ready and strategically adaptive to regulatory uncertainty. With 8+ years of experience, credentials spanning law, business, quality management, and regulatory affairs (JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, RAC), and a track record of 100% first-time audit pass rates across 200+ clients, I bring a uniquely integrated perspective to regulatory risk management.
If your organization is facing active AI rulemaking, preparing for ISO 42001 certification, or simply trying to understand what the current regulatory environment means for your compliance strategy, I'd welcome the conversation.
Schedule a consultation at regulatedai.consulting
Frequently Asked Questions
What is benefit-cost analysis in federal rulemaking?
Benefit-cost analysis (BCA), also called regulatory impact analysis (RIA), is the process by which federal agencies estimate and compare the economic benefits and costs of proposed regulations. Required by Executive Order for major rules, BCA outputs are used to justify regulatory action and are subject to judicial review under the APA's arbitrary and capricious standard.
Why is uncertainty a problem in agency benefit-cost analyses?
Agencies often present BCA results as precise point estimates when the underlying data is highly uncertain. This false precision misleads regulated entities planning compliance investments and can make rules legally vulnerable to challenge if courts determine the agency failed to consider relevant uncertainty in its analysis.
How does AI regulation make BCA uncertainty worse?
AI systems present novel challenges for BCA because baseline data on failure rates, benefit realization, and adoption patterns is scarce and rapidly evolving. Agencies writing AI rules often lack the empirical foundation needed for rigorous uncertainty quantification, making their cost-benefit projections particularly susceptible to significant variance.
What can my organization do to respond to uncertain regulatory BCA?
Regulated organizations should audit the RIAs underlying rules that affect them, engage in notice-and-comment proceedings with data-backed comments, build scenario-based compliance plans rather than single-point compliance programs, and align internal AI governance to standards like ISO 42001:2023 and NIST AI RMF that require explicit uncertainty documentation.
Can challenging a BCA's uncertainty assumptions actually change a rule?
Yes. Agencies are legally required to respond to substantive comments submitted during notice-and-comment proceedings. A technically rigorous comment that identifies where an agency's BCA uses unsupported assumptions — and provides alternative data — can materially affect a final rule and strengthen your organization's legal record if the rule is later challenged in court.
Jared Clark is an AI Governance Consultant at Regulated AI Consulting. This article is for informational purposes and does not constitute legal advice. For regulatory guidance specific to your organization, contact Regulated AI Consulting.
Jared Clark
AI Governance Consultant, Regulated AI Consulting
Jared Clark is the founder of Regulated AI Consulting, advising organizations on AI governance frameworks, ISO 42001 compliance, and responsible AI deployment in regulated industries.