
Regulating AI Across Borders: What It Means for Your Business


Jared Clark

March 27, 2026



The conversation around artificial intelligence governance has officially gone global — and it's moving faster than most compliance teams anticipated. In March 2026, The Regulatory Review published a detailed spotlight featuring Duke University's Lee Tiedrich, who laid out the rapidly evolving landscape of both domestic and international regulatory approaches to general-purpose AI (The Regulatory Review, March 22, 2026). What Tiedrich's analysis confirms — and what I've been advising clients on for years — is that regulated organizations can no longer treat AI governance as a local compliance checkbox. The regulatory perimeter has expanded. If your organization develops, deploys, or procures AI systems, the borders of your compliance obligations just got much wider.

Here's what you need to understand, and more importantly, what you need to do.


The Global AI Regulatory Landscape in 2026: A Snapshot

The pace of AI regulation has been nothing short of extraordinary. In just the past three years, governments across every major economic bloc have moved from exploratory policy frameworks to binding legal obligations.

Key developments shaping the cross-border regulatory environment right now include:

  • The EU AI Act entered its phased enforcement period, with prohibitions on unacceptable-risk AI systems in effect since February 2025 and obligations for high-risk AI systems cascading through 2026 and 2027.
  • The United States remains in a patchwork state — federal AI legislation has stalled in Congress, but the executive branch, sector regulators (FDA, SEC, OCC, CFPB), and more than 40 states have issued AI-specific guidance, rules, or legislation.
  • The United Kingdom has taken a principles-based, sector-led approach through its AI Safety Institute and existing regulatory bodies, explicitly rejecting a single AI Act equivalent — at least for now.
  • China has enacted a series of targeted AI regulations covering deep synthesis, recommendation algorithms, and generative AI, each with its own compliance obligations.
  • Canada's Artificial Intelligence and Data Act (AIDA) remains in legislative limbo following the prorogation of Parliament, creating uncertainty for organizations operating in the Canadian market.
  • Brazil, Japan, South Korea, India, and Singapore have each issued frameworks, voluntary codes, or binding regulations addressing different dimensions of AI risk.

According to the OECD, more than 70 countries have now adopted or are actively developing national AI policies or regulatory frameworks, making cross-border AI governance one of the most complex compliance challenges of the decade.


Why "General-Purpose AI" Is the Regulatory Flashpoint

Tiedrich's Regulatory Review spotlight zeroes in on general-purpose AI (GPAI) — systems like large language models (LLMs) that can perform a wide range of tasks across multiple domains. This is exactly where regulatory divergence creates the most acute compliance risk.

The EU AI Act treats GPAI models as a distinct regulatory category. Under Article 51 of the Act, a GPAI model is presumed to pose systemic risk when its cumulative training compute exceeds 10²⁵ FLOPs; models in that tier face enhanced obligations including adversarial testing, serious-incident reporting, and transparency toward downstream deployers. The EU's definition of "systemic risk" in GPAI models sets a global precedent that other jurisdictions are actively studying and, in some cases, adopting.
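To make the threshold mechanics concrete, here is a minimal sketch of the Article 51 compute test. The 10²⁵ FLOP figure comes from the Act itself; the function name and the model figures are illustrative assumptions, not official tooling.

```python
# Illustrative sketch of the EU AI Act's compute-based presumption of
# systemic risk for GPAI models (Article 51). The threshold is from the
# Act; the example models and their compute figures are hypothetical.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model is presumed to pose systemic risk when its cumulative
    training compute exceeds the Article 51 threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical models; training compute figures are assumptions.
models = {
    "frontier-llm": 3.2e25,  # above threshold: enhanced obligations apply
    "domain-llm": 8.0e23,    # below threshold: baseline GPAI obligations
}

for name, flops in models.items():
    tier = "systemic risk" if presumed_systemic_risk(flops) else "baseline GPAI"
    print(f"{name}: {tier}")
```

Note that compute is only a presumption: the Commission can also designate a model as posing systemic risk on other criteria, so no classification script substitutes for legal analysis.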

Meanwhile, the U.S. has no equivalent federal GPAI classification. The Biden-era Executive Order on Safe, Secure, and Trustworthy AI (EO 14110) introduced dual-use foundation model reporting requirements, but the order was rescinded in January 2025 and subsequent executive actions reshaped federal AI policy. U.S.-based organizations building or deploying GPAI tools face a fragmented landscape of sector-specific guidance rather than a unified classification system.

This divergence creates a real operational problem: the same AI system can be categorized differently — and regulated differently — depending on the jurisdiction in which it is deployed.


The Compliance Gap Most Organizations Are Ignoring

Here is the hard truth I tell every client who walks through my door: most regulated organizations are managing AI compliance as if they operate in a single jurisdiction, even when they clearly do not.

Consider a mid-sized U.S.-based medical device manufacturer. They develop an AI-assisted diagnostic tool trained on U.S. patient data, deployed domestically and in the EU. In the U.S., they're subject to FDA's AI/ML-Based Software as a Medical Device (SaMD) Action Plan and emerging Digital Health Center of Excellence guidance. In the EU, that same tool likely qualifies as a high-risk AI system under Annex III of the EU AI Act — triggering conformity assessment requirements, CE marking obligations, a post-market monitoring plan, and mandatory registration in the EU AI database.

That's two substantially different regulatory regimes, with overlapping but non-identical documentation requirements, risk management obligations, and audit expectations. If you're only building your AI governance program around one of them, you have a compliance gap — and that gap is a liability.
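A lightweight way to see, and track, that divergence is to record the classification and obligations per jurisdiction for each system. Here is a minimal sketch using the example above; the data structure and obligation labels are illustrative assumptions, not a regulatory schema.

```python
# The same diagnostic tool, classified under two regimes. The categories
# follow the example above; the structure is an illustrative assumption.
diagnostic_tool = {
    "name": "AI-assisted diagnostic tool",
    "US": {
        "classification": "AI/ML-based SaMD (FDA)",
        "obligations": [
            "SaMD Action Plan alignment",
            "predetermined change control plan",
        ],
    },
    "EU": {
        "classification": "High-risk AI system (EU AI Act, Annex III)",
        "obligations": [
            "conformity assessment",
            "CE marking",
            "post-market monitoring plan",
            "EU AI database registration",
        ],
    },
}

# Overlapping but non-identical obligations are exactly the gap to manage.
us = set(diagnostic_tool["US"]["obligations"])
eu = set(diagnostic_tool["EU"]["obligations"])
print("EU-only obligations:", sorted(eu - us))
```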

At Regulated AI Consulting, we've helped more than 200 clients navigate exactly this kind of multi-jurisdictional challenge, achieving a 100% first-time audit pass rate by building AI governance programs that are designed for regulatory convergence, not just minimum local compliance.


A Comparative View: Major AI Regulatory Frameworks at a Glance

The following table provides a high-level comparison of the major AI regulatory frameworks relevant to most regulated organizations operating internationally.

| Jurisdiction | Primary Framework | Approach | Risk Classification | GPAI Rules? | Enforcement Status |
| --- | --- | --- | --- | --- | --- |
| European Union | EU AI Act (2024/1689) | Mandatory, risk-tiered | Unacceptable / High / Limited / Minimal | Yes (Articles 51-56) | Phased (2024-2027) |
| United States | Sector-specific + state laws | Fragmented, voluntary at federal level | Sector-dependent | No federal classification | Active (varies by sector) |
| United Kingdom | Pro-innovation, principles-based | Sector-led via existing regulators | No statutory classification | No | Guidance stage |
| China | Generative AI Measures, algorithm regulations | Mandatory, targeted | Use-case specific | Partial (generative AI rules) | Active |
| Canada | AIDA (pending) + PIPEDA | Voluntary + proposed mandatory | High-impact system concept | No | Legislative limbo |
| Singapore | Model AI Governance Framework | Voluntary, principles-based | No statutory classification | No | Guidance stage |
| Brazil | AI Bill (pending Senate) | Risk-tiered (EU-influenced) | High / Low risk | Under discussion | Legislative stage |

Reading this table: Organizations operating in the EU face the most prescriptive mandatory obligations today. Organizations with a U.S.-only footprint face a patchwork of sector rules and state laws. Those operating across both face a convergence challenge requiring a unified governance architecture.


What Regulatory Divergence Actually Costs You

Let's move beyond frameworks and talk about business impact, because this is where the conversation gets real.

Regulatory divergence creates four categories of organizational cost:

1. Compliance Duplication

Without a unified AI governance architecture, teams end up building separate documentation, risk assessments, and audit trails for each jurisdiction. In organizations I've worked with, this can mean 30-50% redundant compliance effort — resources that a well-designed cross-border program could eliminate.

2. Market Access Risk

Non-compliance with the EU AI Act carries fines of up to €35 million or 7% of global annual turnover for prohibited practices, and up to €15 million or 3% for violations of high-risk system obligations, with the higher of the two amounts applying in each case. But the less-discussed risk is market exclusion: AI systems that cannot demonstrate conformity may simply be blocked from EU deployment, cutting off access to a market of roughly 450 million consumers.
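For readers who want the exposure arithmetic spelled out, here is a minimal sketch of the Article 99 ceiling calculation. The ceilings come from the Act; the turnover figure below is a hypothetical example.

```python
# Sketch of the EU AI Act fine ceilings (Article 99): the maximum penalty is
# the higher of a fixed amount and a percentage of global annual turnover.
# Ceilings are from the Act; the turnover figure below is hypothetical.

FINE_CEILINGS = {
    "prohibited_practice": (35_000_000, 0.07),   # EUR 35M or 7% of turnover
    "high_risk_obligation": (15_000_000, 0.03),  # EUR 15M or 3% of turnover
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    fixed, pct = FINE_CEILINGS[violation]
    return max(fixed, pct * global_turnover_eur)

# A hypothetical firm with EUR 2B in global annual turnover:
print(f"EUR {max_fine('high_risk_obligation', 2_000_000_000):,.0f}")
# -> EUR 60,000,000 (3% of turnover exceeds the EUR 15M floor)
```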

3. Procurement Liability

If your organization procures AI from a third-party vendor — which most do — you inherit compliance obligations. Under the EU AI Act, deployers of high-risk AI systems bear significant obligations even if they didn't build the system. Third-party AI procurement without governance due diligence is one of the fastest-growing sources of regulatory liability for regulated organizations in 2026.

4. Reputational and Litigation Exposure

Class action litigation around AI-driven decisions is accelerating in the U.S., particularly in employment, lending, and healthcare. Several states — including California, Illinois, and Colorado — have enacted or are enforcing laws requiring impact assessments, disclosure, or opt-out rights for automated decision-making. A cross-border AI governance program that accounts for these obligations proactively is far less expensive than one built in response to a lawsuit.


ISO 42001:2023: The Global Governance Standard That Bridges Jurisdictions

One of the most actionable recommendations I make to clients navigating multi-jurisdictional AI compliance is to anchor their governance program to ISO 42001:2023, the international standard for AI management systems.

ISO 42001:2023 clause 6.1.2 specifically addresses AI risk assessment, requiring organizations to identify risks associated with AI systems across their lifecycle — a requirement that maps directly onto the risk assessment obligations in the EU AI Act, FDA's SaMD guidance, and sector-specific AI rules globally. Because ISO 42001 is jurisdiction-neutral by design, organizations that build their AI governance program around it gain a portable compliance foundation that can be adapted to specific regulatory requirements without being rebuilt from scratch.
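One way to operationalize that portability is to maintain an explicit crosswalk from ISO 42001 clauses to the jurisdiction-specific requirements they help satisfy. Here is a minimal sketch; the entries are illustrative examples drawn from this article, not a complete or authoritative mapping.

```python
# Illustrative crosswalk from ISO 42001:2023 clauses to jurisdiction-specific
# requirements. Entries are examples drawn from this article, not a complete
# or authoritative mapping; validate any real crosswalk with counsel.

ISO_42001_CROSSWALK = {
    "6.1.2 (AI risk assessment)": [
        "EU AI Act risk management obligations (high-risk systems)",
        "FDA AI/ML SaMD lifecycle risk guidance",
    ],
    "8.4": [
        "EU AI Act technical documentation requirements",
    ],
}

def mapped_requirements(clause: str) -> list[str]:
    """Return the regulatory requirements a given clause helps satisfy."""
    return ISO_42001_CROSSWALK.get(clause, [])

print(mapped_requirements("6.1.2 (AI risk assessment)"))
```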

ISO 42001:2023 certification also increasingly functions as a credibility signal in enterprise procurement. As more regulated industries require AI governance attestations from vendors, certification provides documented evidence of systematic governance — the kind of evidence that satisfies due diligence requirements across multiple regulatory environments simultaneously.


The U.S. Federal Vacuum: What It Means for Your Compliance Strategy

One of the most important points in Tiedrich's analysis — and one that deserves direct commentary — is the ongoing absence of comprehensive federal AI legislation in the United States. As of early 2026, no federal AI Act equivalent has passed Congress. This is not a temporary gap; it reflects deep structural disagreements about federal preemption, sectoral versus horizontal regulation, and the appropriate role of liability in AI governance.

For U.S.-based organizations, this creates a deceptively comfortable compliance environment. Without a single federal mandate, it can feel like there's nothing urgent to do. That perception is wrong, for three reasons:

First, state-level AI regulation is accelerating to fill the federal vacuum. The Colorado AI Act, the Illinois Artificial Intelligence Video Interview Act, and California's suite of AI transparency and liability bills collectively create a patchwork of obligations that, taken together, affect nearly every large U.S. employer and many consumer-facing AI deployments.

Second, existing federal sector regulations increasingly incorporate AI-specific requirements without labeling them as "AI regulation." The FDA's predetermined change control plan requirements for SaMD and the CFPB's adverse action notice guidance for algorithmic credit decisions already impose AI governance obligations that are in force today, and the SEC's proposed predictive data analytics rules signal more of the same.

Third, if your organization operates internationally — or if a major trading partner adopts EU-style AI regulation — U.S. federal inaction provides no protection from extraterritorial obligations. The EU AI Act's territorial scope explicitly covers AI systems placed on the EU market or used in the EU, regardless of where the developer is headquartered.


Expert Analysis: Three Things Regulated Organizations Must Do Now

Drawing on more than eight years of AI governance advisory work and the current regulatory trajectory, here are the three most critical actions for regulated organizations facing cross-border AI compliance challenges:

1. Build a Cross-Jurisdictional AI Inventory

You cannot govern what you cannot see. Every regulated organization needs a comprehensive, maintained inventory of AI systems — including third-party tools — that captures: the system's function, the data it uses, the decisions it informs, and the jurisdictions in which it operates. This inventory is the foundation of every subsequent compliance action and maps directly to requirements under ISO 42001:2023 clause 8.4, the EU AI Act's technical documentation requirements, and FDA's AI/ML SaMD lifecycle guidance.
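As a starting point, the inventory can be as simple as a structured record per system capturing the four fields above. Here is a minimal sketch; the schema and example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """Minimal cross-jurisdictional AI inventory record capturing the four
    fields named above. Hypothetical schema, for illustration only."""
    system_name: str
    function: str                  # what the system does
    data_sources: list[str]        # what data it uses
    decisions_informed: list[str]  # what decisions it informs
    jurisdictions: list[str]       # where it operates
    third_party_vendor: str = ""   # empty if built in-house

entry = InventoryEntry(
    system_name="resume-screening-model",
    function="Ranks job applicants for recruiter review",
    data_sources=["applicant resumes", "historical hiring outcomes"],
    decisions_informed=["interview shortlisting"],
    jurisdictions=["US-IL", "US-CO", "EU"],
    third_party_vendor="ExampleVendor Inc.",  # hypothetical vendor
)
```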

2. Adopt a "Highest Common Denominator" Governance Architecture

Rather than building separate compliance programs for each jurisdiction, design your AI governance architecture around the most demanding requirements you face — typically the EU AI Act for high-risk systems — and then verify that architecture satisfies your other jurisdictional obligations. This approach eliminates duplication, scales efficiently as new regulations emerge, and creates a defensible governance posture across all markets simultaneously.
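The verification step lends itself to a simple coverage check: model your governance controls and each market's requirements as sets, then flag any requirement your architecture misses. Here is a minimal sketch with illustrative requirement names; real mappings require legal analysis, not string matching.

```python
# Set-coverage sketch of the "highest common denominator" approach: design
# controls against the most demanding regime, then flag any other market's
# requirements the architecture misses. Requirement names are illustrative.

governance_controls = {
    "risk management system", "technical documentation", "human oversight",
    "post-market monitoring", "incident reporting", "impact assessment",
}

jurisdiction_requirements = {
    "EU": {"risk management system", "technical documentation",
           "human oversight", "post-market monitoring", "incident reporting"},
    "US-CO": {"impact assessment", "risk management system"},
    "US-IL": {"impact assessment"},
}

for market, required in sorted(jurisdiction_requirements.items()):
    missing = sorted(required - governance_controls)
    status = "covered" if not missing else f"gaps: {missing}"
    print(f"{market}: {status}")
```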

3. Formalize Third-Party AI Procurement Due Diligence

Establish a formal AI vendor assessment process that evaluates compliance documentation, transparency obligations, and contractual accountability before procurement — not after. Under the EU AI Act, deployers of third-party high-risk AI systems must verify that providers have fulfilled their obligations. Without a documented due diligence process, that verification is impossible, and the compliance liability flows to you.
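In practice, "formal" means the procurement gate is explicit and blocking. Here is a minimal sketch of such a gate; the checklist items are illustrative and should be tailored to your regulatory footprint and the system's risk classification.

```python
# Minimal pre-procurement gate: every check must pass before a third-party
# AI system is approved. Checklist items are illustrative assumptions.

VENDOR_CHECKLIST = [
    "provider conformity documentation on file (EU AI Act, if in scope)",
    "instructions for use and transparency materials reviewed",
    "contractual allocation of compliance responsibilities signed",
    "incident notification and cooperation terms agreed",
]

def approve_vendor(completed_checks: set[str]) -> bool:
    """Return True only if every checklist item is documented as complete."""
    missing = [item for item in VENDOR_CHECKLIST if item not in completed_checks]
    for item in missing:
        print(f"BLOCKED pending: {item}")
    return not missing
```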


The Bottom Line: Borders Are a Compliance Risk Factor

The global regulatory environment for AI has fundamentally changed. Organizations that treat AI governance as a local compliance function are accumulating cross-border regulatory risk with every quarter they delay building a unified governance program. The jurisdictional divergence is real, the enforcement timelines are active, and the cost of reactive compliance — whether measured in fines, market exclusion, or litigation — consistently exceeds the cost of proactive governance.

Lee Tiedrich's spotlight on cross-border AI regulation is a timely reminder that this is not a future problem. It is a present operational reality. The question for regulated organizations is not whether to build cross-border AI governance capabilities — it's whether to do it before or after your first enforcement action.

At Regulated AI Consulting, I work with regulated organizations to build AI governance programs that are designed for the world as it is — multi-jurisdictional, rapidly evolving, and increasingly enforced. If your organization is navigating the complexity of cross-border AI compliance, explore our AI governance advisory services at regulatedai.consulting or review our ISO 42001 implementation resources to understand where to start.


Last updated: 2026-03-27

Source referenced: Lee Tiedrich, "Regulating Artificial Intelligence Across Borders," The Regulatory Review, March 22, 2026. Available at: https://www.theregreview.org/2026/03/22/spotlight-regulating-artificial-intelligence-across-borders/


Jared Clark

AI Governance Consultant, Regulated AI Consulting

Jared Clark is the founder of Regulated AI Consulting, advising organizations on AI governance frameworks, ISO 42001 compliance, and responsible AI deployment in regulated industries.