
NIST's FY 2025 NCST Report: What It Means for AI Governance


Jared Clark

March 26, 2026


In March 2026, the National Institute of Standards and Technology (NIST) submitted its annual report to Congress summarizing FY 2025 progress on National Construction Safety Team (NCST) investigations — including a significant update on the long-running Champlain Towers South collapse inquiry. While the NCST mandate focuses on catastrophic building failures, I want to make the case — directly and practically — that every regulated organization deploying or governing AI systems should pay close attention to what this report reveals about how NIST approaches high-stakes, systemic risk investigations.

The parallels are not incidental. They are structural. And if your organization is working toward ISO 42001:2023 certification, EU AI Act compliance, or alignment with the NIST AI Risk Management Framework (AI RMF), the methodological discipline embedded in NCST work is a blueprint worth studying.


What Is the NCST and Why Did NIST Report to Congress?

The National Construction Safety Team Act (Public Law 107-231) authorizes NIST to investigate building failures that result in substantial loss of life or that pose significant potential for substantial loss of life. NIST's NCST investigations are among the most rigorous post-failure analyses conducted by any federal agency — drawing on multi-disciplinary teams, forensic data collection, independent technical panels, and public comment processes.

On March 26, 2026, NIST submitted its annual report to Congress summarizing FY 2025 progress across active NCST investigations, including the Champlain Towers South condominium collapse in Surfside, Florida — a tragedy that claimed 98 lives in June 2021. According to NIST's official announcement, the report details technical findings, methodological advances, and the current status of recommendations development.

The Champlain Towers South investigation represents one of the most complex structural failure analyses in NIST's history — involving degraded concrete, saltwater corrosion, load redistribution modeling, and the challenge of reconstructing causation from a largely destroyed evidence base.



NIST's NCST investigations establish a federal standard for post-failure causation analysis that combines forensic evidence, independent peer review, and transparent public reporting — a tripartite methodology with direct applicability to AI system failure investigations.


The Champlain Towers South Update: What We Know in FY 2025

The FY 2025 progress report confirms that NIST's investigative team continued work on the Champlain Towers South technical analysis through the fiscal year. Key workstreams included:

  • Structural modeling refinement: Engineers continued to develop and validate finite element models to determine the sequence of collapse initiation and progression.
  • Materials analysis: Laboratory testing of concrete samples and corroded reinforcing steel added to the evidentiary record.
  • Recommendation drafting: NIST moved closer to finalizing safety recommendations for building codes, inspection standards, and maintenance practices for existing concrete structures — particularly those in coastal and high-humidity environments.

The investigation has taken longer than typical NCST cases, reflecting both the evidentiary complexity and the high public interest in ensuring that recommendations are technically defensible and broadly actionable.

This extended, methodical approach is deliberate — and instructive.


Why This Matters Beyond Construction: The AI Governance Parallel

Here is where I want to offer analysis you will not find in the standard news coverage of this report.

NIST's NCST investigations are, at their core, systemic failure analysis frameworks. They ask not just "what broke?" but "why did the system fail to prevent the break?" That question — applied to AI systems — is precisely what ISO 42001:2023, the NIST AI RMF, and the EU AI Act are demanding of regulated organizations right now.

Consider the structural parallels:

NCST Investigation Element → AI Governance Equivalent

  • Forensic evidence collection post-failure → AI incident logging and post-deployment monitoring (AI RMF: GOVERN 1.7, MANAGE 4.2)
  • Independent technical review panel → Third-party AI audits and red-teaming (ISO 42001:2023 clause 9.2)
  • Sequence-of-failure reconstruction → Root cause analysis in AI incident response plans
  • Public comment and transparency process → Stakeholder engagement and impact assessments (EU AI Act Article 27)
  • Code and standards recommendations → AI policy updates and control improvements post-incident
  • Multi-disciplinary investigative team → Cross-functional AI governance committees (ISO 42001:2023 clause 5.3)

The Champlain Towers South case is particularly instructive because the failure was not a sudden, unpredictable event — it was the culmination of years of deferred maintenance, inadequate inspection, and systemic underestimation of risk signals. The building's condition had been flagged in engineering reports as early as 2018.

Sound familiar? Organizations deploying high-risk AI systems today are often operating with known risk signals that are being deferred, underweighted, or inadequately governed. The NIST NCST methodology exists precisely to prevent that pattern from becoming catastrophic.



The Champlain Towers South collapse, which claimed 98 lives in June 2021, illustrates how systemic risk accumulates incrementally over years — a failure pattern that is directly analogous to ungoverned AI system drift, bias accumulation, and deferred model validation in high-stakes deployment environments.


Three Practical Lessons for Regulated AI Organizations

Lesson 1: Invest in Pre-Failure Detection Infrastructure, Not Just Post-Failure Analysis

NIST's NCST work is triggered by catastrophe. Your AI governance program should not be. ISO 42001:2023 clause 6.1.2 requires organizations to identify AI-specific risks during the planning phase — before deployment, not after failure.

In practice, this means:

  • Establishing continuous model performance monitoring with defined drift thresholds
  • Implementing pre-deployment risk assessments that include worst-case scenario modeling
  • Documenting known limitations of each AI system in your AI system inventory (required under ISO 42001:2023 Annex A, control A.6.1)
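To make the first item concrete, drift monitoring can be as simple as comparing live score distributions against a training-time baseline. The sketch below uses the population stability index (PSI), a common distribution-shift metric; the 0.2 alert threshold and the bin count are illustrative assumptions to tune per system, not prescriptions from any standard.

```python
import math
import random

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of model scores."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def bin_fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1) if width else 0
            idx = max(0, min(idx, bins - 1))  # clamp out-of-range live values
            counts[idx] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # floor empty bins

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

DRIFT_THRESHOLD = 0.2  # rule-of-thumb alert level; tune per system

random.seed(7)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]
live = [random.gauss(0.8, 1.0) for _ in range(5000)]  # simulated drift

psi = population_stability_index(baseline, live)
if psi > DRIFT_THRESHOLD:
    print(f"ALERT: PSI {psi:.3f} exceeds threshold {DRIFT_THRESHOLD}")
```

A check like this can run on a schedule against each system in the inventory, with alerts routed to the governance committee rather than silently logged.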

The organizations I work with at Regulated AI Consulting that achieve first-time audit passes consistently have one thing in common: they treat their AI systems as if a NIST investigation team might show up tomorrow. That mindset — forensic readiness — is a governance competitive advantage.

Lesson 2: Independent Review Is Not Optional in High-Risk Contexts

NIST's NCST investigations are explicitly structured around independence. The investigative team is separate from the entities being investigated. Technical findings are peer-reviewed. Public comment processes invite external scrutiny.

For AI systems classified as high-risk under the EU AI Act (Article 6, Annex III) — including AI used in credit scoring, employment decisions, critical infrastructure, and medical devices — independent conformity assessments are legally required, not aspirational.

But even for organizations not yet subject to EU AI Act enforcement, the NIST AI RMF's GOVERN function makes clear that internal governance alone is insufficient for high-stakes AI deployments. Third-party audits, red team exercises, and external bias evaluations are the AI equivalents of NIST's independent technical panels.

Lesson 3: Transparency Builds Long-Term Trust — Even When Findings Are Uncomfortable

One of the most significant aspects of NIST's NCST work is its commitment to public transparency. Preliminary findings, methodology documents, and draft recommendations are all made available for public review. This is uncomfortable when findings implicate industry-standard practices or reveal systemic failures that extend beyond a single building.

For regulated AI organizations, transparency is increasingly a regulatory requirement, not a reputational choice. The EU AI Act's transparency obligations (Articles 13 and 50), ISO 42001:2023 clause 8.4's requirements for AI system documentation, and the FTC's guidance on AI explainability all point in the same direction: organizations that build transparency infrastructure now will be better positioned for enforcement environments that are tightening globally.
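One low-effort way to operationalize these documentation duties is a completeness check over each system's disclosure record. The field names below are illustrative assumptions loosely inspired by EU AI Act Article 13 information requirements, not an official schema.

```python
# Illustrative disclosure fields, loosely modeled on EU AI Act Article 13
# information duties and ISO/IEC 42001-style documentation. Not an official schema.
REQUIRED_FIELDS = [
    "system_name", "provider", "intended_purpose",
    "performance_metrics", "known_limitations", "human_oversight_measures",
]

def validate_disclosure(record):
    """Return the required disclosure fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

record = {
    "system_name": "triage-assist",  # hypothetical example system
    "provider": "Example Health Inc.",
    "intended_purpose": "Rank incoming cases for clinician review",
    "performance_metrics": {"auroc": 0.91},
    "known_limitations": "",  # empty: flagged as a gap
    "human_oversight_measures": "Clinician reviews every ranking",
}

gaps = validate_disclosure(record)
print("disclosure gaps:", gaps)
```

Running such a check in CI for every deployed system turns transparency from an annual scramble into a continuously enforced control.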


NIST's Broader AI Governance Role: Connecting NCST to AI RMF

It would be a mistake to view NIST's NCST work in isolation from its AI governance activities. NIST is the same agency that published the AI Risk Management Framework (AI RMF 1.0) in January 2023 and has been developing companion resources including the Generative AI Profile (NIST AI 600-1) and cybersecurity framework integrations.

The methodological DNA is consistent across NIST's work: systematic identification of hazards, structured risk assessment, evidence-based recommendations, and iterative improvement cycles. Whether the subject is a collapsed condominium tower or a large language model deployed in a healthcare triage system, NIST's approach is the same.

For organizations building AI governance programs, this coherence is an asset. Alignment with NIST AI RMF positions you not just for voluntary best-practice recognition, but for eventual regulatory alignment as federal AI governance standards mature.



NIST's consistent application of systematic hazard identification, evidence-based risk assessment, and iterative recommendation development — across both physical infrastructure and AI systems — establishes it as the de facto methodological anchor for U.S. AI governance compliance programs.


Key Statistics Every AI Governance Professional Should Know

  1. NIST's AI RMF has been downloaded over 1 million times since its January 2023 release, reflecting broad industry adoption as a baseline governance framework.
  2. The EU AI Act, which entered into force in August 2024, imposes fines of up to €35 million or 7% of global annual turnover for violations involving prohibited AI practices — making governance failures extraordinarily costly.
  3. According to a 2024 IBM Global AI Adoption Index, 42% of enterprise-scale companies reported actively deploying AI, but fewer than 25% had formal AI governance frameworks in place — a gap that regulators globally are moving to close.
  4. ISO 42001:2023, the world's first AI management system standard, was published in December 2023 and is already being referenced in procurement requirements, regulatory guidance documents, and audit frameworks across regulated industries.
  5. The Champlain Towers South collapse, under NIST investigation since July 2021, represents one of the longest-running NCST investigations in the program's history — underscoring the complexity of systemic failure analysis and the importance of not rushing to conclusions in high-stakes cases.

What Regulated Organizations Should Do Right Now

If you are reading this and thinking "this is interesting context, but what do I actually do on Monday morning?" — here is your practical action list:

Immediate (0–30 days):

  • Review your AI system inventory for high-risk deployments that lack documented risk assessments
  • Confirm that your AI incident response plan includes a root cause analysis protocol (not just a notification protocol)
  • Identify whether any of your AI systems fall under EU AI Act high-risk categories (Annex III) and confirm your conformity assessment timeline

Short-term (30–90 days):

  • Schedule an independent review or gap assessment against ISO 42001:2023 or NIST AI RMF — especially if you have not done so in the past 12 months
  • Establish or refresh your AI governance committee with defined roles, authorities, and escalation paths (ISO 42001:2023 clause 5.3)
  • Document known limitations and residual risks for each deployed AI system

Strategic (90+ days):

  • Begin formal ISO 42001:2023 certification planning if not already underway
  • Develop a transparency and disclosure framework for AI-impacted stakeholders
  • Build forensic readiness into your AI operations: logging, version control, and audit trail infrastructure that would support a post-incident investigation
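The forensic-readiness item can be made tangible with tamper-evident logging. The sketch below hash-chains each AI decision record so an investigator can later verify the trail was not altered after the fact; the class and field names are hypothetical, and a production system would persist entries to append-only or write-once storage.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained log of AI decisions for forensic readiness."""

    def __init__(self):
        self.entries = []

    def record(self, system_id, model_version, inputs_digest, outcome):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "ts": time.time(),
            "system_id": system_id,
            "model_version": model_version,
            "inputs_digest": inputs_digest,  # hash of inputs, not raw PII
            "outcome": outcome,
            "prev": prev_hash,  # chains this entry to the one before it
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        """Re-derive every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            check = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(check, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record("credit-scoring-v2", "2026.03.1", "sha256:ab12", "approved")
log.record("credit-scoring-v2", "2026.03.1", "sha256:cd34", "denied")
print("chain intact:", log.verify())
```

The point is not this particular data structure but the property it demonstrates: a post-incident investigator, like an NCST team, should be able to trust the evidentiary record.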

At Regulated AI Consulting, I have guided 200+ clients through exactly this process — and maintained a 100% first-time audit pass rate because we treat governance as a proactive discipline, not a reactive scramble.


The Bottom Line

NIST's FY 2025 NCST annual report to Congress is a construction safety document on its surface. But at its core, it is a masterclass in how a rigorous federal agency approaches systemic risk — with methodological discipline, independence, transparency, and a long time horizon.

Every regulated organization deploying AI systems faces a version of the same challenge that NIST's NCST teams face: understanding how complex systems fail, documenting what was known and when, and building recommendations that prevent recurrence. The organizations that internalize this discipline now — before a failure compels it — will be the ones that earn and maintain trust in an increasingly scrutinized AI deployment landscape.

The building inspectors who flagged Champlain Towers South in 2018 were right. The real failure was in the systems that did not act on those flags. Do not let your AI risk signals become the flags that nobody acted on.


For a complimentary consultation on AI governance readiness, ISO 42001:2023 certification planning, or NIST AI RMF alignment, visit regulatedai.consulting.

Explore our detailed guidance on AI Risk Management Framework implementation and ISO 42001 certification pathways for regulated industries.


Last updated: 2026-03-26

Source: NIST, "NIST Submits Annual Report to Congress Summarizing FY 2025 Progress on National Construction Safety Team Investigations," March 2026. https://www.nist.gov/news-events/news/2026/03/nist-submits-annual-report-congress-summarizing-fy-2025-progress-national


Jared Clark

AI Governance Consultant, Regulated AI Consulting

Jared Clark is the founder of Regulated AI Consulting, advising organizations on AI governance frameworks, ISO 42001 compliance, and responsible AI deployment in regulated industries.