AI Governance Frameworks: NIST AI RMF vs EU AI Act
The EU AI Act is law. NIST AI RMF is a voluntary framework with growing regulatory adoption. Engineering teams building AI systems in 2026 need to understand what each requires, where they align, and what the compliance gaps look like in practice.

AI governance is no longer a future concern. The EU AI Act entered into force in August 2024 and has been applying in phases since, with most high-risk system obligations in force by August 2026. In the US, NIST's AI Risk Management Framework has become the de facto reference for federal contractors and is being cited in state-level AI regulations. Engineering teams that ignored governance while building are now scrambling to retrofit it.
Retrofitting is expensive. Governance designed into a system from the start costs a fraction of what it costs to add later. This post is about understanding both frameworks well enough to make architectural decisions that satisfy both — and about where the two frameworks genuinely diverge.
EU AI Act: Risk Tiers and What They Mean
The EU AI Act uses a four-tier risk classification: unacceptable risk (banned), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated). The tier your system falls into determines the compliance burden.
Unacceptable risk systems are prohibited. These include real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), social scoring by governments, and systems that exploit psychological vulnerabilities. If your product falls here, there is no compliance path — it cannot be deployed in the EU.
High-risk systems face the most significant obligations. The list includes AI in critical infrastructure, education and vocational training, employment decisions, access to essential private and public services, law enforcement, migration, and administration of justice. The core obligations for high-risk systems:
- Risk management system: Documented, tested, continuously monitored throughout the lifecycle.
- Data governance: Training and test data must be relevant, representative, and free of errors to the best extent possible. Data provenance must be documented.
- Technical documentation: Full documentation of system purpose, architecture, training methodology, performance metrics.
- Transparency: Users must know they are interacting with an AI system. Clear instructions for use.
- Human oversight: Systems must be designed to allow human intervention and override. Cannot be designed to circumvent oversight.
- Accuracy and robustness: Performance metrics must meet defined thresholds. Robustness against adversarial inputs required.
- Logging: Automatic logging sufficient to ensure traceability across system lifetime.
NIST AI RMF: The Voluntary Framework Built for Adoption
The NIST AI RMF is organized around four functions: GOVERN, MAP, MEASURE, and MANAGE. GOVERN establishes the organizational policies and accountability structures. MAP identifies and categorizes AI risks in context. MEASURE quantifies those risks using qualitative and quantitative methods. MANAGE deploys responses to identified risks.
What makes the RMF useful is its specificity about process without being prescriptive about technology. It does not tell you which bias mitigation algorithm to use — it tells you that you need a process for identifying, measuring, and addressing bias. This leaves room for engineering judgment while ensuring governance gaps do not persist.
| Dimension | NIST AI RMF | EU AI Act |
|---|---|---|
| Legal status | Voluntary (US); state laws reference it | Mandatory law in EU |
| Scope | All AI systems | Risk-tiered; obligations scale with risk |
| Prescriptiveness | Process-oriented, technology-neutral | Specific technical requirements for high-risk |
| Enforcement | Market pressure + emerging regulation | National market surveillance authorities |
| Penalties | None directly | Up to €35M or 7% global revenue |
| Documentation | Guidance-based, flexible format | Mandatory standardized technical documentation |
| Best for | US organisations building AI governance culture | Organisations deploying AI in EU markets |
Where the Frameworks Align
Despite different origins and legal force, NIST RMF and the EU AI Act converge on several key principles. Both require continuous risk assessment, not point-in-time evaluation. Both emphasize human oversight as a design requirement. Both require documented testing against performance metrics. Both address bias and fairness explicitly. Both require transparency — about the system's nature and its limitations.
This alignment means that a governance program built around NIST RMF provides substantial coverage for EU AI Act compliance, particularly for the documentation and risk management requirements. The EU Act adds specific legal requirements around conformity assessment and CE marking for high-risk systems that NIST does not address, but the underlying governance practices overlap heavily.
Engineering for Governance: Practical Steps
Building a Governance-Ready AI System
Before writing a line of code, determine whether your system falls under the EU AI Act's high-risk categories and how its risks are categorised under the NIST RMF's MAP function (the RMF defines no fixed risk tiers). This decision shapes architecture choices, not just documentation choices. If you are near a high-risk boundary, design as if you are in it.
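As a sketch of that triage step, the tier lookup can be a small function. The category names below are a hypothetical, abbreviated subset of the Act's Annex III domains and Article 5 prohibitions — a real classification needs the full legal text and counsel review.

```python
from enum import Enum

class EUAITier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"

# Illustrative subset of Annex III high-risk domains; the full list is in the Act.
HIGH_RISK_DOMAINS = {
    "employment", "education", "credit_scoring", "critical_infrastructure",
    "law_enforcement", "migration", "justice", "essential_services",
}

# Illustrative subset of Article 5 prohibited practices.
PROHIBITED_PRACTICES = {"social_scoring", "realtime_public_biometric_id"}

def classify(domain: str, interacts_with_humans: bool) -> EUAITier:
    """Rough first-pass triage; not a substitute for legal review."""
    if domain in PROHIBITED_PRACTICES:
        return EUAITier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return EUAITier.HIGH_RISK
    # Systems that interact with people carry transparency obligations.
    if interacts_with_humans:
        return EUAITier.LIMITED_RISK
    return EUAITier.MINIMAL_RISK
```

For example, `classify("employment", True)` returns `EUAITier.HIGH_RISK` — a CV screener is high risk regardless of how it is packaged.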
Both frameworks require audit trails. Logging sufficient for governance means capturing: inputs to the model, outputs from the model, confidence scores or uncertainty estimates, any human overrides, and the version of the model used. Design the log schema before you design the feature.
Document your model's intended use, limitations, performance across demographic groups, and known failure modes. This is required by the EU AI Act's technical documentation requirement and aligns with NIST RMF's MAP function. Update these documents when the model changes.
Every AI decision that affects a person must have a path for that person to contest or escalate. This is both an EU AI Act requirement for high-risk systems and a NIST RMF best practice. Build the UI and process for human review before launch, not in response to complaints after.
Bias does not stay constant. Model performance across demographic groups shifts as real-world data distributions shift. Schedule regular bias evaluations — quarterly at minimum — and define the thresholds that trigger remediation. Document both the methodology and the results.
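A scheduled evaluation needs a concrete metric and a concrete trigger. The sketch below uses selection-rate parity with the four-fifths rule as an assumed remediation threshold — one common starting point, not a threshold either framework prescribes; your documented methodology may differ:

```python
def selection_rates(decisions: list[dict]) -> dict[str, float]:
    """Positive-outcome rate per demographic group."""
    totals: dict[str, list[int]] = {}
    for d in decisions:
        hits, n = totals.setdefault(d["group"], [0, 0])
        totals[d["group"]] = [hits + int(d["approved"]), n + 1]
    return {g: hits / n for g, (hits, n) in totals.items()}

def disparate_impact_alerts(rates: dict[str, float],
                            threshold: float = 0.8) -> list[str]:
    """Flag groups whose selection rate falls below `threshold` times the
    best-off group's rate (the four-fifths rule as a starting point)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]
```

If group A is approved 80% of the time and group B 50%, the ratio is 0.625, below 0.8, and B is flagged for the documented remediation process.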
The Governance Gap Most Teams Miss
Most engineering teams focus on the model: they document it, test it, measure its bias. The governance gap is usually in the system surrounding the model — the data pipeline that feeds it, the post-processing logic that transforms its outputs, the human interfaces that display those outputs. A well-governed model embedded in a poorly governed system does not satisfy either framework.
“Governance is not a model problem. It is a system problem. Everything the model touches is in scope.”
Third-party model use creates additional complexity that teams frequently underestimate. If you use a foundation model API for a high-risk application, you are responsible for the governance of the resulting system even though you do not control the model. The EU AI Act places obligations on the deployer, not just the developer. Understand what your model provider can and cannot attest to — and document where the governance responsibility boundary sits.
Side-by-Side Framework Comparison
The NIST AI RMF and the EU AI Act approach AI governance from fundamentally different angles. The NIST RMF is a voluntary framework for any organisation building or deploying AI — it provides structure and vocabulary for risk management without mandating specific outcomes. The EU AI Act is binding law with enforcement mechanisms, risk-based classifications, and specific technical requirements. Understanding both is essential even if you only operate in one jurisdiction.
| Dimension | NIST AI RMF (USA) | EU AI Act |
|---|---|---|
| Type | Voluntary framework | Binding regulation (EU law) |
| Scope | Any organisation building or using AI globally | AI systems placed on EU market or affecting EU persons |
| Risk classification | None — four functions (Govern, Map, Measure, Manage) structure the process | Prohibited / High-risk / Limited-risk / Minimal-risk |
| Enforcement | None — market-driven adoption | National authorities; fines up to €35M or 7% global revenue |
| Documentation required | Recommended practices (AI RMF Playbook) | Mandatory technical documentation for high-risk AI |
| Conformity assessment | Self-assessment or third-party (voluntary) | Mandatory third-party for certain high-risk categories |
| Effective date | Published January 2023 (living document) | Phased: prohibitions Feb 2025, GPAI Aug 2025, high-risk Aug 2026-2027 |
| Penalties | None | Up to €35M / 7% global annual turnover |
Implementation Timeline for Each Framework
The EU AI Act's phased implementation means different obligations activate at different times. The Act entered into force in August 2024; prohibited AI practices (social scoring, real-time biometric surveillance in public) became enforceable in February 2025. General-purpose AI model obligations apply from August 2025, with additional duties for models presenting systemic risk (above 10^25 FLOPs of training compute). High-risk AI system requirements — the most substantive category, covering hiring, credit, healthcare diagnostics, education, law enforcement, and critical infrastructure — apply from August 2026. Compliance infrastructure needs to be built 12-18 months before obligations kick in.
- February 2025: Prohibited AI practices enforceable (social scoring, manipulative AI, real-time biometric ID in public); AI literacy obligations for providers and deployers
- August 2025: GPAI model rules apply, with additional obligations for systemic-risk models (>10^25 FLOPs training compute); codes of practice finalised
- August 2026: High-risk AI system requirements (Annex III categories) fully enforceable
- August 2027: High-risk AI embedded in regulated products (medical devices, machinery) must comply
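The 12-18 month lead time above can be made concrete with simple date arithmetic. A sketch, assuming the enforcement dates listed in this timeline (exact days of month are an assumption here):

```python
from datetime import date

# Milestone dates assumed from the phased timeline above.
EU_AI_ACT_MILESTONES = {
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_embedded_products": date(2027, 8, 2),
}

def build_start(deadline: date, lead_months: int = 18) -> date:
    """Back off `lead_months` from an enforcement date (day clamped to the 1st)."""
    total = deadline.year * 12 + (deadline.month - 1) - lead_months
    return date(total // 12, total % 12 + 1, 1)
```

Working backwards from the August 2026 high-risk deadline with an 18-month lead, `build_start` lands in February 2025 — which is why teams targeting Annex III categories should already be building.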
NIST AI RMF implementation has no external deadline — the driver is internal risk appetite and customer/partner requirements. That said, US federal agencies are increasingly referencing the RMF in procurement requirements, and enterprise B2B customers in regulated industries expect to see RMF-aligned governance documentation as part of vendor due diligence.
Which Framework to Prioritise: Decision Framework by Company Profile
| Company profile | Primary framework | Secondary | Why |
|---|---|---|---|
| EU market, >50 employees | EU AI Act (mandatory) | NIST RMF (operational guidance) | Legal obligation; RMF helps implement EU Act requirements |
| US-only, no EU customers | NIST AI RMF | ISO 42001 (if selling to enterprise) | No legal mandate; RMF provides credibility with US enterprises |
| Global enterprise B2B | Both simultaneously | ISO 42001 for unified certification | Enterprise customers in both regions; dual compliance is table stakes |
| Healthcare AI (US) | HIPAA + NIST AI RMF | FDA SaMD guidance | Sector regulation takes precedence; RMF fills governance gaps |
| Seed/Series A startup, US-only | NIST AI RMF (lightweight adoption) | None initially | Full compliance is premature; adopt RMF vocabulary and model cards |
For companies that have already implemented SOC 2, the NIST AI RMF maps cleanly onto the control families you have already established. The Govern and Manage functions of the RMF align with SOC 2's Change Management and Risk Assessment criteria. You are not starting from zero — you are extending existing governance to cover AI-specific risks. Our SOC 2 compliance guide covers the foundational control framework that AI governance builds on top of.
ISO 42001: The Emerging Standard for AI Management Systems
ISO 42001 (published December 2023) is the international standard for AI Management Systems (AIMS). Like ISO 27001 for information security, ISO 42001 provides a certifiable management system framework for AI governance. It is structured around the Plan-Do-Check-Act cycle and covers: AI policy, risk assessment and treatment, AI objectives, operational planning and control, performance evaluation, and continual improvement.
ISO 42001 certification is not yet widespread, but it is gaining traction in regulated industries and enterprise procurement as a mechanism to demonstrate structured AI governance. Organisations already certified on ISO 27001 will find significant overlap — the management system structure is nearly identical, and an integrated ISMS/AIMS is achievable without doubling the compliance overhead.
The relationship between ISO 42001 and the EU AI Act is complementary: ISO 42001 certification is expected to be recognised as evidence of conformity for some EU AI Act requirements (particularly around governance processes for high-risk AI). Companies targeting EU markets should evaluate ISO 42001 as a path to demonstrating EU AI Act readiness. For the supply chain security aspect of AI governance — which both frameworks address — see our AI dependency audit guide.
Practical AI Governance Checklist
Catalogue all AI systems in use or development. For each, document: intended use, training data sources, model type, output type, deployment context, and who is affected by decisions. Classify each system by the EU AI Act risk tier (or equivalent internal risk classification) to prioritise compliance effort.
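The catalogue entry above translates directly into a record type. A minimal sketch — the field names mirror the list above, and the string-valued risk tier is a simplifying assumption:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI inventory, mirroring the fields listed above."""
    name: str
    intended_use: str
    training_data_sources: list[str]
    model_type: str          # e.g. "gradient-boosted trees", "LLM via API"
    output_type: str         # e.g. "score", "ranking", "free text"
    deployment_context: str
    affected_parties: list[str]
    eu_risk_tier: str        # "prohibited" | "high" | "limited" | "minimal"

def compliance_priority(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Order the inventory by descending compliance urgency."""
    order = {"prohibited": 0, "high": 1, "limited": 2, "minimal": 3}
    return sorted(inventory, key=lambda r: order[r.eu_risk_tier])
```

Sorting the inventory by tier gives you the compliance work queue for free: high-risk systems surface first, and anything classified "prohibited" is an immediate escalation.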
Publish internal model cards documenting: model architecture and training data summary, intended use and out-of-scope uses, performance metrics across demographic subgroups, known limitations and failure modes, and update/versioning history. Model cards are required for GPAI models under the EU AI Act and expected by enterprise customers in regulated industries.
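A model card can be a structured object that renders to markdown, so the same source of truth feeds internal wikis and customer-facing disclosures. A sketch, with the section layout as an assumption rather than a standardised format:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    version: str
    architecture: str
    training_data_summary: str
    intended_use: str
    out_of_scope_uses: list[str]
    subgroup_metrics: dict[str, float]   # e.g. accuracy per demographic group
    known_limitations: list[str]

    def to_markdown(self) -> str:
        lines = [f"# Model Card: {self.name} v{self.version}",
                 f"**Architecture:** {self.architecture}",
                 f"**Training data:** {self.training_data_summary}",
                 f"**Intended use:** {self.intended_use}",
                 "## Out-of-scope uses"]
        lines += [f"- {u}" for u in self.out_of_scope_uses]
        lines.append("## Performance by subgroup")
        lines += [f"- {g}: {m:.3f}" for g, m in self.subgroup_metrics.items()]
        lines.append("## Known limitations")
        lines += [f"- {item}" for item in self.known_limitations]
        return "\n".join(lines)
```

Versioning the card alongside the model artifact keeps the "update/versioning history" requirement honest: a new model version without a new card is a reviewable gap.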
Integrate AI risk assessment into your existing risk management cycle. For each new AI deployment, assess: data quality and bias risks, output accuracy and reliability, potential for discriminatory or harmful outcomes, data privacy exposure, and adversarial attack surface. Document the assessment, residual risks, and mitigation controls.
For AI systems that influence consequential decisions (hiring, credit, content moderation, medical triage), implement explicit human-in-the-loop checkpoints. Document what "meaningful human oversight" means for each system — a rubber-stamp review is not sufficient. The EU AI Act's high-risk tier requires demonstrable human control.
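Structurally, meaningful oversight means the code path for consequential decisions cannot terminate without a human. A minimal routing sketch — the queue, the `consequential` flag, and the 0.95 auto-approval threshold are all assumptions to illustrate the shape:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    model_output: str      # e.g. "reject"
    confidence: float
    consequential: bool    # hiring, credit, triage, moderation...

def route(decision: Decision,
          review_queue: list[Decision],
          auto_threshold: float = 0.95) -> Optional[str]:
    """Consequential or low-confidence decisions go to a human reviewer;
    nothing is finalised until a named person signs off."""
    if decision.consequential or decision.confidence < auto_threshold:
        review_queue.append(decision)  # human must approve, amend, or reject
        return None                    # no automated final decision
    return decision.model_output
```

The important property is that `route` returns `None` for anything consequential: there is no code path where the model's output becomes the final decision for those cases, which is what makes the oversight demonstrable rather than a rubber stamp.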
Monitor production AI for distribution shift, performance degradation, and unexpected outputs. Define what constitutes an AI incident (output causing harm, significant accuracy drop, bias detection) and integrate AI incidents into your existing incident response process with appropriate escalation paths.
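Distribution shift on a single feature can be quantified with the Population Stability Index, one common choice for this kind of monitor. A sketch, with the conventional 0.2 "significant shift" cutoff as an assumed incident threshold:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a production
    sample of one feature. Rule of thumb: < 0.1 stable, > 0.2 significant shift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def histogram(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Smooth empty buckets so the log term stays finite.
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def is_drift_incident(expected: list[float], actual: list[float],
                      threshold: float = 0.2) -> bool:
    """Breaching the threshold opens an AI incident in the existing process."""
    return psi(expected, actual) > threshold
```

Wiring `is_drift_incident` into the same paging and ticketing flow as any other production incident is what makes the definition operational rather than aspirational.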
Practical Implementation: Starting Your Governance Program
Regardless of which framework you follow, the practical starting point is the same: inventory your AI systems, classify their risk level, and document their behaviour. Most organisations discover during inventory that they have more AI systems than they thought — ML models in recommendation engines, AI features in third-party SaaS tools, automated decision-making in HR software. The inventory is the foundation; without it, governance is aspirational rather than operational.
For each AI system, document: what data it uses (inputs), what decisions it makes or influences (outputs), who is affected by those decisions (stakeholders), what happens when it fails (failure modes), and how its performance is monitored (metrics). This documentation is required by both NIST AI RMF and EU AI Act for high-risk systems, and it is good practice for all systems regardless of regulatory requirement. Teams that maintain this documentation find that it also improves debugging, incident response, and onboarding — it is the AI equivalent of an architecture decision record. For teams operating in regulated industries, this documentation feeds directly into SOC 2 and HIPAA compliance requirements that already mandate system documentation.
The Role of Model Cards and System Documentation
Both NIST AI RMF and the EU AI Act require documentation of AI systems, but they differ in what they require and for whom. NIST recommends model cards (a standardised format for documenting model capabilities, limitations, and intended use) as a best practice. The EU AI Act mandates technical documentation for high-risk systems including: a detailed description of the AI system and its purpose, the design specifications and development process, data governance and management practices, human oversight measures, and instructions for use.
The practical approach: adopt model cards as your standard documentation format now, regardless of whether you are subject to the EU AI Act. Model cards force you to document the questions that matter — what was the model trained on, what are its known limitations, for what populations was it validated, and what are the failure modes. This documentation has operational value beyond compliance: it improves onboarding for new team members, provides context for incident responders, and serves as the basis for customer-facing AI transparency disclosures. Google, Hugging Face, and Microsoft all publish model cards for their major models, and the format is becoming an industry standard.