UPDATED · RSAC 2026 + FIELD RESEARCH · MARCH 2026

NIST AI RMF &
ISO 42001 Alignment

How Microsoft's AI security controls map to NIST AI RMF and ISO 42001 — with gap analysis per clause. Updated with RSAC 2026 GA announcements and the Zero Trust Workshop tool reference.

🛠️ Zero Trust Workshop & Assessment Tool

Microsoft's Zero Trust Workshop (microsoft.github.io/zerotrustassessment) is a free, open-source guided assessment framework built by the Microsoft Security CxE team. It provides pillar-specific assessment checks, a step-by-step deployment guide using a first-then-next structure, app-permissions analysis, and workshop documentation, all built from lessons learned across thousands of customer deployments. A formal AI pillar for the assessment tool is in development and expected in summer 2026. Until then, architects should use the existing Identity, Data, and Networking pillar assessments alongside the new Zero Trust for AI reference architecture published at RSAC 2026.

NIST AI RMF

NIST AI Risk Management Framework — Four Functions

GOVERN — Policies, roles, accountability
- Establish AI risk governance structure: Agent 365 · Purview
- Define roles and accountabilities for AI: Entra Agent ID (⚠ preview · Modern Agents only)
- Establish AI lifecycle policies: SDL for AI · ZT4AI
- Govern Classic vs Modern agent estate: Power Platform Admin + AIAgentsInfo KQL
- Govern multi-tenant AI environments: Entra Tenant Governance (preview · RSAC 2026)
- Manage third-party AI and MCP risk: Defender for Cloud Apps
- Workforce AI literacy and training: ⚠ Not a product control
MAP — Context, risks, and impacts
- Inventory all AI systems in use: Security Dashboard for AI (✓ now GA)
- Inventory agent authentication posture: AIAgentsInfo Advanced Hunting (Defender)
- Identify Classic vs Modern agents: Entra Agent ID portal · AIAgentsInfo KQL (⚠ name sync bug complicates this)
- Identify sensitive data exposure: Purview DSPM for AI
- Identify shadow AI deployment: Entra Internet Access Shadow AI (GA Mar 31)
- Identify threat actors and attack vectors: Defender · Sentinel · ZT4AI
- AI bias and fairness assessment: ⚠ Responsible AI tools (separate)
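The Classic-vs-Modern identification step above can be sized with an Advanced Hunting query. A minimal sketch, assuming the AIAgentsInfo table the matrix names; the column names AgentType and AuthenticationMode are illustrative assumptions, not a documented schema — verify them against the table schema shown in your tenant's Advanced Hunting editor:

```kql
// Sketch: size the Classic vs Modern agent estate for migration planning.
// AgentType and AuthenticationMode are assumed column names — confirm
// against the actual AIAgentsInfo schema before relying on results.
AIAgentsInfo
| summarize AgentCount = count() by AgentType, AuthenticationMode
| order by AgentCount desc
```

A breakdown like this gives a first-pass view of how much of the estate sits outside Entra Agent ID governance before any per-agent remediation begins.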
MEASURE — Analyse, assess, benchmark
- Continuous AI risk monitoring: Security Dashboard · Defender (dashboard now GA)
- Measure no-auth and ownerless agents: AIAgentsInfo KQL queries
- Evaluate model safety pre-deployment: Foundry Red Teaming + Evals
- Detect credential exposure in data: Data Security Posture Agent (preview · RSAC 2026)
- Benchmark AI security posture: ZT Workshop + ZT Assessment Tool (⚠ AI pillar: summer 2026)
- Runtime anomaly detection: Sentinel · Defender for AI
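The no-auth and ownerless-agent measurement above can be sketched as a single query. As before, AIAgentsInfo is the table the matrix names, while OwnerId, AuthenticationMode, and the projected columns are assumptions to check against the schema in your tenant:

```kql
// Sketch: surface agents with no authentication or no assigned owner.
// All column names here are illustrative assumptions — validate against
// the AIAgentsInfo schema exposed in Defender Advanced Hunting.
AIAgentsInfo
| where isempty(OwnerId) or AuthenticationMode == "None"
| project AgentId, AgentName, AgentType, AuthenticationMode, OwnerId
```

Running a query of this shape on a schedule (for example, as a Sentinel analytics rule) turns the one-off measurement into the continuous monitoring that clause 9.1 of ISO 42001 expects.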
MANAGE — Treat, respond, recover
- Respond to AI security incidents: Sentinel SOAR · Security Copilot
- Enforce access controls on AI systems: Entra CA · Foundry Guardrails (⚠ Modern Agents only for CA)
- Block unauthenticated agent access: Power Platform Managed Environments (available now)
- Manage agent lifecycle (onboard/retire): Entra Agent ID (⚠ preview · Modern only)
- Enforce data governance in AI workflows: Purview · DLP for Copilot (DLP: GA Mar 31)
- Limit blast radius during active attack: Defender Predictive Shielding (preview · RSAC 2026)
- Recover identity infrastructure: Entra Backup and Recovery (preview · RSAC 2026)
ISO 42001

ISO/IEC 42001:2023 — AI Management System

| Clause | Requirement | Microsoft Controls | Gap / Caveat |
|---|---|---|---|
| 4.2 — Interested Parties | Identify stakeholders and AI-related requirements | Agent 365 governance; Purview compliance; Entra Tenant Governance (preview) | Organisational process — not a product control |
| 5.2 — AI Policy | Establish and maintain an AI policy | SDL for AI; ZT for AI framework; Zero Trust Workshop (microsoft.github.io/zerotrustassessment) | Policy content is customer-defined; Microsoft provides scaffolding and a guided workshop |
| 6.1 — Risk Assessment | AI-specific risk identification and assessment process | Security Dashboard for AI (now GA); Purview DSPM; AIAgentsInfo Advanced Hunting; Foundry Red Teaming | Quantitative risk scoring still limited; qualitative posture now available via GA dashboard. Classic Agent estate requires separate inventory. |
| 6.1.3 — AI Impact Assessment | Assess impacts on individuals and society | Microsoft Responsible AI Impact Assessment tools (separate from Security) | Outside security product scope; separate RAI tooling required |
| 8.4 — AI System Development | Security in AI development lifecycle | SDL for AI; GitHub Advanced Security; Foundry Red Teaming; Classic→Modern Agent migration | Classic Agent legacy complicates this — agents built before Agent ID may have no secure development baseline |
| 8.6 — Data for AI Systems | Data quality, provenance, and governance | Purview Information Protection; DSPM for AI; DLP for Copilot (GA March 31) | Training data provenance still limited; inference-time data controls now stronger. Maker credentials can bypass data governance if not configured correctly. |
| 9.1 — Monitoring & Measurement | Continuous monitoring of AI system performance and risks | Security Dashboard (GA); Sentinel + MCP Entity Analyzer; Defender for AI; AIAgentsInfo KQL; Purview AI Observability | Good coverage when fully deployed. AI Agent Inventory requires Defender + Power Platform admin collaboration — complex setup. |
| 10.2 — Continual Improvement | Improve AIMS based on incidents and audit findings | Sentinel incident management; SDL feedback loops; ZT Workshop; ZT Assessment (AI pillar summer 2026) | ZT Assessment AI pillar not until summer 2026. Classic Agent name sync bug makes agent-level policy improvement tracking difficult. |
📌 Framework Coverage — Updated Post-RSAC 2026 + Field Research

The general availability (GA) of the Security Dashboard for AI strengthens coverage of the MAP and MEASURE functions. The discovery of the Classic vs Modern agent distinction, however, reveals a gap across all four functions: most organisations cannot claim complete GOVERN, MAP, MEASURE, or MANAGE coverage until their Classic Agent estate is migrated to Modern Agents. This is the most significant framework compliance gap identified from field research, and it is not visible from Microsoft's product documentation alone.