This section maps Microsoft's AI security controls to NIST AI RMF and ISO 42001, with a gap analysis per clause. Updated with the RSAC 2026 GA announcements and the Zero Trust Workshop tool reference.
Microsoft's Zero Trust Workshop (microsoft.github.io/zerotrustassessment) is a free, open-source guided assessment framework from the Microsoft Security CxE team, built from lessons learned across thousands of customer deployments. It provides pillar-specific assessment checks, a step-by-step deployment guide organised in a first-then-next structure, app permissions analysis, and workshop documentation. A formal AI pillar for the assessment tool is in development, expected summer 2026; until then, architects should use the existing Identity, Data, and Networking pillar assessments alongside the new Zero Trust for AI reference architecture published at RSAC 2026.
| Clause | Requirement | Microsoft Controls | Gap / Caveat |
|---|---|---|---|
| 4.2 — Interested Parties | Identify stakeholders and AI-related requirements | Agent 365 governance; Purview compliance; Entra Tenant Governance (preview) | Organisational process — not a product control |
| 5.2 — AI Policy | Establish and maintain an AI policy | SDL for AI; ZT for AI framework; Zero Trust Workshop (microsoft.github.io/zerotrustassessment) | Policy content is customer-defined; Microsoft provides scaffolding and guided workshop |
| 6.1 — Risk Assessment | AI-specific risk identification and assessment process | Security Dashboard for AI (now GA); Purview DSPM; AIAgentsInfo Advanced Hunting; Foundry Red Teaming | Quantitative risk scoring still limited; qualitative posture now available via GA dashboard. Classic Agent estate requires separate inventory. |
| 6.1.3 — AI Impact Assessment | Assess impacts on individuals and society | Microsoft Responsible AI Impact Assessment tools (separate from Security) | Outside security product scope; separate RAI tooling required |
| 8.4 — AI System Development | Security in AI development lifecycle | SDL for AI; GitHub Advanced Security; Foundry Red Teaming; Classic→Modern Agent migration | Classic Agent legacy complicates this — agents built before Agent ID may have no secure development baseline |
| 8.6 — Data for AI Systems | Data quality, provenance, and governance | Purview Information Protection; DSPM for AI; DLP for Copilot (GA March 31) | Training data provenance still limited; inference-time data controls now stronger. Maker credentials can bypass data governance if not configured correctly. |
| 9.1 — Monitoring & Measurement | Continuous monitoring of AI system performance and risks | Security Dashboard (GA); Sentinel + MCP Entity Analyzer; Defender for AI; AIAgentsInfo KQL; Purview AI Observability | Good coverage when fully deployed. AI Agent Inventory requires Defender + Power Platform admin collaboration — complex setup. |
| 10.2 — Continual Improvement | Improve AIMS based on incidents and audit findings | Sentinel incident management; SDL feedback loops; ZT Workshop; ZT Assessment (AI pillar summer 2026) | AI pillar of ZT Assessment not available until summer 2026. Classic Agent name sync bug makes agent-level policy improvement tracking difficult. |
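For the monitoring requirements in clauses 6.1 and 9.1, the `AIAgentsInfo` table referenced above can be queried from Advanced Hunting. The sketch below illustrates the idea of flagging agents with no recent activity signal; the column names (`Timestamp`, `AgentId`, `AgentName`, `AgentType`) are assumptions for illustration only and must be verified against the actual `AIAgentsInfo` schema in your tenant before use:

```kusto
// Hypothetical sketch: surface AI agents with no activity in the last 30 days,
// as a candidate input to the agent inventory and migration review.
// Column names are assumed — check the AIAgentsInfo schema in your tenant.
AIAgentsInfo
| summarize LastSeen = max(Timestamp) by AgentId, AgentName, AgentType
| where LastSeen < ago(30d)
| order by LastSeen asc
```

A query along these lines could feed the Classic Agent inventory work flagged in clause 6.1, since dormant agents built before Agent ID are the ones most likely to lack a secure development baseline.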
The GA of Security Dashboard for AI strengthens coverage of the NIST AI RMF MAP and MEASURE functions. The Classic vs Modern agent distinction, however, reveals a gap across all four functions: most organisations cannot claim complete GOVERN, MAP, MEASURE, or MANAGE coverage until their Classic Agent estate is migrated to Modern Agents. This is the most significant framework compliance gap identified in field research, and it is not visible from Microsoft's product documentation alone.