📌 Author's note: This site synthesises the author's own understanding from publicly available Microsoft documentation, official Microsoft Security blog posts, RSAC 2026 announcements, and insights from Microsoft Security professionals and MVPs. It is independent and not affiliated with or endorsed by Microsoft. Microsoft updates products and documentation frequently — always verify current status directly with Microsoft before making architecture or purchasing decisions.
UPDATED · RSAC 2026 + FIELD RESEARCH · MARCH 2026

Frameworks, Standards & Compliance

How Microsoft's AI security controls map to NIST AI RMF, ISO 42001, and the OWASP Agentic AI Top 10 — with gap analysis per clause. Includes upcoming regulatory deadlines for organisations deploying AI agents.

πŸ› οΈ Zero Trust Workshop & Assessment Tool

Microsoft's Zero Trust Workshop (microsoft.github.io/zerotrustassessment) is a free, open-source guided assessment framework built by the Microsoft Security CxE team. It provides pillar-specific assessment checks, a step-by-step deployment guide using a first-then-next structure, app permissions analysis, and workshop documentation. It is built from learnings across thousands of customer deployments. A formal AI pillar for the assessment tool is in development — expected summer 2026. Until then, architects should use the existing Identity, Data, and Networking pillar assessments alongside the new Zero Trust for AI reference architecture published at RSAC 2026.

OWASP LLM Top 10 (2025)

Top 10 security risks for LLM applications and AI agents

Source: OWASP Top 10 for LLM Applications (2025)

Distinct from the OWASP Agentic AI Top 10 below — the LLM Top 10 covers the full range of LLM application risks, while the Agentic Top 10 focuses specifically on risks that emerge when agents act autonomously. Every AI agent you deploy is exposed to all ten. Use PyRIT to test for them before deployment.

| # | Risk | What it means for AI agents | Primary control |
| --- | --- | --- | --- |
| LLM01 | Prompt Injection | Malicious instructions override agent instructions. Includes XPIA (cross-prompt injection) via documents, emails, or web content the agent retrieves. | Prompt Shields, ATG, input validation |
| LLM02 | Sensitive Information Disclosure | Agent leaks PII, credentials, system prompts, or proprietary data in responses or via tool outputs. | Purview DLP, DSPM for AI, output filtering |
| LLM03 | Supply Chain | Compromised model weights, poisoned training data, malicious plugins, or unsafe third-party MCP servers. | Foundry model governance, MCP server vetting, Agent Governance Toolkit |
| LLM04 | Data and Model Poisoning | Adversarially modified training or fine-tuning data causes the model to behave incorrectly or unsafely. | Foundry evaluation pipelines, model provenance tracking |
| LLM05 | Improper Output Handling | Agent outputs passed unsanitised to downstream systems — SQL injection via agent-generated queries, XSS via agent-generated HTML, command injection via agent-generated scripts. | Output validation, sandboxed execution, Foundry code execution controls |
| LLM06 | Excessive Agency | Agent has more permissions, tools, or autonomy than needed. Least agency principle violated. | Minimal connector/tool assignment, ATG tool allowlisting, least-privilege permissions |
| LLM07 | System Prompt Leakage | Agent reveals its system prompt or instructions — exposing business logic and enabling targeted attacks. | System prompt hardening, Prompt Shields, jailbreak detection |
| LLM08 | Vector and Embedding Weaknesses | Adversarial inputs manipulate RAG retrieval — poisoned documents inserted into the knowledge base alter agent behaviour. | Document ingestion controls, retrieval validation, SAM RCD for SharePoint |
| LLM09 | Misinformation | Agent generates confident but incorrect information — dangerous in compliance, legal, medical, or financial workflows. | Human-in-the-loop for high-stakes decisions, Foundry evaluation, grounding with verified sources |
| LLM10 | Unbounded Consumption | Agent consumes excessive compute, tokens, or API calls — enabling denial of service or cost-based attacks. | Rate limiting, token budgets, ATG blocking, Azure AI throttling |
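LLM05 (Improper Output Handling) is the easiest of the ten to demonstrate in code. The sketch below, plain Python rather than any Microsoft API, shows the two standard mitigations: binding agent-generated values as SQL parameters and escaping agent-generated text before rendering it as HTML. All names are illustrative.

```python
import html
import sqlite3

def render_agent_summary(agent_text: str) -> str:
    """Escape agent-generated text before embedding it in HTML,
    neutralising XSS payloads the model may have produced."""
    return f"<p>{html.escape(agent_text)}</p>"

def lookup_order(conn: sqlite3.Connection, order_id: str) -> list:
    """Bind agent-generated values as parameters; an injected payload
    is then treated as data, never as SQL."""
    return conn.execute(
        "SELECT status FROM orders WHERE id = ?", (order_id,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('42', 'shipped')")

# A hostile agent output is harmless once parameterised / escaped.
assert lookup_order(conn, "42' OR '1'='1") == []
assert lookup_order(conn, "42") == [("shipped",)]
assert render_agent_summary("<script>x</script>") == "<p>&lt;script&gt;x&lt;/script&gt;</p>"
```

The same principle extends to agent-generated shell commands: never interpolate them into a command string; pass them as argument lists to a sandboxed executor.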
📌 Two risk surfaces — test both before deployment

AI red teaming requires testing two surfaces simultaneously: security vulnerabilities (LLM01–LLM10) and responsible AI harms (bias, toxicity, manipulation). Traditional security testing focuses on only one. Microsoft's PyRIT automates testing across both surfaces — see the Products page for details and Playbooks for the pre-deployment workflow.

OWASP Agentic AI

OWASP Top 10 for Agentic Applications 2026

In December 2025, OWASP published the first formal taxonomy of risks specific to autonomous AI agents. Unlike the existing OWASP Top 10 for LLM applications (which focuses on model-level risks), the Agentic AI Top 10 covers risks that emerge when AI agents act autonomously — making decisions, invoking tools, and interacting with other agents. Microsoft's Agent Governance Toolkit (open source, April 2026) maps to all 10 risks.

| OWASP Risk | Description | Microsoft Control | AGT Coverage |
| --- | --- | --- | --- |
| Goal Hijacking | Adversary manipulates the agent's objective through prompt injection or environmental data | Prompt Shield, Entra Internet Access Prompt Injection Protection | Semantic intent classifier in Agent OS policy engine |
| Tool Misuse | Agent invokes tools beyond intended scope — accessing unauthorised APIs, data, or systems | Foundry Guardrails, Defender for Cloud Apps CASB | Capability sandboxing + MCP security gateway |
| Identity Abuse | Agent impersonates users or other agents, acquires excessive permissions | Entra Agent ID, CA for Agents, ID Protection for Agents | DID-based identity + behavioural trust scoring |
| Supply Chain Risks | Compromised model, plugin, or dependency introduced into the agent pipeline | Defender for Cloud AI model scanning, GitHub Advanced Security | Plugin signing with Ed25519 + manifest verification |
| Unsafe Code Execution | Agent executes unvalidated code or scripts with excessive privileges | Foundry execution sandboxing | Execution rings with resource limits |
| Memory Poisoning | Adversarial data injected into agent memory or RAG grounding data | DSPM for AI (grounding data blocking) | Cross-Model Verification Kernel (CMVK) with majority voting |
| Insecure Communications | Unencrypted or unauthenticated agent-to-agent communication | Entra Agent ID A2A protocol | Inter-Agent Trust Protocol (IATP) encryption |
| Cascading Failures | Failure or compromise in one agent propagates through a multi-agent chain | Sentinel AI analytics rules, automated response rules | Circuit breakers + SLO enforcement |
| Human-Agent Trust Exploitation | Agent manipulates human oversight — bypassing approval workflows or creating false urgency | Human-in-the-loop controls in Copilot Studio | Approval workflows with quorum logic |
| Rogue Agents | Agent operates outside intended boundaries — ignoring instructions, self-replicating | Power Platform admin kill switch, Entra CA for Modern Agents | Ring isolation, trust decay, automated kill switch |
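The circuit breaker listed under Cascading Failures is a general resilience pattern, not something specific to the Agent Governance Toolkit. A minimal sketch (hypothetical names and thresholds) of how one agent's calls to a downstream agent can be cut off after repeated failures:

```python
import time

class CircuitBreaker:
    """Stop calling a downstream agent after repeated failures, so one
    compromised or failing agent cannot drag the whole chain down."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: downstream agent isolated")
            self.opened_at = None  # half-open: allow one probe call
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

After `max_failures` consecutive failures the breaker opens and further calls fail fast; once `reset_after` seconds elapse, a single probe call is let through before the breaker decides whether to close again.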
📌 Sources

OWASP Top 10 for Agentic Applications 2026 · Microsoft Agent Governance Toolkit (GitHub, April 2026)

Regulatory Deadlines

Upcoming Compliance Dates — AI Agents

Two regulatory frameworks that directly apply to organisations deploying autonomous AI agents become enforceable in 2026. These are not hypothetical: both have hard enforcement dates.

| Regulation | Enforcement Date | Who It Affects | Key Obligations for AI Agents |
| --- | --- | --- | --- |
| EU AI Act — High-Risk AI Obligations | August 2026 | Any organisation deploying AI systems classified as high-risk in the EU market | Risk management system, data governance, technical documentation, human oversight, accuracy and robustness requirements, logging and auditability obligations |
| Colorado AI Act | June 2026 | Developers and deployers of high-risk AI systems affecting Colorado consumers | Impact assessments, transparency disclosures, human oversight mechanisms, discrimination risk mitigation, consumer complaint process |
⚠️ The Classic Agent gap compounds regulatory risk

Most existing Copilot Studio agents are Classic agents — outside the Entra security perimeter with no lifecycle governance, no audit trail in Entra, and no automated kill switch. If your high-risk AI deployments include Classic agents, meeting EU AI Act auditability and human oversight obligations will require either migration to Modern agents or compensating controls. Microsoft's planned migration tool does not yet exist.

✓ Start here — AI Baseline in Purview Compliance Manager

If you're standing up AI governance from zero, the first thing to run is the AI Baseline assessment in Purview Compliance Manager (Purview portal → Compliance Manager → Assessments → AI Baseline). It's a pre-built evaluation that automatically scores your tenant against the EU AI Act, NIST AI RMF 1.0, and ISO 42001 — surfacing remediation actions mapped to Purview, Entra, and Defender controls. Run it once to establish your baseline; re-run it quarterly to track the trend.

πŸ› οΈ Closing the gaps operationally β€” Purview Compliance Manager

Beyond the AI Baseline, Compliance Manager includes additional AI-specific regulatory assessment templates that evaluate your tenant against specific obligations and surface prioritised improvement actions for data protection, auditability, and AI usage controls. It's the operational tool that turns each deadline into a task list. Access via the Microsoft Purview portal β†’ Compliance Manager.

⚠ Compliance Manager score ≠ audit-ready compliance assessment

The Compliance Manager AI Baseline produces a posture score — useful for tracking trends and prioritising remediation. It is not the same as a structured compliance assessment with evidence collection, control testing, gap analysis, and a written findings report suitable for the ICO, EU AI Office, internal audit, or board sign-off. Regulated sectors (financial services, healthcare, public sector) typically need both: the score for operational tracking, and an independently validated assessment for regulator submission. Treating the score as the assessment is a common and significant misconception.

Governance operating model

The human layer — forums, cadences, and decision rights

Most failed AI security programmes fail at governance, not technology. Compliance Manager produces evidence; Sentinel produces alerts; PyRIT produces findings. What turns those into sustained risk reduction is the human layer that meets to review them. The forums below are the minimum viable AI governance operating model — they sit alongside (not instead of) existing security governance.

| Forum | What it owns | Attendees | Frequency |
| --- | --- | --- | --- |
| AI Security Working Group | Cross-functional review of new agent deployments, the risk register, compliance posture, weekly KPI trends. Owns the agenda for everything below. | IT, Security, Data Protection, Legal, key business unit reps | Monthly |
| Agent Lifecycle Board | Approves new agents, reviews ownerless agents, owns the Classic-to-Modern migration roadmap, signs off on risk-tier overrides. Reviews every HIGH-tier agent. | Owner (per agent), Sponsor (per agent), IT Approver, security lead | Monthly |
| Quarterly Governance Sweep | Full Phase 1 KQL re-run, auth-type review, Access Package renewal, DLP exception review, ownerless-agent check cross-referenced with HR data, shadow-AI scan. | Security ops, IAM ops, Purview admin | Quarterly |
| Annual AI Risk Assessment | Full estate review against the risk tier rubric, red team prioritisation for the year ahead, compliance framework re-assessment, board pack preparation. | Working Group + executive sponsor | Annual |
| Agent Red Team Cycle | Structured adversarial testing of HIGH-tier agents, new agents tested pre-production, regression red teaming on significant change. Findings feed back into Agent Lifecycle Board. | Internal red team or external partner | Per new HIGH-tier agent + annual for in-production HIGH agents |
📌 What each forum decides

Working Group: direction and prioritisation. Lifecycle Board: approval and accountability for individual agents. Quarterly Sweep: operational hygiene. Annual Assessment: strategy and budget. Red Team: evidence. The forums escalate up the table — a Lifecycle Board cannot override the Working Group; an Annual Assessment cannot override the executive sponsor. Document the escalation path explicitly before the first meeting.

NIST AI RMF

NIST AI Risk Management Framework — Four Functions

GOVERN
Policies, roles, accountability

| Activity | Microsoft Control | Notes |
| --- | --- | --- |
| Establish AI risk governance structure | Agent 365 · Purview | |
| Define roles and accountabilities for AI | Entra Agent ID | ⚠ Preview · Modern Agents only |
| Establish AI lifecycle policies | SDL for AI · ZT4AI | |
| Govern Classic vs Modern agent estate | Power Platform Admin + AIAgentsInfo KQL | |
| Govern multi-tenant AI environments | Entra Tenant Governance | Preview · RSAC 2026 |
| Manage third-party AI and MCP risk | Defender for Cloud Apps | |
| Workforce AI literacy and training | — | ⚠ Not a product control |
MAP
Context, risks, and impacts

| Activity | Microsoft Control | Notes |
| --- | --- | --- |
| Inventory all AI systems in use | Security Dashboard for AI | ✓ Now GA |
| Inventory agent authentication posture | AIAgentsInfo Advanced Hunting (Defender) | |
| Identify Classic vs Modern agents | Entra Agent ID portal · AIAgentsInfo KQL | ⚠ Name sync bug complicates this |
| Identify sensitive data exposure | Purview DSPM for AI | |
| Identify shadow AI deployment | Entra Internet Access Shadow AI | GA Mar 31 2026 |
| Identify threat actors and attack vectors | Defender · Sentinel · ZT4AI | |
| AI bias and fairness assessment | — | ⚠ Responsible AI tools (separate) |
MEASURE
Analyse, assess, benchmark

| Activity | Microsoft Control | Notes |
| --- | --- | --- |
| Continuous AI risk monitoring | Security Dashboard · Defender | Dashboard now GA |
| Measure no-auth and ownerless agents | AIAgentsInfo KQL queries | |
| Evaluate model safety pre-deployment | Foundry Red Teaming + Evals | |
| Detect credential exposure in data | Data Security Posture Agent | Preview · RSAC 2026 |
| Benchmark AI security posture | ZT Workshop + ZT Assessment Tool | ⚠ AI pillar: summer 2026 |
| Runtime anomaly detection | Sentinel · Defender for AI | |
MANAGE
Treat, respond, recover

| Activity | Microsoft Control | Notes |
| --- | --- | --- |
| Respond to AI security incidents | Sentinel SOAR · Security Copilot | |
| Enforce access controls on AI systems | Entra CA · Foundry Guardrails | ⚠ Modern Agents only for CA |
| Block unauthenticated agent access | Power Platform Managed Environments | Available now |
| Manage agent lifecycle (onboard/retire) | Entra Agent ID | ⚠ Preview · Modern only |
| Enforce data governance in AI workflows | Purview · DLP for Copilot | DLP: GA Mar 31 2026 |
| Limit blast radius during active attack | Defender Predictive Shielding | Preview · RSAC 2026 |
| Recover identity infrastructure | Entra Backup and Recovery | Preview · RSAC 2026 |
ISO 42001

ISO/IEC 42001:2023 — AI Management System

| Clause | Requirement | Microsoft Controls | Gap / Caveat |
| --- | --- | --- | --- |
| 4.2 — Interested Parties | Identify stakeholders and AI-related requirements | Agent 365 governance; Purview compliance; Entra Tenant Governance (preview) | Organisational process — not a product control |
| 5.2 — AI Policy | Establish and maintain an AI policy | SDL for AI; ZT for AI framework; Zero Trust Workshop (microsoft.github.io/zerotrustassessment) | Policy content is customer-defined; Microsoft provides scaffolding and guided workshop |
| 6.1 — Risk Assessment | AI-specific risk identification and assessment process | Security Dashboard for AI (now GA); Purview DSPM; AIAgentsInfo Advanced Hunting; Foundry Red Teaming | Quantitative risk scoring still limited; qualitative posture now available via GA dashboard. Classic Agent estate requires separate inventory. |
| 6.1.3 — AI Impact Assessment | Assess impacts on individuals and society | Microsoft Responsible AI Impact Assessment tools (separate from Security) | Outside security product scope; separate RAI tooling required |
| 8.4 — AI System Development | Security in AI development lifecycle | SDL for AI; GitHub Advanced Security; Foundry Red Teaming; Classic→Modern Agent migration | Classic Agent legacy complicates this — agents built before Agent ID may have no secure development baseline |
| 8.6 — Data for AI Systems | Data quality, provenance, and governance | Purview Information Protection; DSPM for AI; DLP for Copilot (GA March 31 2026) | Training data provenance still limited; inference-time data controls now stronger. Maker credentials can bypass data governance if not configured correctly. |
| 9.1 — Monitoring & Measurement | Continuous monitoring of AI system performance and risks | Security Dashboard (GA); Sentinel + MCP Entity Analyzer; Defender for AI; AIAgentsInfo KQL; Purview AI Observability | Good coverage when fully deployed. AI Agent Inventory requires Defender + Power Platform admin collaboration — complex setup. |
| 10.2 — Continual Improvement | Improve AIMS based on incidents and audit findings | Sentinel incident management; SDL feedback loops; ZT Workshop; ZT Assessment (AI pillar summer 2026) | ZT Assessment AI pillar not until summer 2026. Classic Agent name sync bug makes agent-level policy improvement tracking difficult. |
📌 Framework Coverage — Updated Post-RSAC 2026 + Field Research

The GA of Security Dashboard for AI strengthens MAP and MEASURE function coverage. The discovery of the Classic vs Modern agent distinction reveals a gap across all four functions — most organisations cannot claim complete GOVERN, MAP, MEASURE, or MANAGE coverage until their Classic Agent estate is migrated to Modern Agents. This is the most significant framework compliance gap identified from field research and is not visible from Microsoft's product documentation alone.

Zero Trust for AI

Applying Zero Trust to AI Workloads

Zero Trust isn't just for users and devices. The three core principles apply directly to AI agents, but the implementation looks very different from user-centric Zero Trust. Here's what each principle means in practice — and where the hardest gaps are today.

🔐
Verify Explicitly
PRINCIPLE 01 · IDENTITY & AUTHENTICATION
For users, this means MFA and Conditional Access. For AI agents, it means ensuring every agent has a verified identity — not just a name — before it can access resources or communicate with other agents. In practice: require Entra ID authentication for all agent interactions, register agents in the Agent 365 Registry, and use modern Agent ID authentication (OAuth 2.0) where available. For Copilot Studio agents, this means enforcing one of the four authentication patterns rather than allowing No Authentication. The hardest part: Classic Copilot Studio agents authenticate as service principals or OBO — they don't use modern Agent ID and therefore can't be verified by CA for Agents or ID Protection. This is the single biggest gap in Microsoft's current Zero Trust for AI story.
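To make "verify explicitly" concrete at the code level, the sketch below shows the minimal claim checks (expiry and audience) a gateway could run before letting an agent call proceed. It is a generic JWT illustration, not Entra-specific code, and it deliberately omits signature verification, which a real deployment must perform against the issuer's published signing keys.

```python
import base64
import json
import time

def decode_claims(jwt_token: str) -> dict:
    """Decode a JWT's payload segment. NOTE: this does NOT verify the
    signature; production code must validate it against the issuer's keys."""
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_token_acceptable(claims: dict, expected_audience: str, now=None) -> bool:
    """Reject expired tokens and tokens minted for a different audience:
    the minimum 'verify explicitly' gate before an agent call proceeds."""
    now = time.time() if now is None else now
    return claims.get("aud") == expected_audience and claims.get("exp", 0) > now
```

The point of the sketch is that "a verified identity, not just a name" means checking cryptographically backed claims on every call, never trusting a self-reported agent name.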
🔒
Use Least Privilege
PRINCIPLE 02 · ACCESS & PERMISSIONS
For users, this means JIT/JEA and PIM. For AI agents, it means scoping each agent's permissions to exactly what it needs for its specific task — no broader. In practice: avoid Application permissions (tenant-wide) in favour of Delegated permissions (user-scoped), avoid maker credentials (which grant the maker's full permission set to every user), use access packages for time-bound agent resource assignments, and configure Custom Security Attributes to classify agent access levels for attribute-based CA policies. The hardest part: agents are often provisioned broadly "to make sure they work" and permissions are rarely reviewed. Agent lifecycle workflows and access reviews are the operational controls that enforce this principle over time.

Least agency — an extension of least privilege for AI: The ZT4AI framework introduces a more specific concept. It is not enough to give an agent a limited set of data sources — you must also limit the APIs, UI actions, and side effects it can invoke. Each connector added to an agent (CRM, ticketing system, database, line-of-business app) expands its blast radius if compromised or manipulated via prompt injection. Least agency means giving agents the minimum set of tools and actions required for the specific task — not everything that might be convenient. In Copilot Studio terms: restrict which connectors and MCP tools are available per agent, not just which data sources it can read.
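Least agency reduces to deny-by-default tool access. The sketch below uses hypothetical agent and tool names; in Copilot Studio the equivalent enforcement lives in per-agent connector and tool configuration rather than application code, but the shape of the check is the same.

```python
# Hypothetical per-agent tool grants; anything not listed is denied.
AGENT_TOOL_ALLOWLIST = {
    "invoice-summariser": {"crm.read_invoice", "mail.send_draft"},
    "hr-faq-bot": {"kb.search"},
}

def invoke_tool(agent_id: str, tool: str, call):
    """Deny-by-default gate: an agent may only invoke tools explicitly
    granted to it, no matter what an injected prompt asks for."""
    allowed = AGENT_TOOL_ALLOWLIST.get(agent_id, set())
    if tool not in allowed:
        raise PermissionError(f"{agent_id} may not call {tool}")
    return call()
```

An unregistered agent gets an empty grant set, so the default posture for anything new is "no tools at all" until someone deliberately adds it.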
🛡️
Assume Breach
PRINCIPLE 03 · DETECTION & RESPONSE
For users, this means SIEM, SOAR, and EDR. For AI agents, it means assuming any agent could be compromised via prompt injection, malicious tool output, or credential theft — and building detection and containment accordingly. In practice: deploy Sentinel AI analytics rules, configure Defender real-time agent protection, build AI incident response playbooks, and set up automated response rules for high-risk AI activity. The hardest part: agent compromise often looks like normal agent behaviour — the agent is doing what it was told, just by an attacker rather than a legitimate user. Detection requires behavioural baselines, not just signature matching.
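The behavioural-baseline point can be illustrated with a toy z-score check over an agent's own activity history. This is an illustrative sketch only, not how Sentinel or Defender implement detection; real baselines cover many signals (tools called, data volumes, timing), not a single rate.

```python
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """Flag an agent action rate that deviates sharply from its own
    baseline. Compromise via prompt injection often looks like 'normal'
    tool calls, just at an abnormal volume or tempo."""
    if len(history) < 2:
        return False  # not enough baseline yet to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean  # flat baseline: any deviation is notable
    return abs(current - mean) / stdev > z_threshold
```

The design point matters more than the maths: the comparison is against the agent's own history, because a signature list cannot describe "the agent did what it always does, but 50 times faster".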
📌 Three concepts from the ZT4AI announcement worth knowing

"Double agents" framing: Overprivileged, manipulated, or misaligned agents can act like double agents — working against the very outcomes they were built to support. This is Microsoft's framing for why standard least-privilege and assume-breach thinking must extend to AI agents, not just users.

Ephemerality Controls (JIT for agents): Agents should be granted short-lived credentials that expire the moment their specific task is completed. This Just-in-Time model limits blast radius if an agent is compromised mid-task — the attacker's access window is minutes, not days. Entra Agent ID supports this via time-bound access packages and lifecycle workflows.

Full AI lifecycle scope: ZT4AI covers not just agent runtime but the entire AI lifecycle — data ingestion, model training, deployment, and agent behaviour. Supply chain and model security are in scope, not just the agent identity and access layer.
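The Ephemerality Controls idea above reduces to credentials that carry a hard expiry measured in minutes. A minimal sketch with hypothetical names (the real mechanism in Entra Agent ID is time-bound access packages and lifecycle workflows, not application code):

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class EphemeralCredential:
    token: str
    expires_at: float  # monotonic-clock deadline

    def valid(self, now=None) -> bool:
        """A credential past its deadline is dead regardless of revocation."""
        return (time.monotonic() if now is None else now) < self.expires_at

def issue_task_credential(ttl_seconds: float = 300.0) -> EphemeralCredential:
    """Mint a short-lived, task-scoped credential: if the agent is
    compromised mid-task, the attacker's window is minutes, not days."""
    return EphemeralCredential(
        token=secrets.token_urlsafe(32),
        expires_at=time.monotonic() + ttl_seconds,
    )
```

Expiry-by-default is the key property: containment does not depend on anyone noticing the compromise and revoking access in time.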

🗺️
ZT4AI · OFFICIAL RESOURCE · RSAC 2026
Zero Trust for AI — Reference Architecture
Microsoft published a dedicated Zero Trust for AI reference architecture at RSAC 2026 — an extension of the existing Zero Trust reference architecture. It shows how policy-driven access controls, continuous verification, monitoring, and governance work together to secure AI systems and increase resilience when incidents occur. Covers the full AI lifecycle: data ingestion, model training, deployment, and agent behaviour. Free to use and a practical starting point for any organisation building an AI security roadmap, regardless of how much of the Microsoft stack is in use.
→ ZT4AI Announcement → ZT Workshop AI Controls
Access Fabric

The Access Fabric — Microsoft's Architectural Framing for AI-Scale Access

Alongside ZT4AI, Microsoft has introduced the Access Fabric concept — an architectural approach that treats access as a continuous, end-to-end system rather than a set of point controls. It uses identity as the consistent decision point and enforces those decisions across environments in near real time.

πŸ—οΈ What an Access Fabric provides
A common identity foundation for employees, workloads, and AI agents. Near-real-time enforcement of access decisions across the network. Continuous signal sharing across identity, network, and security tools. Faster propagation of policy and risk changes without manual stitching between tools.
⚠️ Why fragmentation is the enemy
Microsoft research found organisations use an average of 5 identity solutions and 4 network access tools — often from different vendors. Nearly half of security leaders report being overwhelmed by vendor sprawl. With AI agents operating at machine speed, static decisions and delayed enforcement create exploitable gaps.
📌 Why this matters for AI agent security specifically

AI agents operate continuously, interact with multiple systems, and often require broad access. In a fragmented access environment, policy changes take longer to propagate, visibility is partial, and gaps between tools create openings. The Access Fabric model is directly relevant to Microsoft's agent security story — the same integrated Entra + Defender + Purview platform that Microsoft markets for agent governance is its implementation of this concept. The Classic vs Modern agent gap is a concrete example of what fragmentation looks like in practice: agents outside the Entra perimeter get zero coverage from the access fabric regardless of what other controls are deployed.

Zero Trust Maturity Model

Where to start — a staged approach

Don't try to implement all 700+ ZT Workshop AI controls (116 logical groups, 33 swim lanes) at once. This three-stage model gives organisations a practical sequence from zero visibility to full automation.

STAGE 01
Visibility
Know what you have before you try to control it. Most organisations skip this and jump to controls — then discover the controls don't apply to most of their agents.
- Discover agents in Agent 365 Registry
- Enable AI Agent Inventory (Defender)
- Run Playbook 01 KQL audit queries
- Identify Classic vs Modern agents
- Triage no-auth and maker-cred agents
- Assign owners to all published agents
STAGE 02
Control
Apply identity and access controls to the agents you've inventoried. Focus on the highest-risk patterns first — no-auth, maker credentials, org-wide sharing.
- Enforce Entra ID auth on all agents
- Deploy CA posture for Modern Agents
- Enable ID Protection for Agents
- Configure Global Secure Access for agents
- Deploy DSPM for AI + DLP policies
- Enable Defender real-time protection
STAGE 03
Automation
Operationalise your controls so they scale without manual effort. Governance that requires manual review of every agent will break down as agent count grows.
- Lifecycle workflows for mover/leaver
- Access reviews for agent permissions
- Automated response rules in Sentinel
- Graph API agent registry management
- Recurring AI threat review cadence
- Red teaming cadence for all agents
Priority Controls

The highest-impact ZT Workshop AI controls

From the 700+ controls in the Microsoft Zero Trust Assessment Workshop AI section, these are the ones security architects should prioritise first.

AI_000 · IDENTITY
Require Entra ID Auth for All Agent Interactions
Ensure every agent that interacts with users or data authenticates via Entra ID. No anonymous or no-auth agents in production. Foundation for everything else.
MEDIUM EFFORT
AI_001–002 · VISIBILITY
Discover, Inventory and Assign Ownership
Use Agent 365 Registry to discover all agents. Triage each one and assign an accountable owner. Unowned agents are your highest sprawl risk.
LOW EFFORT
AI_005 · IDENTITY
Custom Security Attributes for Agent Classification
Tag agents with custom attributes (risk level, data sensitivity, environment). Enables attribute-based CA policies that scale to hundreds of agents without per-agent rules — directly addresses the agent name sync gap.
MEDIUM EFFORT
AI_006 · IDENTITY
ID Protection + Risk-Based CA for Agents
Enable Identity Protection risk signals for Modern Agents and deploy risk-based CA policies. Automatically blocks high-risk agents without manual intervention.
MEDIUM EFFORT
AI_014 · GOVERNANCE
Lifecycle Workflows for Sponsor Mover/Leaver
When the person who sponsors an agent leaves the organisation, a workflow must reassign sponsorship or decommission the agent. Without this, agents become orphaned and unmanaged over time.
MEDIUM EFFORT
AI_072 · RUNTIME
Content Safety SDK for All Agent Inputs
Require all agents to pass inputs through Azure AI Content Safety before processing. Detects prompt injection, harmful content, and jailbreak attempts at the input layer before the model sees them.
HIGH EFFORT
AI_077 · MCP
APIM Gateway for All MCP Server Deployments
Require Azure API Management as a governance layer in front of all custom MCP servers. Provides authentication, rate limiting, logging, and policy enforcement at the tool layer.
HIGH EFFORT
AI_080 · DATA
Sensitivity Label Inheritance for AI Outputs
AI-generated content should inherit the highest sensitivity label of its source data. Without this, a Confidential document summarised by an agent produces an Unclassified output — bypassing your data protection controls.
MEDIUM EFFORT
AI_090–091 · DETECT
Sentinel AI Analytics Rules
Enable AI-specific analytics rules for prompt injection detection and create custom rules for agent anomaly detection. Also configure AI threat detection workbooks for ongoing visibility.
MEDIUM EFFORT
AI_094 · RESPOND
Automated Response Rules for High-Risk AI Activity
Configure SOAR-style automated containment for high-risk AI activity — automatic agent suspension, access revocation, or alert escalation without waiting for manual triage.
HIGH EFFORT
AI_081–083 · RUNTIME
AI Red Teaming Cadence
Configure AI Red Teaming Agent in Microsoft Foundry. Establish red teaming as a requirement for all new agent deployments and a recurring validation cadence (quarterly recommended) for existing agents.
HIGH EFFORT
AI_128 · MCP
MCP Management Server
Deploy a dedicated MCP Management Server as the control plane for all custom MCP server deployments. Provides centralised approval, discovery, and governance of the tool layer — the MCP equivalent of an app catalogue.
HIGH EFFORT
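The AI_080 label-inheritance rule above reduces to taking the highest-ranked sensitivity label across an output's sources. A minimal sketch; the label ordering here is illustrative, since a real tenant's taxonomy comes from Purview:

```python
# Illustrative ranking only; a real taxonomy is defined in Purview, not code.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def inherited_label(source_labels):
    """AI output inherits the HIGHEST sensitivity label among its sources,
    so a summary of a Confidential document cannot emerge as Public."""
    if not source_labels:
        return "General"  # conservative default for unlabelled sources
    return max(source_labels, key=lambda label: LABEL_RANK.get(label, 0))
```

Note the conservative default: unlabelled sources get a mid-tier label rather than the lowest, since absence of a label is not evidence the data is public.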
📌 Full control list

The 12 controls shown above are the highest-impact subset of the full Zero Trust Workshop AI catalogue. For the complete list of controls including effort, dependencies, and implementation notes, see the dedicated Zero Trust for AI page.
