📌 Author's note: This site synthesises the author's own understanding from publicly available Microsoft documentation, official Microsoft Security blog posts, RSAC 2026 announcements, and insights from Microsoft Security professionals and MVPs. It is independent and not affiliated with or endorsed by Microsoft.
Agentic AI Defense

A Six-Phase Defense Strategy

A structured approach to securing agentic AI across the enterprise, from initial visibility through to sustained governance. Each phase has prerequisites, produces evidence the next phase consumes, and maps to controls available in your existing Microsoft 365 E5 and Sentinel investment.

Framework: aiagentsecurity.guide · Aligned to Microsoft RSAC 2026 announcements and Agent 365 GA

Strategy context · RSAC 2026
Security must be ambient and autonomous, just like the AI it protects.
In the agentic era, agents can become double agents: overprivileged, manipulated, or misaligned, working against the outcomes they were built to support. The answer starts with trust. Security must be woven into and around every layer of the AI estate.
80% · Fortune 500 companies using agents
1.3B · agents by 2028 (IDC)
14.4% · have full security approval
68% · can't distinguish agent vs human in logs
Before you start

AI Readiness Assessment: the question to answer first

Before kicking off Phase 1, an organisation-level readiness check frames the technical work within the strategic AI roadmap. The output is not a control deployment but a documented baseline: the attack surface, the ungoverned legacy estate (Classic agents, shadow AI tools, unmanaged OAuth grants), and the gap between current AI adoption and current AI governance. This is what turns Phase 1 from "run some KQL queries" into "we know how big the problem is before we start measuring it."

📌 Output of a readiness assessment

Attack surface inventory (shadow AI tools, unsanctioned LLMs, ungoverned plugins); legacy estate scale (Classic agent count, ownerless agents, no-auth agents); governance maturity gap (what's deployed today vs the six phases); commercial path (which Agent 365 capabilities are needed and at what population size). This becomes the brief for Phase 1.
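Where the tenant already has advanced hunting in place, the legacy-estate portion of this baseline can be sized with a single query. The sketch below is illustrative only: AIAgentsInfo and UserAuthenticationType are the table and column named elsewhere in this guide, but the Owner column is an assumption to verify against your actual workspace schema.

```kql
// Readiness sketch: size the ungoverned legacy estate before Phase 1.
// AIAgentsInfo / UserAuthenticationType come from this guide's inventory;
// the Owner column name is an assumption - check it against your schema.
AIAgentsInfo
| summarize
    TotalAgents     = count(),
    NoAuthAgents    = countif(UserAuthenticationType == "None"),
    OwnerlessAgents = countif(isempty(Owner))
```

The three counts map directly onto the readiness brief: total estate, no-auth agents, and ownerless agents.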

Implementation sequence

Six-phase rollout: where to start if you're starting from zero

Each phase has prerequisites and produces evidence the next phase consumes. Run them in order; skipping ahead leaves controls without the visibility they depend on. Each phase maps to capabilities available in your existing Microsoft 365 E5 and Microsoft Sentinel footprint, with Agent 365 add-ons clearly marked where required.

PHASE 01 · 🔍 Discover & Inventory
PHASE 02 · 🪪 Identity & Governance
PHASE 03 · 📂 Data Security
PHASE 04 · 🛡️ Runtime Protection
PHASE 05 · 📡 Monitoring & Detection
PHASE 06 · ⚖️ Compliance & Governance
| Phase | What you do | Key prerequisites | Phase output (input to next phase) |
| --- | --- | --- | --- |
| 01 · Discover & Inventory | Set up the Security Dashboard for AI. Enable AI Agent Inventory (Defender + Power Platform integration). Run the full AIAgentsInfo KQL inventory. Identify no-auth and maker-credential agents. Apply H/M/L risk-tier classification. Discover shadow AI via the Cloud App Catalog. | M365 E5; Defender + Power Platform admin access | Tiered agent register · no-auth agent list · shadow AI baseline |
| 02 · Identity & Governance | Part A (Classic): enable Managed Environments, enforce end-user authentication, set sharing limits, define the Owner / Sponsor / Approver model, deploy Power Platform DLP. Part B (Modern): apply Conditional Access policies, deploy ID Protection, configure Access Packages. | Phase 1 inventory complete · Agent 365 licence (Modern controls only) | Governed maker estate · CA-protected Modern agents · auth-type baseline |
| 03 · Data Security | Run the DSPM oversharing assessment. Configure regulated SITs. Enable sensitivity-label inheritance. Deploy Purview DLP for Copilot. Apply retention to agent-generated content. Address the EU Data Boundary via the model inventory KQL. Apply SAM RCD as an interim site exclusion. Deploy browser DLP for public LLMs. | Phase 2 governance baseline · Purview E5 | Oversharing remediated · DLP active · label coverage measured |
| 04 · Runtime Protection | Enable Defender real-time protection for Copilot Studio (three layers). Configure Entra Internet Access prompt-injection protection. Deploy Prompt Shields. Run pre-deployment red teaming with PyRIT (or the Foundry Red Teaming Agent for Foundry agents). LLM + Agent red team for High-tier agents. | Phase 1 setup · Agent 365 licence from July 1, 2026 (network controls) | Runtime blocking active · red-team findings register |
| 05 · Monitoring & Detection | Bookmark the Security Dashboard for AI. Deploy the Microsoft Copilot Sentinel solution (6 analytic rules + workbook). Enable the auth-type downgrade Analytics Rule. Run hunting queries for sensitive-label access, out-of-EUDB models, and ownerless agents. Configure ITDR for agent identities. | CopilotActivity table ingested · Sentinel workspace | SOC alerting live · weekly KPI tracking · incident workflow |
| 06 · Compliance & Governance | Run the AI Baseline in Purview Compliance Manager (establish a score). Map the estate to the EU AI Act, NIST AI RMF, and ISO 42001. Stand up the AI Governance Operating Model: Working Group, Lifecycle Board, quarterly sweep, annual review. Produce the board-level quarterly reporting pack. Vet third-party agents pre-publish. | Phases 1–5 generating evidence · governance forum approvals | Compliance score baseline · sustained operating model |
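As an illustration of the Phase 1 tiering step, the query below groups the inventory by authentication type and assigns an indicative risk tier. The `UserAuthenticationType == "None"` condition is the one this guide uses for risky agents; the tier mapping and the maker-credential match are assumptions for illustration, to be aligned with your own H/M/L classification.

```kql
// Phase 1 sketch: tier the agent estate by authentication type.
// The "None" check is from this guide; the tier mapping and the
// maker-credential match are illustrative assumptions.
AIAgentsInfo
| summarize AgentCount = count() by UserAuthenticationType
| extend RiskTier = case(
    UserAuthenticationType == "None", "High",      // no end-user auth
    UserAuthenticationType has "Maker", "Medium",  // maker-credential agents
    "Low")                                         // end-user auth enforced
| order by AgentCount desc
```

The result is the tiered agent register that Phase 2 governance then consumes.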
📌 Why the order matters

Phase 1 produces the agent inventory that Phase 2 governance applies to. Phase 2 produces the governed maker estate that Phase 3 DLP attaches to. Phase 5 monitoring depends on Phases 1, 3, and 4 having generated the underlying telemetry. Phase 6 compliance evidence comes from controls deployed in Phases 1–5. Running phases in parallel is possible; running them out of order is not.

Measurement

Four AI security KPIs to track weekly

The trend matters more than the absolute number. These four metrics give a defensible weekly view that maps directly to controls deployed across the six phases.

| KPI | Source | Definition | Target trend |
| --- | --- | --- | --- |
| Risky agents | AIAgentsInfo | Count of published agents where UserAuthenticationType == "None" | Decreasing to zero |
| Sensitive access events | Purview Activity Explorer | AI interactions where a sensitivity label of Confidential or above was cited | Stable (a rising trend signals label-enforcement gaps) |
| DLP policy hits | Purview DLP (Copilot location) | Count of blocked or warned responses from DLP policy evaluation | Stable after the initial tuning spike |
| Blocked tool actions | Defender Incidents (Category: AI, Status: Blocked) | Tool invocations blocked by Defender real-time protection (ATG) | Increasing initially (policy working), then stable |
📌 Operational KQL for each KPI

For the source queries and SOC workflow integration, see Playbooks → Four AI Security KPIs.
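To show the shape such a query takes, a minimal weekly series for KPI #1 might look like the sketch below. The `UserAuthenticationType == "None"` filter is the definition given above; `IsPublished`, `AgentId`, and the presence of a `TimeGenerated` snapshot column are assumptions to check against the actual AIAgentsInfo schema.

```kql
// KPI #1 sketch: weekly count of published no-auth agents.
// The "None" filter is this guide's definition; IsPublished, AgentId,
// and TimeGenerated are assumed column names - verify before use.
AIAgentsInfo
| where IsPublished == true
| where UserAuthenticationType == "None"
| summarize RiskyAgents = dcount(AgentId) by Week = bin(TimeGenerated, 7d)
| order by Week asc
```

Charted week over week, this is the "decreasing to zero" trend the KPI table calls for.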

Executive reporting

Quarterly board-level AI risk reporting pack

AI agent risk should be reported alongside conventional cyber risk metrics, not as a separate workstream. The pack below uses outputs from the six phases and the four weekly KPIs, rolled up to a quarterly view suitable for senior leadership.

| Section | What to include | Source |
| --- | --- | --- |
| Agent estate summary | Total agents, Classic vs Modern split, risk-tier distribution (H/M/L), quarter-over-quarter trend | Phase 1 inventory · AIAgentsInfo |
| No-auth agent count trend | Quarterly trajectory, which should be decreasing toward zero. Flag any quarter where it rises. | Phase 1 KQL · Weekly KPI #1 |
| Sentinel alert volume by category | Jailbreak attempts, auth-type changes, anomalous tool calls, external IP access, plugin tampering | Phase 5 Content Hub analytic rules |
| DLP hits | Volume and category, split by Copilot location and browser extension | Phase 3 · Weekly KPI #3 |
| Compliance score trend | Purview Compliance Manager AI Baseline score over time, against the EU AI Act, NIST AI RMF, and ISO 42001 templates | Phase 6 · AI Baseline assessment |
| Red team findings | Critical findings from PyRIT runs and structured red-team engagements during the quarter, plus remediation status | Phase 4 · Red team cycle |
| Agent 365 licence compliance | Are all users of premium capabilities licensed? Where are the gaps? | Phase 6 · Procurement |
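For the estate-summary section, a rollup like the one below can feed the quarter-over-quarter view. This is a sketch under stated assumptions: the `AgentType` column (for the Classic vs Modern split) and a `TimeGenerated` snapshot column are hypothetical names, and months would be aggregated into quarters in the reporting layer.

```kql
// Board-pack sketch: estate split with no-auth counts over time.
// AgentType and TimeGenerated are assumed column names; roll the
// monthly bins up to quarters in the reporting layer.
AIAgentsInfo
| summarize
    Agents       = count(),
    NoAuthAgents = countif(UserAuthenticationType == "None")
    by AgentType, Month = startofmonth(TimeGenerated)
| order by Month asc, AgentType asc
```

One row per agent type per month gives the trend lines the board pack summarises.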
📌 Format note

One page or one slide per section is enough; the board is reviewing trends, not drilling into individual agents. The detail lives in the working group and lifecycle board reviews. Consider pairing this with a red/amber/green status indicator per section so the trajectory is readable at a glance.

📌 Source & method

The six-phase framework synthesises the Agentic AI Security Framework with Microsoft's RSAC 2026 announcements (Vasu Jakkal, "Secure agentic AI end-to-end", March 2026), the Agent 365 GA capabilities, the Zero Trust for AI reference architecture, and field-validated patterns from the Microsoft Security MVP community. Phase ordering is informed by real-world implementation experience across enterprise tenants, not a theoretical sequence.