📌 Author's note: This site synthesises the author's own understanding from publicly available Microsoft documentation, official Microsoft Security blog posts, RSAC 2026 announcements, and insights from Microsoft Security professionals and MVPs. It is independent and not affiliated with or endorsed by Microsoft.
✅ GA May 1, 2026 · $15/user/month · Announced Microsoft Ignite 2025

Microsoft
Agent 365

The enterprise control plane for AI agents. Agent 365 doesn't build or host agents — it wraps agents you've already built in enterprise-grade identity, governance, observability, and security controls.

GA May 1, 2026 $15/user/month standalone Included in M365 E7 ($99/user/mo)
📌 GA Day announcements — May 1, 2026 (Nirav Shah, Rob Lefferts, Jason Roszak)

New capabilities announced at GA:

Capability | Status | Detail
Registry sync — AWS Bedrock + Google Cloud | Preview now | Automatically discover and inventory agents on AWS Bedrock and Google Gemini Enterprise Agent Platform. Basic lifecycle governance (start, stop, delete) coming soon.
Defender agent context mapping | Preview June 2026 | Relationship map per agent: devices it runs on, MCP servers configured, associated identities, cloud resources reachable. Blast radius analysis and endpoint-based behaviour investigation.
Intune integration — policy controls + runtime blocking | Preview June 2026 | Policy-based controls and runtime blocking/alerting via Intune and Defender for agents on managed devices.

Pricing: USD $15 per user per month. Each licence covers an individual who manages or sponsors agents, or uses agents to do work on their behalf. Also included in Microsoft 365 E7.
Source: Microsoft Security Blog — Agent 365 GA (May 1, 2026)

What Agent 365 Is

The enterprise layer above your agent logic

Agent 365 is not an agent builder or hosting platform. It is a management and security layer that sits above whatever platform your agents run on — Copilot Studio, Microsoft Foundry, LangChain, OpenAI Agents SDK, or anything else. Once an agent is onboarded to Agent 365, it gains enterprise-grade controls it didn't have before.

🪪
Entra-backed Agent Identity
Each agent gets its own identity in Microsoft Entra ID — including a dedicated mailbox and user resources for secure authentication. Enables Conditional Access, ID Protection, and lifecycle governance per agent. Blueprint credentials use a two-phase flow: T1 (trust) → T2 (authorisation). Recommended credential type: Federated Identity Credentials (FIC) — no stored secrets, trust-based, short-lived OIDC tokens. See Identity page for full T1/T2 detail.
🔌
Governed MCP Tool Access
Agents invoke MCP servers under admin control via the Agent Tooling Gateway (ATG). Tool invocations are evaluated by Defender before execution — unsafe actions blocked before any data access or harm can occur. When ATG blocks an action, it generates a SOC-ready alert explaining what was stopped, why it was risky, and which agent, user, and tool were involved. Alerts flow directly into Defender XDR. No open-ended permissions.

Critical limitation: ATG only inspects the tool execution path — not model reasoning between tool calls.
📊
OpenTelemetry Observability
Agent interactions, inference events, and tool usage are instrumented automatically via the Agent 365 SDK. All events routed to Microsoft 365 audit logs — visible in Defender Advanced Hunting via the AIAgentsInfo table.
📋
Blueprint-Based Governance
Each agent operates within an IT-approved blueprint defining capabilities, required MCP accesses, security constraints, audit requirements, and linked DLP or external access policies. Consistent configuration at scale.
🔔
M365 Notifications
Agents can participate in M365 apps like a human participant — via @mentions in Teams, comments in Word, and Outlook notifications. Enables agents to surface work to users in their existing workflows.
🛡️
Defender Integration
Agents registered with Agent 365 appear in the AIAgentsInfo Advanced Hunting table (RegistrySource == "A365"). Near-real-time threat detection, incident alerts, and investigation via Defender portal → Settings → Security for AI.
Platform Support

Agent 365 is platform-agnostic

Agent 365 works with agents built on any platform and hosted anywhere. This is not a Microsoft-only capability.

🤖 Copilot Studio
🏭 Microsoft Foundry
🔗 LangChain SDK
🤖 OpenAI Agents SDK
🦾 Claude Code SDK
⚙️ Microsoft Agent Framework
☁️ AWS Bedrock
🌐 GCP Vertex AI
📌 What "platform-agnostic" means in practice

You add the Agent 365 SDK to your agent code. Once instrumented, the agent registers with Agent 365 regardless of where it runs — Azure, AWS, GCP, or your own infrastructure. The SDK handles the Entra identity registration, OpenTelemetry event emission, and ATG integration automatically. You don't rewrite your agent; you add a governance layer around it.
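The "wrap, don't rewrite" idea can be illustrated with plain Python. This is a hypothetical stdlib sketch of the pattern, not the real Agent 365 SDK API: the `governed` decorator, the `AUDIT_LOG` list, and the `agent_id` value are invented for illustration. Existing tool functions are wrapped so every invocation emits an audit record, while the agent logic itself stays untouched.

```python
import functools
import time
import uuid

# Stand-in for an OpenTelemetry exporter / audit pipeline (illustrative only).
AUDIT_LOG = []

def governed(agent_id: str):
    """Wrap a tool function so every invocation emits an audit record."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            event = {
                "event_id": str(uuid.uuid4()),
                "agent_id": agent_id,
                "tool": fn.__name__,
                "ts": time.time(),
            }
            result = fn(*args, **kwargs)   # original agent logic, unchanged
            event["status"] = "completed"
            AUDIT_LOG.append(event)        # in the real SDK, this would be telemetry export
            return result
        return wrapper
    return decorator

@governed(agent_id="inbox-helper-01")
def summarise_inbox(messages):
    # The agent's own behaviour: untouched by the governance layer.
    return f"{len(messages)} messages summarised"

print(summarise_inbox(["a", "b", "c"]))
print(AUDIT_LOG[0]["tool"])
```

The point of the sketch: governance wraps the call boundary, so the same agent code runs anywhere and still emits a consistent audit trail.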

Partner Ecosystem

Enterprise partners and open-source frameworks already integrated

Agent 365 is not a closed Microsoft system. At GA, a growing set of external agents and frameworks is certified to integrate with Agent 365 — inheriting the same identity, governance, and security controls.

Enterprise partners

Adobe SAP ServiceNow Workday Databricks NVIDIA Glean n8n Cognition Genspark Kasisto Manus

Open-source / community

LangChain OpenAI Agents SDK Anthropic SDK Crew.ai Cursor Perplexity Vercel

Source: Microsoft 365 Blog — Agent 365: The control plane for AI agents (Nov 2025)

New Purview capability announced alongside Agent 365 GA: visibility into how Copilot and AI apps interact with enterprise data, with retention and deletion policies for AI prompts and outputs. Addresses regulatory obligations as AI adoption scales. Distinct from DLP, which blocks at the point of use — Data Lifecycle Management (DLM) governs what is retained afterwards. Source: Microsoft Tech Community.

Licensing

Two ways to get Agent 365

$15
PER USER / MONTH · GA MAY 1, 2026
Agent 365 Standalone
Agent inventory and governance control plane
Entra Agent ID for all onboarded agents
Defender Security for AI integration
Agent Tooling Gateway (ATG) real-time protection
Agent 365 SDK and CLI
AIAgentsInfo Advanced Hunting (A365 agents)

⚠️ Per-user, not per-agent. Governance scope doesn't automatically scale with agent count. An organisation with 50 licensed users but 500 deployed agents has a coverage gap. Plan accordingly.

$99
PER USER / MONTH · GA MAY 1, 2026
Microsoft 365 E7 β€” The Frontier Suite
Everything in Agent 365 standalone
Microsoft 365 Copilot (M365 AI assistant)
Microsoft 365 E5 (full compliance + security stack)
Entra Suite (all Entra products bundled)
Best for orgs deploying Copilot + agents together
Single SKU replacing multiple add-ons

💡 When E7 makes sense: If you're buying M365 Copilot + E5 + Entra Suite anyway, E7 likely costs less than the sum of parts. Run the numbers — the break-even depends on your existing licence baseline.
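A quick way to run those numbers, sketched in Python. Only the $15 Agent 365 and $99 E7 figures come from this page; the other component prices are assumptions for illustration, so substitute your actual negotiated rates before drawing conclusions.

```python
# Illustrative break-even check for E7 vs buying components separately.
# Only Agent 365 ($15) and E7 ($99) come from this page; the rest are
# assumed list prices for the sketch -- verify against your own agreement.
components = {
    "Agent 365 standalone": 15.0,
    "Microsoft 365 Copilot": 30.0,   # assumed list price
    "Microsoft 365 E5": 57.0,        # assumed list price
    "Entra Suite": 12.0,             # assumed list price
}
e7_price = 99.0

sum_of_parts = sum(components.values())
saving_per_user = sum_of_parts - e7_price

print(f"Sum of parts: ${sum_of_parts:.2f}/user/mo")
print(f"E7:           ${e7_price:.2f}/user/mo")
print(f"Saving:       ${saving_per_user:.2f}/user/mo -> E7 cheaper: {saving_per_user > 0}")
```

With these assumed inputs the bundle wins; with a smaller existing baseline (for example, no E5) the arithmetic can flip, which is the point of running it against your own numbers.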

Registry Convergence

Two portals — one for inventory, one for identity

Agent visibility was previously split across the Entra admin center and the M365 admin center. Microsoft has converged the registry under Agent 365 as the single control plane. Each portal now has a distinct role:

Portal | What you do here | Agent visibility | Role required
Microsoft 365 admin center (admin.microsoft.com → Agents → All agents) | Comprehensive agent inventory — discover and manage all agents in the organisation, monitor operational activity | ✓ ALL agents, including agents without Entra Agent ID | AI Administrator or AI Reader (least-privilege); no licence required for inventory view
Microsoft Entra admin center (entra.microsoft.com) | Agent identity and access management — blueprints, permissions, Conditional Access, identity governance, security signals | ⚠ Agents with Entra Agent ID only | Agent ID Administrator; Entra Agent ID licence required for security controls
⚠️ Identity admins need both portals for full coverage

The Entra admin center only shows agents that have a Microsoft Entra Agent ID. Agents without an identity — including most Classic Copilot Studio agents — are invisible there. For a complete picture, identity admins should use Agent 365 (M365 admin center) for inventory and the Entra admin center for identity governance controls.

Roles for agent visibility

Role | What it gives you | Portal
AI Reader | Read-only view of all agents in the organisation. Recommended least-privilege role for inventory access. | M365 admin center (Agent 365)
AI Administrator | Full management of all agents in Agent 365 including governance controls. | M365 admin center (Agent 365)
Agent ID Administrator | Manage agent identities, blueprints, permissions, CA policies in Entra. Required for Blueprint write operations. | Entra admin center
💡 No licence needed for basic inventory

Viewing all agents in the M365 admin center (Agent 365) requires no specific product licence — just the AI Administrator or AI Reader role. Applying security and governance controls (Conditional Access, identity governance policies) requires the appropriate Entra Agent ID licence.

Access & Preview

Frontier programme — current preview access

Agent 365 is GA on May 1, 2026. Before that date — and for some advanced preview features — access is via the Microsoft Frontier programme.

1. Enrol in Frontier — go to adoption.microsoft.com/copilot/frontier-program and request access. Frontier gives early access to Agent 365, Entra Agent ID, and related preview capabilities.
2. Enable in Power Platform admin center — for Copilot Studio agents: Power Platform admin → Copilot → Settings → Entra Agent Identity for Copilot Studio → On. This makes new Copilot Studio agents Modern agents automatically.
3. Connect Defender — Defender portal → Settings → Security for AI agents → enable and connect your Agent 365 tenant. Agents registered with Agent 365 start appearing in AIAgentsInfo.
4. Instrument custom agents — add the Agent 365 SDK to agents not on Copilot Studio. This gives them Entra identity, OpenTelemetry observability, and ATG real-time protection. Available on PyPI and npm.
Security Coverage

What you get — by agent type

Coverage depth varies depending on how the agent was built and whether the Agent 365 SDK is integrated.

Agent type | Discovery (AIAgentsInfo) | Threat detection | Real-time protection (ATG) | Requires
Copilot Studio agents | ✓ Automatic (RegistrySource == "PowerPlatform") | ✓ Extended alert set — audit logs sent by default | ✓ Available | Power Platform connector enabled in Defender
Agent 365 SDK agents | ✓ (RegistrySource == "A365") | ✓ Near-real-time — requires M365 audit log routing | ✓ ATG | Agent 365 licence + SDK integration
Foundry / Bedrock / Vertex AI | ✓ UI inventory | ⚠ Limited — no SDK | ❌ without SDK | Agent 365 SDK required for detection + ATG
Classic Copilot Studio agents | ✓ via PowerPlatform connector | ⚠ Basic only | ✓ Existing Defender RT | No Agent 365 needed — but no Entra Agent ID
Three Agent Operating Modes

What Agent 365 governs — by how agents work

Source: Microsoft Security Blog — Agent 365 GA (May 1, 2026)

Mode | Status | How it works | Example
Agents working on behalf of users (delegated access) | GA | Agent acts on behalf of a signed-in user using delegated permissions. Operates in response to user prompts. Uses the user's identity context. | An agent that helps an employee organise their inbox or summarise emails
Agents operating behind the scenes (own access / autonomous) | GA | Agent operates with its own credentials and permissions, without user context. Runs autonomously in the background on scheduled or event-triggered tasks. | An agent autonomously triaging support tickets or running nightly data reconciliation
Agents participating in team workflows (own access / collaborative) | Preview | Agent operates with its own access while participating in team channels, meetings, or shared workspaces. Interacts with multiple users and agents in collaborative contexts. | An agent added to a Teams channel that monitors project activity and responds to @mentions
📌 Local and cloud agent discovery — new as of GA

Users are installing local AI agents (OpenClaw, GitHub Copilot CLI, Claude Code) on their devices and adopting SaaS agents outside traditional governance. Agent 365 now addresses this with a new Shadow AI page in the M365 admin center.

What IT can do today (Frontier programme): See if OpenClaw agents are being used in the organisation, which devices they are running on, and enable two Intune security policies from the Shadow AI page:

Policy | Intune policy created | What it does
Continuously detect managed devices | A365 - Monitor OpenClaw (Device configuration · Properties catalog profile) | Creates a read-only Properties catalog profile using the new Local AI Agent Settings Catalog node. Runs via Intune Management Extension (IME) — inspects disk and memory on managed Windows devices. Safe to deploy — reads from device, does not configure it. Refreshes every 24 hours.
Block AI Agents from OpenClaw | A365 - Block OpenClaw (Security baseline policy) | Blocks common methods of running OpenClaw on managed devices via Intune security baseline policy. See rollback caveat below before enabling.
Eight properties collected by A365 - Monitor OpenClaw
Property | What it captures
Agent Name | Canonical identifier for the agent type (e.g. "OpenClaw")
Agent Version | Version string of the installed agent
Host Process | Parent process executing the agent — identifies the execution context
Install Location | Filesystem path of the agent installation
Install Scope | Per-user vs per-machine installation
Install Scope Platform User ID | Windows SID of the installing user
Install Scope User ID | Entra ID user identifier (UPN) of the installing user
Local AI Agent Execution Context | Privilege/security context — user / elevated / SYSTEM. ⚠️ SYSTEM-level execution is high risk.
📌 Why the Execution Context property matters: An agent running at SYSTEM privilege has access to far more resources than one running as the logged-in user. The Local AI Agent Execution Context field lets IT and security teams immediately identify which devices have agents running with elevated or SYSTEM-level privileges — a key risk signal for triage.
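Triage on this field is straightforward to script. A minimal Python sketch over hypothetical inventory records shaped like the property table above (the record layout and device names are invented for illustration):

```python
# Hypothetical inventory export shaped like the collected properties above.
inventory = [
    {"device": "WKS-001", "agent_name": "OpenClaw", "agent_version": "1.4.2",
     "execution_context": "user"},
    {"device": "WKS-002", "agent_name": "OpenClaw", "agent_version": "1.4.2",
     "execution_context": "SYSTEM"},
    {"device": "WKS-003", "agent_name": "OpenClaw", "agent_version": "1.3.0",
     "execution_context": "elevated"},
]

# Anything above normal user privilege is a triage-first signal.
HIGH_RISK = {"elevated", "SYSTEM"}

flagged = [r for r in inventory if r["execution_context"] in HIGH_RISK]

# SYSTEM-level agents first (False sorts before True).
for r in sorted(flagged, key=lambda r: r["execution_context"] != "SYSTEM"):
    print(f'{r["device"]}: {r["agent_name"]} running as {r["execution_context"]}')
```

The same filter translates directly to whatever query surface holds the exported properties; the point is that Execution Context alone gives a usable first triage cut.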
⚠️ Block policy rollback caveat (Derk van der Woude, May 2026): Once the Block AI policy is enabled, it cannot be disabled via the Agent 365 portal. Rollback requires deleting the A365 - Block OpenClaw security policy directly in Intune. The Agent 365 portal does not expose a disable control.
Coming soon — additional Shadow AI detections:
Claude Code CLI · Ollama Desktop · OpenAI · Cursor · Poe Desktop

Coming in June 2026: Defender asset context mapping per local agent — devices, MCP servers, identities, cloud resources reachable. Runtime blocking if malicious behaviour detected.

Windows 365 for Agents

A secured, managed execution environment for AI agents

Source: Windows IT Pro Blog — Windows 365 for Agents public preview (May 1, 2026)

📌 Why this exists — the execution gap

Many enterprise applications have no APIs — critical work still happens through user interfaces where context, data, and intent are conveyed visually. To unlock their full potential, AI agents need to interact with applications the way people do: using a computer directly through clicks, typing, and navigation. Today most agents run on ad-hoc infrastructure — local machines, shared virtual machines, or unmanaged cloud environments — creating gaps in identity, policy enforcement, auditability, and control. This makes it difficult for IT teams to confidently scale agentic workloads beyond API- or MCP-based pilots.

📌 The employee analogy

Every employee in an organisation has an identity and works on a managed device — typically a Windows 365 Enterprise Cloud PC. Now, each AI agent also has its own identity (governed through Agent 365) and runs on a managed Cloud PC (provided by Windows 365 for Agents). It is the same trust model and the same IT controls — extended to AI.

Dimension | Detail
What it is | A new class of Cloud PCs purpose-built for agentic workloads. Agents run in a fully managed Windows environment with identity, security, policy, and lifecycle management handled by IT.
Status | Public Preview · US only · May 1, 2026
Three key benefits | ① Enterprise-grade identity and access controls for every agent · ② Unified device and policy management via Intune · ③ Global scalability with geo-level data residency for compliance
Prerequisites | Agent 365 licence + Intune licence + active Azure subscription (billing for Cloud PC compute is Azure pay-as-you-go, not included in Agent 365)
Relationship to Agent 365 | Agent 365 = control plane (what agents can do, governance, policies). Windows 365 for Agents = execution layer (where agents run securely). Together: move from visibility/governance to production-ready deployments.
Who it is for | IT administrators, security teams, digital workplace leaders, platform teams. Especially valuable for: legacy/UI-based app workflows with no API, human-in-the-loop scenarios, organisations needing geo-specific data residency.
📌 The Microsoft AI four-layer stack (from this announcement)

Microsoft IQ — the intelligence layer. Shared context across people, work, and the business. Helps AI understand what matters and make informed decisions. Agents use this to reason.
Windows 365 for Agents — the execution layer. Trusted, managed runtime for agents to get work done, especially for UI-based workflows.
Microsoft Azure — the foundation. Global cloud for secure, scalable AI. Hosts the Cloud PCs.
Agent 365 — the control plane. Governs agent behaviour end-to-end across all platforms.

Setup path: Create agent blueprint → Set up Azure billing → Create Cloud PC pool → Validate scenarios. See Windows 365 for Agents Billing and Cloud PC Agent Pools on Microsoft Learn.
Work IQ

The intelligence layer that grounds agents in your organisation

Work IQ is the contextual intelligence engine that grounds Microsoft 365 Copilot and Agent 365–managed agents in real-time, shared context across the organisation. It enables personalised search, advanced reasoning, and deeper semantic understanding by connecting signals across the Microsoft 365 ecosystem and business systems. Announced at Microsoft AI Tour Paris (March 2026) as a standalone agentic building block. Source: Microsoft Learn — Work IQ MCP overview (Preview)

Prerequisite: Microsoft 365 Copilot licence required to use Work IQ MCP servers.

📌 Work IQ — three integrated layers
Layer | What it does
Data | Unifies signals from files, emails, meetings, chats, and business systems across Microsoft 365 to capture how work happens across the organisation.
Memory | Builds persistent understanding of how people and teams work. Enables Agent 365–managed agents to stay aligned to priorities and remain consistent across tasks, apps, and sessions.
Inference | Brings together models, skills, and tools so agents can reason and take action using Work IQ MCP tools, while the Agent 365 control plane ensures those actions remain observable, governed, and compliant.
📌 Why Work IQ matters for security

Agents grounded via Work IQ inherit your organisation's data governance automatically. Sensitivity labels travel with the data — an agent cannot surface Confidential content to a user without the right permissions. Work IQ enforces this at the grounding layer, not just at output. This structural compliance makes Microsoft-native agents contextually superior and inherently more governable than ungoverned third-party alternatives using direct API calls.

Agent Map

Visual risk intelligence across your agent estate

The Agent Map is a dynamic visual in the Agent 365 portal showing which agents communicate with which resources, what access they use, and what risk signals are surfacing from Entra, ID Protection, Purview, and Defender — all in one view. Source: Devoteam — Microsoft AI Tour Paris (March 2026)

From the Agent Map, administrators can: block a flagged agent with a single click pending security review, approve or reject new agent deployment requests, see risk signals cross-referenced across identity and data signals, and identify orphaned agents with no current owner.

⚠️ Orphaned agents — employees who left the company

The most common real-world orphaned agent scenario is not Blueprint deletion (the Entra identity case) — it is employees who built agents in Copilot Studio and then left the organisation. Those agents continue running with the builder's original permissions, full access to the tools and data they were connected to, and no accountable owner. Microsoft does not detect or flag these automatically. The Agent 365 portal surfaces them in the Ownerless Agents view and the Agent Map.

Detection KQL:

AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published"
| where isempty(OwnerAccountUpns)
| project AIAgentName, CreatorAccountUpn, AgentCreationTime, UserAuthenticationType
Stateful Agents

Long-term memory — a data governance concern

Agent 365 agents are stateful — powered by Dataverse, they retain memory across sessions. This allows agents to remember user preferences, project details, team roles, and conversation context from previous interactions.

⚠️ Persistent memory is a sensitive data store

The Dataverse memory store accumulates sensitive context over weeks or months of agent interactions — meeting summaries, project decisions, user preferences, escalation history. This persistent store needs the same governance controls as any other sensitive data repository: access controls, retention policies, and inclusion in Purview DLP scope. It is not automatically covered by existing M365 data governance policies.

Key KQL β€” Agent 365 Agents

Essential Advanced Hunting queries

Use RegistrySource == "A365" to target Agent 365-registered agents specifically. See Playbook 01 Step 8 for the full query set.

// All A365 registered agents
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus != "Deleted"
| project AIAgentId, AIAgentName, AgentStatus, IsBlocked, AIModel, Instructions

// Agents with no instructions — prompt injection risk
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where isempty(Instructions) or Instructions == "N/A"
| project AIAgentId, AIAgentName, Instructions

// Agents with MCP tools — expanded attack surface
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where isnotempty(AgentActionTriggers)
| extend Triggers = parse_json(AgentActionTriggers)
| mv-expand Trigger = Triggers
| where Trigger.type == "RemoteMCPServer"
| project AIAgentId, AIAgentName, ToolType = tostring(Trigger.type)
Copilot Studio vs Foundry

Platform security comparison — which platform are you securing?

This section was previously a separate page. It covers the security differences between Copilot Studio and Microsoft Foundry — helping security architects understand which controls apply to each platform and where the gaps are.

Platform Overview

Which platform are you securing?

The security controls, gaps, and runbooks are fundamentally different depending on which platform your agents run on. Start here to make sure you're looking at the right controls.

🤖
Microsoft Copilot Studio
LOW-CODE · POWER PLATFORM · MAKERS
Who uses it: Makers, Copilot admins, Power Platform teams — people who build agents with low-code tools, not developers writing code
What it produces: Copilot Studio agents published to Teams, SharePoint, websites — conversational agents connecting to M365 data
Critical distinction: Classic agents (most existing deployments) sit outside the Entra security perimeter. Modern agents (new) get full Entra coverage
Primary risk surface: Maker credentials, no-auth agents, org-wide sharing, agent sprawl, any-user-can-change-auth
Primary detection surface: AIAgentsInfo table in Defender Advanced Hunting
🏭
Microsoft Foundry
CODE-FIRST · AZURE · DEVELOPERS
Who uses it: Developers, solution architects, AI engineers — people building custom AI agents and workloads in Azure using SDKs and code
What it produces: Custom AI agents, RAG pipelines, multi-agent orchestration, enterprise AI applications — deployed as Azure resources
Critical distinction: Foundry agents use modern Agent ID (OAuth 2.0) — CA for Agents and ID Protection apply. Much stronger baseline than Classic Copilot Studio agents
Primary risk surface: Logging gaps (nothing collected by default), content capture governance, RBAC at resource vs project level, supply chain
Primary detection surface: Azure Monitor Diagnostic Settings, Application Insights, Entra ID sign-in logs
Side-by-Side Comparison

Security posture at a glance

Security Control | Copilot Studio | Microsoft Foundry
Entra Agent ID | ⚠️ Modern agents only — most existing deployments are Classic and excluded | ✅ Supported — agents are Entra identities by default
Conditional Access for Agents | ❌ Does NOT apply to Copilot Studio agents | ✅ Applies to Foundry agents (OAuth 2.0 Agent ID)
⚠️ Security Copilot: applies to Microsoft-built agents only. Custom/partner agents use "Connect with existing user account" — no Agent ID, CA for Agents does not apply.
ID Protection for Agents | ⚠️ Modern agents only — Classic agents excluded | ✅ Supported
Identity Governance (lifecycle) | ⚠️ Modern agents only | ✅ Supported via Entra ID Governance
Defender real-time protection | ✅ Copilot Studio agents (Defender for Cloud Apps) | ✅ Defender for Cloud AI security posture
Sentinel analytics rules | ✅ AIAgentsInfo table queries | ✅ Azure Monitor + App Insights tables
Prompt Shield / Content Safety | ✅ Built-in via M365 Copilot layer | ✅ Content Safety SDK — opt-in per agent
DLP / Purview (policy layer) | ✅ DLP for M365 Copilot (GA March 31 2026) — covers Copilot experiences | ✅ Azure data governance applies
Browser-layer DLP | ✅ Edge for Business inline protection — inspects typed prompts to any GenAI app incl. shadow AI. Works on BYOD if signed into Edge for Business profile | ✅ Same — applies to any browser-based interaction
Network-layer DLP | ⚠️ Preview — Network Data Security via Global Secure Access. Covers unmanaged devices, desktop apps, API calls | ⚠️ Preview — same coverage
SharePoint oversharing controls (SAM) | ✅ SharePoint Advanced Management included with Copilot licence — RCD, Site Access Reviews, Content Assessment, RAC. Primary tool for Copilot data exposure remediation. | ⚠️ Not applicable at the same level — Foundry agents access data via explicit connections, not broad SharePoint indexing
Agentic data governance | ✅ DLP extends to agent-to-human, agent-to-tools, agent-to-agent. Sensitive files blocked from grounding data. Auto-enrolled for audit at creation | ✅ Same — agent instances enrolled as auditable entities
Inventory / discovery | ✅ Agent 365 + AIAgentsInfo table | ⚠ Azure Resource Manager + Entra Agent ID — no unified agent-level inventory table equivalent
Logging — default state | ✅ Some data in AIAgentsInfo automatically | ⚠ Nothing collected by default — all logging is opt-in
Red teaming | ⚠ No native Copilot Studio red teaming tool | ✅ AI Red Teaming Agent in Microsoft Foundry
Supply chain scanning | ⚠ Limited — connector risk is the main vector | ✅ Defender for Cloud CSPM, AI model scanning
⚠️ Security Copilot agent identity — a third distinction

Security Copilot agents offer two identity options. Microsoft-built agents (Phishing Triage, Threat Intelligence Briefing, Vulnerability Remediation etc.) use a dedicated Entra Agent ID — CA for Agents and ID Protection apply. Custom and partner agents use "Connect with existing user account" — the agent runs using the configuring user's credentials, inheriting their full access and permissions.

Why this is worse than Copilot Studio maker credentials: Security Copilot users are typically high-privilege accounts — Security Admins, SOC engineers, Global Admins. A custom agent configured by a Global Admin silently extends Global Admin-level access to Sentinel incidents, Defender signals, Entra identity risk data, and threat intelligence — to every user who runs the agent. The blast radius of a compromised or misconfigured custom Security Copilot agent is significantly larger than that of a typical Copilot Studio agent.

Mitigation: Use a dedicated low-privilege service account for configuring custom Security Copilot agents. Audit who configures custom agents and what permissions their account holds. Establish an approval gate before production deployment.

Copilot Studio β€” Security Condensed

The five authentication patterns — risk at a glance

Every Copilot Studio agent uses one of five authentication patterns. The pattern determines the risk level, what controls apply, and how you detect it.

① End User Credentials (OBO) · Auth with Microsoft → End user credentials · LOW RISK · Detect: UserAuthenticationType == "Integrated"
② Maker-Provided Credentials · Auth with Microsoft → Maker-provided credentials · HIGH RISK · Detect: AgentToolsDetails.mode == "Maker"
③ App Registration — Delegated · Authenticate manually → Entra ID V2 (delegated) · LOW RISK · Detect: HTTP Request + delegated token
④ App Registration — Application Permissions · Authenticate manually → Entra ID V2 (application) · VERY HIGH RISK · Detect: HTTP to graph.microsoft.com + client creds
⑤ Agent's User Account · Full human identity — mailbox, Teams, SharePoint access · VERY HIGH RISK · Entra ID Governance lifecycle required
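The detection signals above can be collapsed into a simple first-pass risk classifier. A hedged Python sketch: the field values follow the AIAgentsInfo conventions used on this page, but the function itself is the author-style summary logic, not a Microsoft API, and the "None" case corresponds to the no-auth finding covered in the 30-minute audit queries.

```python
from typing import Optional

def classify_auth(user_auth_type: str, tool_mode: Optional[str] = None) -> str:
    """Illustrative first-pass triage of a Copilot Studio agent's auth pattern.

    user_auth_type: value of UserAuthenticationType from AIAgentsInfo.
    tool_mode: connection mode from AgentToolsDetails, if present.
    """
    if tool_mode == "Maker":
        return "HIGH"        # pattern 2: maker-provided credentials
    if user_auth_type == "Integrated":
        return "LOW"         # pattern 1: end-user credentials (OBO)
    if user_auth_type == "None":
        return "CRITICAL"    # no authentication at all (audit Query 1 finding)
    return "REVIEW"          # patterns 3-5 need manual inspection of the config

print(classify_auth("Integrated"))
print(classify_auth("Integrated", tool_mode="Maker"))
print(classify_auth("None"))
```

Patterns 3 to 5 deliberately fall through to "REVIEW": the telemetry fields alone can't distinguish a safe delegated app registration from application permissions or a full user account, so those need a human look.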

The Classic vs Modern gap — the most important distinction

Most existing Copilot Studio deployments are Classic agents. They authenticate as service principals or via OBO — not as modern Agent ID identities. This means CA for Agents, ID Protection for Agents, and Entra lifecycle governance do not apply. The entire Entra security product stack Microsoft markets for agent security only works with Modern agents.

⚠️ The gap nobody talks about

Microsoft does not clearly document this distinction in its product marketing. Most security teams assume that purchasing Entra Agent ID or enabling CA for Agents covers their Copilot Studio estate. It does not — unless agents have been specifically created as Modern agents using the Agent ID framework. Field research confirms this is the default state of most enterprise Copilot Studio deployments.

Two protection layers β€” understanding both

LayerWhat it protectsAlways active?Error message
Responsible AI
content filtering
Conversational level β€” harmful content, jailbreak attempts, prompt injection in user input, copyright. Evaluates what is being discussed. βœ“ Always on β€” no config needed "Content filtered due to Responsible AI restrictions"
Real-time threat protection
(Defender for Cloud Apps)
Action execution level β€” tool invocations, data access patterns, privilege escalation through tool chaining, data exfiltration. Evaluates what the agent is about to do. ⚠️ Must be configured β€” off by default "Blocked by threat protection"
⚠️ 1-second timeout on real-time protection

If Defender for Cloud Apps does not return a block decision within 1 second, the tool invocation proceeds regardless. Fast tool calls on high-latency connections may therefore bypass real-time protection. Treat it as a strong detection control with best-effort prevention — not a guaranteed block.
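The fail-open behaviour described above can be sketched in a few lines of Python. The evaluator and timings are simulated; this illustrates the pattern, not Defender's actual implementation.

```python
import concurrent.futures
import time

TIMEOUT_S = 1.0  # the 1-second verdict deadline described above

def evaluate(action: str, verdict_latency: float) -> str:
    """Simulated policy engine: always wants to block, after some latency."""
    time.sleep(verdict_latency)
    return "block"

def invoke_tool(action: str, verdict_latency: float) -> str:
    """Ask the engine for a verdict; if none arrives in time, fail open."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(evaluate, action, verdict_latency)
        try:
            verdict = future.result(timeout=TIMEOUT_S)
        except concurrent.futures.TimeoutError:
            verdict = "allow"  # fail-open: no verdict within the deadline
    return verdict

print(invoke_tool("read_mailbox", verdict_latency=0.1))   # verdict arrives in time
print(invoke_tool("read_mailbox", verdict_latency=1.5))   # verdict too slow
```

The second call demonstrates the gap: the engine would have said "block", but because the verdict missed the deadline the action proceeds. That is why the layer above should be treated as detection-first.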

Copilot Studio — 30-minute audit

Run these in Defender Advanced Hunting to get immediate visibility. Any result from Query 1 or 2 is a critical finding.

// Query 1: No-auth agents (critical β€” run first) AIAgentsInfo | summarize arg_max(Timestamp, *) by AIAgentId | where AgentStatus == "Published" | where UserAuthenticationType == "None" | project AIAgentName, CreatorAccountUpn, OwnerAccountUpns, AgentCreationTime
// Query 2: Change detection β€” auth downgraded to None (use as a Sentinel Analytics Rule)
// Note: do not summarize to the latest row first β€” prev() must compare consecutive
// snapshots of the SAME agent over time, not alphabetically adjacent agents.
AIAgentsInfo
| where AgentStatus == "Published"
| sort by AIAgentId asc, Timestamp asc
| extend PreviousAuthType = iff(prev(AIAgentId) == AIAgentId, prev(UserAuthenticationType), "")
| where isnotempty(PreviousAuthType) and PreviousAuthType != "None" and UserAuthenticationType == "None"
| project AIAgentName, PreviousAuthType, UserAuthenticationType, ReportId = tostring(AIAgentId), Timestamp
// Query 3: Maker credentials (field-validated β€” checks both Tools and Topics)
let base = AIAgentsInfo
    | summarize arg_max(Timestamp, *) by AIAgentId
    | where AgentStatus == "Published";
let directActions = base
    | mv-expand detail = AgentToolsDetails
    | where detail.action.connectionProperties.mode == "Maker"
    | extend ActionType = "FromTools"
    | project-reorder AgentCreationTime, AIAgentId, AIAgentName, UserAuthenticationType, CreatorAccountUpn;
let topicActions = base
    | mv-expand topic = AgentTopicsDetails
    | extend topicActionsArray = topic.beginDialog.actions
    | mv-expand Action = topicActionsArray
    | where Action.connectionProperties.mode == "Maker"
    | extend ActionType = "FromTopic"
    | project-reorder AgentCreationTime, AIAgentId, AIAgentName, AgentStatus, CreatorAccountUpn, OwnerAccountUpns, Action;
directActions
| union topicActions
| sort by AIAgentId, Timestamp desc
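If you export AIAgentsInfo rows out of Defender (for example via a streaming export to a data lake), Query 1's logic can also run in a downstream pipeline. A minimal sketch, assuming the export yields dictionaries with the same field names used in the queries above β€” the export mechanism itself is not shown:

```python
def latest_per_agent(rows):
    """Equivalent of 'summarize arg_max(Timestamp, *) by AIAgentId':
    keep only the newest row for each agent."""
    latest = {}
    for row in rows:
        agent_id = row["AIAgentId"]
        if agent_id not in latest or row["Timestamp"] > latest[agent_id]["Timestamp"]:
            latest[agent_id] = row
    return list(latest.values())

def no_auth_published(rows):
    """Python equivalent of Query 1: published agents with end-user auth set to None."""
    return [
        r for r in latest_per_agent(rows)
        if r["AgentStatus"] == "Published" and r["UserAuthenticationType"] == "None"
    ]

# Tiny illustrative export: agent a1 was fixed, agent a2 is still open.
rows = [
    {"AIAgentId": "a1", "Timestamp": "2026-05-01", "AgentStatus": "Published",
     "UserAuthenticationType": "None", "AIAgentName": "HR helper"},
    {"AIAgentId": "a1", "Timestamp": "2026-05-02", "AgentStatus": "Published",
     "UserAuthenticationType": "EndUser", "AIAgentName": "HR helper"},
    {"AIAgentId": "a2", "Timestamp": "2026-05-01", "AgentStatus": "Published",
     "UserAuthenticationType": "None", "AIAgentName": "Finance bot"},
]
print([r["AIAgentName"] for r in no_auth_published(rows)])  # ['Finance bot']
```

Note that the latest-row dedup matters: without it, the already-remediated HR helper would be flagged on its stale snapshot.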

Copilot Studio β€” Critical Gaps

Gap | Risk | Interim Mitigation
Classic agents outside Entra perimeter | ⚠️ Critical | Inventory via AIAgentsInfo; enforce end-user auth in Power Platform admin; manually recreate critical agents as Modern
Any user can change another agent's auth type to None | ⚠️ Critical | Deploy a change-detection Sentinel Analytics Rule; restrict Copilot Studio access via Managed Environments
Maker credentials blast radius | ⚠️ High | Enforce end-user auth per agent; PAM hygiene on developers who build agents; audit via Query 3 above
Portal inventory count inconsistency | ⚠️ High | Trust the AIAgentsInfo table as the primary source; treat portal counts as approximate
Agent sprawl β€” no lifecycle enforcement | ⚠️ High | Assign owners to all agents; use access packages for time-bound permissions; run a quarterly AIAgentsInfo audit

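The quarterly sprawl audit can be partially automated. A hypothetical helper that flags ownerless or no-auth agents from an AIAgentsInfo export β€” the field names mirror the table, everything else is illustrative:

```python
def sprawl_findings(agents):
    """Flag lifecycle gaps in an AIAgentsInfo export:
    agents with no assigned owner, or with end-user auth disabled."""
    findings = []
    for a in agents:
        if not a.get("OwnerAccountUpns"):
            findings.append((a["AIAgentName"], "no owner assigned"))
        if a.get("UserAuthenticationType") == "None":
            findings.append((a["AIAgentName"], "end-user auth is None"))
    return findings

agents = [
    {"AIAgentName": "Expense triage", "OwnerAccountUpns": ["alice@contoso.com"],
     "UserAuthenticationType": "EndUser"},
    {"AIAgentName": "Legacy FAQ bot", "OwnerAccountUpns": [],
     "UserAuthenticationType": "None"},
]
print(sprawl_findings(agents))
# [('Legacy FAQ bot', 'no owner assigned'), ('Legacy FAQ bot', 'end-user auth is None')]
```

Feeding the result into a ticketing system turns the quarterly review into a standing backlog rather than a one-off spreadsheet exercise.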
Microsoft Foundry β€” Security Condensed

The Foundry resource model β€” why it matters for security

Microsoft Foundry uses a layered resource model that most teams bolt security onto after deployment β€” when the decisions that matter most are already harder to change.

Foundry Resource
Microsoft.CognitiveServices/accounts
Networking Β· Private endpoints
RBAC Β· Managed identity
Encryption keys Β· Model deployments
Service connections
Security-sensitive: Management-plane operations (key rotation, RBAC changes, project creation) all originate here
Foundry Projects (one-to-many)
Microsoft.CognitiveServices/accounts/projects
Inherit resource networking + encryption
Agent builds Β· Evaluations Β· Prompt flows
Application Insights connection
Critical: Diagnostic Settings do NOT cascade from resource to projects β€” each project needs its own separate configuration

Microsoft Foundry β€” the four logging layers

Foundry generates telemetry across four distinct layers. The Activity Log is the only one that requires no configuration. Everything else is opt-in and off by default.

Layer | What it captures | Default state | SecOps priority
Layer 1 Β· Activity Log | Resource CRUD, RBAC changes, key rotation, network config, model deployments | βœ… Automatic | ⭐⭐⭐ Essential β€” route to Sentinel
Layer 2a Β· Diagnostic Settings (Resource) | Audit (data-plane access), RequestResponse (inference metadata β€” no prompt content), AzureOpenAIRequestUsage, Trace | ❌ Off by default β€” explicit opt-in per resource | ⭐⭐⭐ Enable Audit + RequestResponse for SecOps
Layer 2b Β· Diagnostic Settings (Project) | Audit (agent operations β€” runs, file uploads, evaluations), Trace, AllMetrics | ❌ Off by default β€” separate config per project | ⭐⭐⭐ Enable Audit per project β€” does NOT inherit from resource
Layer 3 Β· Application Insights | Full agent runtime traces, tool-call chains, prompt + completion content (if enabled), exceptions, dependencies | ❌ Off by default β€” SDK connection per project | ⭐⭐ Enable for agent-level behavioural visibility
Identity Β· Entra ID logs | Non-interactive sign-ins, service principal sign-ins, agent lifecycle events | ❌ Tenant-level diagnostic setting β€” separate config | ⭐⭐⭐ Required β€” without this, the agent auth plane is a blind spot
⚠️ Two critical Foundry logging gotchas

1. Diagnostic Settings don't cascade. Settings configured at the Foundry resource level do NOT apply to projects. Every new project needs its own separate Diagnostic Settings configuration β€” or you accept the gap silently.

2. RequestResponse does not contain prompt content. By design. If investigation requires content-level visibility, Application Insights with content capture enabled is the only source β€” but enabling AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED creates direct responsibility for storage, access controls, and retention of potentially sensitive data (PII, secrets, business data).
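Because settings never cascade, a periodic reconciliation between the project list and the configured Diagnostic Settings catches silent gaps before an incident does. A minimal sketch, assuming you have already enumerated project names and their enabled log categories (for example via an ARM inventory script) β€” the data shapes here are invented for the example:

```python
def diagnostic_gaps(project_names, configured):
    """Return projects with no Diagnostic Settings 'Audit' category enabled.
    `configured` maps project name -> set of enabled log categories."""
    return sorted(
        p for p in project_names
        if "Audit" not in configured.get(p, set())
    )

projects = ["fraud-agent", "hr-agent", "new-pilot"]
settings = {"fraud-agent": {"Audit", "Trace"}, "hr-agent": {"AllMetrics"}}
print(diagnostic_gaps(projects, settings))  # ['hr-agent', 'new-pilot']
```

The "new-pilot" case is the one that bites in practice: a freshly created project has no entry at all, so the gap check must default missing projects to an empty category set rather than skipping them.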

Microsoft Foundry β€” what to enable for SecOps

βœ… ENABLE FOR SECOPS
Priority logging sources
Activity Log β†’ route to Sentinel workspace
Entra ID sign-in + audit logs (tenant-level)
Diagnostic Settings Audit β€” at resource AND each project
Diagnostic Settings RequestResponse β€” at resource level
Application Insights β€” workspace-based, linked to same LAW as Sentinel
⚠️ FOUNDRY-SPECIFIC GAPS
What to watch
No logging by default β€” data never collected cannot be recovered
New projects don't inherit logging config β€” governance process required
Content capture governance must precede enabling prompt logging
App Insights must be workspace-based for Sentinel to query it
RBAC at resource scope cascades to projects β€” least-privilege may require project-level assignments
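One way to wire up the resource-level categories is the Azure CLI's generic diagnostic-settings command. This is a sketch: the angle-bracket values are placeholders, `secops-audit` is an arbitrary setting name, and the category names should be verified against your resource before relying on it.

```shell
# Route resource-level Audit + RequestResponse logs to the Sentinel workspace.
# Run once for the Foundry resource, then repeat with each project's resource ID
# as the --resource value, because Diagnostic Settings do not cascade.
az monitor diagnostic-settings create \
  --name secops-audit \
  --resource "<foundry-resource-id>" \
  --workspace "<log-analytics-workspace-id>" \
  --logs '[{"category":"Audit","enabled":true},{"category":"RequestResponse","enabled":true}]'
```

Scripting this into the project-creation workflow is what closes the "new projects don't inherit logging config" gap listed above.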
πŸ“Œ Sources

Copilot Studio content: field research by Derk van der Woude (Microsoft Security MVP) Β· Microsoft Entra security for AI overview (April 2026) Β· Microsoft Zero Trust Assessment Workshop AI section.
Microsoft Foundry logging: Cyphora.io β€” Microsoft Foundry Logging (April 2026) Β· Microsoft Learn documentation.