The enterprise control plane for AI agents. Agent 365 doesn't build or host agents; it wraps agents you've already built in enterprise-grade identity, governance, observability, and security controls.
New capabilities announced at GA:
| Capability | Status | Detail |
|---|---|---|
| Registry sync: AWS Bedrock + Google Cloud | Preview now | Automatically discover and inventory agents on AWS Bedrock and Google Gemini Enterprise Agent Platform. Basic lifecycle governance (start, stop, delete) coming soon. |
| Defender agent context mapping | Preview June 2026 | Relationship map per agent: devices it runs on, MCP servers configured, associated identities, cloud resources reachable. Blast radius analysis and endpoint-based behaviour investigation. |
| Intune integration: policy controls + runtime blocking | Preview June 2026 | Policy-based controls and runtime blocking/alerting via Intune and Defender for agents on managed devices. |
Pricing: USD$15 per user per month. Each licence covers an individual who manages or sponsors agents, or uses agents to do work on their behalf. Also included in Microsoft 365 E7.
Source: Microsoft Security Blog - Agent 365 GA (May 1, 2026)
Agent 365 is not an agent builder or hosting platform. It is a management and security layer that sits above whatever platform your agents run on: Copilot Studio, Microsoft Foundry, LangChain, OpenAI Agents SDK, or anything else. Once an agent is onboarded to Agent 365, it gains enterprise-grade controls it didn't have before.
Near-real-time threat detection, incident alerts, and investigation are available via the Defender portal (Settings > Security for AI). Agent 365 works with agents built on any platform and hosted anywhere. This is not a Microsoft-only capability.
You add the Agent 365 SDK to your agent code. Once instrumented, the agent registers with Agent 365 regardless of where it runs: Azure, AWS, GCP, or your own infrastructure. The SDK handles the Entra identity registration, OpenTelemetry event emission, and ATG integration automatically. You don't rewrite your agent; you add a governance layer around it.
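Registration can be sanity-checked from Defender Advanced Hunting once the SDK is in place. A minimal sketch using the AIAgentsInfo columns referenced elsewhere on this page (the 7-day window is illustrative, not part of the product):

```kql
// Agents that registered via the Agent 365 SDK in the last week
AIAgentsInfo
| where RegistrySource == "A365"                   // SDK-onboarded agents
| summarize arg_max(Timestamp, *) by AIAgentId     // latest record per agent
| where AgentCreationTime > ago(7d)                // illustrative freshness window
| project AIAgentId, AIAgentName, AgentStatus, AgentCreationTime
```

If a newly instrumented agent does not appear here, check the SDK configuration and audit log routing before assuming a detection gap.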
Agent 365 is not a closed Microsoft system. At GA, a growing ecosystem of external agents and frameworks is certified to integrate with Agent 365, inheriting the same identity, governance, and security controls.
Source: Microsoft 365 Blog - Agent 365: The control plane for AI agents (Nov 2025)
New Purview capability announced alongside Agent 365 GA: visibility into how Copilot and AI apps interact with enterprise data, with retention and deletion policies for AI prompts and outputs. Addresses regulatory obligations as AI adoption scales. Distinct from DLP (which blocks at the point of use); DLM governs what is retained after. Source: Microsoft Tech Community.
⚠️ Per-user, not per-agent. Governance scope doesn't automatically scale with agent count. An organisation with 50 licensed users but 500 deployed agents has a coverage gap. Plan accordingly.
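To put a number on that gap, compare deployed agents against agents with an accountable owner. A hedged sketch using the AIAgentsInfo columns referenced later on this page:

```kql
// Size the coverage gap: total deployed agents vs agents with a named owner
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus != "Deleted"
| summarize DeployedAgents = count(),
            OwnedAgents    = countif(isnotempty(OwnerAccountUpns))
| extend OwnerlessAgents = DeployedAgents - OwnedAgents
```

Compare DeployedAgents against your licensed-user count to see how far per-user licensing actually stretches.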
💡 When E7 makes sense: If you're buying M365 Copilot + E5 + Entra Suite anyway, E7 likely costs less than the sum of parts. Run the numbers; the break-even depends on your existing licence baseline.
Agent visibility was previously split across the Entra admin center and the M365 admin center. Microsoft has converged the registry under Agent 365 as the single control plane. Each portal now has a distinct role:
| Portal | What you do here | Agent visibility | Role required |
|---|---|---|---|
| Microsoft 365 admin center (admin.microsoft.com > Agents > All agents) | Comprehensive agent inventory: discover and manage all agents in the organisation; monitor operational activity | ✅ ALL agents, including agents without Entra Agent ID | AI Administrator or AI Reader (least privilege); no licence required for inventory view |
| Microsoft Entra admin center (entra.microsoft.com) | Agent identity and access management: blueprints, permissions, Conditional Access, identity governance, security signals | ✅ Agents with Entra Agent ID only | Agent ID Administrator; Entra Agent ID licence required for security controls |
The Entra admin center only shows agents that have a Microsoft Entra Agent ID. Agents without an identity, including most Classic Copilot Studio agents, are invisible there. For a complete picture, identity admins should use Agent 365 (M365 admin center) for inventory and the Entra admin center for identity governance controls.
| Role | What it gives you | Portal |
|---|---|---|
| AI Reader | Read-only view of all agents in the organisation. Recommended least-privilege role for inventory access. | M365 admin center (Agent 365) |
| AI Administrator | Full management of all agents in Agent 365 including governance controls. | M365 admin center (Agent 365) |
| Agent ID Administrator | Manage agent identities, blueprints, permissions, CA policies in Entra. Required for Blueprint write operations. | Entra admin center |
Viewing all agents in the M365 admin center (Agent 365) requires no specific product licence, just the AI Administrator or AI Reader role. Applying security and governance controls (Conditional Access, identity governance policies) requires the appropriate Entra Agent ID licence.
Agent 365 is GA on May 1, 2026. Before that date, and for some advanced preview features, access is via the Microsoft Frontier programme.
Coverage depth varies depending on how the agent was built and whether the Agent 365 SDK is integrated.
| Agent type | Discovery (AIAgentsInfo) | Threat detection | Real-time protection (ATG) | Requires |
|---|---|---|---|---|
| Copilot Studio agents | ✅ Automatic (RegistrySource == "PowerPlatform") | ✅ Extended alert set; audit logs sent by default | ✅ Available | Power Platform connector enabled in Defender |
| Agent 365 SDK agents | ✅ (RegistrySource == "A365") | ✅ Near-real-time; requires M365 audit log routing | ✅ ATG | Agent 365 licence + SDK integration |
| Foundry / Bedrock / Vertex AI | ✅ UI inventory | ⚠️ Limited without SDK | ❌ Without SDK | Agent 365 SDK required for detection + ATG |
| Classic Copilot Studio agents | ✅ Via PowerPlatform connector | ⚠️ Basic only | ✅ Existing Defender RT | No Agent 365 needed, but no Entra Agent ID |
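To see how this coverage mix looks in your own tenant, group the inventory by RegistrySource. A sketch; only the "A365" and "PowerPlatform" values are documented on this page, any others you see are tenant-specific:

```kql
// Coverage tiers by registry source
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus != "Deleted"
| summarize Agents = count() by RegistrySource
| order by Agents desc
```

The rows with no SDK-backed RegistrySource are where detection and ATG coverage is thinnest.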
Source: Microsoft Security Blog - Agent 365 GA (May 1, 2026)
| Mode | Status | How it works | Example |
|---|---|---|---|
| Agents working on behalf of users (delegated access) | GA | Agent acts on behalf of a signed-in user using delegated permissions. Operates in response to user prompts. Uses the user's identity context. | An agent that helps an employee organise their inbox or summarise emails |
| Agents operating behind the scenes (own access, autonomous) | GA | Agent operates with its own credentials and permissions, without user context. Runs autonomously in the background on scheduled or event-triggered tasks. | An agent autonomously triaging support tickets or running nightly data reconciliation |
| Agents participating in team workflows (own access, collaborative) | Preview | Agent operates with its own access while participating in team channels, meetings, or shared workspaces. Interacts with multiple users and agents in collaborative contexts. | An agent added to a Teams channel that monitors project activity and responds to @mentions |
Users are installing local AI agents (OpenClaw, GitHub Copilot CLI, Claude Code) on their devices and adopting SaaS agents outside traditional governance. Agent 365 now addresses this with a new Shadow AI page in the M365 admin center.
What IT can do today (Frontier programme): See if OpenClaw agents are being used in the organisation, which devices they are running on, and enable two Intune security policies from the Shadow AI page:
| Policy | Intune policy created | What it does |
|---|---|---|
| Continuously detect managed devices | "A365 - Monitor OpenClaw" (device configuration · Properties catalog profile) | Creates a read-only Properties catalog profile using the new Local AI Agent Settings Catalog node. Runs via the Intune Management Extension (IME), inspecting disk and memory on managed Windows devices. Safe to deploy: it reads from the device and does not configure it. Refreshes every 24 hours. |
| Block AI Agents from OpenClaw | "A365 - Block OpenClaw" (security baseline policy) | Blocks common methods of running OpenClaw on managed devices via an Intune security baseline policy. See the rollback caveat below before enabling. |
| Property | What it captures |
|---|---|
| Agent Name | Canonical identifier for the agent type (e.g. "OpenClaw") |
| Agent Version | Version string of the installed agent |
| Host Process | Parent process executing the agent; identifies the execution context |
| Install Location | Filesystem path of the agent installation |
| Install Scope | Per-user vs per-machine installation |
| Install Scope Platform User ID | Windows SID of the installing user |
| Install Scope User ID | Entra ID user identifier (UPN) of the installing user |
| Local AI Agent Execution Context | Privilege/security context: user / elevated / SYSTEM. ⚠️ SYSTEM-level execution is high risk. |
The Local AI Agent Execution Context field lets IT and security teams immediately identify which devices have agents running with elevated or SYSTEM-level privileges, a key risk signal for triage.
Rollback caveat: to stop blocking, disable or remove the "A365 - Block OpenClaw" security policy directly in Intune. The Agent 365 portal does not expose a disable control.
Source: Windows IT Pro Blog - Windows 365 for Agents public preview (May 1, 2026)
Many enterprise applications have no APIs; critical work still happens through user interfaces where context, data, and intent are conveyed visually. To unlock their full potential, AI agents need to interact with applications the way people do: using a computer directly through clicks, typing, and navigation. Today most agents run on ad-hoc infrastructure (local machines, shared virtual machines, or unmanaged cloud environments), creating gaps in identity, policy enforcement, auditability, and control. This makes it difficult for IT teams to confidently scale agentic workloads beyond API- or MCP-based pilots.
Every employee in an organisation has an identity and works on a managed device β typically a Windows 365 Enterprise Cloud PC. Now, each AI agent also has its own identity (governed through Agent 365) and runs on a managed Cloud PC (provided by Windows 365 for Agents). It is the same trust model and the same IT controls β extended to AI.
| Dimension | Detail |
|---|---|
| What it is | A new class of Cloud PCs purpose-built for agentic workloads. Agents run in a fully managed Windows environment with identity, security, policy, and lifecycle management handled by IT. |
| Status | Public Preview · US only · May 1, 2026 |
| Three key benefits | 1) Enterprise-grade identity and access controls for every agent · 2) Unified device and policy management via Intune · 3) Global scalability with geo-level data residency for compliance |
| Prerequisites | Agent 365 licence + Intune licence + active Azure subscription (billing for Cloud PC compute is Azure pay-as-you-go, not included in Agent 365) |
| Relationship to Agent 365 | Agent 365 = control plane (what agents can do, governance, policies). Windows 365 for Agents = execution layer (where agents run securely). Together: move from visibility/governance to production-ready deployments. |
| Who it is for | IT administrators, security teams, digital workplace leaders, platform teams. Especially valuable for: legacy/UI-based app workflows with no API, human-in-the-loop scenarios, organisations needing geo-specific data residency. |
Microsoft IQ: the intelligence layer. Shared context across people, work, and the business. Helps AI understand what matters and make informed decisions. Agents use this to reason.
Windows 365 for Agents: the execution layer. Trusted, managed runtime for agents to get work done, especially for UI-based workflows.
Microsoft Azure: the foundation. Global cloud for secure, scalable AI. Hosts the Cloud PCs.
Agent 365: the control plane. Governs agent behaviour end-to-end across all platforms.
Work IQ is the contextual intelligence engine that grounds Microsoft 365 Copilot and Agent 365-managed agents in real-time, shared context across the organisation. It enables personalised search, advanced reasoning, and deeper semantic understanding by connecting signals across the Microsoft 365 ecosystem and business systems. Announced at Microsoft AI Tour Paris (March 2026) as a standalone agentic building block. Source: Microsoft Learn - Work IQ MCP overview (Preview)
Prerequisite: Microsoft 365 Copilot licence required to use Work IQ MCP servers.
| Layer | What it does |
|---|---|
| Data | Unifies signals from files, emails, meetings, chats, and business systems across Microsoft 365 to capture how work happens across the organisation. |
| Memory | Builds persistent understanding of how people and teams work. Enables Agent 365-managed agents to stay aligned to priorities and remain consistent across tasks, apps, and sessions. |
| Inference | Brings together models, skills, and tools so agents can reason and take action using Work IQ MCP tools, while the Agent 365 control plane ensures those actions remain observable, governed, and compliant. |
Agents grounded via Work IQ inherit your organisation's data governance automatically. Sensitivity labels travel with the data: an agent cannot surface Confidential content to a user without the right permissions. Work IQ enforces this at the grounding layer, not just at output. This structural compliance makes Microsoft-native agents contextually superior and inherently more governable than ungoverned third-party alternatives using direct API calls.
The Agent Map is a dynamic visual in the Agent 365 portal showing which agents communicate with which resources, what access they use, and what risk signals are surfacing from Entra, ID Protection, Purview, and Defender, all in one view. Source: Devoteam - Microsoft AI Tour Paris (March 2026)
From the Agent Map, administrators can: block a flagged agent with a single click pending security review; approve or reject new agent deployment requests; see risk signals cross-referenced across identity and data sources; and identify orphaned agents with no current owner.
The most common real-world orphaned agent scenario is not Blueprint deletion (the Entra identity case); it is employees who built agents in Copilot Studio and then left the organisation. Those agents continue running with the builder's original permissions, full access to the tools and data they were connected to, and no accountable owner. Microsoft does not detect or flag these automatically. The Agent 365 portal surfaces them in the Ownerless Agents view and the Agent Map.
Detection KQL:

```kql
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published"
| where isempty(OwnerAccountUpns)
| project AIAgentName, CreatorAccountUpn, AgentCreationTime, UserAuthenticationType
```
Agent 365 agents are stateful: powered by Dataverse, they retain memory across sessions. This allows agents to remember user preferences, project details, team roles, and conversation context from previous interactions.
The Dataverse memory store accumulates sensitive context over weeks or months of agent interactions: meeting summaries, project decisions, user preferences, escalation history. This persistent store needs the same governance controls as any other sensitive data repository: access controls, retention policies, and inclusion in Purview DLP scope. It is not automatically covered by existing M365 data governance policies.
Use RegistrySource == "A365" to target Agent 365-registered agents specifically. See Playbook 01 Step 8 for the full query set.
```kql
// All A365-registered agents
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus != "Deleted"
| project AIAgentId, AIAgentName, AgentStatus, IsBlocked, AIModel, Instructions

// Agents with no instructions - prompt injection risk
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where isempty(Instructions) or Instructions == "N/A"
| project AIAgentId, AIAgentName, Instructions

// Agents with MCP tools - expanded attack surface
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where isnotempty(AgentActionTriggers)
| extend Triggers = parse_json(AgentActionTriggers)
| mv-expand Trigger = Triggers
| where Trigger.type == "RemoteMCPServer"
| project AIAgentId, AIAgentName, Trigger.type
```
This section was previously a separate page. It covers the security differences between Copilot Studio and Microsoft Foundry, helping security architects understand which controls apply to each platform and where the gaps are.
The security controls, gaps, and runbooks are fundamentally different depending on which platform your agents run on. Start here to make sure you're looking at the right controls.
| Security Control | Copilot Studio | Microsoft Foundry |
|---|---|---|
| Entra Agent ID | ⚠️ Modern agents only; most existing deployments are Classic and excluded | ✅ Supported; agents are Entra identities by default |
| Conditional Access for Agents | ❌ Does NOT apply to Copilot Studio agents | ✅ Applies to Foundry agents (OAuth 2.0 Agent ID). ⚠️ Security Copilot: applies to Microsoft-built agents only; custom/partner agents use "Connect with existing user account" (no Agent ID, so CA for Agents does not apply) |
| ID Protection for Agents | ❌ Classic agents; ✅ Modern agents supported | ✅ Supported |
| Identity Governance (lifecycle) | ⚠️ Modern agents only | ✅ Supported via Entra ID Governance |
| Defender real-time protection | ✅ Copilot Studio agents (Defender for Cloud Apps) | ✅ Defender for Cloud AI security posture |
| Sentinel analytics rules | ✅ AIAgentsInfo table queries | ✅ Azure Monitor + App Insights tables |
| Prompt Shield / Content Safety | ✅ Built-in via M365 Copilot layer | ✅ Content Safety SDK; opt-in per agent |
| DLP / Purview (policy layer) | ✅ DLP for M365 Copilot (GA March 31, 2026); covers Copilot experiences | ✅ Azure data governance applies |
| Browser-layer DLP | ✅ Edge for Business inline protection; inspects typed prompts to any GenAI app incl. shadow AI. Works on BYOD if signed into an Edge for Business profile | ✅ Same; applies to any browser-based interaction |
| Network-layer DLP | ⚠️ Preview: Network Data Security via Global Secure Access. Covers unmanaged devices, desktop apps, API calls | ⚠️ Preview: same coverage |
| SharePoint oversharing controls (SAM) | ✅ SharePoint Advanced Management included with Copilot licence: RCD, Site Access Reviews, Content Assessment, RAC. Primary tool for Copilot data exposure remediation | ⚠️ Not applicable at the same level; Foundry agents access data via explicit connections, not broad SharePoint indexing |
| Agentic data governance | ✅ DLP extends to agent-to-human, agent-to-tools, agent-to-agent. Sensitive files blocked from grounding data. Auto-enrolled for audit at creation | ✅ Same; agent instances enrolled as auditable entities |
| Inventory / discovery | ✅ Agent 365 + AIAgentsInfo table | ⚠️ Azure Resource Manager + Entra Agent ID; no unified agent-level inventory table equivalent |
| Logging: default state | ✅ Some data in AIAgentsInfo automatically | ⚠️ Nothing collected by default; all logging is opt-in |
| Red teaming | ⚠️ No native Copilot Studio red teaming tool | ✅ AI Red Teaming Agent in Microsoft Foundry |
| Supply chain scanning | ⚠️ Limited; connector risk is the main vector | ✅ Defender for Cloud CSPM, AI model scanning |
Security Copilot agents offer two identity options. Microsoft-built agents (Phishing Triage, Threat Intelligence Briefing, Vulnerability Remediation, etc.) use a dedicated Entra Agent ID, so CA for Agents and ID Protection apply. Custom and partner agents use "Connect with existing user account": the agent runs using the configuring user's credentials, inheriting their full access and permissions.
Why this is worse than Copilot Studio maker credentials: Security Copilot users are typically high-privilege accounts (Security Admins, SOC engineers, Global Admins). A custom agent configured by a Global Admin silently extends Global Admin-level access to Sentinel incidents, Defender signals, Entra identity risk data, and threat intelligence to every user who runs the agent. The blast radius of a compromised or misconfigured custom Security Copilot agent is significantly larger than a typical Copilot Studio agent.
Mitigation: Use a dedicated low-privilege service account for configuring custom Security Copilot agents. Audit who configures custom agents and what permissions their account holds. Establish an approval gate before production deployment.
Every Copilot Studio agent uses one of five authentication patterns. The pattern determines the risk level, which controls apply, and how you detect it. Detection signals for these patterns include:

- `UserAuthenticationType == "Integrated"`
- `AgentToolsDetails.mode == "Maker"`
- HTTP Request + delegated token
- HTTP to `graph.microsoft.com` + client credentials
- Entra ID Governance lifecycle required
Most existing Copilot Studio deployments are Classic agents. They authenticate as service principals or via OBO, not as modern Agent ID identities. This means CA for Agents, ID Protection for Agents, and Entra lifecycle governance do not apply. The entire Entra security product stack Microsoft markets for agent security only works with Modern agents.
Microsoft does not clearly document this distinction in its product marketing. Most security teams assume that purchasing Entra Agent ID or enabling CA for Agents covers their Copilot Studio estate. It does not, unless agents have been specifically created as Modern agents using the Agent ID framework. Field research confirms this is the default state of most enterprise Copilot Studio deployments.
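Before assuming Entra coverage, measure the Classic vs Modern split in your own estate. A sketch over the Copilot Studio slice of the inventory; interpret the UserAuthenticationType values against your tenant's data:

```kql
// How much of the Copilot Studio estate can Entra agent controls actually see?
AIAgentsInfo
| where RegistrySource == "PowerPlatform"
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus != "Deleted"
| summarize Agents = count() by UserAuthenticationType
```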
| Layer | What it protects | Always active? | Error message |
|---|---|---|---|
| Responsible AI content filtering | Conversational level: harmful content, jailbreak attempts, prompt injection in user input, copyright. Evaluates what is being discussed. | ✅ Always on; no config needed | "Content filtered due to Responsible AI restrictions" |
| Real-time threat protection (Defender for Cloud Apps) | Action execution level: tool invocations, data access patterns, privilege escalation through tool chaining, data exfiltration. Evaluates what the agent is about to do. | ⚠️ Must be configured; off by default | "Blocked by threat protection" |
If Defender for Cloud Apps does not return a block decision within 1 second, the tool invocation proceeds regardless. Fast tool calls on high-latency connections may bypass real-time protection. Treat it as a strong detection and prevention control, but not a prevention guarantee.
Run these in Defender Advanced Hunting to get immediate visibility. Any result from Query 1 or 2 is a critical finding.
| Gap | Risk | Interim Mitigation |
|---|---|---|
| Classic agents outside Entra perimeter | ⚠️ Critical | Inventory via AIAgentsInfo; enforce end-user auth in Power Platform admin; manually recreate critical agents as Modern |
| Any user can change another agent's auth type to None | ⚠️ Critical | Deploy change-detection Sentinel Analytics Rule; restrict Copilot Studio access via Managed Environments |
| Maker credentials blast radius | ⚠️ High | Enforce end-user auth per agent; PAM hygiene on developers who build agents; audit via Query 3 above |
| Portal inventory count inconsistency | ⚠️ High | Trust AIAgentsInfo table as primary source; treat portal counts as approximate |
| Agent sprawl: no lifecycle enforcement | ⚠️ High | Assign owners to all agents; use access packages for time-bound permissions; quarterly AIAgentsInfo audit |
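The auth-type change gap above lends itself to a scheduled Sentinel analytics rule. A sketch; the "None" value string is an assumption to verify against your tenant's data before deploying:

```kql
// Published agents whose end-user authentication is switched off
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published"
| where UserAuthenticationType == "None"           // assumed value string - verify first
| project Timestamp, AIAgentId, AIAgentName, CreatorAccountUpn, UserAuthenticationType
```

Run it on a short schedule so an auth downgrade is caught close to the change, not at the next quarterly audit.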
Microsoft Foundry uses a layered resource model that most teams bolt security onto after deployment, when the decisions that matter most are already harder to change.
Foundry generates telemetry across four distinct layers. The Activity Log is the only one that requires no configuration. Everything else is opt-in and off by default.
| Layer | What it captures | Default state | SecOps priority |
|---|---|---|---|
| Layer 1 · Activity Log | Resource CRUD, RBAC changes, key rotation, network config, model deployments | ✅ Automatic | ★★★ Essential; route to Sentinel |
| Layer 2a · Diagnostic Settings (Resource) | Audit (data plane access), RequestResponse (inference metadata; no prompt content), AzureOpenAIRequestUsage, Trace | ❌ Off by default; explicit opt-in per resource | ★★★ Enable Audit + RequestResponse for SecOps |
| Layer 2b · Diagnostic Settings (Project) | Audit (agent operations: runs, file uploads, evaluations), Trace, AllMetrics | ❌ Off by default; separate config per project | ★★★ Enable Audit per project; does NOT inherit from resource |
| Layer 3 · Application Insights | Full agent runtime traces, tool call chains, prompt + completion content (if enabled), exceptions, dependencies | ❌ Off by default; SDK connection per project | ★★ Enable for agent-level behavioural visibility |
| Identity · Entra ID logs | Non-interactive sign-ins, service principal sign-ins, agent lifecycle events | ⚠️ Tenant-level diagnostic setting; separate config | ★★★ Required; without this, the agent auth plane is a blind spot |
1. Diagnostic Settings don't cascade. Settings configured at the Foundry resource level do NOT apply to projects. Every new project needs its own separate Diagnostic Settings configuration, or you accept the gap silently.
2. RequestResponse does not contain prompt content. By design. If investigation requires content-level visibility, Application Insights with content capture enabled is the only source; but enabling `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` creates direct responsibility for storage, access controls, and retention of potentially sensitive data (PII, secrets, business data).
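Because settings don't cascade, it is worth periodically reconciling diagnostic-settings writes against your project inventory. A sketch assuming the AzureActivity table is connected to your Sentinel workspace:

```kql
// When was a diagnostic setting last written for each resource?
AzureActivity
| where OperationNameValue =~ "Microsoft.Insights/diagnosticSettings/write"
| where TimeGenerated > ago(90d)
| summarize LastConfigured = max(TimeGenerated) by ResourceId = _ResourceId
| order by LastConfigured asc
```

Foundry resources or projects absent from this list have had no Diagnostic Settings change in the window; cross-check them against Layers 2a/2b above.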
Copilot Studio content: field research by Derk van der Woude (Microsoft Security MVP) · Microsoft Entra security for AI overview (April 2026) · Microsoft Zero Trust Assessment Workshop AI section.
Microsoft Foundry logging: Cyphora.io - Microsoft Foundry Logging (April 2026) · Microsoft Learn documentation.