UPDATED · FIELD RESEARCH · MARCH 2026

How AI Breaks Traditional Security

Traditional security was built for users, endpoints, and applications. AI agents violate all three assumptions. Field research from Microsoft Security professionals and Microsoft's own agent misconfiguration research reveals that the real-world risks are worse than most organisations realise.

Agent Properties & Risk

Properties That Create New Attack Surface

| Property | Capability Upside | Security Downside | Risk Severity |
| --- | --- | --- | --- |
| Self-initiating | Automates workflows without human prompts | May take unintended actions outside guardrails | HIGH |
| Persistent | Continuous value; handles tasks 24/7 | Over-permissioning drift; undetected misuse; orphaned agents | HIGH |
| Opaque | Abstracts complexity; simplifies workflows | LLM black box; hard to audit; non-determinism makes output unpredictable | HIGH |
| Prolific | Low-code / no-code creation accelerates adoption | Shadow agents; sprawl; most existing Copilot Studio agents are Classic — outside the Entra security perimeter entirely | CRITICAL |
| Tool-invoking | Real actions: email, APIs, file write | Prompt injection converts to real-world harm; MCP tools extend this to any connected system | CRITICAL |
| Context-consuming | Rich reasoning over enterprise data | Sensitive data enters the AI context — a new exfiltration surface | CRITICAL |
| Maker-authenticated | Creator can configure deep integration at build time | Copilot Studio agents authenticate as their maker, not the user — the maker's full permissions extend to every user who interacts with the agent | CRITICAL |
⚠ The Maker Credentials Problem — Worse Than OBO

Our Identity page covers the OBO (On-Behalf-Of) token problem. Copilot Studio introduces a more dangerous variant: maker credentials. The agent authenticates to connected services as the person who built it — not the person using it. If a developer with admin rights builds an agent and shares it org-wide with one toggle, every employee in the organisation can interact with it using the maker's admin-level permissions. This is the most widespread and underappreciated privilege escalation risk in current enterprise AI deployments. Field research by Microsoft Security MVP Derk van der Woude confirms this pattern is common in production environments.

Copilot Studio Specific Risks

Copilot Studio: Real-World Risk Patterns

Microsoft's own security research team identified the top 10 agent misconfigurations observed in customer tenants. The following are the most structurally dangerous:

🔑 No Authentication Agents
Copilot Studio agents can be set to UserAuthenticationType = None. Anyone with access to Teams can use the agent — no login required. If the agent has access to internal data via maker credentials, it becomes an unauthenticated endpoint into your data estate. Detectable via KQL: AIAgentsInfo | where UserAuthenticationType == "None"
⚠ CRITICAL MISCONFIGURATION · AIAgentsInfo table
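The inline detection above can be written out as a standalone Advanced Hunting query. A minimal sketch — `UserAuthenticationType` and `OwnerAccountUpns` are the column names used elsewhere in this research; verify them against your tenant's `AIAgentsInfo` schema before relying on the results:

```kql
// List Copilot Studio agents that require no user authentication.
// Any hit here is reachable by anyone in Teams with no login.
AIAgentsInfo
| where UserAuthenticationType == "None"
| project OwnerAccountUpns, UserAuthenticationType
```

Pair the output with maker-credential review: a no-auth agent whose maker holds elevated rights is an unauthenticated endpoint into your data estate.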
🌐 Org-Wide Sharing
A single toggle shares a Copilot Studio agent with the entire organisation. Combined with maker credentials and no authentication, this creates a public endpoint into internal data. Managed Environments in the Power Platform admin center can restrict sharing to a numerical limit or to specific security groups — but this is not the default.
⚠ Default Risk · Power Platform Admin
🏛️ Classic vs Modern Agents
Copilot Studio agents created before Entra Agent ID was enabled are Classic Agents — registered as Service Principals. Classic Agents cannot be protected by any Entra security products: no ID Protection, no Conditional Access, no lifecycle governance. Most existing Copilot Studio agents in production are Classic. Microsoft plans a migration tool — it does not exist yet.
⚠ Most Agents in Wild Are Classic · No Entra Protection
🔌 MCP Tools as Attack Surface
Copilot Studio now supports MCP servers as tools. Each MCP tool added to an agent extends the agent's action surface to whatever that MCP server can reach. Microsoft now has an official MCP server catalog — many servers provide broad enterprise access (Azure, GitHub, SharePoint, etc.) that becomes part of the agent's blast radius.
Growing Surface · Microsoft MCP Catalog
🏷️ Agent Name Sync Bug
When an agent is renamed in Copilot Studio after creation, the name in Entra Agent ID is not updated — it stays as the original Agent #. Security products like ID Protection and Conditional Access reference these names. In practice this makes per-agent policies in Entra nearly impossible to manage at enterprise scale.
⚠ Active Bug · CA Policy Impact
🔐 Ownerless Agents
Published agents without an accountable owner are a governance blind spot. Power Platform Inventory and the AIAgentsInfo Advanced Hunting table both surface agents with missing owners. KQL: AIAgentsInfo | where isempty(OwnerAccountUpns). Ownerless agents cannot be governed through the Agent ID sponsor model.
⚠ Common in Production · KQL Detectable
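The one-liner above can be extended to show whether ownerless agents also carry the no-auth misconfiguration — a sketch using only the two columns cited in this research:

```kql
// Surface published agents with no accountable owner, and flag
// which of them are also open to unauthenticated use.
AIAgentsInfo
| where isempty(OwnerAccountUpns)
| extend NoAuth = UserAuthenticationType == "None"
| project OwnerAccountUpns, UserAuthenticationType, NoAuth
```

Agents that are both ownerless and unauthenticated should be the first candidates for decommissioning, since neither the sponsor model nor end-user auth governs them.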
Attack Surface

The Full Attack Surface Model

Every entry point into an agent is a potential injection or manipulation vector:

Entry Points Into an AI Agent
User prompts
Retrieved data (RAG)
Tool / API responses
MCP server outputs
Agent memory states
Model updates / fine-tuning data
Plugin / extension inputs
Other agent outputs (A2A)
System prompt overrides
Network-layer AI prompts (cross-app)
Maker credentials (via shared agent)
Risk Taxonomy

AI-Specific Risk Categories

| Risk | Description | Who Owns It | Primary Microsoft Control |
| --- | --- | --- | --- |
| Agent sprawl | No inventory of deployed agents; no lifecycle ownership | IT / Security | Agent 365 ⚠ per-user license |
| Classic agents — outside Entra perimeter | Most existing Copilot Studio agents are Classic Service Principals with no Entra security product coverage | IAM / Security | Migration to Modern Agents ⚠ tool not yet available |
| Maker credentials | Copilot Studio agents authenticate as their builder — maker's permissions extended to all users of the agent | IAM / AppSec | Power Platform Managed Environments; enforce end-user auth per agent |
| No-auth agents | Agents set to no authentication — accessible to anyone in Teams with no login | IT / Security | AIAgentsInfo KQL detection; Power Platform admin enforcement |
| Org-wide sharing | One toggle exposes agent to all employees — compounds with maker credentials | IT / Security | Power Platform Managed Environments — set sharing limits |
| Over-permissioned access | Agents granted broad access; OBO inherits user's full rights | IAM / Security | Entra Agent ID ⚠ preview, Modern Agents only |
| Shadow AI / plugins | Business users deploy unsanctioned AI tools and MCP servers outside IT oversight | IT / CASB | Defender for Cloud Apps + Entra Internet Access GA Mar 31 |
| MCP tool misuse | Agents invoke real enterprise tools via MCP — now via official Microsoft MCP server catalog | AppSec / Security | Foundry Guardrails ⚠ preview + Defender for Cloud Apps |
| Prompt injection / XPIA | Malicious inputs hijack agent behaviour mid-task | AppSec / SOC | Prompt Shields + Entra Internet Access Prompt Injection Protection GA Mar 31 |
| Data leakage | Sensitive data enters AI context; exfiltrated via outputs or prompts | DLP / Compliance | Purview DSPM + Purview DLP for Copilot GA Mar 31 |
| Ownerless agents | No accountable owner — agents persist indefinitely with no governance review | IT / IAM | Power Platform Inventory; AIAgentsInfo Advanced Hunting |
Zero Trust for AI

Applying Zero Trust Principles to AI

🔍 Verify Explicitly
Continuously evaluate the identity and behaviour of AI agents, not just at auth time. For Copilot Studio, this starts with knowing whether agents are Classic (no Entra coverage) or Modern. Sentinel and the AIAgentsInfo Advanced Hunting table surface this distinction.
Sentinel · AIAgentsInfo KQL
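As a starting point for "verify explicitly", the two misconfiguration checks documented earlier can be combined into one triage query. A sketch built only from the columns this research cites (`UserAuthenticationType`, `OwnerAccountUpns`); confirm the schema in your tenant:

```kql
// Triage sketch: count agents matching either documented misconfiguration,
// grouped so compounded risk (no auth AND no owner) stands out.
AIAgentsInfo
| extend NoAuth = UserAuthenticationType == "None",
         Ownerless = isempty(OwnerAccountUpns)
| where NoAuth or Ownerless
| summarize Agents = count() by NoAuth, Ownerless
```

Running this on a schedule in Sentinel turns a one-off audit into continuous verification, in line with the pillar's intent.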
🔒 Least Privilege
Restrict access to models, prompts, plugins, and data. For Copilot Studio this means enforcing end-user authentication (not maker credentials), limiting sharing scope, and restricting MCP tools to only what's needed per agent.
⚠ Maker creds undermine this · Power Platform Managed Envs
🛡️ Assume Breach
Design for prompt injection, data poisoning, and lateral movement. Assume any input — including MCP tool responses — may be adversarial. Real-time protection in Defender for Cloud Apps blocks tool invocations during suspicious prompt activity.
Prompt Shields · Defender for Cloud Apps RT
🤖 New: AI Pillar
Microsoft added a dedicated AI pillar to the Zero Trust framework at RSAC 2026. The Zero Trust Workshop tool (microsoft.github.io/zerotrustassessment) provides guided assessment. The formal AI assessment pillar is due summer 2026.
ZT Workshop · AI Assessment: Summer 2026