📌 Author's note: This site synthesises the author's own understanding from publicly available Microsoft documentation, official Microsoft Security blog posts, RSAC 2026 announcements, and insights from Microsoft Security professionals and MVPs. It is independent and not affiliated with or endorsed by Microsoft. Microsoft updates products and documentation frequently — always verify current status directly with Microsoft before making architecture or purchasing decisions.
PRACTICAL PLAYBOOKS · MARCH 2026

From architecture
to action

Step-by-step security checklists for the most common AI security tasks. Each playbook distils field experience into the minimum viable set of controls — what to do, in what order, and what to watch for.

4 PLAYBOOKS
Based on field research
⚠ Verify steps with Microsoft docs — UIs change
PLAYBOOK 01
Audit Your Copilot Studio Estate
~30 min · Copilot Admin · No extra licensing
PLAYBOOK 02
Secure a New Copilot Studio Agent
~45 min · Power Platform Maker + Admin · Managed Environments required
PLAYBOOK 03
โญ Set Up the Security Dashboard for AI
~2 hrs ยท Defender Admin + Power Platform Admin ยท Agent 365 or E7 required
PLAYBOOK 04
Respond to a Suspected Agent Compromise
~1 hr · Security Engineer · Sentinel + Defender required
PLAYBOOK 01
Audit Your Agent Estate in 30 Minutes
Find no-auth agents, overly shared agents, ownerless agents, and maker credential risks — using KQL in Microsoft Defender Advanced Hunting. The AIAgentsInfo table now covers Copilot Studio, Microsoft Foundry, third-party marketplace, and custom LOB agents. No extra licensing required beyond Defender.

💡 Before running these KQL queries: consider running the M365 Copilot Automated Readiness Assessment (ARA) first. It evaluates your full tenant posture across 6 service domains in minutes and surfaces gaps in licensing, Entra, Defender, Purview, and Power Platform — before you start the manual KQL audit. Free, open source, read-only API access, no data leaves your tenant.
✓ Works for Classic & Modern Agents ✓ Copilot Studio · Foundry · Marketplace · LOB ⚠ Requires AI Agent Inventory enabled
P
Enable AI Agent Inventory — Security for AI
In Microsoft Defender portal → Settings → Security for AI (previously: Settings → Cloud Apps → AI Agents). Then in Power Platform Admin Center → Security → Threat Detection → enable Microsoft Defender — Copilot Studio AI Agents. Dual-admin setup required (Defender admin + Power Platform admin).

April 2026 expansion: The AIAgentsInfo table now includes additional columns covering all agent types — not just Copilot Studio. Foundry agents, third-party marketplace agents, and custom LOB agents are now included where they are registered with Agent 365 or use the Agent 365 SDK.
⚠ Takes up to 2 hours for initial data population in the AIAgentsInfo table.
1
Run this KQL in Defender Advanced Hunting
Finds published agents with no authentication configured — anyone with the link can chat with them.
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published"
| where UserAuthenticationType == "None"
| project AIAgentName, CreatorAccountUpn, OwnerAccountUpns, AgentCreationTime, UserAuthenticationType
⚠ Any result here is a critical finding. A no-auth published agent is accessible to anyone with the link — including external users if the agent is published to a website.
Also run this change-detection query — use it as a Sentinel Analytics Rule to alert the moment any agent is switched to no-auth. Note: do not pre-filter with arg_max here, because prev() must see each agent's earlier snapshots to detect the change:
// Alert when UserAuthenticationType changes to "None"
AIAgentsInfo
| where AgentStatus == "Published"
| sort by AIAgentId asc, Timestamp asc
| extend PreviousAuthType = prev(UserAuthenticationType), PreviousAgentId = prev(AIAgentId)
| where AIAgentId == PreviousAgentId
| where UserAuthenticationType == "None" and PreviousAuthType != "None"
| project AIAgentName, PreviousAuthType, UserAuthenticationType, ReportId = tostring(AIAgentId), Timestamp
💡 Save this as a Sentinel Analytics Rule to get an incident the moment a published agent is downgraded to no-auth — even if the change was made by someone who isn't the agent owner.
2
Find agents with no accountable owner
Agents without an owner lack accountability — no one is responsible for reviewing or decommissioning them.
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published"
| where isempty(OwnerAccountUpns)
| project AIAgentName, CreatorAccountUpn, AgentCreationTime, AgentStatus
3
Identify agents shared with the entire organisation
Org-wide sharing means every employee can interact with the agent. When combined with maker credentials this is critical — the maker's privileges are extended to everyone.
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published"
| where SharedWithOrganization == true
| project AIAgentName, CreatorAccountUpn, OwnerAccountUpns, UserAuthenticationType
⚠ Cross-reference this list with Step 4 (maker credentials). Any agent that is both org-wide shared AND uses maker credentials is your highest blast-radius risk.
4
Find agents using maker credentials (Classic agents with connected services)
Classic Copilot Studio agents authenticate connected services (SharePoint, Outlook, etc.) using the builder's credentials — not the end user's. Review the creator of each published agent to assess blast radius.
let base = AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published";
let directActions = base
| mv-expand detail = AgentToolsDetails
| where detail.action.connectionProperties.mode == "Maker"
| extend ActionType = "FromTools", Action = detail.action
| project-reorder AgentCreationTime, AIAgentId, AIAgentName, UserAuthenticationType, CreatorAccountUpn;
let topicActions = base
| mv-expand topic = AgentTopicsDetails
| extend topicActionsArray = topic.beginDialog.actions
| mv-expand Action = topicActionsArray
| where Action.connectionProperties.mode == "Maker"
| extend ActionType = "FromTopic"
| project-reorder AgentCreationTime, AIAgentId, AIAgentName, AgentStatus, CreatorAccountUpn, OwnerAccountUpns, Action;
directActions
| union topicActions
| sort by AIAgentId, Timestamp desc
💡 This query checks both AgentToolsDetails and AgentTopicsDetails — more precise than checking only UserAuthenticationType. Prioritise agents created by high-privilege users (Global Admins, SharePoint Admins).
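To act on that prioritisation, the creator list can be cross-referenced against directory role data. A minimal sketch, assuming the IdentityInfo table (Defender XDR identity data) is available in your tenant; the role names shown are examples, and the AccountUpn/AssignedRoles column names should be verified before relying on the output:
// Sketch: flag published agents whose creator holds a privileged directory role
// Assumes IdentityInfo (Defender XDR) with AccountUpn and AssignedRoles columns
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published"
| join kind=inner (
    IdentityInfo
    | summarize arg_max(Timestamp, *) by AccountUpn
    | where AssignedRoles has_any ("Global Administrator", "SharePoint Administrator", "Power Platform Administrator")
) on $left.CreatorAccountUpn == $right.AccountUpn
| project AIAgentName, CreatorAccountUpn, AssignedRoles, UserAuthenticationType, SharedWithOrganization
Any row returned here that also appears in the Step 3 org-wide sharing results is a top-priority remediation.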
4b
Detect agents using Entra App Registrations to call Microsoft Graph
Finds agents with HTTP Request actions calling graph.microsoft.com or management.azure.com. Delegated permissions = low risk. Application permissions (no user context, tenant-wide access, admin consent required) = very high risk. Check each result to determine which pattern is in use.
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus != "Deleted"
| mv-expand Topic = AgentTopicsDetails
| where Topic has "HttpRequestAction"
| extend TopicActions = Topic.beginDialog.actions
| mv-expand action = TopicActions
| where action['$kind'] == "HttpRequestAction"
| extend Url = tostring(action.url.literalValue)
| extend ParsedUrl = parse_url(Url)
| extend Host = tostring(ParsedUrl["Host"])
| where Host has_any ("graph.microsoft.com", "management.azure.com")
| project-reorder AgentCreationTime, AIAgentId, AIAgentName, ParsedUrl, Url, Host, AgentStatus, CreatorAccountUpn, OwnerAccountUpns
⚠ Agents with application permissions have no user context and can access data across the entire tenant. If you find any, verify admin consent was intentional and review the granted scopes immediately.
5
Cross-check inventory counts
Compare agent counts across three portals — they will likely differ. Use the AIAgentsInfo table as your most reliable source.
// Total agents in AIAgentsInfo
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| summarize Total = count(), Published = countif(AgentStatus == "Published"), Classic = countif(AgentType == "Classic"), Modern = countif(AgentType == "Modern")
⚠ Known issue: the Agent 365 portal, Security Dashboard for AI, and Entra Agent ID portal show different counts. Microsoft has confirmed a fix is in progress. Trust the KQL table for detailed audit work.
AUDIT CHECKLIST
AI Agent Inventory enabled and data populated — both Defender admin and Power Platform admin steps completed
No-auth agents identified and remediated — each should have Entra ID auth or be unpublished
Ownerless agents reviewed and assigned — every published agent should have an accountable owner
Org-wide shared agents reviewed — confirm each has legitimate business justification
High-privilege maker credentials identified — agents built by admins with org-wide sharing = critical priority
Findings documented for remediation tracking
Change-detection KQL saved as Sentinel Analytics Rule — alerts on any agent being switched to no-auth, even by non-owners
Modern agents checked for missing Owner + Sponsor (Step 6 PowerShell) — requires Agent ID Administrator; Global Reader returns 403
Orphaned Agent Identities detected and remediated (Step 7 PowerShell)
A365 agents audited — no-instructions, MCP tools, non-HTTPS endpoints (Step 8 KQL) — requires Agent 365 Frontier programme access
6
Find Modern agents missing Owner or Sponsor (PowerShell via Graph)
AIAgentsInfo covers Copilot Studio agents (Classic and Modern) and now also Foundry, marketplace, and LOB agents registered with Agent 365. For Modern agents specifically (Entra Agent ID owner/sponsor check), use this PowerShell script. Requires the Agent ID Administrator role — Global Reader returns 403.
Connect-MgGraph -Scopes "AgentIdentity.Read.All"

$agents = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/microsoft.graph.agentIdentity" `
    -OutputType PSObject

foreach ($agent in $agents.value) {
    $owners = Invoke-MgGraphRequest -Method GET `
        -Uri "https://graph.microsoft.com/beta/servicePrincipals/$($agent.id)/owners" `
        -OutputType PSObject
    $sponsors = Invoke-MgGraphRequest -Method GET `
        -Uri "https://graph.microsoft.com/beta/servicePrincipals/$($agent.id)/sponsors" `
        -OutputType PSObject

    $flags = @()
    if ($owners.value.Count -eq 0)   { $flags += "No Owner" }
    if ($sponsors.value.Count -eq 0) { $flags += "No Sponsor" }

    if ($flags.Count -gt 0) {
        Write-Host "$($agent.displayName) | ID: $($agent.id) | $($flags -join ', ')" -ForegroundColor Red
    }
}

Disconnect-MgGraph
💡 Owner = technical admin (credential management, monitoring). Sponsor = business accountable (answers "why does this agent exist?"). Both are optional at creation time but both are required for proper governance. Without a Sponsor, no one can approve Access Packages on behalf of the agent.
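As a remediation follow-up for agents flagged with "No Owner", an owner can be added by reference on the underlying service principal. A minimal sketch using Microsoft Graph PowerShell; the write scope name and both placeholder IDs are assumptions to verify against current Microsoft docs before use:
# Sketch: assign an owner to a flagged agent identity (both IDs are placeholders)
# The write scope is an assumption; check the Graph permissions reference for agent identities
Connect-MgGraph -Scopes "AgentIdentity.ReadWrite.All"
$body = @{ "@odata.id" = "https://graph.microsoft.com/v1.0/users/<owner-object-id>" }
New-MgServicePrincipalOwnerByRef -ServicePrincipalId "<agent-identity-id>" -BodyParameter $body
Disconnect-MgGraph
Re-run the detection script afterwards to confirm the flag clears.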
7
Find Agent Identities whose Blueprint has been deleted
When a Blueprint is deleted, its Agent Identities are NOT automatically removed. They retain all permissions but can no longer authenticate — identity debt. This three-step script cross-references Agent Identities against active Blueprint Principals to surface orphaned objects.
Connect-MgGraph -Scopes "AgentIdentity.Read.All"

# Step 1: Get all Agent Identities and their Blueprint IDs
$agents = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/microsoft.graph.agentIdentity" `
    -OutputType PSObject

# Step 2: Get all active Blueprint Principals
# Note the backtick before $filter so PowerShell does not expand it as a variable
$blueprints = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals?`$filter=servicePrincipalType eq 'ManagedIdentity'" `
    -OutputType PSObject
$activeBlueprintIds = $blueprints.value | Select-Object -ExpandProperty id

# Step 3: Find orphaned — Blueprint ID not in active list
foreach ($agent in $agents.value) {
    if ($agent.agentIdentityBlueprintId -notin $activeBlueprintIds) {
        Write-Host "ORPHANED: $($agent.displayName) | ID: $($agent.id) | Blueprint: $($agent.agentIdentityBlueprintId)" -ForegroundColor Red
    }
}

Disconnect-MgGraph
⚠ Orphaned Agent Users are more dangerous — they appear as normal user accounts in the Entra portal with no indication they belonged to a deleted agent. They may hold group memberships, licenses, and resource access. Remove both orphaned Agent Identities and their associated Agent Users, then revoke any permissions assigned to them.

Six analytic rules and a dedicated workbook for Copilot telemetry, contributed to the official Microsoft Sentinel GitHub repository by Samik Roy (May 2026). Deploy as a single solution from Sentinel Content Hub → Microsoft Copilot solution. Requires CopilotActivity table ingestion via the Copilot Data Connector.

Six analytic rules — deploy from Content Hub
🔴 Copilot – Jailbreak Attempt Detected
🟡 Copilot – Access From External IP Address
🟡 Copilot – Plugin Created by Non-Admin User
🟡 Copilot – Plugin Enabled After Being Disabled
🟡 Copilot – Plugin Tampering (Enable and Disable Within 5 Minutes)
🔵 Copilot – File Uploads Disabled
Workbook sections — Microsoft Copilot Activity Monitoring
1. All Events — Raw CopilotActivity for quick validation and troubleshooting
2. Activity Overview — Timeline by record type, distribution by activity type
3. User Activity Analysis — Top users by activity (who uses Copilot most)
4. Plugin Management — Plugin lifecycle events (create, enable, disable, change)
5. AI Model Usage — Model usage statistics and usage by application host
6. Security Insights — Jailbreak detection events and top source IP addresses
7. Detailed Activity Log — Recent Copilot activities for deep-dive investigation
Deployment path
Step 1: Ensure CopilotActivity logs are ingested
  → Sentinel Content Hub → Copilot Data Connector (Public Preview Feb 2026)
  → Requires Global/Security Administrator role

Step 2: Deploy the Microsoft Copilot solution
  → Microsoft Sentinel → Content Hub → search "Microsoft Copilot" → Install
  → Includes: workbook + 6 analytic rules + hunting queries in one solution

  GitHub source:
    Analytic rules: Azure-Sentinel/Solutions/Microsoft Copilot/Analytic Rules
    Hunting queries: Azure-Sentinel/Solutions/Microsoft Copilot/Hunting Queries
    Workbook: Azure-Sentinel/Solutions/Microsoft Copilot/Workbooks/MicrosoftCopilotActivityMonitoring.json

Step 3: Open workbook and validate data against your environment
Step 4: Enable and tune analytic rules to your policy and risk appetite
Why these detections matter — key questions answered
Who is accessing Copilot from external or unusual IP ranges? → Copilot – Access From External IP Address
Are plugins created or enabled by non-admin users? → Plugin Created by Non-Admin User
Signs of jailbreak attempts or prompt abuse? → Jailbreak Attempt Detected
Plugins being rapidly enabled/disabled (bypass testing)? → Plugin Tampering (5-minute window)
📌 Source: Samik Roy — "Monitor & Detect Microsoft Copilot Activity with Microsoft Sentinel" (May 4, 2026). Contributed to the official Azure/Azure-Sentinel GitHub repository.

When you enable "Continuously detect managed devices" in Agent 365 Shadow AI, Intune automatically creates a Properties catalog profile called A365 - Monitor OpenClaw. It uses the new Local AI Agent Settings Catalog node, runs via the Intune Management Extension (IME), and refreshes every 24 hours. Read-only — safe to deploy. Source: Derk van der Woude, May 2026.

Eight properties collected per device
Agent Name — canonical identifier for the agent type
Agent Version — version string of the installed agent
Host Process — parent process executing the agent
Install Location — filesystem path of the installation
Install Scope — per-user vs per-machine
Install Scope Platform User ID — Windows SID of the installing user
Install Scope User ID — Entra ID user identifier (UPN)
Local AI Agent Execution Context — user / elevated / SYSTEM ⚠️ SYSTEM = high risk
KQL โ€” surface agents running at elevated or SYSTEM context
// Requires Intune device inventory data in Log Analytics
// IntuneDeviceCompliancePolicies or custom inventory table from IME
// Filter for elevated/SYSTEM execution context
IntuneDevices
| where Properties has "Local AI Agent Execution Context"
| extend AgentName = extract("AgentName:([^,]+)", 1, Properties)
| extend ExecContext = extract("ExecutionContext:([^,]+)", 1, Properties)
| extend InstallUser = extract("InstallScopeUserID:([^,]+)", 1, Properties)
| where ExecContext in ("elevated","SYSTEM")
| project DeviceName, AgentName, ExecContext, InstallUser, LastSync
| order by ExecContext desc, LastSync desc

The CloudAppEvents table captures Copilot and agent activity from the M365 Unified Audit Log via Defender for Cloud Apps. Requires: Settings → Cloud Apps → App connectors → M365 activities enabled. Metadata only — no prompt content.

Agent changes — all Copilot agent create/update/delete events
CloudAppEvents
| where Timestamp > ago(7d)
| where ActionType startswith "CopilotAgent" or ActionType startswith "UpdateCopilot"
| project Timestamp, ActionType, AccountDisplayName,
          AgentName = tostring(RawEventData.CopilotAgentName),
          ChangeDetail = tostring(RawEventData)
| order by Timestamp desc
Cross-correlation — agent change followed by suspicious email (same account)
CloudAppEvents
| where Timestamp > ago(7d)
| where ActionType startswith "UpdateCopilotAgent"
| project CopilotActionTime = Timestamp,
          AdminAccount = AccountDisplayName,
          AgentName = tostring(RawEventData.CopilotAgentName),
          Action = ActionType
| join kind=inner (
    EmailEvents
    | where Timestamp > ago(7d)
    | where Subject has_any ("confidential","restricted","sensitive")
) on $left.AdminAccount == $right.SenderFromAddress
| project CopilotActionTime, AdminAccount, AgentName, Action,
          EmailTime = Timestamp1, EmailSubject = Subject
Security Copilot audit — all triggering events
CloudAppEvents
| where Timestamp > ago(30d)
| where ActionType == "CopilotForSecurityTrigger"
| summarize TriggerCount = count(), UniqueUsers = dcount(AccountObjectId)
    by ActionType, Application, bin(Timestamp, 1d)
| order by Timestamp desc

Identifies which AI model each Copilot Studio agent is using by extracting modelNameHint from RawAgentInfo. Flags EU Data Boundary (EUDB) compliance status per agent — critical for EU organisations where Anthropic-hosted models process data outside the EUDB. Source: Blue161616/Agent-Identity.

Agent model inventory with EUDB status
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where Platform == "Copilot Studio" and AgentStatus != "Deleted"
| extend ModelNameHint = extract(@"modelNameHint:\s*([A-Za-z0-9_\-\.]+)", 1, RawAgentInfo)
| extend HintLower = tolower(ModelNameHint)
| extend Provider = case(
    HintLower startswith "sonnet" or HintLower startswith "haiku" or HintLower startswith "opus", "Anthropic",
    HintLower startswith "gpt" or HintLower startswith "o1" or HintLower startswith "o3", "OpenAI (Microsoft-hosted)",
    isempty(ModelNameHint), "Environment default",
    "Other"
)
| extend EUDB_Status = case(
    Provider == "Anthropic", "OUT OF EUDB — cross-geo processing",
    Provider == "OpenAI (Microsoft-hosted)", "In EUDB (if environment is in EU)",
    Provider == "Environment default", "Depends on tenant default — verify",
    "Verify"
)
| project AIAgentName, Provider, ModelNameHint, EUDB_Status,
          EnvironmentId, CreatorAccountUpn, OwnerAccountUpns, LastModifiedTime
| sort by Provider asc, AIAgentName asc
โš ๏ธ EUDB implication: Copilot Studio agents using Anthropic models (Sonnet, Haiku, Opus) process data outside the EU Data Boundary regardless of your tenant's geo. This is a compliance risk for EU organisations subject to GDPR or contractual data residency requirements. Run this query to identify affected agents before a data residency audit.
8
Query A365-registered agents using RegistrySource filter
The AIAgentsInfo table now has a RegistrySource column distinguishing agent origin: "A365" (Agent 365 registered) vs "PowerPlatform" (Copilot Studio via Power Platform connector). Use these filters to target the right agent population. The four queries below cover the highest-risk patterns for A365-registered agents.

Portal direct URL: security.microsoft.com/securitysettings/security_for_ai
// Query 8a: All Agent 365 registered agents (latest state)
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus != "Deleted"
| project AIAgentId, AIAgentName, AgentStatus, IsBlocked, AIModel, Instructions, AgentCreationTime
// Query 8b: Published A365 agents with NO INSTRUCTIONS (prompt injection risk)
// Empty Instructions = no guardrails = agent can be redirected by adversarial input
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where RegistrySource == "A365"
| where IsBlocked == 0
| where isnotnull(Instructions)
| where isempty(Instructions) or Instructions == "N/A"
| extend RawAgentInfoJson = parse_json(RawAgentInfo)
| extend PublishedStatus = RawAgentInfoJson.publishedStatus
| where PublishedStatus == "Published"
| project AIAgentId, AIAgentName, Instructions, PublishedStatus
// Query 8c: A365 agents with MCP tools configured (expanded attack surface)
// Each MCP server = additional entry point into enterprise systems
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where isnotempty(AgentActionTriggers)
| extend AgentActionTriggersJson = parse_json(AgentActionTriggers)
| mv-expand Trigger = AgentActionTriggersJson
| extend ActionType = Trigger.type
| where ActionType == "RemoteMCPServer"
| project AIAgentId, AIAgentName, ActionType
// Query 8d: A365 agents using non-HTTPS endpoints (insecure MCP connections)
AIAgentsInfo
| where RegistrySource == "A365"
| summarize arg_max(Timestamp, *) by AIAgentId
| where isnotempty(AgentActionTriggers)
| extend AgentActionTriggersJson = parse_json(AgentActionTriggers)
| mv-expand Trigger = AgentActionTriggersJson
| extend ServerUrls = Trigger.serverUrls
| mv-expand Url = ServerUrls
| extend ParsedUrl = parse_url(tostring(Url))
| extend Scheme = tostring(ParsedUrl["Scheme"])
| where isnotempty(Scheme) and Scheme != "https"
| project AIAgentId, AIAgentName, Url, Scheme
💡 For Copilot Studio agents, use RegistrySource == "PowerPlatform" in the same queries. The no-auth query from Step 1 remains the most important for Copilot Studio — Queries 8a–8d are specifically designed for A365-registered agents.
💡 Community Queries tip: In Defender Advanced Hunting → Community queries, there is a dedicated AI Agents section containing multiple queries created by the Microsoft Product Group. Check this section for the latest detection queries beyond what's listed here.
PLAYBOOK 02
Secure a New Copilot Studio Agent Before Publishing
Minimum security configuration for any new Copilot Studio agent — before it goes live. Covers authentication, sharing controls, MCP tool risk, and what to tell the maker.
⚠ Classic Agents: limited controls ✓ Modern Agents: full Entra stack Managed Environments required
1
Enable Managed Environments in Power Platform
Power Platform Admin Center → Environments → select environment → Enable Managed Environments. This is the prerequisite for all governance controls including sharing limits and DLP policies.
2
Set sharing limits before agents are built
In Power Platform Admin Center → Managed Environments → Sharing limits → configure who makers can share agents with. Setting this before building prevents org-wide sharing by default.
💡 Recommended: restrict to specific security groups by default. Require explicit approval for org-wide sharing.
3
Brief the maker on maker credentials risk
When a maker adds a connector (SharePoint, Outlook, Teams) to a Classic agent, that connector authenticates as the maker — not the end user. Every user who interacts with the agent effectively acts with the maker's privileges. High-privilege makers (Global Admins, SharePoint Admins) should not build agents that access corporate data.
⚠ This is a build-time decision that cannot be fully mitigated after deployment. The right person needs to build the agent.
4
Configure authentication — never leave as "No authentication"
In Copilot Studio → Settings → Security → Authentication → select "Authenticate with Microsoft" for internal agents. Enable "Require users to sign in". Classic agents: this is the primary identity control available. Modern agents: this plus Entra Agent ID controls.
⚠ Copilot Studio shows a warning at publish time if authentication is set to None — but makers can bypass it. Administrators can enforce this at the environment level via data policies.
5
Review MCP tools carefully before adding
Every MCP tool added to a Classic agent uses maker credentials. Each tool expands the blast radius. For each tool ask: (a) does this tool need to authenticate as the maker? (b) could a malicious prompt abuse this tool to exfiltrate data? (c) is there a safer connector alternative?
💡 Use built-in connectors instead of HTTP request nodes or direct MCP connections where possible — connectors have OAuth governance via Defender for Cloud Apps.
6
Enable Block Images and URLs (external threat detection)
In Copilot Studio → Settings → Security → enable external threat detection and configure Microsoft Defender as the provider. This blocks image-based and URL-based prompt injection before the agent processes the content.
7
Scope sharing to the minimum required audience
In Copilot Studio → Share → add only the specific security groups who need access. Avoid "Everyone in [Org]" unless there is a documented business justification and security review.
8
Assign an owner and document the agent
Every published agent should have a named owner accountable for reviewing it quarterly. Document: what connectors it uses, what data it can access, who can interact with it, and who built it.
โœ“
If Modern Agent: configure Entra Agent ID controls
Enable Modern Agent mode in Power Platform Admin Center → Copilot → Settings → Copilot Studio. Once enabled, the agent gets an Entra Agent Identity and you can apply Conditional Access policies, Access Reviews, and ID Protection via Entra. Note: Entra Agent ID is still in preview as of March 2026.
PRE-PUBLISH CHECKLIST
Managed Environments enabled — required for all governance controls
Authentication set to "Authenticate with Microsoft" + Require sign-in — never publish with No authentication
Maker is not a high-privilege account (Global Admin, SharePoint Admin, etc.) — maker credentials = agent credentials for Classic agents
All MCP tools and connectors reviewed and justified
Block Images and URLs enabled via Defender external threat detection
Sharing scoped to minimum required audience
Named owner assigned and agent documented
Agent visible in AI Agent Inventory after publishing — verify it appears in the Defender Advanced Hunting AIAgentsInfo table
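The final checklist item can be verified with a quick Advanced Hunting query. A minimal sketch; "<your agent name>" is a placeholder for the display name of the agent you just published:
// Sketch: confirm the newly published agent has landed in AIAgentsInfo
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AIAgentName == "<your agent name>"
| project AIAgentName, AgentStatus, UserAuthenticationType, SharedWithOrganization, OwnerAccountUpns
If no row appears after the ~2-hour population window, re-check the inventory setup from Playbook 01.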
PLAYBOOK 03
Set Up the Security Dashboard for AI GA · START HERE
Configure the unified AI security posture view in Microsoft Defender. Requires collaboration between Defender admin and Power Platform admin. Allow up to 2 hours for data population.
⚠ Requires Agent 365 or M365 E7 ⚠ Dual-admin setup ✓ Works for Classic & Modern Agents
1
Enable preview features in Defender XDR
Microsoft Defender portal → Settings → Microsoft Defender XDR → Preview features → turn on. The AI Agent Inventory and Security Dashboard for AI features require preview mode enabled.
2
Connect the Microsoft 365 app connector
Defender portal → Settings → Security for AI → connect Microsoft 365 (previously: Settings → Cloud Apps → Connected Apps → Microsoft 365). This is required for Copilot agent telemetry to flow into Defender.
3
Enable Copilot Studio AI Agents
Defender portal → Settings → Security for AI → enable Copilot Studio AI Agents (previously: Settings → Cloud Apps → Copilot Studio AI Agents). Copy the URL shown — you will need to share this with your Power Platform admin to complete the next step.
💡 Save this URL carefully — it encodes your tenant ID and is required for the Power Platform side of setup.
4
Enable external threat detection in Power Platform
Power Platform Admin Center → Security → Threat Detection → Additional threat detection → enable "Allow Copilot Studio to share data with a threat detection partner" → paste the URL from Step 3 → enter the Entra App ID.
⚠ The App ID must match exactly. A mismatch causes a silent failure — the status will show "pending" indefinitely.
5
Verify connection status
Back in Defender portal → Settings → Security for AI → check that the Power Platform action status shows "Connected". If it shows "Pending" after 30 minutes, re-check the App ID and URL entered in Step 4.
6
Confirm AIAgentsInfo table is populating
Run this query in Defender Advanced Hunting. If it returns rows, setup is complete. If it returns nothing after 2 hours, check the connection status in Step 5.
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| summarize count() by AgentStatus
7
Open the Security Dashboard for AI
Defender portal → left nav → expand Microsoft Sentinel → AI → Security Dashboard for AI. You should see agent inventory, posture findings, and risk signals from Entra, Defender, and Purview.
⚠ The dashboard shows different agent counts than the Entra Agent ID portal or Agent 365. This is a known inconsistency. Use Advanced Hunting for precise counts.
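For those precise counts, a breakdown by platform and registry source can be pulled straight from the table. A minimal sketch using columns that appear elsewhere in these playbooks (Platform, RegistrySource); confirm both columns are populated in your tenant before comparing against portal numbers:
// Sketch: per-platform and per-registry agent counts from AIAgentsInfo
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus != "Deleted"
| summarize Agents = count() by Platform, RegistrySource
| order by Agents desc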
8
Enable RT protection for Copilot Studio agents
This step enables webhook-based runtime inspection of tool invocations: Defender evaluates each tool call before execution and can block suspicious actions. Defender portal → Settings → Security for AI → enable Real-time protection for Copilot Studio AI Agents.
⚠ A 1-second timeout applies. If Defender doesn't respond within 1 second, the tool invocation is allowed through. This is a deliberate tradeoff for reliability, but it means high-speed tool calls may not always be evaluated.
SETUP CHECKLIST
Preview features enabled in Defender XDR
Microsoft 365 app connector connected
Copilot Studio AI Agents enabled โ€” URL copied
Power Platform external threat detection configured with correct App ID and URL
Connection status shows "Connected" in Defender portal
AIAgentsInfo table returning data in Advanced Hunting
Security Dashboard for AI accessible in Defender portal
Real-time protection enabled
PLAYBOOK 04
Respond to a Suspected Agent Compromise
Triage and contain a suspected agent abuse incident — prompt injection, data exfiltration via agent, or suspicious agent behaviour. Requires Sentinel and Defender for Cloud Apps.
⚠ Time-sensitive — act within 1 hour of detection · Sentinel + Defender required
1
Check Defender portal for RT protection alerts
Defender portal → Incidents & Alerts → filter by "Copilot Studio". RT protection generates SOC-ready alerts that explain what was stopped, why it was considered risky, and which agent, user, and tool were involved.
2
Query for suspicious agent activity in Advanced Hunting
Run these queries to surface anomalous agent behaviour in the last 24 hours.
// Agents with sudden auth type changes
// (prev() needs each agent's earlier snapshots, so don't pre-filter with arg_max)
AIAgentsInfo
| where AgentStatus == "Published"
| sort by AIAgentId asc, Timestamp asc
| extend PreviousAuthType = prev(UserAuthenticationType), PreviousAgentId = prev(AIAgentId)
| where AIAgentId == PreviousAgentId
| where UserAuthenticationType == "None" and PreviousAuthType != "None"
| project AIAgentName, PreviousAuthType, UserAuthenticationType, Timestamp
// High-volume tool invocations in last 24h
// (use CopilotActivity table if connector enabled)
CopilotActivity
| where TimeGenerated > ago(24h)
| where Operation contains "Tool"
| summarize Count = count() by AgentName, UserId
| where Count > 50
| order by Count desc
3
Unpublish the agent immediately
In Copilot Studio → open the suspect agent → Settings → Channels → unpublish all channels. This immediately stops all user interactions with the agent while investigation continues.
4
Revoke the maker's sessions if maker credentials are involved
If the agent uses Classic maker credentials and you suspect the maker account is compromised: Entra portal โ†’ Users โ†’ select maker โ†’ Revoke all sessions. Also check for new credentials on the associated Enterprise Application in Entra.
โš  For Classic agents, the Enterprise Application owner is "Power Virtual Agent Service" โ€” check if the maker's account has been added as an owner (a known risk pattern that enables credential abuse and bypasses CA/MFA).
5
Review Copilot Data Connector logs in Sentinel
If the Copilot Data Connector is enabled, query the CopilotActivity table in Sentinel for the time window of the suspected incident. Look for CopilotAgentManagement events (config changes), unusual CopilotInteraction volumes, and CopilotPlugin lifecycle events.
CopilotActivity
| where TimeGenerated between (ago(48h) .. now())
| where AgentName == "<>"   // replace <> with the suspect agent's name
| project TimeGenerated, UserId, Operation, AgentName, PromptContent, ResponseContent
| order by TimeGenerated desc
6
Use Security Copilot or Security Analyst Agent for triage
In Defender portal, open the Security Copilot pane and ask it to summarise the incident, identify the affected users, and recommend next steps. The Security Analyst Agent (Preview, March 2026) can autonomously triage the incident against your Sentinel data.
7
Remediate and rebuild with security controls
Before republishing: apply all controls from Playbook 02. If the agent is Classic, evaluate whether it should be migrated to Modern (requires enabling Modern Agent mode in Power Platform). Update your DLP policies if data exfiltration occurred via prompts.
8
Document and update your agent security policy
Record the incident in your risk register. Update your agent security checklist with any gaps this incident revealed. Consider running Playbook 01 across your entire estate as a follow-up audit.
INCIDENT RESPONSE CHECKLIST
Defender RT protection alerts reviewed
Advanced Hunting queries run for anomalous activity
Suspect agent unpublished from all channels
Maker sessions revoked if account compromised
Enterprise Application checked for rogue credentials or owners
CopilotActivity logs reviewed in Sentinel
Incident documented in risk register
Agent rebuilt with Playbook 02 controls before republishing
PLAYBOOK 05
Microsoft Foundry โ€” Enable Security Logging
FOUNDRY ยท AZURE MONITOR ยท SENTINEL
Enable and route the four Foundry logging layers into Microsoft Sentinel before your workload carries real traffic. A logging gap discovered in production can be closed going forward, but data that was never collected cannot be recovered.
!
Foundry resource โ‰  Foundry project โ€” they are separate Azure Monitor resources
A Foundry resource (Microsoft.CognitiveServices/accounts) can contain many Foundry projects (Microsoft.CognitiveServices/accounts/projects). Diagnostic Settings configured at the resource level do not cascade to projects. Every new project needs its own separate configuration โ€” or you accept the gap silently. RBAC assigned at resource scope does cascade to projects, but least-privilege access may require project-level RBAC assignments.
โš  This is the most common Foundry security misconfiguration โ€” teams enable logging on the resource and assume projects are covered. They are not.
1
Route the Activity Log to your Sentinel Log Analytics Workspace
The Activity Log is the only Foundry logging layer that requires no opt-in โ€” it is generated automatically by Azure Resource Manager. It captures resource creation/deletion, RBAC role assignment changes, key rotation events, network config changes, and model deployment operations. It does not route to your Sentinel workspace by default.
Azure Portal โ†’ Foundry Resource โ†’ Monitoring โ†’ Diagnostic settings โ†’ Add diagnostic setting โ†’ Select: "Activity Log" โ†’ Destination: Send to Log Analytics workspace (your Sentinel LAW) โ†’ Save
๐Ÿ’ก Route to the same Log Analytics Workspace as your Sentinel instance. Foundry resource logs and Entra ID logs must share the same workspace for Sentinel analytics rules to correlate them.
2
Enable Audit and RequestResponse at the Foundry resource level
Data plane logging must be explicitly enabled — nothing is collected by default. At the resource level, enable these two categories for SecOps. Note: logged data can take up to 2 hours to become available for query.
Azure Portal โ†’ Foundry Resource โ†’ Monitoring โ†’ Diagnostic settings โ†’ Add diagnostic setting โ†’ Name: "SecOps-Resource-Logs" โ†’ Enable categories: โœ… Audit (data plane access โ€” key retrievals, connection access, admin API calls) โœ… RequestResponse (inference metadata โ€” model, operation, status, latency, tokens) โŒ AzureOpenAIRequestUsage (no SecOps value) โŒ Trace (not a detection source under normal conditions) โŒ AllMetrics (no SecOps value) โ†’ Destination: Log Analytics workspace (Sentinel LAW) โ†’ Save
โš  RequestResponse captures metadata about every inference call โ€” model name, operation type, status codes, latency. It does NOT include prompt text or model-generated completions. That is a deliberate design choice by Microsoft to reduce sensitive data exposure through platform-level logging.
3
Repeat Diagnostic Settings for every Foundry project โ€” separately
This is a separate configuration from Step 2. Projects are separate resources in Azure Monitor. For each project, enable the Audit category โ€” it records agent operations such as runs, file uploads, and evaluations at project scope. This is the only source that tells you which identities accessed which Foundry capabilities and when.
Azure Portal โ†’ Foundry Resource โ†’ Projects โ†’ [select each project] โ†’ Monitoring โ†’ Diagnostic settings โ†’ Add diagnostic setting โ†’ Name: "SecOps-Project-Logs" โ†’ Enable categories: โœ… Audit (agent runs, file uploads, evaluations, data plane access at project scope) โŒ Trace โŒ AllMetrics โ†’ Destination: Log Analytics workspace (same Sentinel LAW) โ†’ Save โ†’ Repeat for every project
๐Ÿ’ก Build a governance process: every new Foundry project created must have Diagnostic Settings configured before it receives any production traffic. Make this a deployment checklist item.
4
Configure Entra ID diagnostic settings at the tenant level
Foundry agents are Entra ID identities. Without Entra ID logs, non-interactive sign-ins, service principal activity, and agent lifecycle events are invisible. Entra ID diagnostic settings are configured at the tenant level โ€” not at the Foundry resource. Route to the same Log Analytics Workspace as your Foundry resource logs.
Entra admin center → Monitoring → Diagnostic settings → Add diagnostic setting
→ Enable:
   ✅ SignInLogs (interactive sign-ins)
   ✅ NonInteractiveUserSignInLogs (service principal / agent sign-ins)
   ✅ ServicePrincipalSignInLogs
   ✅ AuditLogs (agent lifecycle events, RBAC changes)
→ Destination: Same Log Analytics workspace as Foundry logs
→ Save
โš  This is a tenant-level setting โ€” it affects all Entra logs, not just Foundry. Confirm with your identity team before enabling if this workspace doesn't already receive Entra logs.
5
Connect Application Insights for agent-level runtime visibility
Application Insights is the deepest logging layer โ€” it surfaces agent-level behaviours absent from Foundry resource logs: anomalous tool call chains, unexpected external dependencies, unusual exception patterns, and (optionally) prompt and completion content. It must be a workspace-based Application Insights instance linked to the same Log Analytics Workspace as Sentinel โ€” otherwise Sentinel analytics rules cannot query it.
Azure Portal โ†’ Foundry Project โ†’ Settings โ†’ Application Insights โ†’ Connect to workspace-based Application Insights resource โ†’ Must use the SAME Log Analytics workspace as Sentinel // Key tables available once connected: AppDependencies โ†’ model inference calls, tool calls AppTraces โ†’ agent execution traces, orchestration steps AppExceptions โ†’ errors during inference or tool execution AppRequests โ†’ inbound requests (if agent exposed via HTTP)
โš  Content capture (prompt and completion logging) is OFF by default. To enable: set AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED=true in your SDK config. Before enabling, ensure you have governance controls in place for storage, access, and retention โ€” prompt content may contain PII, secrets, or business data.
โœ… Playbook 05 Checklist
Activity Log routed to Sentinel LAW
Diagnostic Settings (Audit + RequestResponse) enabled at Foundry resource level
Diagnostic Settings (Audit) enabled at every Foundry project level separately
Entra ID sign-in + audit logs routing to same LAW
Application Insights connected โ€” workspace-based, same LAW as Sentinel
Governance process established: every new project gets Diagnostic Settings before production traffic
Content capture decision documented โ€” enabled or explicitly deferred with governance rationale
PLAYBOOK 06
Pre-Deployment AI Agent Red Teaming with PyRIT
Test your AI agents for OWASP LLM Top 10 vulnerabilities before they ship. Microsoft's PyRIT framework automates adversarial testing across 53+ datasets, 70+ converters, and 6 attack strategies โ€” with CI/CD release gate integration. Source: Microsoft Tech Community โ€” PyRIT for AI Agent Security (Apr 28, 2026)
โœ“ Open Source ยท MIT ยท github.com/microsoft/PyRIT โœ“ Battle-tested on 100+ Microsoft products including Copilot โš  Pre-deployment only โ€” not a runtime control
Install and configure
pip install pyrit

# Required env vars (for Azure OpenAI target + scorer)
export AZURE_OPENAI_API_KEY="your-key"
export AZURE_OPENAI_ENDPOINT="https://your-instance.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT="gpt-4o"

Config-driven โ€” switch strategies, datasets, or thresholds with no code changes. Point target at your agent entry point.

# scan.yaml
scan:
  name: "my-agent-security-scan"

  target:
    module: "my_agent.main"         # Your agent module
    function: "handle_prompt"       # Your agent entry function

  strategies:
    - name: "prompt_sending"        # Single-turn baseline
    - name: "crescendo"             # Gradual escalation
      max_turns: 5
    - name: "tap"                   # Tree of Attacks with Pruning

  converters:                       # Stack โ€” applied in order
    - name: "Base64Converter"
    - name: "LeetspeakConverter"
    - name: "TranslationConverter"

  datasets:
    - "airt_illegal"                # Microsoft AI Red Team dataset
    - "harmbench"                   # Harm evaluation benchmark
    - "advbench"                    # Adversarial benchmark
    - path: "datasets/custom.yaml"  # Your domain-specific prompts

gate:
  fail_on:
    critical_findings: 0            # Zero tolerance for critical
    high_findings: 3
    max_score: 0.3                  # Defense rate must exceed 70%

Run in sequence โ€” if plain prompts pass, layer evasion. Each phase builds on the previous.

Phase 1: Plain prompts
  โ†’ Baseline โ€” establishes what passes without evasion
  โ†’ Catches basic prompt injection and policy violations

Phase 2: Encoded prompts
  โ†’ Base64, ROT13, Leetspeak, Unicode confusables
  โ†’ Tests whether your agent/guardrails decode before evaluating

Phase 3: Semantic attacks
  โ†’ LLM-powered rephrasing, translation, multimodal injection
  โ†’ Converters stack: translate โ†’ Base64 โ†’ embed in image

Phase 4: Multi-turn dialogue attacks
  โ†’ CrescendoAttack: gradual escalation over 5โ€“10 turns
  โ†’ TreeOfAttacksWithPruning (TAP): branching attack trees
  โ†’ Tests whether context accumulation bypasses initial guardrails

Map findings to OWASP LLM Top 10 (2025) for structured risk reporting. Turns PyRIT output into a risk register your security team understands.

LLM01 Prompt Injection       โ†’ PromptSendingAttack + injection datasets (airt_illegal)
LLM02 Sensitive Info         โ†’ Data exfiltration datasets + PII scorers
LLM06 Excessive Agency       โ†’ Tool-calling attack datasets (advbench)
LLM07 System Prompt Leakage  โ†’ System prompt extraction datasets
LLM10 Unbounded Consumption  โ†’ High-volume automated attack patterns
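One way to turn raw scanner output into that risk register is to tag each finding with its OWASP category by source dataset. A hypothetical helper — the finding shape and the dataset-to-category map are assumptions drawn from the mapping above, to extend for your own datasets:

```python
# Map the datasets listed above to OWASP LLM Top 10 (2025) categories.
OWASP_BY_DATASET = {
    "airt_illegal": "LLM01 Prompt Injection",
    "advbench": "LLM06 Excessive Agency",
}

def tag_findings(findings):
    """findings: iterable of dicts with a 'dataset' key.
    Returns annotated copies with an 'owasp' field for reporting."""
    return [
        {**f, "owasp": OWASP_BY_DATASET.get(f.get("dataset"), "Unmapped")}
        for f in findings
    ]
```

Anything that comes out "Unmapped" is itself a reporting gap: a dataset in your scan config with no agreed risk classification.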

Integrate into your pipeline. Exit code 0 = pass (deploy), exit code 1 = fail (block). No custom actions needed.

# GitHub Actions example
jobs:
  security-scan:
    steps:
      - name: Run AI security scan
        run: |
          pip install pyrit
          python scanner.py --config scan.yaml --output reports/

      - name: Evaluate release gate
        run: |
          python gate.py --report reports/scan_results.json
          # Exit 1 blocks deployment automatically

# When to run:
# Every merge to main:  Quick scan only (phases 1โ€“2, ~10 min)
# Pre-release branch:   Full scan (all 4 phases, architect approval)
# Weekly scheduled:     Full scan across full agent estate
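The gate.py step above can be sketched as a small threshold check. This is a hypothetical implementation: the report keys (`critical`, `high`, `score`) are assumptions to adapt to whatever your scanner actually emits, while the thresholds mirror the gate block in scan.yaml:

```python
import json
import sys

# Thresholds mirror the gate block in scan.yaml above.
THRESHOLDS = {"critical_findings": 0, "high_findings": 3, "max_score": 0.3}

def gate_passes(report, thresholds=THRESHOLDS):
    """True if the scan report is within the release-gate thresholds."""
    if report.get("critical", 0) > thresholds["critical_findings"]:
        return False  # zero tolerance for critical findings
    if report.get("high", 0) > thresholds["high_findings"]:
        return False
    if report.get("score", 0.0) > thresholds["max_score"]:
        return False  # attack success above 0.3 means defense rate below 70%
    return True

def main(path):
    with open(path) as f:
        report = json.load(f)
    ok = gate_passes(report)
    print("GATE PASS" if ok else "GATE FAIL")
    return 0 if ok else 1  # non-zero exit blocks the deployment job

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```

Because the verdict is just an exit code, the CI step above needs no custom actions: the pipeline fails the job, and the deployment is blocked, automatically.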
โš  Two risk surfaces โ€” test both: PyRIT tests security vulnerabilities (LLM01โ€“LLM10) AND responsible AI harms (bias, toxicity, manipulation) simultaneously. Traditional pen tests focus on only one. Most AI agents ship with neither tested. You wouldn't ship a web app without OWASP ZAP โ€” the same standard should apply to AI agents.
REFERENCE
Four AI Security KPIs โ€” Operational KQL
Four metrics to track weekly and report quarterly. The trend matters more than the absolute number โ€” you're looking for No-Auth count trending down and DLP hits stabilising as policies mature. For the strategic framing, see Strategy โ†’ Four AI Security KPIs.
โœ“ All KQL runs on data you already have Bookmark in Copilot Activity Monitoring workbook
1
Count of published agents with no authentication
The single most important agent-security metric. Any agent in this count is reachable by anyone โ€” including external users if published to a website. Trend this weekly. If it isn't going down, your Phase 2 enforcement isn't sticking.
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AIAgentId
| where AgentStatus == "Published"
| where UserAuthenticationType == "None"
| summarize RiskyAgents = count()
๐Ÿ’ก Pair this with a Sentinel Analytics Rule (see Playbook 01 Step 1) that alerts on any new no-auth agent โ€” the count metric is for trend, the rule is for incident response.
2
AI interactions citing Confidential+ labels
Sourced from Purview Activity Explorer. A stable trend means label enforcement is working. A rising trend means either more sensitive data is being grounded by agents (a governance problem) or sites that should carry higher labels don't have them (a label coverage gap).
CopilotActivity
| where TimeGenerated > ago(7d)
| where RecordType has_any ("CopilotInteraction", "AIPluginOperation")
| where isnotempty(SensitivityLabelEventData)
| extend LabelName = tostring(SensitivityLabelEventData.LabelName)
| where LabelName has_any ("Confidential", "Highly Confidential", "Restricted")
| summarize SensitiveAccessEvents = count() by bin(TimeGenerated, 1d)
๐Ÿ’ก Cross-reference with DSPM oversharing assessment โ€” a rising trend may indicate overshared sites are being grounded.
3
Blocked or warned responses from Purview DLP at Copilot location
Expect a spike when DLP policy first deploys (audit mode reveals true volume), then stabilisation. A second spike post-stabilisation means either a new data category is being surfaced, or makers are working around an existing policy. Purview portal โ†’ Data Loss Prevention โ†’ Reports โ†’ filter by Microsoft Copilot location.
๐Ÿ’ก Split this by policy: NIN/PII blocks tell a different story to OFFICIAL-SENSITIVE label blocks. Both belong in the KPI but should trend separately.
4
Tool invocations blocked by Defender real-time protection (ATG)
Counter-intuitive trend โ€” you want this to increase initially, because it means runtime protection is firing. A flat-at-zero trend usually means ATG isn't enabled or the tool surface is too narrow for blocks to occur, not that everything is safe.
AlertInfo
| where Category == "AI"
| where Status == "Resolved"
| where Title has_any ("blocked", "prevented")
| summarize BlockedToolActions = count() by bin(TimeGenerated, 1d)
| sort by TimeGenerated desc
๐Ÿ’ก Flat-at-zero is a posture finding, not a success. Investigate ATG configuration before celebrating.
Weekly โ€” All four KPIs in the Copilot Activity Monitoring workbook for the security team.
Monthly โ€” KPI trend slide for the AI Security Working Group.
Quarterly โ€” KPI trends as one section of the board-level reporting pack (see Strategy โ†’ Quarterly reporting pack).
PLAYBOOK 07
Brief Your Makers โ€” 30-Minute Security Awareness
Maker behaviour is the largest controllable factor in agent risk. Most agent security incidents trace back not to platform vulnerabilities but to maker decisions โ€” using maker credentials, sharing org-wide by default, granting connectors broad scope. A short, focused awareness session converts the platform controls into shared discipline. Run this once before Phase 2 governance rolls out, then quarterly for new makers.
โœ“ 30 minutes ยท live or recorded Audience: anyone publishing a Copilot Studio agent โš  Mandatory before maker is granted environment access
1
Maker credentials = your permissions, extended to every user
When you add a connector (SharePoint, Outlook, Teams) to a Classic agent and choose "Maker credentials," every user of that agent acts with your account's permissions. If you have admin access, every user has admin access via the agent. The fix: always choose end-user authentication on connectors, even if it takes longer to set up.
2
No authentication = anyone, including outside the company
Setting authentication to "None" doesn't mean "easier sign-in" โ€” it means no sign-in at all. If the agent is published to a website or shared org-wide, anyone who finds the URL can use it. Default to end-user auth. Only use no-auth if there's a documented business reason and the Approver has signed off.
3
Org-wide sharing is a security decision, not a convenience toggle
Sharing your agent with "Everyone in the organisation" exposes it to 100% of your colleagues โ€” including their devices, their permissions, and any compromise of their accounts. Start with a named group. Expand only when there's a reason. Org-wide sharing requires Approver sign-off.
4
Connector scope is permanent โ€” grant the minimum
When you grant a connector "Files.ReadWrite.All" or "Mail.Read", the agent has that scope forever, across every conversation, every user. Don't pick the broadest scope to "make sure it works." Pick the narrowest scope that does the job. If you need to broaden later, you can โ€” but you can't easily narrow once users depend on it.
5
Every agent needs an Owner, a Sponsor, and a documented purpose
If you build it and leave, the agent becomes ownerless. If no business stakeholder cares whether it exists, it's invisible to governance reviews. Before publishing, fill in: who maintains it (Owner), who's accountable for whether it should still exist (Sponsor), and one sentence on what it does. Future-you will thank present-you.
!
Self-audit checklist before publishing
Walk through these yourself before clicking Publish. If you can't tick all six, don't publish yet.
โ˜ End-user authentication is on (not "None", not "Maker")
โ˜ All connectors use end-user auth, not maker credentials
โ˜ Connector scopes are the narrowest that work
โ˜ Sharing is set to a named group, not "Everyone"
โ˜ Owner and Sponsor are filled in (different people for HIGH-tier agents)
โ˜ The agent description explains what it does in one sentence
?
Escalation paths
Makers should leave the session knowing where to ask. Adapt this list to your organisation:
"I need to share my agent more broadly" โ†’ IT Approver (named individual or team mailbox)
"My connector needs a broader scope" โ†’ IT Approver + DLP exception process
"I think my agent has been misused" โ†’ Security team (security mailbox or SOC)
"I'm leaving โ€” who takes my agent?" โ†’ Sponsor (hand off Owner role before last day)
"I need an external connector or new model" โ†’ Agent Lifecycle Board (monthly)
๐Ÿ’ก Run format: 30 minutes total โ€” 15 min content, 10 min checklist walkthrough using a sample agent, 5 min Q&A. Recording the session and making it a watch-once prerequisite for environment access scales this without burning facilitator time.
PLAYBOOK 08
Vet a Third-Party Agent Before Publish
External agents โ€” from the Microsoft Agent Store, ISV partners, or vendor-supplied apps โ€” should not reach your tenant without a security review. This is the equivalent of third-party software vetting for the agent era. Run it for every external agent before it appears in any environment, and treat any agent processing regulated data as a DPIA trigger.
โœ“ Run once per third-party agent Owner: IT Approver + Security review โš  Some checks need DPIA where citizen / regulated data is in scope
1
Verify who built it and their security posture
Before looking at the agent itself, validate the publisher. A poorly secured publisher is a supply chain risk even if their agent is well-designed.
โ˜ Publisher identity verified (Microsoft Partner status, registered company, contact details)
โ˜ Publisher security posture documented (SOC 2, ISO 27001, or equivalent)
โ˜ Vulnerability disclosure / responsible disclosure policy exists
โ˜ Publisher subject to GDPR / UK GDPR / equivalent data protection regulation
โ˜ Agent listed on Microsoft Agent Store with "Publisher Verified" badge (if applicable)
2
What data does it touch, and how broadly?
Document every connector, every OAuth scope, every data source the agent will access. The publisher's documentation may understate this โ€” verify against what the agent's manifest actually requests.
โ˜ Full connector list documented (with required scopes per connector)
โ˜ Each scope justified โ€” narrowest scope that works has been selected
โ˜ No broad-read scopes (Files.ReadWrite.All, Mail.Read, Directory.Read.All) without explicit justification
โ˜ Data residency confirmed โ€” does data leave the tenant geo? Cross-EUDB?
โ˜ Sub-processors documented if the agent uses third-party APIs
โ˜ Sensitive data categories the agent will touch are identified and DLP coverage verified
3
How does the agent authenticate, and what is its identity model?
External agents must use end-user authentication and the Modern identity model. Maker credentials are not acceptable for any third-party agent, regardless of context.
โ˜ Agent uses end-user authentication (not maker credentials, not no-auth)
โ˜ Agent is Modern (Entra Agent ID) โ€” Classic agents from external publishers are rejected outright
โ˜ Conditional Access policies cover the agent identity
โ˜ Access Package or equivalent time-bound permission model is in place
โ˜ Agent identity is registered in your tenant inventory (Phase 1)
!
Does processing trigger a DPIA or regulator notification?
Any external agent processing regulated data (PII, financial, health, citizen records, government-classified content) is a DPIA trigger. Don't skip this step โ€” it's the single most common compliance failure for third-party agent deployments.
โ˜ Data Protection Officer / DPIA team notified before deployment
โ˜ DPIA completed if regulated personal data is in scope
โ˜ EU AI Act Annex III classification reviewed (high-risk categories trigger documentation, transparency, human oversight obligations)
โ˜ Existing DPIAs covering Copilot Studio updated if the agent introduces a new processing purpose
โ˜ Regulator notification considered (ICO, EU AI Office, sector regulator) where applicable
5
Sign-off and lifecycle entry
Approval is not the end of the workflow โ€” every third-party agent enters the same ongoing governance cycle as internally built agents, plus a few extras.
โ˜ Agent Approver signs off in writing (with conditions documented if applicable)
โ˜ Risk tier assigned per the methodology on the Risk page
โ˜ Added to quarterly governance sweep from day one
โ˜ Included in red team rotation if HIGH-tier
โ˜ Publisher's vulnerability disclosure contact added to security team's vendor register
โ˜ Annual re-vetting scheduled โ€” publisher security posture and agent scope are reviewed every 12 months
๐Ÿ’ก Standing veto: any of Owner / Sponsor / Approver / DPO can block a third-party agent at any step. The default for external agents is "not approved" โ€” you have to actively decide to allow them in. This is the inverse of internally built agents, where the default is "permitted within environment policy."
STAY UPDATED
Get notified when Microsoft AI security changes
Monthly updates on new controls, GA announcements, and critical gaps โ€” direct to your inbox.
Subscribe to updates โ†’
aiagentsecurity.substack.com ยท Free ยท No spam