Security Copilot agent development planning guide

Important

Some information in this article relates to a prerelease product, which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

Microsoft Security Copilot (Security Copilot) is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes at machine speed and scale. Security Copilot agents use AI to complete routine tasks and gather insights with some level of autonomy so you can focus on high-value work. You can build agents that tailor and enhance the Security Copilot experience to meet your organization's unique business needs.

Planning is an important first step in designing and building your security agents. Effective Security Copilot agents require thoughtful planning, iterative development, and strong alignment with responsible AI principles. This guide structures your planning process so that you can create agents that understand your unique business context.

Whether you're designing end-to-end scenarios such as incident response, threat hunting, intelligence gathering, or posture management, the following core principles guide your process:

Define a clear objective

Start by articulating the agent's purpose:

  • Problem Statement: What specific security or operational challenge is the agent solving? For example, an agent that monitors endpoint alerts and automatically enriches them with threat intelligence to reduce analyst workload.
  • Target Users: Who will use the agent: security operations center (SOC) analysts, IT admins, compliance admins, or developers? See Security Copilot personas.
  • Success Metrics: Define measurable outcomes, such as reduced triage time, improved detection accuracy, or automated remediation.

For detailed guidance on choosing the experience that best fits your agent requirements, see Custom agents.
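
To make these choices concrete, it can help to capture them in a short design brief before you build. The following sketch is illustrative only; the fields and values are hypothetical and aren't part of any Security Copilot schema:

```yaml
# Hypothetical design brief for an alert-enrichment agent (not a Security Copilot schema)
agent: AlertEnrichmentAgent
problem_statement: >
  Endpoint alerts arrive faster than analysts can triage them, and
  enrichment with threat intelligence is manual and inconsistent.
target_users:
  - SOC analysts (primary)
  - IT admins (secondary)
success_metrics:
  - metric: mean time to triage
    baseline: 45 minutes
    target: 15 minutes
  - metric: alerts auto-enriched with threat intelligence
    target: 90%
```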

Identify required capabilities

Break down the capabilities your agent needs to succeed:

  • Cognitive: Classify alerts, summarize incidents, correlate signals.
  • Linguistic: Interpret user prompts, generate clear explanations.
  • Operational: Trigger workflows, call APIs, and retrieve logs or documents.

Use Security Copilot tools (skills) to define these capabilities. Tools can be reused and composed into agents through the YAML manifest. To upload a tool or a plugin, see Build agent manifest.
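
As a rough illustration, here's how a tool might be declared in a plugin-style YAML manifest. This sketch is modeled on the Security Copilot custom plugin format; treat the exact field names as assumptions and verify them against Build agent manifest:

```yaml
# Illustrative tool (skill) declaration, modeled on the custom plugin YAML format;
# verify field names against the current manifest documentation.
Descriptor:
  Name: AlertEnrichment
  DisplayName: Alert Enrichment
  Description: Enriches endpoint alerts with threat intelligence context.

SkillGroups:
  - Format: GPT
    Skills:
      - Name: SummarizeAndEnrichAlert
        DisplayName: Summarize and enrich alert
        Description: Summarizes an endpoint alert and correlates it with known threat intelligence.
        Inputs:
          - Name: AlertId
            Description: Identifier of the alert to enrich
            Required: true
```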

Understand your unique context and ecosystem

Security Copilot agents operate within a secure, extensible environment tailored to your organization's specific needs:

  • Data Sources: Identify the systems, alerts, or APIs the agent needs access to (for example, Microsoft Defender, Microsoft Sentinel, Microsoft Graph). For integrations with these sources, see Plugins.
  • Security and Compliance: Control user access to the Security Copilot platform through role-based access control (RBAC) and on-behalf-of authentication, and layer your security coverage with conditional access policies (see the illustrative sketch after this list). For more information, see RBAC.
  • State Management: Use memories to persist relevant information across sessions and to apply agent feedback.
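
As an example of how this context shows up in practice, a plugin-style manifest typically declares how a tool authenticates to its backend data source. The following sketch is illustrative; the Settings and Authorization fields are assumptions to check against the Plugins documentation:

```yaml
# Illustrative settings and authorization block for a plugin manifest;
# field names are assumptions to verify against the Plugins documentation.
Descriptor:
  Name: SentinelLogRetriever
  DisplayName: Sentinel Log Retriever
  Description: Retrieves logs from a Microsoft Sentinel workspace.
  Settings:
    - Name: TenantId
      Required: true
    - Name: WorkspaceName
      Required: true
  Authorization:
    Type: AADDelegated   # on-behalf-of: the tool acts with the signed-in user's permissions
```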

Prioritize ethical and responsible AI

Security Copilot agents must adhere to responsible AI principles across the following dimensions:

  1. Transparency:

    • Enable users to verify information sources.
    • Clearly communicate what data is stored in memory and how it's used.
    • Help users understand the agent's limitations and capabilities.
    • Show how decisions were made (for example, which tool was used and what data was retrieved); an illustrative trace follows this list.
  2. Appropriate Expectations:

    • Provide a clear rationale for actions taken by the agent.
    • Define the scope of the agent's reasoning abilities.
  3. Prevent Overreliance:

    • Ensure users can identify errors in the agent's output.
    • Encourage users to validate results for accuracy.
    • Give users the option to reject incorrect or misleading outputs.
  4. Security and Privacy:

    • Mask sensitive data and respect tenant boundaries.
    • Ensure agents operate in alignment with organizational policies while preserving data integrity.
    • Enforce least-privilege access to backend systems.
  5. Governance and Compliance:

    • Use naming conventions and disclaimers to ensure clarity and compliance.
  6. Feedback:

    • Enable users to provide feedback on the generated output.
    • Use feedback to identify issues and continuously improve agent performance and reliability.
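
To ground the transparency guidance in item 1, the following sketch shows what a decision trace surfaced to a user might contain. The structure and values are hypothetical, for illustration only:

```yaml
# Hypothetical decision trace an agent could surface for transparency;
# not an actual Security Copilot output format.
timestamp: 2025-06-01T14:32:05Z
step: Enrich endpoint alert
tool_used: SummarizeAndEnrichAlert
data_retrieved:
  - source: Microsoft Defender
    items: alert details, device timeline
  - source: threat intelligence feed
    items: two matching indicators
rationale: Alert file hash matched a known malware family, so severity was raised to High.
```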
