Security and compliance are essential for enterprise AI adoption. Enterprise customers expect AI applications to be secure by design and to comply with both internal policies and external regulations.
Failure to meet these expectations can result in:
- Rejection during security and compliance reviews.
- Data leaks or other harmful AI behavior.
- Loss of customer trust and reduced adoption.
To scale AI responsibly, organizations must build on a foundation of strong data security and governance. Because AI applications increasingly process sensitive enterprise data, developers should incorporate security and compliance controls early in the development lifecycle—during design and implementation.
Microsoft Purview provides APIs and services that enable Azure AI Foundry and other AI platforms to integrate enterprise-grade data security and governance controls into custom AI applications and agents, regardless of model or deployment platform.
This article explains how developers can implement Microsoft Purview scenarios in Azure AI Foundry or in custom AI apps using the Microsoft Purview APIs or Azure AI Foundry built-in settings. The following table summarizes the Microsoft Purview scenarios and which options developers should choose for each scenario.
| Scenario | API | Azure AI Foundry setting |
|---|---|---|
| Govern data used at runtime by AI apps | Supported | Supported |
| Protect against data leaks and insider risks | Supported | Not supported |
| Prevent data oversharing | Supported | Not supported |
## Govern data used at runtime by AI apps
The following Microsoft Purview features can be used to provide data governance for AI apps:
- Real-time analytics for sensitive data usage, risky behaviors, and unethical AI interactions.
- Auditing for traceability.
- Communication Compliance to detect harmful or unauthorized content.
- Data Lifecycle Management and eDiscovery for legal and regulatory needs.
You can integrate these Microsoft Purview capabilities into your Azure AI Foundry or custom AI app by using either of the following options:

- Native integration with Azure AI Foundry (recommended): Microsoft Purview settings for audit and related governance outcomes are embedded directly in Azure AI Foundry. Azure admins can turn on the setting for any given Azure subscription. The setting enables data from all Azure AI-based applications running in that subscription to be sent to Microsoft Purview to support governance and compliance outcomes. For more information on turning on the setting, see Enable Data Security for Azure AI with Microsoft Purview.
- Microsoft Purview APIs: Foundry developers can use the Microsoft Purview APIs to programmatically send prompt and response data from their AI apps to Microsoft Purview.
For the APIs to use for API-based integration, see Use Microsoft Graph Purview APIs.
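As a rough illustration of API-based integration, the following Python sketch submits a user prompt to Microsoft Purview for auditing and policy evaluation. Treat it as a sketch only: the endpoint path, `@odata.type` values, and payload shape are assumptions based on the Microsoft Graph (beta) data security and governance APIs, so verify them against the current Microsoft Graph reference before relying on them.

```python
# Hypothetical sketch: submit a user prompt to Microsoft Purview for auditing
# and policy evaluation. The endpoint path, @odata.type values, and payload
# shape are assumptions based on the Microsoft Graph (beta) data security and
# governance APIs; verify them against the current Graph reference.
import uuid

import requests

GRAPH_BASE = "https://graph.microsoft.com/beta"

def process_prompt(access_token: str, prompt_text: str, conversation_id: str) -> dict:
    """Send one prompt to Purview so it can be audited and evaluated."""
    body = {
        "contentToProcess": {
            "contentEntries": [
                {
                    "@odata.type": "microsoft.graph.processConversationMetadata",
                    "identifier": str(uuid.uuid4()),
                    "content": {
                        "@odata.type": "microsoft.graph.textContent",
                        "data": prompt_text,
                    },
                    "name": "User prompt",
                    "correlationId": conversation_id,  # ties prompt/response pairs together
                }
            ],
            "activityMetadata": {"activity": "uploadText"},
            "integratedAppMetadata": {"name": "MyCustomAIApp", "version": "1.0"},
        }
    }
    resp = requests.post(
        f"{GRAPH_BASE}/me/dataSecurityAndGovernance/processContent",
        headers={"Authorization": f"Bearer {access_token}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    # The response is expected to carry any policy actions the app must apply.
    return resp.json()
```

Sending the model's response back would follow the same pattern, with the response text as the content entry and the same correlation ID so Purview can pair the prompt with its response.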
## Protect against data leaks and insider risks
Currently, Foundry developers can enforce Microsoft Purview Data Loss Prevention (DLP) policies in their applications only through the Microsoft Purview APIs. These APIs let applications understand and enforce AI app behavior according to the policies set in Microsoft Purview. For example, you can protect sensitive information shared with Large Language Models (LLMs), or control sensitive information shared with risky users in your AI apps.
Use the Microsoft Purview APIs for API-based integration.
Important: Ensure that your Azure subscription is configured to support DLP policies before running your code. The PowerShell code to set up DLP policies on your Azure subscription is included in Example 4 in New-DlpComplianceRule.
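To show how an app might honor a DLP verdict, here's a minimal, hypothetical enforcement flow. It reuses the `process_prompt` sketch from the previous section; the `policyActions` field and `restrictAccess` action name are assumptions about the response shape, not confirmed API contracts.

```python
# Hypothetical enforcement flow: evaluate the prompt against Purview DLP
# policies before it reaches the LLM, then honor any blocking verdict.
def send_to_llm_with_dlp(access_token: str, prompt: str, conversation_id: str) -> str:
    # process_prompt is the sketch from the previous section.
    result = process_prompt(access_token, prompt, conversation_id)

    # "policyActions" and "restrictAccess" are assumptions about the response
    # shape; check the Microsoft Graph reference for the actual values.
    actions = result.get("policyActions", [])
    if any(a.get("action") == "restrictAccess" for a in actions):
        return ("This prompt was blocked by your organization's "
                "data loss prevention policy.")

    return call_llm(prompt)

def call_llm(prompt: str) -> str:
    """Placeholder for your existing model call (for example, an Azure AI
    Foundry chat completion)."""
    raise NotImplementedError
```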
Azure AI code sample:
## Prevent data oversharing
Oversharing is restricted when sensitivity labels are applied to data. Currently, Foundry developers can use only the Microsoft Purview APIs to honor sensitivity labels applied to the data that LLMs use to generate responses. Using these APIs ensures that AI-generated responses respect access controls, prevent data oversharing, and limit users to content they're authorized to view, just as they would be outside the AI app environment.
You can use the Microsoft Purview APIs when building this scenario.
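As a conceptual sketch only: in a retrieval-augmented app, the label check happens before retrieved content reaches the model, so the generated answer can't draw on documents the user couldn't open outside the AI app. The `get_label` and `user_has_rights_to` helpers below are hypothetical stand-ins for calls to the sensitivity-label APIs, not real Purview functions.

```python
# Hypothetical sketch of label-aware retrieval: drop documents the user isn't
# authorized to see before they are passed to the LLM as grounding context.
from dataclasses import dataclass

@dataclass
class RetrievedDoc:
    doc_id: str
    text: str

# In-memory stand-in for a label lookup; a real app would read each document's
# sensitivity label through the Microsoft Purview / Microsoft Graph APIs.
SAMPLE_LABELS = {"doc-1": "General", "doc-2": "Highly Confidential"}

def get_label(doc_id: str) -> str:
    return SAMPLE_LABELS.get(doc_id, "General")

def user_has_rights_to(user_id: str, label: str) -> bool:
    # Placeholder policy for this sketch. A real app would evaluate the
    # user's usage rights for the label via the APIs.
    return label != "Highly Confidential"

def filter_by_sensitivity(user_id: str, docs: list[RetrievedDoc]) -> list[RetrievedDoc]:
    """Keep only the retrieved documents the user is authorized to view."""
    return [d for d in docs if user_has_rights_to(user_id, get_label(d.doc_id))]
```

Filtering at retrieval time, rather than post-processing the model's output, means labeled content never enters the prompt at all, which is the simpler and safer place to enforce access controls.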