Detect and Block Shadow AI With Microsoft Defender for Cloud Apps

Your employees are using AI tools you haven’t approved. The data is clear: 77% of employees have pasted corporate information into AI services, and 82% used personal accounts rather than enterprise-managed tools. Among executives and senior managers, shadow AI usage hits 93%.

The legal exposure is equally clear. The Heppner ruling in February 2026 held that documents a defendant generated using consumer AI were not protected by attorney-client privilege—because the tool’s privacy policy permitted data collection and model training. If your employees are using consumer AI for work, you may be waiving privilege without knowing it.

Blocking AI tools entirely isn’t the answer. As Microsoft’s own security team notes, banning tools “doesn’t stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap.” The solution is detection, governance, and a compliant alternative.

For the many enterprises already using Microsoft Defender for Cloud Apps (MDCA), the tools to detect and govern shadow AI are already in place. Here’s how to use them.

Why Microsoft Defender for Cloud Apps Is Your Shadow AI Solution

Microsoft Defender for Cloud Apps is a CASB (cloud access security broker) that sits between your users and the cloud applications they access. It can discover, analyze, and govern cloud app usage across your environment—including the AI tools employees are adopting without IT approval.

MDCA is particularly well-suited for shadow AI governance because:

  • It’s already deployed in most Microsoft-centric enterprises
  • It can discover over 31,000 cloud apps, including hundreds of AI tools
  • It provides risk scores based on 90+ attributes (security, compliance, legal)
  • It enables real-time session controls through Conditional Access App Control
  • It integrates with Microsoft Purview for compliance and data loss prevention

As we covered in our Shadow AI Containment Plan, the first step is discovery. MDCA makes that step achievable.

Step 1: Discover Which AI Tools Are Actually in Use

You can’t govern what you can’t see. MDCA’s Cloud Discovery capabilities analyze your traffic logs against a catalog of over 31,000 cloud apps, showing you exactly which AI tools employees are using—and at what volume.

How to set it up:

If you’re using Microsoft Defender for Endpoint, Cloud Discovery is already built in. Logs flow automatically from Windows 10/11 and Windows Server devices enrolled in Defender for Endpoint. If you’re not, you can configure log collection from firewalls and proxies.

Once data is flowing, navigate to Cloud Discovery > Dashboard in the Defender portal. You’ll see a ranked list of discovered apps, including:

  • ChatGPT, Claude, Gemini, and other generative AI tools
  • AI-powered productivity assistants
  • Code completion tools (GitHub Copilot, etc.)
  • Image generation platforms

What to look for:

Filter the app list by category. Look for “AI” or “Machine Learning” tags. Pay attention to:

  • Users: How many employees are using each tool?
  • Traffic: How much data is flowing to each tool?
  • Risk score: MDCA assigns each app a risk score based on security, compliance, and legal attributes. Consumer AI tools often score poorly.

As Microsoft’s documentation notes, “Cloud Discovery analyzes your traffic logs against the Microsoft Defender for Cloud Apps catalog of over 31,000 cloud apps. The apps are ranked and scored based on more than 90 attributes to give you ongoing visibility into cloud use.”
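Once you export the discovered-apps list (the portal supports exporting the ranked list), you can triage it programmatically. Here is a minimal sketch; the field names (`name`, `category`, `riskScore`, `users`, `trafficMB`) are illustrative, not the exact export schema, so adapt them to your actual export:

```python
# Triage a Cloud Discovery export: surface AI-category apps by risk and traffic.
# Field names below are illustrative -- check your export's columns before adapting.

def triage_ai_apps(apps, max_risk_score=6):
    """Return AI-category apps at or below a risk-score ceiling, sorted by
    traffic (heaviest first). MDCA scores apps 1-10, where higher is safer,
    so a low score flags a riskier app."""
    ai_apps = [a for a in apps if "AI" in a["category"]]
    risky = [a for a in ai_apps if a["riskScore"] <= max_risk_score]
    return sorted(risky, key=lambda a: a["trafficMB"], reverse=True)

sample_export = [
    {"name": "ChatGPT", "category": "Generative AI", "riskScore": 4, "users": 312, "trafficMB": 540},
    {"name": "GitHub Copilot", "category": "Generative AI", "riskScore": 8, "users": 87, "trafficMB": 120},
    {"name": "Dropbox", "category": "Cloud storage", "riskScore": 7, "users": 45, "trafficMB": 900},
]

for app in triage_ai_apps(sample_export):
    print(f'{app["name"]}: risk {app["riskScore"]}, {app["users"]} users, {app["trafficMB"]} MB')
```

The same triage logic works whether the export comes from Defender for Endpoint telemetry or from uploaded firewall logs.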

Step 2: Assess Risk by App Category

Not all AI tools create the same exposure. MDCA’s app catalog includes detailed risk assessments that help you classify tools into the three-tier framework we outlined in our Shadow AI Containment Plan.

High-risk indicators to check:

  • Data privacy: Does the app reserve the right to use customer data for model training? Consumer AI tools almost always do.
  • Security practices: Does the app support SSO? Is data encrypted at rest and in transit?
  • Compliance certifications: Is the app SOC 2, ISO 27001, or HIPAA compliant?
  • Jurisdiction: Where is data stored? Can it be subpoenaed by foreign governments?

Tier classification in MDCA:

You can create custom app tags to classify AI tools into risk tiers:

  • High Risk: Consumer AI tools with poor privacy practices, no enterprise terms, and data residency outside your control. Tag these as “Shadow AI – High Risk.”
  • Medium Risk: Enterprise versions of AI tools with data segregation and contractual confidentiality, but limited audit trails. Tag these as “Approved with Conditions.”
  • Low Risk: Fully governed tools with audit trails, human oversight logging, and documented compliance. Tag these as “Approved.”
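The tiering above reduces to a small rule set, which can be useful to codify so tagging decisions stay consistent across reviewers. A sketch, with attribute names that are illustrative rather than MDCA's exact catalog schema:

```python
def classify_tier(app):
    """Map catalog attributes to the three tags described above.
    Attribute names are illustrative, not MDCA's exact schema."""
    # Consumer tools that train on customer data, or lack enterprise terms,
    # go straight to the high-risk tag.
    if app.get("trains_on_customer_data") or not app.get("enterprise_terms"):
        return "Shadow AI - High Risk"
    # Enterprise tools without full audit trails and documented compliance
    # are approved only conditionally.
    if not (app.get("audit_trail") and app.get("documented_compliance")):
        return "Approved with Conditions"
    return "Approved"

consumer_tool = {"trains_on_customer_data": True, "enterprise_terms": False}
enterprise_tool = {"trains_on_customer_data": False, "enterprise_terms": True,
                   "audit_trail": False, "documented_compliance": True}
governed_tool = {"trains_on_customer_data": False, "enterprise_terms": True,
                 "audit_trail": True, "documented_compliance": True}

print(classify_tier(consumer_tool))    # -> Shadow AI - High Risk
print(classify_tier(enterprise_tool)) # -> Approved with Conditions
print(classify_tier(governed_tool))   # -> Approved
```

Treat the rules as a starting point; your legal and compliance teams will likely add criteria such as jurisdiction.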

Step 3: Set Policies to Monitor and Alert

Once you know what’s in use, you need to monitor for risky behavior. MDCA’s policy engine lets you create alerts based on specific activities.

Key policies for shadow AI governance:

Unsanctioned app detection: Create a policy that alerts whenever a high-risk AI tool is accessed. Set the severity based on risk level.

Data exfiltration monitoring: Create policies that monitor for large uploads to AI tools. If an employee pastes an entire contract or source code file into ChatGPT, you want to know.

Privileged user monitoring: Executives and senior managers use shadow AI at a 93% rate, and they have access to the most sensitive data. Create policies specifically for privileged accounts.

How to create a policy:

In the Defender portal, navigate to Policies > Policy management > Create policy. Select “Activity policy.” Set conditions based on:

  • App category (AI/ML)
  • Risk score (below a threshold you define)
  • Activity type (upload, download, paste)
  • User or group (target executives for stricter monitoring)

As Microsoft explains, “Policies allow you to monitor for risky behavior in cloud apps, generate alerts, and take automated remediation actions.”
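Conceptually, an activity policy is a conjunction of the conditions listed above: every condition must hold for the policy to fire. This local sketch mirrors that logic for illustration only; it is not the MDCA API, and the event fields are hypothetical:

```python
def matches_policy(event, *, categories, max_risk, activities, watched_groups=None):
    """Return True only if an activity event meets every policy condition:
    app category, risk-score ceiling, activity type, and (optionally)
    membership in a monitored user group."""
    if event["app_category"] not in categories:
        return False
    if event["app_risk_score"] > max_risk:  # low MDCA scores mean riskier apps
        return False
    if event["activity"] not in activities:
        return False
    if watched_groups and event["user_group"] not in watched_groups:
        return False
    return True

# An executive pasting content into a low-scoring generative AI app:
event = {"app_category": "Generative AI", "app_risk_score": 4,
         "activity": "upload", "user_group": "Executives"}

print(matches_policy(event,
                     categories={"Generative AI"},
                     max_risk=6,
                     activities={"upload", "paste"},
                     watched_groups={"Executives"}))  # -> True
```

Keeping the conditions this explicit also makes it easier to document, for auditors, exactly what each alert means.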

Step 4: Block High-Risk Tools and Guide Users to Approved Alternatives

Discovery and monitoring are essential, but the goal is behavior change. Employees use shadow AI because it helps them work faster. The solution isn’t just blocking—it’s providing a better path.

Use MDCA’s access controls:

With Conditional Access App Control, you can block access to high-risk AI tools in real time. When an employee tries to access an unsanctioned AI app, they can be:

  • Blocked entirely
  • Allowed but monitored (read-only mode)
  • Redirected to an approved alternative

Set up a block policy:

In the Defender portal, create a new policy. Select “Conditional Access App Control.” Choose the apps you’ve tagged as high-risk. Set the action to “Block.”

But don’t stop there:

Blocking without an alternative drives employees to find workarounds. As we covered in our Shadow AI Containment Plan, the single most effective way to reduce shadow AI is to make the compliant path the easiest path.

  • Procure enterprise licenses for approved AI tools
  • Make them accessible via SSO
  • Communicate clearly: “Use this, not that” with plain-language explanations

When an employee is blocked from consumer ChatGPT, the block message should include a link to your approved enterprise alternative and instructions for getting access.

Step 5: Investigate Incidents and Connect to Compliance

MDCA doesn’t just alert—it provides investigation tools and integration with Microsoft Purview for compliance workflows.

Investigate alerts:

When a policy triggers, you can drill into:

  • Which user accessed which app
  • What data was uploaded or downloaded
  • Which device was used
  • Whether the activity met your policy thresholds
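When many alerts fire, investigators usually want a per-user, per-app exposure summary before drilling into individual events. A sketch of that roll-up, using hypothetical alert fields rather than the exact MDCA alert schema:

```python
from collections import defaultdict

def summarize_alerts(alerts):
    """Group triggered alerts by (user, app) and total the uploaded data,
    giving a quick exposure summary per user and tool.
    Field names are illustrative, not the exact MDCA alert schema."""
    totals = defaultdict(lambda: {"events": 0, "upload_mb": 0})
    for a in alerts:
        key = (a["user"], a["app"])
        totals[key]["events"] += 1
        totals[key]["upload_mb"] += a["upload_mb"]
    return dict(totals)

alerts = [
    {"user": "jdoe", "app": "ChatGPT", "upload_mb": 12},
    {"user": "jdoe", "app": "ChatGPT", "upload_mb": 30},
    {"user": "asmith", "app": "Gemini", "upload_mb": 5},
]

for (user, app), t in summarize_alerts(alerts).items():
    print(f"{user} / {app}: {t['events']} events, {t['upload_mb']} MB uploaded")
```

A summary like this also becomes part of the defensible record: who uploaded what, where, and how much.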

Connect to compliance:

Integration with Microsoft Purview allows you to:

  • Preserve evidence for investigations
  • Apply retention policies to AI-generated content
  • Run eDiscovery searches across AI tool data
  • Ensure records are maintained for regulatory requirements

As the Heppner ruling made clear, what employees put into AI tools can become discoverable. You need the ability to find it when a lawsuit or regulatory inquiry arrives.

What This Looks Like in Practice

A healthcare system using Microsoft Defender recently discovered that clinicians were using a consumer AI transcription tool to draft patient notes. The tool’s privacy policy permitted data use for model training—meaning patient data was being fed into a public AI model without authorization.

Using MDCA, the compliance team:

  1. Discovered the tool was in use by 40+ clinicians, with thousands of document uploads
  2. Assessed the tool as high-risk based on privacy practices and lack of HIPAA compliance
  3. Created alerts for any future access attempts
  4. Blocked the tool through Conditional Access App Control
  5. Deployed an approved, HIPAA-compliant alternative with SSO access
  6. Communicated the change to clinicians, explaining why the new tool was safer

Within two weeks, shadow AI usage dropped by 85%, and the organization had a defensible record of its response.

The Bottom Line

Shadow AI isn’t going away. Your employees will keep using tools that make them more productive. The question is whether you can see it, govern it, and guide it.

Microsoft Defender for Cloud Apps gives you the visibility and controls you need—if you configure it for AI governance. The tools are already in place. The missing piece is the policy framework that translates technical capabilities into compliance outcomes.

As we’ve covered throughout our Shadow AI and Data Handling coverage, the organizations that succeed in this environment won’t be the ones that ban AI. They’ll be the ones that detect it, govern it, and make the compliant path the easiest path.

Microsoft Defender can help you get there. The question is whether your compliance team is ready to use it.


For more on related topics, see our coverage of Shadow AI Containment, AI Data Handling Compliance, and AI Vendor Evaluation.