Shadow AI Is Already in Your Organization. Here’s the 5-Step Containment Plan.

The February 2026 ruling in United States v. Heppner sent shockwaves through the legal and compliance community. A federal judge held that documents a criminal defendant generated using Anthropic’s Claude—and later shared with his attorneys—were not protected by attorney-client privilege or the work product doctrine.

But the ruling itself isn’t the story for compliance teams. The story is what it reveals about a gap most organizations still haven’t closed: 77% of employees have already pasted corporate information into AI tools, and 82% of them used personal accounts. The Heppner decision didn’t create this risk. It made the risk visible.

Shadow AI—the use of AI tools without formal IT or compliance approval—is no longer a future concern. It’s a present reality that creates concrete legal exposure: privilege waiver, privacy law violations, regulatory liability, and trade secret disclosure. And as the software testing community has documented, when official approval processes move slowly, teams will route around them. Blocking doesn’t work. Containment does.

Here’s your five-step plan to bring shadow AI under control this quarter.

Step 1: Discover What’s Actually Happening

You can’t secure what you can’t see. As Microsoft’s security team notes, in a global enterprise, banning AI tools “doesn’t stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap.”

What to do this month:

  • Deploy discovery tools. Microsoft Defender for Cloud Apps (MDCA) can discover and govern your enterprise AI footprint by categorizing tools using risk scores and compliance attributes. Similarly, AI Security Posture Management (AI-SPM) tools provide continuous discovery across clouds and teams, identifying models, endpoints, agents, and AI-powered APIs that no one formally approved.
  • Survey your employees (anonymously). Forrester’s 2024 research found that 60% of employees use their own AI tools at work, often without permission. Your employees are no exception. An anonymous survey can reveal which tools they’re actually using and why—information your monitoring tools won’t capture.
  • Review procurement records. Shadow AI often enters through departmental credit card purchases. Run a report for subscriptions to ChatGPT Plus, Claude Pro, Gemini Advanced, and similar tools. If you find them, you’ve found shadow deployments (a minimal scan sketch closes out this step).

The goal: Replace speculation with data. Before you can govern shadow AI, you need to know where it lives.
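
The procurement review lends itself to a simple automated pass. The sketch below assumes a hypothetical CSV export named procurement_export.csv with “vendor”, “amount”, and “department” columns; the file name, column names, and keyword watchlist are all illustrative, so adjust them to match your own finance system.

```python
import csv

# Hypothetical watchlist of consumer AI subscriptions that tend to show up
# on departmental cards; extend it with whatever your discovery surfaces.
AI_VENDOR_KEYWORDS = [
    "openai", "chatgpt", "anthropic", "claude",
    "gemini", "perplexity", "midjourney",
]

def find_shadow_ai_charges(path: str) -> list[dict]:
    """Return procurement rows whose vendor name matches an AI keyword."""
    hits = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            vendor = (row.get("vendor") or "").lower()
            if any(kw in vendor for kw in AI_VENDOR_KEYWORDS):
                hits.append(row)
    return hits

if __name__ == "__main__":
    # Assumes an export named procurement_export.csv -- adjust as needed.
    for row in find_shadow_ai_charges("procurement_export.csv"):
        print(row["vendor"], row.get("amount", ""), row.get("department", ""))
```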

Step 2: Classify AI Tools by Risk Tier

Not all AI tools create the same exposure. The Heppner ruling turned on a critical distinction: the defendant used the consumer version of Claude, whose privacy policy explicitly permits data collection, model training on user inputs, and disclosure to third parties including government authorities. The court held that this destroyed any reasonable expectation of confidentiality.

Enterprise AI tools are different. They typically provide data segregation, contractual confidentiality protections, prohibitions on using input data for training, and restrictions on third-party disclosure. As DLA Piper’s analysis emphasizes, “enterprise AI tools often offer technical measures and contractual obligations that segregate input and output data from third parties and enable companies to maintain privilege and confidentiality.”

Build a three-tier classification:

Tier | Definition | Examples | Rules
High Risk | Consumer tools where provider terms permit data collection, training, or disclosure | ChatGPT Free, Claude Free/Pro, Gemini consumer | No confidential data ever
Medium Risk | Enterprise tools with data segregation and contractual confidentiality, but limited audit trails | Enterprise versions of major LLMs with DPAs | Approved for specific use cases with clear data handling rules
Low Risk | Fully governed tools with audit trails, human oversight logging, and documented compliance | Enterprise governance platforms, compliance-specific AI tools | Preferred path; actively encourage usage
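
To keep the tiering operational rather than aspirational, you can encode it as data that other controls (proxies, DLP rules, chat plugins) consult. This is a minimal sketch; the tool identifiers and the check_tool helper are illustrative placeholders, not any vendor’s API. Note the design choice: an unclassified tool defaults to High Risk, so a newly discovered shadow tool is untrusted until someone reviews it.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # consumer terms permit collection, training, disclosure
    MEDIUM = "medium"  # enterprise terms, but limited audit trails
    LOW = "low"        # fully governed and audited; the preferred path

# Illustrative registry -- populate it from your Step 1 discovery results.
TOOL_REGISTRY = {
    "chatgpt-free": RiskTier.HIGH,
    "claude-pro": RiskTier.HIGH,
    "gemini-consumer": RiskTier.HIGH,
    "llm-enterprise-dpa": RiskTier.MEDIUM,
    "governed-compliance-platform": RiskTier.LOW,
}

def check_tool(tool_id: str) -> RiskTier:
    """Look up a tool's tier; unclassified tools default to HIGH,
    so new shadow tools stay untrusted until reviewed."""
    return TOOL_REGISTRY.get(tool_id, RiskTier.HIGH)
```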

A critical nuance: Even enterprise tools require scrutiny. As the court noted in Warner, a separate decision issued the same week, AI programs “are tools, not persons, even if they may have administrators somewhere in the background.” But the distinction between consumer and enterprise terms remains the strongest structural defense currently available.

Step 3: Build the Compliant Path

The single most effective way to reduce shadow AI is to make the compliant path the easiest path. As Forrester’s AEGIS framework emphasizes, “security fails when it creates more friction than the risk it seeks to mitigate.” When official processes are slow, employees will route around them.

What to do this quarter:

  • Procure enterprise licenses for the tools your employees actually want to use. If your teams are using consumer ChatGPT because it’s free and accessible, they’ll keep using it until you give them a better option.
  • Make approved tools easy to access. Single sign-on integration. No procurement friction. Clear communication: “Use this, not that” with plain-language explanations of why the approved tool is safer.
  • Create “safe harbors.” As Microsoft’s security guidance notes, providing a sanctioned, enterprise-grade tool “offers a superior tool that naturally cuts down the use of Shadow AI.” Employees aren’t trying to bypass security—they’re trying to do their jobs. Give them a way to do both.
  • Train on context minimization. Teach employees to redact specifics before interacting with any AI model. This reduces risk regardless of which tool they use (a minimal redaction sketch follows this list).
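
Context minimization can start as something as small as a pre-submission scrubber. The patterns below are illustrative only: they catch obvious formats (US-style SSNs, email addresses, 16-digit card numbers) and would sit alongside, not replace, a real DLP control.

```python
import re

# Illustrative patterns only -- these catch obvious formats,
# not every identifier your policy prohibits.
REDACTION_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def minimize(text: str) -> str:
    """Replace obvious identifiers with labeled placeholders before
    the text goes anywhere near an AI prompt."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(minimize("Reach Jane at jane.doe@example.com, SSN 123-45-6789."))
# -> Reach Jane at [EMAIL REDACTED], SSN [SSN REDACTED].
```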

Step 4: Set and Communicate “Do Not Feed” Rules

The Heppner ruling created a clear bright line: what you put into a consumer AI tool can be used against you. As DLA Piper warns, “if a client inputs information learned from privileged communications with counsel into a public AI tool, the client may waive any applicable privilege.”

Define three categories of prohibited data (a machine-readable sketch follows the list):

  1. Never enter any AI tool: PII, health data, trade secrets, attorney communications, pending deal information, non-public financial results
  2. Only enter approved enterprise instances: Internal documents, customer information, financial data, strategic plans
  3. Document before entering: When attorney direction is required for AI use in litigation preparation (more on this below)
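
These categories, combined with the Step 2 tiers, reduce each submission decision to a small rule table. A minimal sketch, assuming the tier labels from Step 2; the requirement that counsel-directed use also run on a low-risk tool is an illustrative policy choice, not a holding from Heppner.

```python
from enum import Enum

class DataCategory(Enum):
    PROHIBITED = 1        # never enters any AI tool
    ENTERPRISE_ONLY = 2   # approved enterprise instances only
    COUNSEL_DIRECTED = 3  # requires documented attorney direction

def may_submit(category: DataCategory, tool_tier: str,
               counsel_documented: bool = False) -> bool:
    """Apply the 'do not feed' rules to a single submission decision.
    tool_tier uses the Step 2 labels: 'high', 'medium', or 'low'."""
    if category is DataCategory.PROHIBITED:
        return False
    if category is DataCategory.ENTERPRISE_ONLY:
        return tool_tier in ("medium", "low")
    # Illustrative policy choice: counsel-directed use must also be
    # documented and run on a fully governed (low-risk) tool.
    return counsel_documented and tool_tier == "low"
```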

Communicate these rules in short, role-specific training. The European Commission’s May 2025 Q&A on AI literacy confirms that “simply referring staff members to instructions that accompany AI systems will generally not be considered sufficient” to meet regulatory obligations. Training must be frequent, specific, and documented.

And critically: training records are governance records. As we covered in our AI Literacy Compliance Requirements article, when enforcement arrives—or when a civil claim does—the audit trail will matter.

Step 5: Connect Shadow AI to Incident Response

The Heppner defendant didn’t realize he was creating discoverable records until it was too late. Your employees won’t either.

Build an escalation pathway now:

  • Create a clear reporting process for when an employee realizes they put confidential data into an unapproved AI tool (a minimal incident record sketch follows this list). The process should include:
    • Who to notify (legal, compliance, IT security)
    • What to document (what was entered, when, which tool)
    • How to preserve evidence without creating additional risk
  • Document attorney direction for litigation-related AI use. As DLA Piper notes, “if appropriate AI tools are to be used as part of litigation preparation, ensure that the use is at counsel’s specific direction and properly documented, which may help support a claim for protection.” The Heppner court explicitly left open whether the analysis would change if counsel had directed the AI use.
  • Treat consumer AI materials as potentially discoverable. Any document generated using a consumer AI tool should be handled with the assumption that it could be produced in litigation. Plan accordingly.
  • Update your litigation hold notices. As The National Law Review observes, “litigation hold notices (and, likely, privilege logs) must now address AI use, including what’s allowed, what isn’t, and how it’s documented.”
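
The “what to document” checklist maps directly onto a minimal incident record. The field names here are illustrative; the one substantive rule worth copying is to describe the exposed data rather than re-pasting it, which preserves evidence without repeating the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ShadowAIIncident:
    """Minimal record mirroring the 'what to document' checklist."""
    reporter: str
    tool_used: str             # which tool, and which risk tier
    data_description: str      # what was entered: describe, don't re-paste
    entered_at: datetime       # when the data went in
    notified: list[str] = field(default_factory=list)  # legal, compliance, IT security
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    preservation_notes: str = ""  # how evidence was preserved

# Hypothetical usage:
incident = ShadowAIIncident(
    reporter="jdoe",
    tool_used="chatgpt-free (high risk)",
    data_description="Draft term sheet for pending acquisition",
    entered_at=datetime(2026, 3, 2, 14, 30, tzinfo=timezone.utc),
    notified=["legal", "it-security"],
)
```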

The Bottom Line: Containment Beats Prohibition

The Heppner and Warner decisions, issued the same week, reached opposite conclusions about whether AI-generated materials are protected from discovery. But both cases share a common lesson: organizations cannot rely on courts to sort out their AI governance after the fact.

The Heppner court emphasized that “AI’s novelty does not mean that its use is not subject to longstanding legal principles.” Those principles—confidentiality, attorney direction, reasonable expectations of privacy—apply whether your employees know them or not.

The 77% statistic from our AI Data Handling Compliance article isn’t going down. The 93% of executives using shadow AI aren’t stopping. The question isn’t whether your organization will face a shadow AI incident. It’s whether you’ll have the governance infrastructure to respond when you do.

Organizations that act now—discovering shadow AI, classifying tools by risk, building compliant paths, setting clear rules, and connecting to incident response—will be the ones with defensible positions when the next ruling arrives.

The Heppner ruling didn’t create the risk. It made the risk visible. The question is whether your compliance team is ready to act on what it reveals.


For a deeper dive on related topics, see our coverage of AI Data Handling Compliance, AI Literacy Requirements, and Real-Time AI Governance.