Your Employees Are Already Putting Privileged Information Into AI. Here’s What Compliance Teams Need to Do About It.

AI data handling compliance just became a front-page legal issue. On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York ruled that documents a defendant generated using the consumer version of Claude — Anthropic’s generative AI platform — were not protected by attorney-client privilege or the work product doctrine. It was the first ruling of its kind in the country.

But the ruling itself isn’t the story for compliance teams. The story is what it reveals about a gap most organizations still haven’t closed: the distance between how fast employees are adopting AI tools and how slowly companies are governing what goes into them.

The AI Data Handling Compliance Gap Is Already at Scale

The numbers are hard to ignore. According to LayerX Security’s 2025 report, 77% of employees have pasted corporate information into AI and LLM services. Of those, 82% used personal accounts rather than enterprise-managed tools. A separate TELUS Digital survey found that 57% of enterprise employees had entered confidential information into public AI platforms, with 68% accessing those tools through personal accounts.

What kind of data? Internal company documents. Financial data. Customer information. Source code. Legal documents and contracts. Meeting transcripts containing strategy discussions. In short, exactly the categories of information that trigger regulatory obligations, privilege concerns, and competitive exposure.

Meanwhile, a Cybernews survey found that 59% of U.S. employees use shadow AI — tools that haven’t been formally approved by their employer. Among executives and senior managers, that number hit 93%. IBM’s research found that shadow-AI-related data breaches cost an average of $670,000 more than breaches involving sanctioned AI tools.

These employees aren’t malicious. They’re trying to work faster. However, the legal and regulatory consequences of that behavior are now crystallizing.

What the Heppner Ruling Makes Concrete

In United States v. Heppner, the defendant used the consumer version of Claude to prepare legal strategy documents after learning he was the target of a federal investigation. He later shared those documents with his attorneys. When the FBI seized the materials while executing a search warrant, Heppner claimed attorney-client privilege.

Judge Rakoff rejected that claim on every element. First, the communications weren’t between a client and an attorney — Claude is not a lawyer. Second, the communications weren’t confidential — Claude’s privacy policy explicitly reserves the right to collect user inputs, use them for training, and disclose them to third parties including government authorities. Third, Heppner wasn’t seeking legal advice from Claude, even though he intended to share the outputs with counsel.

The work product doctrine failed too. Heppner created the documents on his own initiative, not at counsel’s direction. As the court put it, AI’s “novelty” doesn’t exempt it from “longstanding legal principles.”

As Morgan Lewis noted in its analysis, early court decisions suggest a clear pattern: consumer AI tools may waive privilege, while enterprise tools used at the direction of counsel offer more protection. The distinction between the two is rapidly becoming a compliance-critical question.

Consumer vs. Enterprise: The Line That Now Matters Legally

The Heppner ruling hinged on the specific privacy policy of the consumer version of Claude. That policy permits data collection, model training on user inputs, and disclosure to governmental authorities. Most consumer AI platforms operate under similar terms. Jones Walker’s analysis pointed out that both Anthropic and OpenAI use conversations from free and individual paid plans for model training by default. A $20-per-month subscription doesn’t buy you confidentiality.

Enterprise AI agreements are different. They typically provide data segregation, contractual confidentiality protections, prohibitions on using input data for training, and restrictions on third-party disclosure. Judge Rakoff explicitly left open whether the analysis would change if an enterprise tool with robust confidentiality provisions had been used instead.

For compliance teams, this distinction is no longer theoretical. It sits at the center of any serious AI data handling compliance program. Every employee using a personal ChatGPT account, a free Claude account, or any unapproved AI tool for work-related tasks is potentially generating discoverable records — including records that contain information the organization considers privileged, proprietary, or regulated.

AI Data Handling Compliance Extends Beyond Privilege

The privilege question gets the headlines. However, the AI data handling compliance exposure extends well beyond litigation.

State privacy laws — including the CCPA, Colorado Privacy Act, and Virginia’s CDPA — impose obligations on how organizations handle personal information. If employees paste customer data into a consumer AI tool that trains on inputs, the organization may have just enabled an unauthorized disclosure under those statutes.

In financial services, compliance fatigue is already compounding the problem: state attorneys general are increasingly active in the AI space, and as Troutman Pepper noted, they will likely cite the Heppner ruling to pierce privilege claims as they seek AI communications through civil investigative demands and subpoenas.

And in healthcare, where the Trump Administration has proposed eliminating model card transparency requirements, the stakes climb higher still. Health systems whose employees use consumer AI tools to discuss patient cases or analyze clinical data face potential HIPAA violations on top of privilege waiver.

What AI Data Handling Compliance Looks Like in Practice

The gap between employee AI adoption and organizational governance is the core risk. Closing it requires policies that are specific, enforceable, and built for how people actually use these tools.

Classify your AI tools by risk tier. As compliance consultant Tom Fox outlined in his AI on-ramp framework, every organization needs a living inventory that distinguishes internal productivity tools from high-impact decision tools. Add a third axis now: consumer vs. enterprise. Any tool whose provider's terms permit data collection, training on inputs, or third-party disclosure should be flagged as high risk.
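
If your security or engineering team wants to make that inventory machine-readable, the logic is simple enough to sketch. Here's a minimal illustration in Python; the field names, tiers, and example entry are our assumptions, not part of Fox's framework or any vendor's actual terms:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    account_type: str              # "consumer" or "enterprise"
    trains_on_inputs: bool         # provider terms permit training on user inputs
    third_party_disclosure: bool   # provider terms permit disclosure to third parties
    decision_impact: str           # "productivity" or "high-impact"

def risk_tier(tool: AITool) -> str:
    """Flag any tool whose terms permit collection, training, or disclosure as high risk."""
    if tool.account_type == "consumer" or tool.trains_on_inputs or tool.third_party_disclosure:
        return "high"
    return "elevated" if tool.decision_impact == "high-impact" else "standard"

# A consumer account whose terms allow training and disclosure is high risk
# regardless of what it's used for.
print(risk_tier(AITool("hypothetical free-plan chatbot", "consumer", True, True, "productivity")))  # high
```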

Set explicit “do not feed” rules. Define what categories of information cannot enter any AI tool, what can only enter approved enterprise instances, and what must never be used for model training — then communicate those rules in short, role-specific training. If you roll out AI without clear data handling rules, employees will improvise. As we reported this week, improvisation is where breaches live.
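
Writing the rules as a default-deny policy table makes them enforceable by tooling (a DLP filter, a browser plugin) as well as readable by humans. A hedged sketch; the categories and destinations here are hypothetical placeholders, not a recommended taxonomy:

```python
# Hypothetical policy table: data category -> destinations where it may be entered.
DATA_RULES = {
    "customer_pii":     {"enterprise_approved"},             # never consumer tools
    "source_code":      {"enterprise_approved"},
    "privileged_legal": set(),                               # no AI tool at all
    "public_marketing": {"enterprise_approved", "consumer"},
}

def may_enter(category: str, destination: str) -> bool:
    """Default-deny: allow only what the policy explicitly permits."""
    return destination in DATA_RULES.get(category, set())

assert not may_enter("privileged_legal", "consumer")
assert not may_enter("customer_pii", "consumer")
assert may_enter("public_marketing", "consumer")
```

The design choice that matters is the default: a data category the table doesn't recognize gets blocked, not waved through.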

Audit your vendor agreements through a privilege lens. Most AI usage policies focus on data security, accuracy, and IP. Few address privilege. Review every AI vendor contract for data retention provisions, training-on-input clauses, and third-party disclosure rights. If the vendor’s terms don’t contractually guarantee confidentiality, treat every interaction with that tool as potentially discoverable.
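
That "treat as discoverable" default can be encoded as a simple contract-review check. A sketch under assumed field names; your actual review questions should come from counsel, not from this list:

```python
# Hypothetical answers pulled from a single vendor-contract review.
contract_terms = {
    "contractual_confidentiality": False,  # does the vendor guarantee confidentiality?
    "trains_on_inputs": False,             # can inputs be used for model training?
    "third_party_disclosure": True,        # can the vendor disclose to third parties?
    "indefinite_retention": True,          # are inputs retained beyond the session?
}

def treat_as_discoverable(terms: dict) -> bool:
    """Conservative default: any gap in confidentiality means assume discoverability."""
    return (not terms["contractual_confidentiality"]
            or terms["trains_on_inputs"]
            or terms["third_party_disclosure"]
            or terms["indefinite_retention"])

print(treat_as_discoverable(contract_terms))  # True: flag every interaction with this tool
```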

Connect AI to your incident response program now. Build an escalation pathway for when an employee suspects confidential data was entered into an unapproved tool, when AI output appears discriminatory or inaccurate, or when a regulatory inquiry touches AI-generated materials. Don’t wait for the first incident to figure out how to respond.
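
The pathway itself can start as a routing table: incident type in, responsible teams out. A minimal sketch, with made-up incident types and team names standing in for your own org chart:

```python
# Illustrative routing table; incident types and team names are assumptions.
ESCALATION = {
    "confidential_data_in_unapproved_tool":    ["security", "legal"],
    "discriminatory_or_inaccurate_output":     ["compliance", "model_owner"],
    "regulatory_inquiry_touches_ai_materials": ["legal", "compliance", "records"],
}

def route(incident_type: str) -> list[str]:
    """Anything the playbook doesn't anticipate still lands somewhere: legal."""
    return ESCALATION.get(incident_type, ["legal"])

print(route("confidential_data_in_unapproved_tool"))  # ['security', 'legal']
```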

Document attorney direction when AI is used for legal work. The Heppner ruling turned partly on the fact that the defendant acted on his own initiative, not at counsel’s direction. DLA Piper’s analysis recommended that organizations document attorney direction whenever AI tools are used for litigation preparation — and treat all materials generated through consumer AI as potentially discoverable.

AI Data Handling Compliance Can’t Wait

Judge Rakoff acknowledged that his ruling was fact-specific. He didn't declare that all AI-generated documents lose privilege. He applied longstanding legal principles to a specific set of facts: a consumer tool, a privacy policy that defeated any expectation of confidentiality, and a user acting without attorney direction.

But that narrow holding lands in a world where 77% of employees are already pasting corporate data into AI tools, most of them through personal accounts, and most organizations lack enforceable policies governing that behavior. The ruling didn’t create the risk. It made the risk visible.

For compliance teams, the practical question is straightforward. Can your organization demonstrate — to a regulator, to opposing counsel, to a court — that you have clear AI data handling policies, that employees know what they can and can’t put into AI tools, and that you have governance infrastructure to back it up?

If you can’t answer that today, the Heppner ruling just made it urgent.


AI Compliance Insider covers the regulations, tools, and incidents shaping AI governance. Subscribe for weekly updates.