AI Has a Legal Problem Nobody in Tech Wants to Talk About

Elizabeth Knittle • March 25, 2026


I'm not anti-AI. I want to get that out of the way first, because what I'm about to say is going to sound like it's coming from someone who is. It's not.


I use AI every day. It's made my work faster, sharper, and more competitive. I've watched it get genuinely good at legal analysis, document synthesis, case research, things that used to take hours. I'm impressed by it, which is exactly why I can see where it falls apart.


And in the legal industry, it falls apart at the foundation.

 

In February 2026, a federal judge in New York ruled on something that should have made headlines outside of legal circles but mostly didn't.


A man facing federal fraud charges used a publicly available AI platform to generate 31 documents related to his legal defense. Some of what he fed into the AI came directly from his attorneys. He later handed those documents to his legal team. The government seized them during a search. He claimed attorney-client privilege. The court said no.


The case is United States v. Heppner, and the judge called it a matter of first impression, meaning no federal court had ruled on this specific question before. That matters.


The reasoning isn't complicated. Attorney-client privilege requires three things: a communication between a client and their attorney, kept confidential, for the purpose of obtaining legal advice. The AI platform wasn't an attorney. The platform's own privacy policy said user inputs could be retained, used for training, and disclosed to third parties, including the government. And sending the outputs to his lawyers afterward didn't retroactively create a privilege that never existed.


The moment he fed privileged information into that platform, he handed it to a third party. He waived his privilege, even if he didn't mean to, didn't want to, and didn't even know he was doing it.

 

But here's where it gets messy: another court ruled the opposite way. On the exact same day Heppner was decided, a federal court in Michigan reached the opposite conclusion in a civil case. That court found that AI is just a tool, not a person, and that sharing your thoughts with it doesn't automatically waive work product protection.

Two federal courts. Same day. Opposite conclusions.


Some people read that as good news for the legal AI industry. I'd argue it's actually worse. What it means is that the law here isn't settled. There's no clear rule protecting attorneys who use these tools, and there's no clear rule protecting clients whose information gets fed into them. It's a coin flip, and the coin is your privilege.

When the law is this unsettled, the only rational move is to err on the side of caution. Attorneys have an ethical obligation to protect client confidentiality. "We're not sure yet" isn't a defense to a bar complaint. And it's definitely not something you want to explain to a client after their strategy ends up in the wrong hands.

 

The response from most law firms has been predictable: warn clients to stop using consumer AI tools, and pivot toward enterprise platforms with better confidentiality agreements. The logic is that a more expensive, more secure tool solves the problem.


That's not wrong exactly. But it's not the whole story either.


Here's what I haven't seen anyone say out loud, and I think it's because the people who'd have to say it have a financial interest in not saying it.

 

Think about how Westlaw works. Attorneys have used Westlaw for decades. Nobody argues that using Westlaw waives privilege. Why? Because Westlaw doesn't need to know anything about your client to be useful.


You search a legal concept. You get results. The case facts stay in your head. The attorney reads the results, applies them to the client's situation, and that synthesis happens entirely inside a human brain that no one can subpoena.

The database never touches the privileged information. Three clean, separate steps. Research. Results. Application. The wall between the tool and the case stays intact.


Now think about what it actually means for AI to replace the work of a paralegal or a junior associate.


To do that job (drafting motions, analyzing legal exposure, synthesizing discovery, building strategy), the AI needs to know the facts of the case. Not a generic legal question, but your client's specific situation: the very thing that privilege exists to protect.


That's the trap. The more useful AI is in a legal role, the more it needs to know. The more it needs to know, the greater the disclosure risk. You can't make it more capable without making it more dangerous to use. Those two things move together.


Westlaw works because it's a sophisticated database that doesn't need your client's details. AI breaks down in legal work precisely because it's smart enough to need that information.


A better privacy policy doesn't fix that.

 

What about secure enterprise tools? This is where most people land. The assumption is that an enterprise subscription fixes it. Better contracts, stronger confidentiality agreements, no training on your inputs. Problem solved, right?


Not quite.


Here's the distinction that matters and that almost nobody is making. Confidentiality and privilege aren't the same thing. A platform can be genuinely secure, encrypted, contractually protected, and still not satisfy the legal standard for privilege. Because privilege isn't just about keeping information secret. It's about keeping it within a specific legal relationship. The moment that information crosses outside that relationship, even to a trusted, secure, well-intentioned third party, the privilege analysis changes.


Cloud-based, by definition, means a third party is involved somewhere in the storage, transmission, or processing of that data, enterprise subscription or not. The Heppner court didn't ask how secure the platform was. It asked whether the information left the privileged relationship. It did. That was enough.


And there's a layer to this that's even harder to walk back. When you disclose information to a cloud-based AI, you're not just risking exposure the way you would with a leaky email. The model may learn from what you input. Your client's facts, their strategy, their vulnerabilities, potentially absorbed into a system that will interact with thousands of other users, including opposing counsel in other cases. You can't un-ring that bell. A document can be clawed back. A subpoena can be challenged. But information baked into a model's training is beyond recall. It spreads without the end user's control, or maybe even their understanding.


And then there's the research problem. The moment an enterprise AI tool needs to do legal research, which is most of what makes it useful, it has to reach outside the firm's environment. It's hitting cloud-hosted legal databases, pulling current case law, querying external systems. And the query it's sending isn't neutral. "What defenses are available to a CEO who claims he didn't know about his subsidiary's accounting practices?" isn't a generic search. That's strategy, shaped by your client's specific facts, leaving the building.


The enterprise pitch is essentially "trust us, our cloud is safer than their cloud." That's a confidentiality argument, not a privilege argument. And in court, that difference matters.


The only version of legal AI that's truly privilege-safe would have to be completely air-gapped from the internet. No Anthropic. No OpenAI. No Palantir. No cloud anything. Fed only pre-loaded legal databases, updated on a closed internal cycle, operated under documented attorney direction, on infrastructure that never touches an external system.


At that point you've built a very expensive version of Westlaw. Which, again, already exists.

 

And then there's the billing problem. This is the part that should make clients angry.


If you're paying an attorney $350 an hour (and plenty charge more), you're paying for expertise, judgment, and confidentiality. Those are the three things you're actually buying.


If that attorney is running your case facts through a consumer AI platform to draft your motions and build your strategy, a few things are happening at the same time. Your privilege may have been compromised without your knowledge. The expertise you paid attorney rates for may have taken the AI four minutes. You could have done that yourself without an attorney. And in most jurisdictions right now, your attorney has no legal obligation to tell you any of this.


No disclosure requirement. No informed consent. Just a bill.


And here's the part that should give everyone pause. If an attorney is using the same AI tools you could access yourself for $20 a month, what exactly are you paying for? You're not paying for expertise anymore. You're paying for a bar card stapled to an AI output. And as we've just established, that bar card isn't even protecting your confidentiality if the tool being used doesn't support it. You're getting the worst of both worlds, premium rates, AI output, and a privilege question no one warned you about.

 

So what does all of this really mean? AI isn't bad at legal work. That's not the argument. It's gotten remarkably capable and it's going to keep getting better.


The argument is that the legal system was built around a specific structure. Privilege, chain of custody, confidentiality, human accountability. That structure has load-bearing walls. The way AI is currently being marketed and deployed in legal work doesn't just bump up against those walls. It runs straight through them.


Heppner didn't create this problem. It just made it visible. And the Michigan ruling didn't solve it. It just confirmed that nobody has figured out the answer yet.


Nobody in the AI industry is going to tell you this, because it's bad for business. The legal industry is moving fast to adopt tools that make firms more profitable, sometimes faster than the ethics catch up.


The people with the clearest view of this aren't the ones selling the tools. They're the ones who've spent careers working inside the evidentiary record, understanding what chain of custody actually means, what privilege actually protects, and what happens when either one breaks down.



You probably weren't told any of this. Now you have been.
