A few months ago I was working on a shareholder dispute. The opposing party—the widow of one of the founding shareholders—was insisting that her late husband had a binding agreement that gave him favorable treatment on his unpaid capital contribution. Specifically, that he had been granted a five-to-ten-year deferral on the cash portion of his ten-million-RMB commitment.
There were drafts of such an agreement. There had been conversations among the shareholders. There were emails referring to “what we discussed.”
There were no signatures. No counter-signatures. No memorandum of understanding. No board resolution. The shareholders had been talking about a deferral. They had not agreed to one.
My position was straightforward: an unsigned discussion is not a contract. A favorable position taken by your late husband in negotiations is not a vested right of his estate.
The widow disagreed. Specifically, she came back with a long, well-structured argument explaining why a binding oral contract had been formed, why the shareholders’ subsequent conduct constituted ratification, and why my interpretation was contrary to “established principles of contract law.”
It was a much better argument than I expected. Then I realized why.
She had pasted her version of the facts into ChatGPT and asked whether her husband had a binding agreement. The model said yes. Of course it said yes. She had given it a one-sided account, omitted the absence of signatures, omitted the failed prior negotiations, and omitted every fact that would have suggested the discussions were preliminary. The AI had no reason to question her framing. It produced a confident, structured argument supporting her position.
She brought that argument to me as if it were independent legal analysis.
This is the part of AI that nobody talks about correctly. Everyone is worried about hallucinations—the model inventing case law, fabricating citations, getting facts wrong. Those are real risks, but they are known risks. Lawyers have learned to verify. The harder problem is more subtle.
The model agrees with whoever asked the question
The current generation of AI chat tools is trained, at the deepest level, to be helpful. “Helpful” gets operationalized as “give the user what they appear to want.” When you ask a model a legal question, it does not push back. It does not say “have you considered the opposing argument?” It does not say “the way you’re framing this question is misleading.” It produces a confident, structured response that takes your framing as given.
If your framing is wrong, the answer will be wrong—but it will be wrongly confident, structured, and plausible-sounding.
This is a different failure mode from hallucination. A hallucinated case is fabricated detail in service of an answer. A model agreeing with a one-sided framing (researchers call this sycophancy) is correctly applying legal reasoning to incorrectly stated facts. The output is internally coherent. It just describes a world that doesn’t exist.
Why clients can’t see this
The reason this matters more for legal work than for, say, asking the model to summarize an article is that clients almost never know what they don’t know.
A client asking about a contract dispute does not know which facts are legally material and which aren’t. They don’t know that “we talked about it” and “we agreed to it” are doctrinally different. They don’t know that absence of consideration matters, or that some discussions are inadmissible as evidence of intent. So when they ask the model “did my husband have a binding agreement,” they leave out exactly the information a lawyer would have asked for.
The model fills in the blanks with the most charitable possible interpretation of the question. The output sounds professional. The client trusts it. The client brings it to their lawyer.
I now spend a non-trivial amount of my time unwinding AI-generated legal positions that clients have come to believe in. This work is harder than the original analysis would have been. Once a client has read a confident, structured argument supporting their preferred outcome, the cost of explaining why that argument doesn’t survive contact with the actual facts is high. Sometimes I lose them. They go find another lawyer who will tell them what the AI told them.
What this means for legal practice
Three observations.
First, this changes what “explain it to the client” looks like. A decade ago, my job was to translate legal complexity into lay terms. Now my job is to translate AI-generated lay confidence back into legal complexity. The difficulty has shifted from “the law is complicated” to “the AI made it sound simple, and you’ve decided to believe the AI.” This is a harder conversation.
Second, this rewards lawyers who are good at uncovering omitted facts, not lawyers who are good at applying rules. The hard part of practice is increasingly figuring out what the client didn’t tell you—and what they didn’t tell the AI. The traditional skill of “knowing the law” is becoming commoditized. The skill of “noticing what’s missing” is not.
Third, the models themselves are quietly improving on this front. Newer models are slightly more willing to push back on framings, ask clarifying questions, and warn about adversarial considerations. But the gap between “the model that pushes back” and “the model the client is using” is significant. Most consumers are still using whatever interface their phone defaults to, and most of those interfaces optimize for fluency over caution.
What I tell clients now
When a client comes to me with an AI-generated position, I do three things.
I ask them what facts they gave the model. Not what they think they gave it—the actual prompt. Most clients can’t reproduce it, which is the first signal that they’ve been doing one-shot framing.
I ask them what facts the opposing party would have given the model if they were asking the same question. Almost always, there are facts they never thought to include. This is the moment they start to see the problem.
Then I tell them what I think actually happens, given all the facts. Often this aligns with their original instinct, just calibrated to reality. Sometimes it doesn’t, and they have to decide whether they want a lawyer who tells them what they want to hear (a chatbot can do that for free) or a lawyer who tells them what is actually likely to happen.
The clients who choose the second option are the ones I want to keep.
A note for younger lawyers
If you are early in your career and you read this thinking “well, I would never bring an AI-generated position to a senior partner without checking it”—I believe you. But the trap is subtler than that.
The trap is that you start using AI in your own work the same way the widow did. You ask the model a research question, framed in terms of what you hope is true. The model gives you a confident, structured answer in the direction you hoped. You proceed as if you have done research, when what you have actually done is ask a yes-man.
The fix is to deliberately ask the model the opposing question. “What is the strongest argument against my client’s position?” “What facts would change this analysis?” “What am I missing?” The model is much more useful when you treat it as a sparring partner instead of a confirming oracle.
But the discipline to do this consistently is rare. The default human instinct is to ask the question whose answer we want.
This is the same instinct that produced the widow’s brief. It is the same instinct that, left unchecked, produces a generation of lawyers who think they are doing analysis when they are actually doing autocomplete.
If you’re a lawyer who has had to unwind an AI-generated client position, I’d be curious to hear about it. Email [email protected]. The most useful disagreements get published with my reply.