
My Client Is Shopping for the Answer They Already Want

There is a specific kind of client meeting that didn’t exist five years ago. I now have it perhaps three times a week.

The client arrives with a position. Not a question—a position. They have already decided what the right answer is. They are not coming to me to learn what their legal situation is. They are coming to me to validate a conclusion they have already reached.

What’s new is where the conclusion came from. It used to come from an in-law who once knew a lawyer, or from a forum post the client read at 2 a.m. Those sources were obviously unreliable, and clients knew it. The role of the lawyer was to gently override the bad source with informed judgment.

Now the conclusion comes from ChatGPT. Or Claude. Or some legal AI tool with a confident-sounding name. The conclusion is structured, cited, formatted in numbered paragraphs. It sounds professional. The client has read it three or four times, internalized it, and built their emotional state around it.

When I tell them the actual situation is more complicated than the AI suggested, I am not correcting a bad source. I am asking them to abandon a belief they have invested in. This is a much harder conversation than the one I was having five years ago.

Anxiety produces answer-shopping

I want to be clear about why this is happening, because it’s tempting to blame the AI. The AI is not the cause. The AI is the enabler of something that was always there.

Clients in serious legal trouble are anxious. They have been thinking about their problem at three in the morning. They have a sense of dread that the answer will be unfavorable. So they look for any sign—any source, any opinion, any framework—that says the answer might be favorable. They are not seeking the truth. They are seeking the answer they can live with.

This is human behavior, not bad behavior. I do not blame them. But it has always been a feature of how clients interact with bad news. Before AI, this manifested as: “Well, my brother-in-law said…” Now it manifests as: “Well, ChatGPT said…”

The difference is that the brother-in-law’s authority topped out at “guy who watches Law & Order.” The AI’s apparent authority is much higher. It produces legal-sounding output. It cites things. It uses the right vocabulary. To a non-lawyer, it looks like a real legal opinion. To a non-lawyer in emotional distress, it looks like a real legal opinion that agrees with them.

So the client now has, sitting next to my actual analysis, a parallel “analysis” from a tool that doesn’t know what facts they omitted, doesn’t know what the opposing party would say, and doesn’t have any professional responsibility for the consequences. And the AI’s version, by design, agrees with them.

What this looks like in practice

Let me describe a typical pattern.

A client comes in with an inheritance dispute. They are convinced—because their cousin’s lawyer said so, because their friend’s experience suggested so, because ChatGPT confirmed it—that they are entitled to a specific outcome. They have a stack of documents. They have, in some cases, drafted their own legal argument with AI assistance, complete with citations to statutes they read about in the AI’s response.

I read the documents. I listen to their account. I run my own analysis. And I tell them: the situation is more nuanced than the AI suggested. There are facts that the AI didn’t know to ask about. There are competing precedents. The outcome they want is possible but not likely.

The client’s response is almost never “thank you for the clarification.” It is some version of: “But ChatGPT said…”

This is the moment where you find out what kind of client you have. The good clients—and I have many of them—pause, recalibrate, and ask follow-up questions. They are willing to be told they were wrong, because they understand that the value of a lawyer is being told you were wrong before it costs you.

The bad clients—and there are more of them than there used to be—dig in. They explain to me why the AI’s reasoning was actually correct. They suggest, gently or not, that I am perhaps being conservative because I am incentivized to bill more hours. They ask whether I would be willing to test the AI’s argument in court, as if litigation were a science experiment with controllable variables.

These clients have become forum-shoppers, except the forum is AI tools, and the shopping is for answers, not for venue.

Why this matters more than the official conversation acknowledges

The legal industry has not fully digested this. There are conferences about AI replacing lawyers. There are panels about hallucinations. There are not, as far as I can tell, many panels about the change in what it means to advise a client. But this change is structural.

A traditional client-lawyer relationship rests on what economists call information asymmetry: the lawyer knows things the client doesn’t, the client trusts the lawyer to use that knowledge in their interest, and the relationship is sustained by the client’s recognition that they need access to expertise they cannot easily get themselves.

AI has compressed the information asymmetry. Not eliminated it—a lawyer still knows more than a chatbot about what actually happens in practice—but compressed it enough that clients no longer feel as dependent. They can produce something that looks like legal analysis themselves. They can challenge their lawyer’s view with output from a tool that sounds authoritative. They can engage in their own representation in a way that wasn’t practical before.

The result is that lawyers are increasingly cast in a role we are not trained for: convincing the client that our analysis is more correct than the AI’s. This is a marketing problem, not a legal one. And lawyers are, on the whole, bad at marketing.

What I have learned to do

Here are a few practical adaptations I have made over the past year.

I now ask, at the start of any meeting, whether the client has already used AI tools to think about the matter. This is a useful question because it surfaces the parallel analysis. If they have, I want to know what conclusions they arrived at, because those conclusions are otherwise invisible to me, yet they shape how the client hears my analysis.

I have stopped fighting AI conclusions head-on. Telling a client “the AI is wrong” makes them defensive. Telling them “the AI didn’t know about facts X, Y, and Z, which change the analysis” is more productive. I am not attacking their belief; I am supplementing the information that produced their belief. This is a more respectful framing, and it works better.

I have started explicitly pricing the work of unwinding AI-generated positions. When a client brings me a pre-formed argument that I have to disassemble before we can have a real conversation, that disassembly is itself substantive legal work. It takes time. It requires the same skills as the original analysis would have, plus the additional skill of explaining why the prior framing was incomplete. I now charge for this, where I used to absorb it. The clients who don’t want to pay for it are usually the clients I shouldn’t be representing anyway.

I am increasingly direct about the limits of what I can do for clients who won’t accept my analysis. If a client insists on the AI’s framing over mine, I am willing to decline the engagement. This was harder five years ago, when the only alternative was the brother-in-law. Now the alternative is the AI itself, which the client can keep consulting for as long as they want. The client who chooses the AI over me is not a client I am going to do good work for.

The deeper change

I want to end with a more uncomfortable observation.

The clients who are answer-shopping are not, in my experience, the sophisticated ones. They are, almost without exception, the clients with limited prior experience of complex legal matters—first-time inheritors, first-time founders, first-time defendants. The sophisticated clients have learned, often painfully, that the answer they want is not always the answer they get. They have a frame for hearing bad news.

The answer-shoppers don’t have that frame yet. The AI is, for them, a kind of substitute experience. It gives them a confident voice that feels authoritative, and they use it to construct a worldview that doesn’t include the possibility of being wrong.

The hard part of legal practice, increasingly, is not the law. It is the patient, repeated work of getting answer-shopping clients to a state where they can hear a real answer. Some of them never get there. They go from lawyer to lawyer, AI to AI, looking for the version of reality that lets them keep their original position. They eventually find a lawyer who will tell them what they want to hear, and then they discover, sometimes catastrophically, that they should have listened to the first lawyer.

I write under a pen name partly so I can say this: a lot of clients are about to learn very expensive lessons about what AI does and does not actually tell them about their legal situation. The lawyers who survive this period will be the ones who can manage the conversation. The lawyers who don’t will be the ones who keep getting fired for telling clients the truth.


This is part of an ongoing series on the changing dynamics between lawyers and clients in the AI era. If you’ve had to navigate the “but ChatGPT said…” conversation, email me at [email protected]. The most useful examples make their way into future articles.

Related: What I Won’t Say Out Loud at Partner Meetings


