
Argue With AI. Don't Trust It.

There are two postures a lawyer can take toward AI tools, and they produce dramatically different results.

The first posture is submissive. The lawyer asks the AI a question. The AI gives an answer. The lawyer accepts the answer, perhaps with minor edits, and incorporates it into their work. Over time, the lawyer becomes a pipeline for AI output: questions go in, AI answers come out, those answers reach the client. This is the posture I have described elsewhere as becoming AI’s servant, and it produces predictable results: the lawyer’s capability slowly atrophies, the work product slowly degrades in non-obvious ways, and the client eventually notices.

The second posture is adversarial. The lawyer treats the AI not as a senior partner whose answer is to be accepted, but as opposing counsel whose argument is to be tested. The lawyer asks, gets an answer, and then immediately tries to find what’s wrong with the answer. They push back. They demand alternative framings. They ask for the strongest counter-argument. They treat the AI’s first response as a starting point for an argument, not as a conclusion.

The second posture produces dramatically better work. It also produces dramatically better lawyers, because the act of arguing with the AI forces the lawyer to develop their own position, to articulate why they disagree, to sharpen their judgment. The first posture deskills you. The second posture trains you.

This is, I think, the single most important practical lesson I have learned from a year of using AI tools intensively for legal work. It deserves more attention than it gets.

What submissive AI use looks like

Let me describe the submissive posture concretely, because it’s surprisingly common even among lawyers who think they are using AI thoughtfully.

A research question arises. The lawyer asks Claude or ChatGPT: “What is the rule in Delaware regarding shareholder appraisal claims after a freeze-out merger?” The AI produces a structured response with citations. The lawyer reads it, confirms it sounds right, copies relevant parts into their memo, and moves on.

What just happened? The lawyer asked a question, accepted the answer, and incorporated it. They did not test the answer. They did not consider what facts might change it. They did not ask whether the AI might be subtly framing the issue in a misleading way. They did not interrogate the citations or check whether the cases actually stand for what the AI claims they stand for. They behaved as if the AI’s response were a trusted authority.

The problem with this behavior is not that the AI’s answer is necessarily wrong. The answer might be 90% correct. The problem is that the lawyer has outsourced their own analysis to a system whose failures they cannot detect. They will not catch the 10% of cases where the AI’s answer is subtly off. They will not develop the muscle that lets them catch such errors. And they will, over time, lose the ability to do the analysis themselves.

This is what most lawyers I observe do. It does not feel like deskilling. It feels like efficiency. The deskilling becomes visible only years later, when the lawyer is asked to do something AI can’t help with and discovers they no longer know how to start.

What adversarial AI use looks like

The adversarial posture begins with a simple instinct: never accept the AI’s first answer.

When the AI gives me an analysis, I immediately ask three questions, in three different prompts:

“What is the strongest argument against this position?” This forces the AI to generate the opposing view. Almost always, the opposing view contains considerations the AI’s first answer omitted. Sometimes the opposing view is more persuasive than the original answer. Either way, I now have a more complete picture of the issue.

“What facts would change this analysis?” This forces the AI to articulate its hidden assumptions. Almost always, those assumptions are things I should be checking against the actual facts of my client’s situation. The AI’s first answer was conditional on a set of facts; the second prompt reveals what those conditions were.

“What am I missing?” This is the most useful prompt I have found. The AI’s response to “what is the right answer” tends to be a confident-sounding statement of the most common analysis. The response to “what am I missing” is often a list of edge cases, alternative theories, and considerations that didn’t fit the first answer’s frame. This is where the actual legal value lives.

Three prompts. Maybe two minutes of additional work. The result is not the AI’s analysis—it is a much richer set of inputs against which I can develop my own analysis. The AI has been turned from an oracle into a sparring partner, and the difference in output quality is enormous.
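
If you want to make the loop a habit, it is simple enough to script. Below is a minimal sketch in Python using Anthropic's Messages API: it asks the original question, then fires the three follow-ups in sequence, carrying the full conversation forward so each challenge is read against everything said so far. The model name is illustrative, and the research question is the Delaware example from earlier; substitute your own.

    import anthropic

    # The three adversarial follow-ups described above.
    FOLLOW_UPS = [
        "What is the strongest argument against this position?",
        "What facts would change this analysis?",
        "What am I missing?",
    ]

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def ask(messages):
        """Send the running conversation; return the model's reply text."""
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=1024,
            messages=messages,
        )
        return response.content[0].text

    history = [{
        "role": "user",
        "content": ("What is the rule in Delaware regarding shareholder "
                    "appraisal claims after a freeze-out merger?"),
    }]

    # Get the first answer, then push back three times. Keeping the whole
    # history means each follow-up is answered against the prior exchanges.
    first = ask(history)
    history.append({"role": "assistant", "content": first})
    print("FIRST ANSWER:\n" + first)

    for prompt in FOLLOW_UPS:
        history.append({"role": "user", "content": prompt})
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
        print("\n" + prompt + "\n" + reply)

The point of scripting it is not automation for its own sake. It is that the follow-ups happen every time, including on the days you are tempted to take the first answer and move on.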

Why this works

The reason adversarial use produces better results than submissive use has to do with how the underlying models work, but you don’t need to understand the technology to apply the principle.

Models like Claude and ChatGPT are trained to produce helpful, fluent, plausible-sounding responses to whatever is asked. When you ask “what is the right answer,” the model produces a confident answer. The confidence is partly real—it has been trained on legal materials and knows things—and partly performative. The fluency masks uncertainty. The structure masks gaps in reasoning. The citation format masks the fact that some citations may not actually say what the model claims they say.

When you ask “what is the strongest counter-argument,” the model is forced to do different work. It cannot just retrieve the most common answer; it has to construct an alternative position. This is harder, and the harder work produces more interesting output. The model’s failure modes change too: it might fabricate a counter-argument that doesn’t really exist. But because those failures are of a different kind than the first answer’s, the same mistake rarely survives both prompts, and you are more likely to catch it.

By alternating between “give me the answer” and “give me the strongest reason the answer is wrong,” you triangulate. You get a picture of the issue that no single prompt produces.

This is, by the way, how good lawyers have always worked. We have always tested our own arguments by trying to break them. We have always asked “what would opposing counsel say to this?” The AI is just a faster sparring partner. The skill of using it well is the same skill we already have. The only difference is that we have to consciously apply that skill to a tool that, by default, invites the opposite behavior.

The harder version: prompting against your own bias

There is a more advanced version of adversarial use that I want to flag, because it’s where I have found the highest value.

When I’m working on a matter where I have a hypothesis—where I think the answer is probably X—the most useful thing I can do is ask the AI to make the strongest argument that the answer is not X. Not “give me a balanced view.” Not “give me both sides.” Specifically: “Assume my position is wrong. Argue against me.”

This is uncomfortable. The natural human instinct is to ask the AI questions whose answer we want. If I think the contract is enforceable, I ask “is this contract enforceable?” If I think my client has a strong claim, I ask “what’s the case for my client’s claim?”

The AI, being agreeable, gives me answers that reinforce my prior belief. I walk away with confidence I haven’t actually earned.

The cure for this is to ask, with discipline, the opposite question. “What’s the case against my position?” “Why might this contract not be enforceable?” “What’s wrong with my client’s claim?” The AI will produce a list of arguments I hadn’t fully considered. Sometimes those arguments are dismissible. Sometimes they reveal weaknesses in my position that I should be addressing in my pleading. Either way, I now know more than I did before.
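
The same scripting idea applies here, if you want the discipline built in. A minimal sketch, assuming the same Anthropic SDK as above; the argue_against function and its prompt wording are my own illustration, not a fixed formula.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def argue_against(position, facts=""):
        """Ask the model to assume a stated position is wrong and attack it."""
        prompt = (
            "My working position: " + position + "\n"
            "Relevant facts: " + facts + "\n\n"
            "Assume my position is wrong. Make the strongest possible "
            "argument against it. Do not give a balanced view; argue "
            "only the other side."
        )
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    # Example: invert the question you actually want answered.
    print(argue_against(
        "This non-compete is enforceable under the governing state's law.",
        "Two-year term, statewide scope, mid-level sales employee.",
    ))

Notice that the function takes your position as input. That is the design choice that matters: you cannot run it without first committing, in writing, to what you think the answer is.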

This practice—deliberately prompting against your own preferred answer—is, I think, the single highest-leverage habit a lawyer can develop with AI tools. It costs nothing. It takes two extra minutes per matter. It produces substantially better work and, more importantly, keeps you from the slow drift toward becoming an AI-amplified version of your own biases.

The lawyers who get this

Looking around at the lawyers I know who are using AI well, the pattern is consistent.

The good ones treat the AI’s first answer as a hypothesis, not a conclusion. They argue with the model. They demand counter-arguments. They prompt against their own preferred outcomes. They check citations. They ask the same question three different ways and compare the answers. They use the model as a sparring partner, never as an oracle.

The bad ones—and they are everywhere, including at senior levels—treat the AI’s first answer as the answer. They paste it into memos with light editing. They accept its citations without verification. They use it in the most efficient possible way, which is also the most deskilling way.

The good ones are getting better. They are developing a kind of intellectual sharpness that I associate with lawyers who came up in pre-internet practice, when you had to interrogate sources because the sources didn’t pre-package the answer for you.

The bad ones are getting worse, in a way that won’t be visible to them for a few more years. By the time it becomes visible, the gap between them and the good ones will be very hard to close.

A practical exercise

Here is the simplest exercise I can give. The next time you ask an AI tool a substantive question—any tool, any question—do not act on the first answer. Instead, ask the same model two follow-up questions:

  1. “What is the strongest argument against this position?”
  2. “What facts would change this analysis?”

Read the responses. Notice how much more you know after three exchanges than you knew after one. Notice how often the second and third responses include things that should have changed how you thought about the first.

Now imagine doing this consistently, on every substantive question, for a year. You will be a measurably better lawyer than you were before. You will also be measurably better than your colleagues who skip these steps.

The lawyer who wins in the age of AI is the lawyer who argues with it. Not the lawyer who refuses to use it, and not the lawyer who trusts it. The one who treats it the way they would treat a smart, unreliable, well-spoken opposing counsel: as a worthy adversary whose every claim is worth testing.


This article concludes the second wave of articles on Counsel and Code. If you’ve found these useful, the first wave covers practical workflow questions. Subscribe via the RSS feed at the top of every page for new pieces.

Email me at [email protected] if you’ve developed an adversarial AI practice that I haven’t described. I read everything.

