
Most Lawyers Are Doing Mediocre Work With Extra Steps

When ChatGPT came out in late 2022, I tried it for the same reason every other lawyer did: to see if the rumors were true.

My first prompt was a substantive legal question I already knew the answer to. I wanted to see if the machine could match what I already had in my head.

The reply came back in three structured paragraphs. The logic was clean. The framing was clear. The answer was wrong.

I sat there for a long minute. Not because the wrongness scared me—I’d seen wrong legal answers from associates, judges, and opposing counsel for fifteen years. What scared me was that the wrong answer was better written than what most of my colleagues would have produced.

That was the first thing I noticed about AI. It didn’t impress me with what it knew. It impressed me with how it spoke.

This is what I actually think, two-something years later, after using AI tools daily, watching colleagues react to them, and quietly reorganizing how I do my work. I’m writing under a pen name because some of what follows would not go down well in our partners’ meeting.

The hallucination problem is real, and it almost made me quit

I want to address this first, because lawyers who haven’t tried AI tools assume those of us who use them are oblivious to the risk.

We’re not. We’re terrified of it.

The first time I caught Claude inventing a case citation—a perfectly formatted, totally plausible, completely fabricated Smith v. Jones, supposedly decided by the Delaware Chancery Court in 2018—I almost stopped using these tools permanently. The professional consequences of filing a brief with a hallucinated citation are not “embarrassment.” They are sanctions, malpractice claims, and bar complaints. There is no version of “the AI did it” that protects you.

So when people ask “is the hallucination problem solved?”, the honest answer is “less than before, but no.” What’s changed is my workflow. I never let AI generate citations. Ever. I source them myself, then ask the model to weave them into prose. That confines the hallucination risk to prose I can verify, instead of authorities the model invents.

This is a kind of skill. Most lawyers haven’t bothered to learn it. Most haven’t even tried.
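For the code-inclined (this blog is called Counsel and Code, after all), here is a minimal sketch of what that discipline looks like if you script it against the Anthropic API. Everything specific in it is my assumption, not firm tooling: the model name, the placeholder citations, and the regex check are illustrative only.

```python
# Minimal sketch of the workflow described above: every citation comes
# from me, already verified; the model only drafts prose around them.
import re

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from env

# Authorities I located and verified myself, outside the model.
# These placeholders are hypothetical, not real cases.
verified_citations = [
    "Example Corp. v. Placeholder LLC, 000 A.3d 000 (Del. Ch. 2018)",
    "In re Hypothetical Holdings, 000 F.3d 000 (2d Cir. 2020)",
]

prompt = (
    "Draft the argument section below using ONLY the citations listed. "
    "Do not add, alter, or invent any authority.\n\n"
    "Citations:\n"
    + "\n".join(f"- {c}" for c in verified_citations)
    + "\n\nArgument outline: [outline goes here]"
)

client = anthropic.Anthropic()
draft = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: substitute a current model
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Belt and suspenders: flag anything that looks like a reporter cite but
# isn't on my verified list. The regex is a rough heuristic, not a
# Bluebook parser; a human still reads every line before it's filed.
for cite in re.findall(r"\d+\s+[A-Z][\w.]*\s+\d+", draft):
    if not any(cite in v for v in verified_citations):
        print(f"UNVERIFIED CITATION, do not file: {cite}")
```

The point isn’t the script. It’s the division of labor: retrieval and verification stay human, and only the prose generation doesn’t.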

Most of my partners think they’re irreplaceable. They’re not.

The dominant attitude toward AI at my firm is what I’d call defensive comfort. The people senior to me have decided, mostly without examining any evidence, that whatever they are particularly good at is exactly the thing AI cannot do.

This is comforting. It is also wrong.

What most lawyers actually spend their day on is not irreplaceable judgment work. It is the boring middle: reading, summarizing, redlining, restating, organizing, drafting variations on a theme. The work that takes a fifth-year associate three days and a partner twenty minutes of review.

That entire middle is where AI is best, today, right now. Not in the corner cases. In the center of what most billable hours actually buy.

I’ll put this bluntly because nobody else seems willing to: most lawyers aren’t doing complex work. They’re doing mediocre work with extra steps. The “moat” they think they have around their practice isn’t a moat. It’s a habit. It’s the comfort of a workflow they learned twenty years ago and have stopped questioning.

A real moat survives an attack. A habit doesn’t.

When I look at the senior people at my firm who are most dismissive of AI, I see professionals who have, over twenty or thirty years, built an entire sense of self around their craft. To take their craft seriously, they have to take its uniqueness seriously. To take the AI threat seriously, they would have to admit that what they thought was uniquely human about their work is, in many cases, just process.

I don’t blame them for not wanting to do that. But I’m not going to do their thinking for them either.

My clients already know. We’ve reached an understanding through silence.

Here’s something I haven’t seen written about openly: my clients already know I use AI. I haven’t told them. They haven’t asked. We have reached an understanding through silence.

It works like this. A client emails me a question. I draft a response with Claude’s help. They read it. They paste my response into ChatGPT to check it. They reply with a counter-argument that is suspiciously well-structured.

I read their reply. I draft a response—again with Claude’s help. We are now, in effect, having a meeting between two lawyers and two AIs, with the human lawyers acting as judges over which AI’s argument is more useful.

Nobody is acknowledging this. The fees still get billed. The signatures still get applied to the cover letters. But we both know.

There is a second, more frustrating version of this. A client comes back with an “answer” they got from ChatGPT and uses it to challenge what I told them. The problem isn’t that they’re using AI. It’s that they don’t know how to ask it useful questions, and the answer they got is plausible but subtly wrong. Now I’m in the position of explaining why the AI’s answer is wrong, to a client who has decided that the AI is a neutral arbiter. This is not what they’re paying me for, and it is exhausting.

The reason I don’t volunteer that I use AI is the same reason a surgeon doesn’t volunteer which suture brand they prefer: the conversation is supposed to be about the result, not the toolkit. When clients learn that their lawyer uses AI, the question they jump to is “so why am I paying lawyer rates?” And the honest answer—“because the AI doesn’t know what to do with what it produces”—is hard to convey credibly.

This is not entirely comfortable. I’d prefer a world where the use of AI in legal work was openly discussed, the way we openly discuss the use of legal databases or paralegals. That world isn’t here yet. The legal industry is in a halfway-open closet. I am too.

The part most “embrace AI” articles leave out

When younger lawyers ask me whether they should invest time in learning AI tools, I tell them yes. Then I tell them the part that most of these “lawyers should embrace AI” articles leave out.

There is a real risk that learning AI tools the wrong way will make a young lawyer worse at being a lawyer.

Here’s how it goes wrong. A first-year associate gets handed a research task. Instead of reading the cases, they ask Claude. Claude gives them a structured summary. They paste it into a memo. Their partner says “good work.” The associate moves on to the next task.

Repeat this sequence for two years.

At the end of those two years, the associate has produced a lot of work. They have not, however, read very many cases. They have not internalized the patterns of how a Delaware court reasons about an appraisal claim, or how Second Circuit panels treat ambiguity in indemnity clauses. They have read summaries of those things. They have not built the gut feel that lets a senior lawyer, sitting across from a client in a conference room, say “this argument won’t work, here’s why” without consulting a database.

The trap is that AI removes the hard part. And the hard part is where the lawyer was being made.

The bar exam, in this sense, is the easy filter. The harder filter is the thousand small acts of discomfort during early practice—reading a case you don’t understand until you do, drafting a clause that gets redlined into oblivion, getting yelled at in a deposition. Those experiences build the silent layer of judgment that distinguishes a senior lawyer from a competent associate.

AI lets you skip all of it. And nobody will notice you skipped it until you’re handed something AI can’t help with—and you discover, in front of a client, that you don’t have anything underneath.

I tell young people: use AI, but use it after you’ve done the work yourself. Use it to check your draft, not to replace your draft. Use it to find cases you might have missed, not to summarize the ones you should have read. The discomfort of doing it yourself, badly, is what made me a competent lawyer. If you skip that, you skip becoming a lawyer at all.

The industry is going to be full of people who skipped it. They will not realize they have skipped it until it is too late.

What I think actually happens in five years

Every article like this is contractually required to end with a prediction. I’ll make mine specific, because nobody benefits from another “AI will transform everything” hand-wave.

The billable hour, as a pricing unit, is going to start collapsing. Not all at once. Not everywhere. But in transactional work, where the link between time and output is most arbitrary, clients will stop accepting the fiction that one hour of associate time equals a definable unit of value. Pricing will move toward outcomes—percentages of deal value, fixed-fee structures tied to milestones, success fees on litigation. We have been holding the billable hour together by professional consensus. The consensus is about to crack.

The center of a senior lawyer’s job will shift from production to judgment. I will spend less time drafting and reviewing. I will spend more time helping clients decide what they actually want, what risks they can tolerate, what trade-offs they should accept. The lawyer-as-craftsman model will give way to a lawyer-as-counselor model. The people who can’t make this shift—who are good at producing documents but not at advising humans—will struggle.

Junior associate work will partially disappear. The traditional model—hire bright graduates, run them through three years of document review and contract drafting, see who survives—will not itself survive. Firms will hire fewer juniors. The juniors they do hire will be expected to add value differently, and earlier. The firms that figure out the new training model first will recruit better lawyers than the firms that don’t.

Some of the senior people who are most confident today will be the most exposed. The lawyers who say “AI can’t do what I do” without having seriously tested whether it can—those are the people who will be unpleasantly surprised. Not all at once. But the slow erosion of their value will be visible to clients before it’s visible to them.

I don’t say this to be dramatic. I say it because the comfortable lie of the legal profession is that we are too special, too judgment-laden, too irreplaceable to be touched by general-purpose AI.

We are not too special. Most of what we do is process dressed up as expertise. The dressing is being taken off in front of us, and a lot of my colleagues are still pretending not to see.

I’d rather be honest about it. That’s what this site is for.


If you’re a lawyer reading this and you want to push back, email me at [email protected]. The legal industry’s biggest problem with AI isn’t AI. It’s that we don’t talk to each other about it honestly.
