Most of the conversation about AI in legal work focuses on lawyers. We argue about whether AI will replace associates, how it changes pricing, and what it means for client relationships. The arguments are vigorous because lawyers are the people making them.
There is a much quieter conversation that almost no one outside the system is having. It concerns the other side of the bench.
Judges are using AI too. They have every reason to. The system has every reason to encourage them. And the consequences for how cases get decided are larger than anyone is publicly acknowledging.
I want to explain why, drawing on what I have observed in two specific places: my own jurisdiction, and the Chinese court system, which is moving faster on this than almost anyone in the West has registered.
The pressure on judges is identical to the pressure on lawyers
Start with the obvious. Judges are humans operating under dockets that have grown faster than the resources to staff them. They are reading briefs at the end of long days. They are writing opinions that have to be done by Friday. They have, like the rest of us, the option of using a tool that drafts a serviceable first version of a document in twenty seconds.
The temptation is the same temptation that lawyers face. The rationalization is the same rationalization: I’ll use it as a starting point. I’ll review the output carefully. The judgment will still be mine.
If we know anything from the lawyer side of this conversation, it is that this rationalization erodes. The verification gets thinner over time. The starting point becomes the ending point. The “judgment is still mine” claim gets more and more difficult to defend as the workflow ossifies.
I am not saying that any particular judge is doing this badly. I am saying that the same pressures that have produced AI-dependent associates at law firms will produce AI-assisted opinions at courts, with the same trajectory and the same risks. To assume otherwise is to imagine that judges are uniquely resistant to incentive structures that have moved everyone else.
The Chinese case is the leading indicator
If you want to see where this goes, look at China. The Shenzhen courts—which I follow because they are economically and technologically connected to the kind of clients I work with—are openly deploying AI systems to support judicial work. This is not a future plan. It is happening at scale, today.
The Chinese model is, in some ways, more honest than the Western one. The system openly says: we have too many cases, our judges are overworked, AI tools can help with summarization, precedent search, drafting routine portions of opinions, and identifying patterns in case law. It deploys the tools deliberately, with rules about how they can be used, and with the explicit goal of making judicial output faster and more consistent.
Western jurisdictions tend to be more reticent. The judiciary uses AI in ad hoc ways, without clear rules, often without disclosure. Lawyers know it’s happening. Litigants don’t. The output looks the same as it ever did. The drafting process behind it is increasingly different.
I do not think the Western approach is more cautious in any meaningful sense. I think it is less honest. The same things are happening; they are just less acknowledged.
What changes when judges use AI
Suppose, for the moment, that AI use among judges becomes routine. Three things change about how cases get decided, and none of them are good.
First, the cost of producing a marginal opinion drops. This sounds like a positive—faster justice, more cases resolved, less backlog. In practice, it means the marginal case that wouldn’t have been worth a thorough opinion now gets one anyway. The opinion exists. It can be cited. It enters the body of precedent. The volume of “law” expands faster than human judgment can keep up with, which makes legal practice harder, not easier.
Second, the texture of opinions starts to homogenize. AI tools, by their nature, produce output that resembles their training data. Judicial opinions drafted with AI assistance—even with extensive human editing—gradually drift toward a kind of mean. The idiosyncrasies that distinguish a thoughtful jurist from a procedural one get smoothed out. The variation that makes case law interesting and informative for practitioners narrows. The system becomes more efficient at producing output and worse at producing insight.
Third—and this is the one that should worry every litigator—the same problem I have written about with clients applies to judges. A judge using AI to research a question is going to ask the question in a particular way. The AI is going to produce a confident, structured answer in the direction the question pointed. If the question was framed in a way that subtly favored one side—because that’s how the judge happened to be thinking when they typed it—the answer will reinforce that direction. The opinion that emerges will be subtly tilted in ways that no one, including the judge, will be able to detect from the surface.
This last problem cannot be solved by careful judges. It is a property of the tool. The tool agrees with whoever asked. When the asker is a judge, it agrees with the judge. The agreement gets dressed up in the structure and authority of judicial reasoning, and the litigant on the other side never sees the prompt.
What this means for litigation strategy
If you are a litigator, the implication is uncomfortable but actionable.
You have to assume that the judge in any given case may have used AI to assist with their analysis. You have to write briefs that anticipate the framings an AI tool would be likely to produce, and you have to construct your arguments to survive contact with those framings. You can no longer write only for the human reader. You are writing, increasingly, for the AI tool that the human reader is using.
This is a strange skill. It involves more aggressive framing of the procedural posture, more explicit anticipation of opposing arguments, more careful structuring so that the first thing the AI sees aligns with your client’s position. It is a kind of meta-rhetoric that we did not learn in law school.
The lawyers who learn this skill faster will get better outcomes. The lawyers who continue to write briefs as if they were only being read by a human will be subtly disadvantaged in ways they cannot diagnose. They will lose more cases at the margin, without ever seeing why.
The longer-term question
I want to end with a question I do not know how to answer.
If both sides—lawyers and judges—are using AI extensively, and if the underlying tools all share certain biases and tendencies, what does the legal system actually become?
In one view, it becomes more consistent. Routine cases get decided more uniformly. The system runs faster. Backlog drops.
In another view, it becomes more brittle. The whole system increasingly reflects the priors of a small number of foundation models, deployed by a small number of companies, trained on a particular slice of human language. The diversity of legal reasoning—which has been one of the system’s strengths—narrows. When something goes wrong with the underlying tools, it goes wrong everywhere at once.
I do not know which view is right. I suspect both are partly right, and the question is which of these effects dominates over the next decade. But I am quite sure that almost nobody outside the legal-tech industry is thinking about this carefully enough.
The judiciary will use AI. That is decided. What remains to be decided is whether the legal system is built to absorb that change, or whether we are about to find out the consequences of having let it happen without thinking it through.
I’d bet on the second. I hope I’m wrong.
Part of an ongoing series on AI and the legal profession. If you’re a judge or court staff reading this—or a litigator seeing this play out in your courtroom—I’d be interested in your private observations. Email [email protected].