
LexisNexis CEO says the AI law era is already here


Today, I’m talking with Sean Fitzpatrick, the CEO of LexisNexis, one of the most important companies in the entire legal system. For years — including when I was in law school — LexisNexis was basically the library. It’s where you went to look up case law, do legal research, and find the laws and precedents you would need to be an effective lawyer for your clients. There isn’t a lawyer today who hasn’t used it — it’s fundamental infrastructure for the legal profession, just like email or a word processor.

But enterprise companies with huge databases of proprietary information in 2025 can’t resist the siren call of AI, and LexisNexis is no different. You’ll hear it: when I asked Sean to describe LexisNexis to me, the first word he said wasn’t “law” or “data,” it was “AI.” The goal is for the LexisNexis AI tool, called Protégé, to go beyond simple research, and help lawyers draft the actual legal writing they submit to the court in support of their arguments.

That’s a big deal, because so far AI has created just as much chaos and slop in the courts as anywhere else. There is a consistent drumbeat of stories about lawyers getting caught and sanctioned for relying on AI tools that cite hallucinated case law that doesn’t exist, and there have even been two court rulings retracted because the judges appeared to use AI tools that hallucinated the names of the plaintiffs, cited facts, and quoted cases that didn’t exist. Sean thinks it’s only a matter of time before an attorney somewhere loses their license because of sloppy use of AI.

So the big promise LexisNexis is making about Protégé is simply accuracy — that everything it produces will be based on the real law, and much more trustworthy than a general purpose AI tool. You’ll hear Sean explain how LexisNexis built their AI tools and teams so that they can make that promise — LexisNexis has hired many more lawyers to review AI work than he expected, for example.

But I also wanted to know what Sean thinks tools like Protégé will do to the profession of law itself, to the job of being a lawyer. If AI is doing all the legal research and writing you’d normally have junior associates doing, how will those junior associates learn the craft? How will we develop new senior people without a pipeline of junior people in the weeds of the work? And if I’m submitting AI legal writing to a judge using AI to read it, aren’t we getting close to automating a little too much of the judicial system? These are big questions, and they’re coming real fast for the legal industry.

I also pressed Sean pretty hard on how judges, particularly conservative judges, are using computers and technology in service of a judicial theory called originalism, which states that laws can only mean what they meant at the time they were enacted. We’ve run stories at The Verge about judges letting automated linguistics systems try to understand the originalist intent of various statutes to reach their preferred outcomes, and AI is only accelerating that trend — especially in an era where literally every part of the Constitution appears to be up for grabs before an incredibly partisan Supreme Court.

So I asked Sean to demo Protégé doing some legal research for me, on questions that appear to be settled but are newly up for grabs in the Trump administration, like birthright citizenship. To his credit, he was game — but you can see how taking the company from one that provides simple research tools to one that provides actual legal reasoning with AI will have big implications across the board.

This one is weedsy, but it’s important. 

Okay: LexisNexis CEO Sean Fitzpatrick. Here we go.

This interview has been lightly edited for length and clarity.

Sean Fitzpatrick, you’re the CEO of LexisNexis. Welcome to Decoder.

Thank you. Great to be here.

Thank you for joining me. This is my first interview back from parental leave. Apologies to the audience if I’m rusty, but apologies to you if I’m just totally loopy.

Congratulations!

I’m very excited to talk to you. I’m very much a failed lawyer, my wife is a lawyer, there’s a lot of lawyers on The Verge team. The legal profession in America is at a moment of absolute change, a lot of chaos, and an enormous amount of uncertainty. And LexisNexis, in case the audience doesn’t know, sits at the heart of what lawyers do all day. Most lawyers are using LexisNexis every minute of every day. What that product is, what it can do, and how it helps lawyers do their job connects to a lot of themes that we’re seeing both in the legal profession and in technology and AI generally.

So, start at the start. What is LexisNexis? How would you explain it to the layperson?

LexisNexis is an AI-powered provider of information, analytics, and drafting solutions for lawyers that work in law firms, corporations, and government entities.

That’s a new conception of LexisNexis. When I was in law school in the early 2000s, it was just the thing I searched to find case law.

Yes, we’ve transformed over time. We started as just that research provider, and over the years, we’ve acquired and integrated more businesses. In 2020, when we launched our Lexis+ product, we brought all those things together into an integrated ecosystem of solutions. Then, in 2023, we launched Lexis+ AI, and that’s when we really became an AI-powered provider of information, analytics, decision tools, and drafting solutions. AI capabilities have allowed us to do more than what we’ve traditionally done in the past.

That jump from being the gold standard database of legal opinions, reasonings, case notes, and all that to “we’re going to do the work for you or help you do the work” is a big one. That’s a cultural jump. Obviously, there were some acquisitions along the way that helped you to make that jump, which you can talk about. What drove you to make that jump, to say, “Actually, the lawyers need help drafting the motions, the proposed opinions they might give to a judge?” What made you say, “Okay, we’ve got to step into actually doing the work?”

I think it’s been a natural evolution. As technology has evolved, it’s opened up new avenues of things we can do. We tend to take the latest technology, introduce it to our customers, and spend time talking to them about how they think that technology can be best applied in the legal environment. Then, we translate the ideas they came up with into products that address those opportunities.

Let me ask you a pretty philosophical question. It’s one that I struggle with all the time and one that I talk to our audience about all the time. Our audience is pretty technically focused. They’re used to computers, which are, until recently, pretty deterministic: you put in some inputs, you get some outputs. Most people who encounter the legal system think it’s equally deterministic. You put in some inputs and you get some predictable outputs.

And what I’m always saying is, “That’s not how it works at all.” You show up to court, the judge is in a bad mood, you have no idea what’s going to happen. You’re a big company with an antitrust appeal, you show up to the three-judge appellate review board, and you have no idea what’s going to happen. Literally anything could happen at any time. The judicial system is fundamentally not deterministic. Even though it’s structured like a computer, trying to think about it like one can get you in all kinds of trouble. Maybe the best example of this is people on Facebook putting the words “no copyright intended” on the bottom of movies. They think they can issue these magic words and the legal system is solved, and they just can’t.

AI is that problem in a nutshell. We’re going to take a computer, make it better at natural language. We’re going to make the computer fundamentally not deterministic — you can’t really predict what an AI is going to do — and then we’re going to apply that to the fundamentally non-deterministic, human nature of the court system. Somewhere in there is a big philosophical problem about applying computers to the justice system. How do you think about that?

First of all, you have these massive investments happening with the foundational models. Each of these hyperscalers — Microsoft, Amazon, Google — is putting in close to $100 billion. So these models just continue to get better and better over time. That’s at the foundational model level. We don’t really operate at that level. We build applications that utilize these foundational models. And at that level, we see prices are dropping. We used to pay $20 for 1 million tokens two years ago, and today we might pay 10 cents for 1 million tokens. That allows us to do things at speed and at scale that we’ve never been able to do before.
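
To put that price drop in perspective, here’s a quick back-of-the-envelope calculation using the two per-token figures Sean cites. The workload in the sketch (10,000 filings at roughly 8,000 tokens each) is a made-up example, not a LexisNexis number.

```python
# Token prices Sean quotes: roughly $20 per million tokens two years
# ago vs. roughly $0.10 per million tokens today.
PRICE_THEN = 20.00 / 1_000_000  # dollars per token, ~two years ago
PRICE_NOW = 0.10 / 1_000_000    # dollars per token, today

# Hypothetical workload: run 10,000 filings (~8,000 tokens each)
# through a model. The numbers are illustrative only.
documents = 10_000
tokens_per_document = 8_000
total_tokens = documents * tokens_per_document  # 80 million tokens

print(f"Then: ${total_tokens * PRICE_THEN:,.2f}")  # Then: $1,600.00
print(f"Now:  ${total_tokens * PRICE_NOW:,.2f}")   # Now:  $8.00
```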

And there are a lot of things about the law that make these models attractive. Most of the law is language-based, and these models are really great with language problems. The law is precedent-based, and so — 

Well, that’s up for grabs. We’ll come back to it.

I’ll grant you that. You look at the activities that lawyers do: they draft documents, they do research, they summarize things. The models are all really good at these types of things. So, you have this perfect storm, with this technology and the things lawyers do coming together. 

Yet, when people try to use these consumer-grade models, there are all kinds of problems with them. Like you said, it’s not deterministic. You can’t just put information into a computer and get an answer out. If that were the case, we wouldn’t need a court system. These models are just not built for the legal system. You can’t go into court and say, “I found this on the internet.” You have to have authoritative content.

The cut-off date for GPT-4o was 2023, I believe. You need to have information that’s constantly updated. Your audience probably doesn’t know this, but there’s the citator, which traditionally has said, “This is good law” or “It’s not good law, it’s been overturned.” Now, it’ll tell you if it’s the law at all or if some system just made it up. These systems are probabilistic. They want to put together an answer that’s probably right. Well, that’s not the standard we have in legal. You can’t go in with something that’s probably right. So, you have this whole list of issues that these models don’t address.

What we’ve tried to do is address those with a courtroom-grade solution. Our system is backed by 160 billion documents and records. Our curated collection is our grounding data. So you can’t go into court and say, “I found this on the internet,” but you can refer to a specific case. We also have what we call a citator agent that’ll check that case to make sure that it wasn’t fabricated by the system and is actually still good law. You can also look at the case law summary so you know what the case is about. You can look at the headnotes so you can see the particular points of law that were addressed in that case and see if it’s still a valid case.
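
Here’s a minimal sketch of what a citation check in the spirit of the citator agent Sean describes might look like: every case an AI draft cites is checked against a curated collection for existence and current validity. The database, case data, and function names are hypothetical illustrations, not LexisNexis APIs.

```python
from dataclasses import dataclass

@dataclass
class Case:
    citation: str
    summary: str
    overruled: bool  # has this case been overturned?

# Stand-in for a curated, authoritative case collection.
CURATED_CASES = {
    "United States v. Wong Kim Ark, 169 U.S. 649 (1898)": Case(
        citation="United States v. Wong Kim Ark, 169 U.S. 649 (1898)",
        summary="Children born in the US to foreign parents are citizens.",
        overruled=False,
    ),
}

def check_citation(citation: str) -> str:
    """Flag citations that are fabricated or no longer good law."""
    case = CURATED_CASES.get(citation)
    if case is None:
        return "FABRICATED: no such case in the curated collection"
    if case.overruled:
        return "WARNING: case has been overturned"
    return "OK: valid and still good law"

# Every citation in an AI-generated draft gets run through the check.
draft_citations = [
    "United States v. Wong Kim Ark, 169 U.S. 649 (1898)",
    "Smith v. Imaginary Corp., 999 F.3d 123 (2021)",  # hallucinated
]
for c in draft_citations:
    print(c, "->", check_citation(c))
```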

Privacy is another issue. There’s a special relationship that exists between attorneys and their clients in that attorney-client privilege, so there are some privacy requirements that you need to meet in order to maintain it. If you’re using one of these consumer-grade models, you don’t have the level of privacy and security that you need. Transparency is another issue. You put a question in, you get an answer back. Well, based on what? What was the logic that the system used? We open up the black box so you can see the logic that’s being applied. We give the attorneys the ability to go in and actually change that. If this model is getting something wrong, the attorney has the opportunity to change it so that they get the outcome that they’re driving for. But, as you said, the law is not deterministic. There are lots of different factors that go into this, but you need to have a system that’s legally driven, that’s purpose-built for legal situations in order to really operate in a courtroom-type environment.

There are two things I really want to push on. Again, I was not a good lawyer. I don’t want to ever pretend on the show, to you, or to anyone else that I was any good at this. But you learn a particular way of thinking in law school, which is a pretty rigorous, structured way of approaching a problem, going to find the relevant cases and precedents, and then trying to fashion some solution based on that. That feels like we’re just moving words around, but it’s actually a way of thinking. Before AI showed up, using a word processor and thinking a certain way were mashed together. Now, we’re pulling them apart. We’re saying that the computer can move the words around and generate some thinking. So, that’s one thing I want to push on. I’m very curious about that because it feels like the lawyering part of being a lawyer is being subsumed into a system, and that might change how we lawyer.

The other part is if anyone is going to look at the work being done. We’re already seeing lawyers get sanctioned for filing briefs with hallucinated case citations in them. There was just a case where, I believe, a court had to rescind an opinion because it had a hallucinated case citation in it. This is bad. This is just straightforwardly a threat to the system and how we might think about lawyers, judges, and courts. It’s not clear to me that anyone’s going to use the tools as rigorously as you want.

So on the one hand, there’s “We’ve made the thinking easier.” On the other hand, it’s, “Oh, boy, everyone’s going to get really lazy.” They’re both in your answer, which says, “We’re making it easier to look at this stuff. We’re making it faster to do the research.” I’m just wondering where you think the thinking comes in.

I don’t think that these models replace the lawyers. I think they help the lawyer and augment what the lawyer does. So, if you think about an activity that a lawyer might do — let’s say they were preparing for a deposition. They need to come up with a list of questions that they’re going to ask the individual that’s being deposed. You can take the facts around that particular case, load them into a vault, and point the system to that vault and say, “Based on the facts of this particular matter, develop a list of deposition questions.” That’s something that a lawyer would’ve done on their own. In the past, they may have referred to a list of questions that they had previously or something —

Actually, can I just grab that example? Maybe a lawyer would’ve done that, but more often a lawyer would’ve told a bunch of junior associates to sit in the basement and do that. That was how those junior associates learned how to do their job. That’s what I mean. We’re farming out the thinking, and some people might never actually do that thinking. That might change the profession down the line in really substantive ways.

Right. It is an apprentice system. So, if you start to take some of the layers out of the bottom, how does everyone skip the bottom layer and still make it to the second with the same capabilities and skills? That’s a real challenge. I think the systems are allowing lawyers to not have the associate do that work. Now they can say, “Generate me 300 or 700 questions.” It doesn’t take that long to go through 700 questions, and the models never get tired. From our experience, they’ll go through that list of questions and say, “First question? Yep, that’s a good question. I would’ve thought of that.” The system made it a little bit faster, but it didn’t really help them. Second question, same thing. Third question, same thing. Fourth question doesn’t even make any sense, scratch it off the list. With the fifth question they’ll say, “Oh, that’s interesting, I wouldn’t have thought to ask that but that’s probably important, so I’m going to add that to my list.” So, there’s an efficiency component to it, but I think there’s also a better outcome component. 

In terms of the apprenticeship piece, I think people are struggling right now to figure out how that’s going to impact the apprenticeship model. Someone was describing to me that they had worked on a situation where they were looking at securitized assets. When they were an associate, they did this project for a company that had 50 states worth of coverage, and so they became the expert in the firm on asset securitization in all 50 states. For four or five years, anytime somebody had a question, they came to that individual. It was a great way to make a career. Now, the system can do all that for you. So his question was: “How is that ever going to happen now in this new world?”

I think firms are going to struggle with that, but I also think they’re going to figure it out. We tend to get some of the smartest and brightest people going into the legal profession, and so far, they seem to have figured out every challenge that’s faced the industry. I think they’ll figure this one out as well.

What are some solutions you’ve seen as people try to figure this out?

I don’t know that folks have come up with a lot of solutions around the apprenticeship model. What we’re for sure seeing is that people are embracing AI. It’s here, it’s in the courtroom, it’s in the law firm. Two-thirds of attorneys are using AI in their work, according to our surveys, and our survey’s probably a little outdated. I’d say the number’s probably higher. I don’t know about you, but I use AI every day.  It’s now in my personal and work lives. I think the legal profession is perfectly suited for it, so it’s only going to expand.

When you see the lawyers getting sanctioned and the courts having to rescind opinions, is there a solution that involves using LexisNexis so it won’t happen to you? Or do you think that’s a symptom of something else? Everyone’s just using AI, I get it. Probably the biggest split for our audience right now is between the data that says everyone’s using this stuff all the time and the hostility our audience expresses about the tools, their quality, and the fact that a lot of that usage is driven by big companies just putting it in front of them. There’s something happening where, to justify these enormous investments, the tools are showing up whether the consumers are asking for them or not, and then we’re pointing out that everyone’s using the tools.

What I hear from our audience is, “Well, I can’t turn off the AI overview. Of course I’m using the tool because it’s just in front of me all the time. I can’t make Microsoft Office stop telling me that it’s going to help me. It’s just in front of me all the time.” So, when you see the errors being made in the legal system today — the lawyers getting sanctioned, the lazy AI use, the lack of apprenticeship that’s going to impact the entire next generation of lawyers and how rigorous they are —  how do you make your product address that? Or are you just not thinking about that right now?

No, we’re definitely thinking about it, and we’ve incorporated things into our product. These things always make the headlines when they happen, but I think it’s a small percentage of attorneys that are causing these problems. Just taking something and bringing it into court has never been the standard. You’ve always had the responsibility as a lawyer to check the material and make sure that it’s valid before going into court. And some individuals aren’t doing that. We certainly saw that in the [Tim] Burke case where some attorneys submitted a document to the court that I think had eight citations in it and seven of them were just completely — 

But that was inevitable. The day ChatGPT showed up, half of the legal pundits I know were like, “This is inevitable. This outcome will happen,” and then it happened. There wasn’t even a stutter step. It just happened immediately. That’s what I’m trying to push on. Is the solution just that LexisNexis has a tool that’s better and you should pay for it, or is the profession going to have to build some new guardrails as we take the rigor away from the younger associates?

Well, you can never stop an attorney from taking it into court and not doing the proper work. That’s going to continue to happen. I think somebody’s going to lose their license over this at some point. We’re seeing the sanctions start to ratchet up. So, a couple attorneys got fined $5,000 apiece, and then some attorneys in federal court down in Alabama got referred to the state bar association for disciplinary action. I think the stakes are increasing and increasing. What we do with our system is provide a link to a citation if we have it, so you can click on it and see it in our system.

And there’s no fabricated cases within our system. We have a collection mechanism that ensures that every case in there is valid. It’s shepardized and has headnotes and different tools that lawyers can use. So, we make it really easy for you to use our system to check and make sure that the citations you’re bringing into court are not only valid and still good law but are also in the right format. Format is important. We check for all these things and make it really easy for the lawyer to do the work they need to do. They need to make sure that case is on point, that the case is still valid.

One of the many reasons I was a horrible lawyer was because of that moment when you get your first law firm job and you realize your boss just has a library of their favorite motions on file. They’re just going to pull from the card catalog, change some names and dates, and file the motion. The judge will recognize the motion and the attorney, and this is all just a weird formality to get through the next stage of the process. Maybe we’ll never get to the substantive part of the case because we’re just going to settle it, but we need to file this motion we had banked and elaborate on it. This truly was demoralizing. I was like, “I’m just doing paperwork. There’s nothing about this that is real.”

I’m probably describing what every first-year associate goes through until the check hits, and it just didn’t work for me. How close are you to having Lexis AI just do that thing, have it recognize the moment and say, “We have the banked motion and we’re just going to file it into the system”?

Well, we can connect into a document management system (DMS) that has an attorney’s prior motions. We have our vault capabilities, so they can load their motions up. They can still use the motions they’ve already developed. And that’s a perfectly fine way to do things because —

Well, I’m saying, from scratch.

Right. We have the ability to do it from scratch too, but a lot of attorneys don’t want to do it from scratch because they’ve reviewed every single word in that motion and they know that it’s good. If they do it from scratch, then they have to review every single word. But if they want to do it from scratch, we can do that for them today, and if they want, we can use their prior work product as the grounding content to create a new motion, or we can use our authoritative material. They can choose the source and the grounding content.

I guess I’m asking what level of automation is there. So, you’re an attorney. You’ve got a document management system, you’ve got a new client, and you need to file some standardized motion that you always file for whatever thing you need to do, like a continuance. At what point does Lexis [AI] say, “I’m watching this case. I’m going to file this for you. I’m just going to hit the buttons for you. Don’t worry about it,” in the way that a great legal assistant might do?

We’re always going to give the attorney the opportunity. We don’t want to just be doing things on their behalf unsupervised, so we’re going to give them the opportunity. We could get to the point where we say, “It looks like you need a continuance. Here’s a draft of a continuance. Push this, and it will automatically file it.” We’re not at that point today, but if you need a continuance, we could draft it for you. Our vision is that every attorney is going to have their own personalized AI assistant, and it’s going to understand their practice area and their jurisdiction, along with having access to their prior work product. The systems are only as good as the content behind them, so it’s going to have access to our 160 billion documents and records, and it’s going to be able to automate tasks that they do today. If you think about all the different types of attorneys and all the different tasks that they perform, there’s probably 10,000 tasks that could be automated.

So, we’re working with our customers to understand what the most important tasks are, and we’re working with them to automate those tasks today. We have the largest and most robust backlog of projects that we’ve ever had in our company’s history because there are so many things that can still be automated, and we’re working with our customers to do that. If they tell us, “What we really want is for you to automatically file this,” or they want us to provide them with an alert that says, “Hey, this deadline is coming up and you need to file this. Here’s a draft. Do you want to file it?” then I’m sure we can develop it.

We’re not at that point today, but we are in the drafting phase. That vision is not a five-year vision or a three-year vision, that’s available today. That’s Protégé. That’s what Protégé does today. There are tasks that it can do, but we haven’t finished that massive backlog yet.

If you look at the sweep of other CEOs who’ve been on Decoder, they’re going to tell you, “You just integrate our computer vision system and we’ll use [electronic case files] for you to file this motion.” They’ll all be very happy to sell you that product, I’m sure. 

The reason I’m asking it this way is because when I get the consumer AI CEOs on the show, they love to tell me that they’re going to write my emails for me with AI, and then the next sentence they say is, “Then, we’ll sort your inbox with AI.” At some point, the robots are just writing emails to each other and I’m reading summaries. Something very important has been lost in that chain. One of the funniest outcomes of AI is my iPhone suddenly just summarizing emails and generating emails for other iPhones to summarize, and I have no idea what’s going on. 

That’s bad in the legal context. We’re automating document generation to make the case for our clients. On the other side, the judges and clerks might be using these same tools to ingest the cases, summarize them, understand the arguments, and write the opinions that are the outcomes. Culturally, I think it’s important for you to have a point of view on where that should stop because otherwise we are just going to have a fully automated justice system of LLMs talking to each other. Maybe there’ll be some guardrails that other people don’t have, but we’ve taken an enormous amount of humans out of the loop.

I think you have to have the human in the loop.  It’s an important part of the process. I could see the bots going back and forth on things like if someone says, “Hey, can you meet at nine o’clock?” and your system opens up the calendar, says you’re available to meet this person on your high priority list, and sets up the meeting. When you’re talking about substantive legal matters, the stakes are too high. You’re talking about a disabled veteran getting or not getting their benefits. You’re talking about a victim of a natural disaster getting or not getting insurance proceeds. You’re talking about a single mother getting or not getting welfare benefits. These are all legal matters, and they really have a huge impact on people’s lives. The stakes are way too high for bots to be going back and forth and sharing information.

Do you think that clerks and judges should be using AI the same way lawyers should be? That’s where I would draw the line. I think the clerk should be made to read and interpret everything as humans, and the judges should be made to write everything as humans, but it doesn’t seem like that line has been formalized anywhere.

I don’t think a judge should write every line. I think that they could use AI. It’s great at taking the concepts you put in, putting the words around them, and structuring them in an orderly way. I think that there is a component of the work that could be done by AI, but it shouldn’t be a bot talking to a bot. I don’t think it should be fully outsourced to AI. You’ve got a responsibility as a judge, as a law clerk, as a lawyer to review that document and make sure it’s actually saying what you intend it to say. I think most attorneys are using it that way. It will create a great draft, maybe at 80 percent, which allows you to do 20 percent of the work. But that 20 percent is the deep, analytical thought work, the things you actually went to law school to do as opposed to what you were describing earlier. It’s going to allow lawyers to do more of that type of work.

I’m curious to see how different jurisdictions and circuits approach the question of what the judges and clerks should be doing.  I sense that that pressure is going to express itself in different ways across the field. 

Judges are becoming forensic auditors. They’re reviewing this information looking for fake cases. We don’t want them doing that. That should not be their job. I think things do need to change in some of these areas.

Using AI to catch AI is another theme that comes up on Decoder all the time. 

I have utterly forgotten to ask you the Decoder questions. So let me do that, and then I want to zoom out a little bit farther. These are my own questions. You can tell, I’m a little rusty.

I’m looking at the LexisNexis leadership structure, and it’s very complicated. There’s a CEO who’s not you, Mike Walsh, but then you’re the CEO of the US and the UK. There’s a bunch of other VPs everywhere. You’ve got a parent company called RELX. Explain how LexisNexis is structured and how your part fits into it.

RELX is the parent company, and it’s publicly traded. It has four divisions. Legal and Professional is one of those divisions, and its CEO is Mike Walsh. I report to Mike. I’m the CEO of our North America, UK, and Ireland businesses. The way that we’re organized, it’s a matrix. We go to market based on customer segments. So, we have a large law business, a small law business, a corporate legal business, a federal government business, a state and local government business, a Canadian business, and a UK business. 

Then, we have functional groups that support that. So, we have product management, and they’re responsible for our product development roadmap and the product strategy. We have an engineering team, and they take direction from product management but actually build the products. We also have supporting functions: finance, HR, legal, and global operations, which does things like collect content for us. Once you get used to it, it’s not that complicated of a structure. It’s seamlessly integrated, which allows us to get things done quickly and efficiently. And I would say that the whole process is customer driven.

I’m really interested in the structure, particularly the fact that you have the UK, Ireland, and North America. I’m fascinated by corporate structures, and one of the things that strikes me is that you are not in control of the taxonomy of your product, right? These countries’ governments are in control of the taxonomy of their legal systems. The English legal system and the American legal system have commonalities but wildly different structures. The Canadian legal system and the US legal system have wildly different structures. Canada actually has more in common with the UK given their shared history. How do you think about that? Are those different teams? Do they have different database structures? How does all that work?

We do have different teams and different database structures, but we’re actually trying to consolidate to the extent that we can because when we have similar things, we shouldn’t have them marked up differently in different databases. Getting them marked up in a consistent way will allow us to do what we call “extreme reuse,” which is to basically use that same technology we develop in multiple jurisdictions with limited changes to that system. What that allows us to do is really focus on that core system and roll it out quickly, so that everyone across the world gets the benefits of all those changes. But you have civil law in some jurisdictions and common law in others, and the laws are structured in different ways. So, you do have things that make that more challenging, but that’s the general idea behind what we’re trying to do.
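
As a rough illustration of that “extreme reuse” idea, here’s a sketch of one core pipeline that gets parameterized with per-jurisdiction settings rather than rebuilt from scratch. The pipeline stages and config format are invented for the example; only the citation styles and legal-system labels are the real ones for each country.

```python
# One core pipeline, reused everywhere; only local settings change.
CORE_PIPELINE = ["retrieve", "rank", "summarize", "draft"]

# Per-jurisdiction configuration layered on top of the shared core.
JURISDICTION_CONFIG = {
    "US": {"legal_system": "common law", "citation_style": "Bluebook"},
    "UK": {"legal_system": "common law", "citation_style": "OSCOLA"},
    "CA": {"legal_system": "common law (bijural with Quebec civil law)",
           "citation_style": "McGill Guide"},
}

def build_pipeline(jurisdiction: str) -> dict:
    """Reuse the same core stages; swap in the local settings."""
    config = JURISDICTION_CONFIG[jurisdiction]
    return {"stages": CORE_PIPELINE, **config}

print(build_pipeline("UK"))
# {'stages': ['retrieve', 'rank', 'summarize', 'draft'],
#  'legal_system': 'common law', 'citation_style': 'OSCOLA'}
```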

Can you apply the same AI systems to these different legal systems in the same way, or are you actively localizing them differently?

I would say that we actively localize them, but we try to minimize the amount of work that we do because a lot of it can be done in a similar way.

Generally, there’s a lot of concern about American legal precedents traveling across the ocean, particularly in the UK. You can see the American culture war gets exported and shows up in a lot of different ways. Do you think your tool will make that better or worse? If you’re not pulling them apart and are actually trying to minimize the differences, you might see repeat arguments or repeat structures just based on the way the AI works.

Each one is based on the content of the individual jurisdiction. So, we don’t mix the content, but we do try to utilize the same technology. For example, there’s search relevance technology to find the case that’s most closely associated with the matter that someone is working on. We can take that and build it for the US market or the UK market, and then we can move it to another market and it will work pretty well. Then, we need to do some modifications to make it work really well for that particular jurisdiction. We get 80 percent of the DNA transferred over in that core model.

I was recently talking to Mike Krieger, who is the chief product officer of Anthropic — just a totally different conversation on a different thing — but he said this thing to me, which is stuck in my mind. He said, “I recognize Claude, I can see Claude’s writing.” He said, “That’s my boy,” which is cute. Does your AI have a personality? Can I recognize its writing in all these different jurisdictions?

We use a multi-model approach, so it’s probably a little less clear which particular model drove something. I think that was probably true a year and a half ago, but now, with agentic AI, things have really changed. When someone puts in a query… let’s say they wanted to draft a document. Maybe a client is sending in a request and she’s interested in a premises liability issue around the duty to inform a trespasser about a dangerous condition on a piece of land. The query will go into a planning agent, which will then allocate that query out to other agents.

It needs to do some deep research, so maybe it uses OpenAI o3 because it’s really good at deep research. At the end, it needs to draft a document, so maybe it uses Claude 3 Opus, which is really good at drafting. We’re model agnostic, and we’ll use whatever model is best in a particular task. So, the result you get back was actually potentially done by multiple different models, which probably makes it a little bit harder just to see if it was drafted by OpenAI.
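
Here’s a simplified sketch of that planner-agent routing: a query is decomposed into tasks, and each task goes to whichever model is best at it. The model names are the ones Sean mentions; the routing table and functions are hypothetical.

```python
# Hypothetical routing table: task type -> best-suited model.
TASK_TO_MODEL = {
    "deep_research": "openai-o3",   # "really good at deep research"
    "drafting": "claude-3-opus",    # "really good at drafting"
}

def plan(query: str) -> list[str]:
    """Planner agent: decompose a query into an ordered task list."""
    # A real planner would use an LLM; hard-coded here for clarity.
    return ["deep_research", "drafting"]

def run_task(task: str, query: str) -> str:
    """Dispatch one task to the model assigned to it."""
    model = TASK_TO_MODEL[task]
    # A real system would call the model's API here.
    return f"[{model}] output for {task!r}"

query = ("Draft a memo on a landowner's duty to warn trespassers "
         "about a dangerous condition on the property.")
for task in plan(query):
    print(run_task(task, query))
```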

Is that reflected in your structure? You describe engineering, product, and your localization, but you’ve got to build that agentic orchestration layer and decide which models are best for each purpose. You could design an engineering organization around that problem specifically. Is that how you’ve done it or is that done differently?

We have an engineering team that focuses on the planner agent and the assignment of the tasks to different agents.

Is that where the bulk of your investment is or is it paying the token fees?

I haven’t actually broken it out that way, so I couldn’t tell you. The token fees are certainly an important part of the investment. Engineering is a huge portion of the investment. The attorneys that we hire to review the output and tell us if it’s good or not good are a massive piece of the investment. So, it’s spread out over many different things, but we’ve certainly spent a lot of money on that particular issue.

Tell me about those attorneys. You hire attorneys to basically do document review of the AI? Are they very senior attorneys? Are they moonlighting from big firms? Are they a bunch of junior associates in a basement?

It’s based on the task. What we try to do is get attorneys that have experience in a particular matter. So, if we’re looking at documents related to a mergers and acquisitions transaction, we want those to be looked at by someone who has some experience in mergers and acquisitions. They can tell us that the document looks great, or tell us if it’s missing particular things. Then, we can go back and say, “Why did we miss those particular things and what changes do we need to make to how we’re training and directing these models to correct that situation going forward?”

What’s the biggest thing you’ve learned from that process?

The biggest thing I’ve learned is how important it is to have attorneys doing that work. I expected to hire a lot of technical people and data scientists to do this work. I didn’t really expect to hire an army of attorneys. But I think one of the secret sauce components of our solution is that our outputs are attorney reviewed. That’s how we keep getting more relevant results.

What were you best at to start with, and what were you worst at?

We weren’t really good at anything to begin with, and I think we’re building things out. Sometimes it’s a practice area, sometimes it’s a task. If you look at all the different tasks attorneys do that we were talking about earlier, in many cases the task’s output is some sort of a document. So, we’re really focused right now on how to improve our document drafting.

Is all this revenue positive yet? Are you making money on all this investment or do you see that on the horizon?

Our growth rate has definitely accelerated as a result of this. The main thing that we’re focused on is the customer outcome. What we’re seeing is that the customers are getting happier and happier with the solution, so I would say that it’s been very successful in that regard. It’s the fastest growing product that we’ve ever had.

Growing fast but losing money with every query is bad, right?

We’re not there. We’re not losing money with every query.

Are you breaking even or are you making money?

Our profit is growing.

Specifically on AI tools, or overall?

Most of our investment is in AI tools.

Let me take the last bit here and zoom out even more broadly. I mentioned that I would bring up precedent again in this conversation. 

I think if you’re paying attention to the legal system of America right now, you know that it’s pretty much in a state of pure upheaval. You’ve got district court judges calling out the Supreme Court, which is not a thing that usually happens. You have a Supreme Court that is overturning precedents in a way that makes me feel like I learned nothing in law school. Chevron deference is out the door. Humphrey’s Executor, the precedent that keeps the president from firing FTC commissioners, is, I’m guessing, out the door. Roe v. Wade was out the door. Just these foundational precedents of American law, out the door.

A lot of that is based on what conservative judges would call originalism. I have a lot of feelings about originalism, but a big trend inside of originalism is using AI, or what they call “corpus linguistics,” to determine what people meant in the past. Then, you take the AI and you say, “Well, it did the job for me. This is the answer.” Are you worried that your tools will be used for that kind of effort? Because it really puts a lot of pressure on the AI tool to understand a lot of things.

I’m not that worried. I don’t think the Supreme Court is asking LexisNexis what we think it should do.

But certainly courts up and down the chain are.

They’re asking legal questions, they’re getting answers back, and then they’re interpreting those answers. We are providing them with the raw content that they need to make the determinations, but we’re not practicing law. We’re not making those decisions for them.

I’m going to spring this on you, but here it is: John Bush is a Trump-appointed judge. He cited the emergence of corpus linguistics in the legal field, and he said, “To do originalism, I must undertake the highly laborious and time-consuming process of sifting through all this. But what if AI were employed to do all the review of the hits and compile statistics on word meaning and usage? If the AI could be trusted, that would make the job much easier.”

That is him saying, “I can outsource originalist thinking to an AI.” This is a trend. I see this particularly with the originalist judges, that the job they think they’re meant to do is determine what a word meant in the past. And AI is great at being like, “Statistically, this is what that word meant in the past, and we’re going to outsource some legal reasoning.”

This is, I think, very odd. My thoughts about originalism and stare decisis in America in 2025 aside, saying, “I will use an AI to reach into the past and determine this meaning” seems very odd. I’m wondering how you feel about your tool being used in that way.

I definitely understand your point there. I think about the analogy of a brick. You can use a brick to build a hospital and take care of sick children, or you could take a brick and throw it through a window. One use is really great and another is pretty negative, but in either case, it’s a brick. I think about our tool as being neither good nor bad. I think it could be used for good. I think it could be used for any type of activity that attorneys [need]. I wouldn’t want to say originalism is a bad thing. I think it could be used for many different things. I think it could be used for originalism. I think it could be helpful for those who want to take that path and find a new way of looking at something.

We have all the data. They can search it, they can use the tool to find things it wasn’t possible to find in the past. So, I could see them using our tool in that way. I guess it’s up to the attorneys to determine how they’re going to use the product. We’re not building it because we’re trying to change the law. We’re building it because we’re trying to help attorneys do the tasks that they want to do.

But I look at the sweep of the tech industry — not the legal industry, but the tech industry — over the past 15, 20 years, and boy, have I heard that answer many, many times. The social media companies all said, “Well, you can use it for good or evil. We’re neutral platforms.” It turns out maybe they should have thought of some of those harms earlier.

Look at the AI companies today. Who knows if training on copyrighted work is allowed? Actually, we know the answer. You can’t just opt out of copyright law. Now, we’re going to do the lawsuits and we’ll see what happens. Who knows if OpenAI doing Sora, which is TikTok for deepfakes, is okay — actually, we know. We know the answer is, you should have some guardrails.

So, I’m posing you the same question. We see a particularly originalist judiciary hell-bent on using originalism to change precedent at alarming rates. I would say it’s alarming for me because I paid for a law degree that I now think is useless, but that’s why it’s alarming to me. It’s alarming because a lot of people have had their rights taken away as well. Every day this is happening. And one of the ways they’re going to do that is to defer to an AI decision engine. They’re going to say, “We asked the AI, ‘What did “all people” mean when the 14th Amendment was drafted?’” and this will be how we get to a birthright citizenship case. I’m just connecting this to the conversation we had at the beginning. We’re going to give our reasoning to a computer in a way that it’s not necessarily accountable for, and we’re going to trust the computer. The methods of thinking and that rigor might go away. 

So I’ve heard the answer that the tool is neutral from tech companies for years, and I’ve seen the outcomes. I’m asking you. You’re building a tech product for lawyers, and they’re already using it in this specific way. I’m wondering if you’ve thought about the guardrails.

We operate under responsible AI principles, and that includes a number of things. One, we always try to consider the real-world implications of any product we develop. We want to make sure that there’s transparency in terms of how our product works. We open up the black box so people can see the logic that we’re using, and they can actually go in and change it if they want. So, we want to make sure that there’s transparency and there’s control. 

We always incorporate human oversight into product development. Privacy and security is another one of our core tenets in responsible AI creation. Another thing we’ve incorporated is the prevention of bias introduction. So, those are the RELX principles for AI development, and we adhere to those. We want to create products that do good things for the world.

If you asked Lexis AI if the 14th Amendment guarantees birthright citizenship to all people born in the United States, will it make the argument that it doesn’t?

I’ve never asked it that question. I can’t tell you.

Do you have your phone on you? There’s a mobile app.

I could pop up here and ask it, I suppose. Let me pop into Protégé here. “Does the 14th Amendment guarantee birthright citizenship or are there exceptions?” Let’s see.

It’s generating a response, so we can come back to it in a minute.

I’m very curious to see what it says, because up until recently there’s only been one answer to that question. Now, the Trump administration is saying, “Nope, actually, that’s not what ‘subject to the jurisdiction thereof’ means.” In order to win at the Supreme Court, they will have to construct an originalist argument to that question, and I am confident that the way they’re going to do that is by feeding a bunch of data into an AI model and saying, “This is what was actually meant at the time of the 14th Amendment’s drafting.” That’s a thing that AI will be used for that is very destructive.

I’m not an attorney, so I’m just going to read the answer here:

“The 14th Amendment of the United States Constitution guarantees birthright citizenship to all persons born or naturalized in the United States, and subject to its jurisdiction. The phrase ‘subject to its jurisdiction’ has been interpreted to include nearly all individuals born on US soil with a few narrow exceptions. These exceptions include foreign diplomats, children of foreign diplomats, children of enemy forces in hostile occupation, children born on foreign public ships, and, historically, children of members of Native American tribes who owed allegiance to their tribe rather than the United States.”

It goes on.

You should send that to [Chief Justice] John Roberts right now. Can Protégé do that? Because that’s the answer. 

The question is, are a bunch of conservative influencers going to say Protégé is woke now? This is the culture war that you’re in.

It does recognize that “recent cases have affirmed this interpretation rejecting attempts to expand the exceptions of birthright citizenship,” so it does also recognize that there have been efforts to interpret it differently. The answer goes on quite a bit.

The reason I ask that question very specifically is because Reconstruction is up for grabs in a very real way in this case. Do you think you have a responsibility as the tool maker? That’s really the question for so many AI companies. You’re the tool maker. Do you have a responsibility to not deepfake real people? Do you have a responsibility to not show people fake ideas? I think you were very clear on that, you have a responsibility to not hallucinate, but here you have —

We don’t want to introduce or perpetuate any bias that might exist either. To do that, we rely on the law, as opposed to a consumer-grade model that probably just uses news articles, which might have a very different interpretation of things depending on the articles. Biases are much more likely to be introduced by news articles than by black-letter law, for example.

The reason I’m curious about that is because there’s a spectrum. I don’t think there’s any place for telling people what they can do with Microsoft Word running locally on their laptop. Do what you’ve got to do. Telling people what they can do with a consumer-grade AI tool built into Facebook? I think Facebook has a lot of responsibility there, especially because the opportunity to distribute that content far and wide is at their fingertips.

That’s a big opportunity spectrum, and here in the middle there’s these AI companies. Do you have the obligation to say, “Well, if you want to go make the argument that birthright citizenship doesn’t protect everyone in the United States, you’ve got to do that on your own. Our robot’s not going to help you.” Do you feel any of that pressure?

We try not to get into politics or any of that debate.

I do not think that’s politics.

We’re trying to develop a system that does not have bias introduced into it, that will give you the facts, and attorneys can do the work that attorneys do to make those important decisions. Our job is to give them the information that they need: the precedents, the facts, all the information that they need to then develop their argument, whatever that might be. But we really don’t get into any of the politics of birthright citizenship being guaranteed or not.

Well, at some point you do. This is — again, to bring us back to where we started, I first encountered LexisNexis as a database of cases and some case notes. There were some law professors who were very proud that their case notes were in LexisNexis when I was in law school. Now we’re drafting a little bit, going to go do the research. Now we have an agentic AI that’s making the arguments. Maybe one day we will automate all the way to filing. You’re taking on more of the burden. You are making the arguments. The company is making the arguments. Where is the line? Because there are lots of lawyers who wouldn’t take that case, who wouldn’t make that argument. Is there a line for you?

I would say our approach is to arm the attorneys with the best possible information, and help them with the drafting of those documents. We’re really just being led by our customers and what they’re asking us to do. We certainly are not trying to interpret the law. We’re not trying to shape the legal system. We’re not lawyers. We’re not trying to do the work of lawyers. We’re trying to help lawyers do the work they do in a more efficient way and, hopefully, help them drive better outcomes. 

But it’s always their prerogative to interpret the information that we provide, which is what lawyers do. That’s what they’re great at. The reason we have cases is because there are people on both sides. The two individuals are going to make opposite arguments, and we want to support both of those attorneys as best we can.

I get it when you’re the database of cases. I get it when you’re the word processor. I get it when you’re the specialized word processor or the case management platform. The thing that I’m pushing on repeatedly here is if the AI system is actually doing the work, do you feel like you have different guardrails?

I think our responsibility is to develop AI in a responsible way.

Give me an example of something you wouldn’t let your AI do — an argument that you wouldn’t let your AI make, or a motion that you wouldn’t let your AI draft.

I don’t know that we would want to necessarily restrict the AI in that way. We’re referring back to the information that we have, which is our authoritative collection of documents and materials that helps lawyers understand what the facts are, what the precedent is, and what the background is, so they can do the real, deep legal work and make those trade-off decisions, judgment decisions, the important things that, again, attorneys went to law school to do.

I think these questions are going to come up over and over again. We should have you back to answer them as you learn more. As you look out over the horizon — the next two or three years — what’s the next set of capabilities you see for LexisNexis, and what do you think the pressures are that might change how you make some of those decisions?

It’s hard to say exactly what the main thing that changes our path going forward will look like because if I look back two years ago, I would’ve never guessed we’d be doing what we’re doing today. The technology didn’t exist, or it was too expensive to implement. That’s totally changed over the last two years, and I think over the next two years, it’s going to change again. So, it’s really hard to say where we’re going to go. Our vision remains the same, which is that we want to help attorneys. We want to provide them with a personalized, AI-powered product that understands their practice area and their jurisdiction. It has access to our authoritative set of materials and their prior work product. It understands their preferences, it understands their style, it understands what they’re trying to do, and it can automate tasks that they do today manually.

We will continue to take that latest available technology, show it to our customers, and have them help us understand how we can use that technology to serve them in more modern and relevant ways. That’s really what’s going to guide our roadmap in the future.

Sean, this was great. Let me know when you develop a system that can actually navigate an electronic case filing website because some of the smartest people I know can’t do that. But this was great. We’ve got to have you back soon. Thank you so much.

Thank you so much. I really enjoyed our time today. Take care.

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
