Sensitivity required: many law firms are restricting their use of publicly available AI tools such as ChatGPT © Alamy Stock Photo

The release of ChatGPT in November 2022 sparked excitement in the legal sector about how generative AI could make a range of tasks — such as drafting contracts and reviewing case law — more efficient.

But, while the technology can create large passages of humanlike text and review or refine existing documents, its limitations have so far delayed widespread adoption. In particular, the risk of a chatbot producing false information or reproducing potentially copyrighted material has led legal professionals to treat the technology with caution.

Following OpenAI’s success with ChatGPT, big tech companies, including Google and Microsoft (which also backs OpenAI), have launched their own chatbots. Small specialist start-ups, such as Harvey and Robin AI, are joining them in developing products catering to the legal sector. Deal data provider PitchBook estimates that the legal software market will reach $12.4bn this year and increase by around 5 per cent annually.

“We have seen all of these companies spring up,” says Kerry Westland, head of innovation and legal tech at law firm Addleshaw Goddard. “It is a really interesting space that has felt like it’s moved so quickly in the last 10 months. We’ve not been able to keep up, but we’re still very much in that learning phase.”

Addleshaw Goddard has reviewed artificial intelligence offerings from more than 70 companies and has selected eight for pilot projects within the firm, including legal tech software and other AI solutions.

As part of the pilot, lawyers at the firm can use generative AI to review documents and extract specific details, such as clauses, from existing records or to translate complicated contracts into plain English.

However, Westland says there are some flaws with the technology, which can offer up different answers each time a request is made or be overly verbose in its response.

This is part of a broader problem with generative AI that the sector is grappling with: its tendency to “hallucinate”, presenting invented statements or references as if they were fact.

In June, two lawyers and a law firm were fined in the US after cases cited in a legal brief were found to be invented by ChatGPT.

Another concern is what happens to information inputted into AI systems, such as sensitive client data. Addleshaw Goddard allows its lawyers to use ChatGPT, but not with confidential information, while UK law firm Travers Smith has blocked the tool entirely.

“There was a term in the API [application programming interface] that said data put into the system would be used to improve and develop the services,” says Shawn Curran, director of legal technology at Travers Smith. “We recognised the risk that somebody could put something really sensitive into that model.”

Although lawyers at the firm are still experimenting with the technology through its open-source YCNBot, which uses Microsoft and OpenAI’s enterprise software, the firm does not currently apply AI to client data or to any client-related tasks. Instead, it is testing the technology on contract review and fictitious litigation disputes. “We call it the journey to ‘safer’ use, not safe use, because it’s an ever-evolving journey, and there’s lots of risk,” Curran says.

Rather than “generative” applications of the technology, Curran foresees more value in “extractive” uses: for example, asking a tool to review the emails involved in a particular dispute and pull out any content relevant to a case that might be used in a legal argument.

Such applications will be brought to clients over the next year, he adds.
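
By way of illustration only — this is not YCNBot or any vendor’s product, and the model name, prompt wording and sample emails are all assumptions — a minimal sketch of such an extractive review using the OpenAI Python SDK might look like this:

# Illustrative sketch of an "extractive" review pass over dispute emails.
# The model name, prompt wording and sample emails are assumptions for this
# example; they do not reflect any firm's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

emails = [
    "From: supplier@example.com | We cannot meet the 30 June delivery date...",
    "From: ops@example.com | Confirming the revised completion schedule...",
]

question = "Which passages are relevant to the alleged delay in delivery?"

# Ask the model to extract relevant passages rather than generate new text.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    temperature=0,  # reduce variation between repeated requests
    messages=[
        {"role": "system",
         "content": ("You are a litigation review assistant. Quote only "
                     "passages that appear verbatim in the emails provided; "
                     "if nothing is relevant, say so. Do not add anything.")},
        {"role": "user",
         "content": question + "\n\n" + "\n---\n".join(emails)},
    ],
)

print(response.choices[0].message.content)

Constraining the model to quote verbatim text and setting the temperature to zero address two of the flaws described above, namely answers that vary between runs and invented material presented as fact, although neither fully removes the risk of hallucination.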

Using this technology “should add to efficiency, quality or work pleasure but only internally”, Sijmen Vrolijk, IT director at law firm NautaDutilh, says. The firm prohibits using publicly available AI for client or internal work and is establishing set rules for its use.

Vrolijk believes that, while AI will change how lawyers work, fears of its adoption leading to job losses are overhyped. “Gen AI has created an enormous buzz, which I now think is going away a bit,” he adds. “I don’t see mind-blowing technology with generative AI.”

Even so, lawyers keen to test legally focused generative AI software are finding it difficult to procure. “I have never been in a place where you have to beg vendors to show you their tools or work with them,” says Westland. “You go on wait lists, or they only want to work with five or six firms to get the product working.”

Assessing the value of the contracts is another concern because “these tools are not cheap”, Westland notes. “Everyone talks about the time-based model in law,” she adds.

“A very interesting question that we’re going to have with clients, and they’re asking us, is ‘how do you value legal work?’ And, if I can get something to you in three days rather than three weeks, is that more valuable if you’ve got an urgent deal and a deadline?”

 
Copyright The Financial Times Limited 2024. All rights reserved.