Will generative AI significantly impact the work of barristers? Graham Denholm investigates what it might mean for the Bar
Since the public release of ChatGPT by OpenAI in November 2022, debate about the impact that artificial intelligence (AI) will have on the professions has taken on a new level of urgency. This article explores the ways AI and, in particular, ‘generative AI’, might impact the Bar. It represents a snapshot of developments, opportunities and risks as they present in July 2023. The technology is developing so quickly that the position may be very different a year from now.
AI has been deployed in the legal sector for many years. Machine learning is built into research tools such as LexisNexis and Westlaw. It is deployed in e-discovery and litigation analysis platforms. It is also an increasingly prevalent feature in the cases barristers work on, from finance and IP to medicine, criminal justice and contractual disputes.
ChatGPT and similar platforms are public-facing large language models (LLMs – see box, Some core concepts). Users ‘prompt’ them with instructions or queries in natural language and they generate text in response. They can produce answers to questions, create content, summarise or translate text and produce computer code. Mainstream adoption of LLMs occurred with startling rapidity. ChatGPT reached 100 million active users within two months of launching. Google followed suit with Bard in March 2023. Microsoft – a major investor in OpenAI – incorporated OpenAI’s technology into a chat function in its Bing search engine. It has also developed Microsoft 365 Copilot which brings generative AI directly into Word, Excel, PowerPoint, Outlook and Teams.
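For readers curious about what this looks like under the bonnet: interacting with an LLM programmatically amounts to sending natural-language text and receiving generated text back. The minimal Python sketch below assumes the OpenAI library as it stood in mid-2023; the model name, prompt and API key are illustrative only.

```python
# A minimal sketch of prompting an LLM via the OpenAI Python library
# (as at mid-2023); model name, prompt and API key are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Summarise the doctrine of promissory estoppel "
                   "in two sentences.",
    }],
)

# The generated text comes back as an ordinary string.
print(response.choices[0].message.content)
```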
The most immediate issue with public-facing LLMs in their current form is their tendency to ‘hallucinate’: to generate plausible but false outputs. Few readers will be unaware of the unfortunate US lawyer who, relying on his ‘consultation’ with ChatGPT, cited no fewer than six non-existent authorities in a court filing. An affidavit acknowledging the error explained that ChatGPT, having provided the citations, had also ‘assured the reliability’ of the information (a claim starkly borne out by the exhibited chat transcripts).
Understanding of the legal risks faced by providers and users of generative AI products will evolve over time, and the legal landscape will change as efforts to regulate the sector gather pace. Already, litigation is afoot alleging misuse of copyrighted data in the training of AI models and defamation in the output of an LLM. Reports of generative AI-powered chatbots producing harmful responses also raise obvious questions about potential future liabilities.
In a May 2023 interview, Jeff Pfeifer, Chief Product Officer at LexisNexis, described generative AI as ‘probably the single biggest technology development that seems perfectly aligned with law firms and the way law firms work,’ citing the ‘conversational opportunity for a human to interact with a service,’ and its creation of efficiencies in drafting. Pfeifer envisages significant benefits in productivity, an ambition that is clearly shared across the corporate legal sector, with very high levels of investment already reported. If the technology works as envisaged, the attraction – and the disruptive potential – for large law firms is obvious. But what of the Bar?
The business model of the self-employed Bar does not lend itself to technical innovation. Widespread adoption of AI tools at the Bar may be some way off, but the potential scale of impending change is such that AI cannot be ignored. I examine below five areas where AI could significantly impact the work of barristers.
The main legal research platforms have long deployed AI in their search functionalities. The holy grail, however, is a generative AI product that can respond to natural language research queries with accurate, cogent, fully referenced narrative responses with little or no human supervision. The approach companies appear to be adopting in pursuit of this end is to train LLMs on closed data sets (reported cases, statute databases, commentary, textbooks, and so on, as well as, on a case-by-case basis, firms’ internal knowledge bases). Bold claims are being made for the capabilities of such systems, including their ability to avoid hallucinations by cross-checking outputs against verified sources. Thomson Reuters (Westlaw) and LexisNexis both have generative AI products in the pipeline.
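In outline (and at the risk of oversimplifying), the ‘closed data set’ approach amounts to a retrieval-grounded pattern: fetch passages from a verified corpus first, then require the model to answer only from those passages, citing them. The Python sketch below illustrates the general pattern only; the corpus and the `search_verified_corpus` and `generate` functions are hypothetical stand-ins, not any vendor’s actual implementation.

```python
# A hypothetical sketch of the retrieval-grounded pattern vendors describe;
# the corpus, retrieval and generation steps are toy stand-ins, not any
# real product's API.
from dataclasses import dataclass

@dataclass
class Passage:
    citation: str
    text: str

# Toy 'verified corpus'; a real system would index reported cases,
# statutes, commentary and textbooks.
CORPUS = [Passage("[placeholder citation]", "[placeholder passage text]")]

def search_verified_corpus(query: str, top_k: int = 5) -> list[Passage]:
    # Placeholder retrieval; a real system would use semantic search
    # over an indexed corpus.
    return CORPUS[:top_k]

def generate(prompt: str) -> str:
    # Placeholder for a call to an LLM.
    return "[model output]"

def answer_legal_query(query: str) -> str:
    passages = search_verified_corpus(query)
    # Confining the model to retrieved, citable material is the core of
    # the claimed guard against hallucination.
    prompt = (
        "Answer using ONLY the sources below, citing each source relied "
        "on. If the sources do not answer the question, say so.\n\n"
        + "\n\n".join(f"[{p.citation}] {p.text}" for p in passages)
        + f"\n\nQuestion: {query}"
    )
    return generate(prompt)
```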
Westlaw is introducing ‘Ask Westlaw’ into its Precision offering (US only) which, it is said, will provide a ‘conversational and nuanced response’ to legal queries together with supporting authority, respond to follow-up questions, and allow questions to be put to source documents (such as asking for a summary of what a chapter in a textbook says about a particular issue). Thomson Reuters is also partnering with Microsoft to integrate its legal products into Microsoft 365 Copilot, allowing assisted drafting and research from within Microsoft Word (UK launch plans, if any, unknown).
LexisNexis, for its part, is developing generative AI functionality for its products: Lexis+ AI has launched in the US, with a UK release likely to follow. Lexis+ AI allows legal queries to be asked in natural language, eliciting a narrative response with links to supporting authority and the ability to ask follow-up questions.
If these and other AI research platforms perform as their makers hope, and assuming UK rollouts in due course, the potential impact on the civil Bar in particular could be significant, both as a source of efficiencies and as a threat to existing work.
Contract drafting is an obvious use case for AI. LLMs trained on generic and individual firms’ precedents are intended to automate the contract drafting process. This may not have a wide impact on the Bar. However, should the reach of these tools extend to drafting pleadings, skeleton arguments, and so on, the position could be very different. These are more complex tasks than assembling a contract from a library of standard form provisions – the need to assimilate case-specific factual and legal elements requires a significantly greater level of ‘understanding.’ Successful automation cannot be assumed. Freedom from hallucinations will also be critical; otherwise, the need for supervision may negate any potential benefits. Nevertheless, if these hurdles are overcome, this use case too could significantly impact the Bar.
The use of the AI document review and analysis platform Luminance by the defence team in a murder trial in 2022 attracted some media attention.* Luminance can analyse large document sets and offers thematic searching, identification of relevant documents, chronological reordering and identification of duplicates, amongst other tools. Orianne Auger, Head of Discovery and Commercial Director at Luminance, told me that, although originally launched as a tool for corporate law firms, the technology has since been used by barristers in a range of settings including clinical negligence, commercial litigation and arbitration, as well as crime. Such document review tools have the potential to significantly streamline case preparation.
Litigation analytics tools deploy machine learning to analyse constituent elements of the litigation process – how a judge has ruled over time, how a particular advocate has fared in past cases, the conduct of an opposing party in past litigation, etc – to generate insights that can inform strategic litigation decisions. Some platforms utilise such data to give predictions on the outcome of litigation.
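At root, outcome prediction of this kind is a supervised classification problem: historical cases, described by features such as a judge’s past rulings, are used to fit a model that outputs a probability for a new case. The Python sketch below is illustrative only; the features and data are invented, and real platforms draw on far richer inputs.

```python
# An illustrative (toy) outcome-prediction model; features and data are
# invented, and real litigation-analytics platforms use far richer inputs.
from sklearn.linear_model import LogisticRegression

# Each row describes a past case: [judge's historical claimant-win rate,
# opponent's past settlement rate, claim value in units of £100k].
X = [
    [0.62, 0.40, 1.5],
    [0.30, 0.75, 0.8],
    [0.55, 0.20, 3.0],
    [0.70, 0.10, 2.2],
]
y = [1, 0, 1, 1]  # 1 = claimant won, 0 = claimant lost

model = LogisticRegression().fit(X, y)

# Estimated probability of a claimant win in a new case.
new_case = [[0.58, 0.35, 1.0]]
print(model.predict_proba(new_case)[0][1])
```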
Use of this technology by funding bodies, insurers and other gatekeepers to the justice system is easy to envisage, so potential risks from inbuilt inaccuracies and biases are particularly acute. Widespread adoption could significantly impact the Bar, given that advising on outcomes is one of our core functions, and our knowledge of judges and opponents a critical element of the added value we can bring to a case.
Finally, there is the more inchoate but potentially significant prospect of engaging with LLMs to refine and develop arguments and chains of reasoning. One barrister I spoke with, an enthusiastic early adopter of AI tools, described using LLMs as ‘rationality machines’ to analyse and develop legal points.
It is beyond the scope of this article to address in any detail the myriad ways generative AI might impact the justice system. Nevertheless, barristers should be aware of some key areas of opportunity and risk.
Advocates of the benefits of this technology suggest that AI could improve access to justice by drastically reducing the cost of advice, by automating processes that currently require a lawyer’s input and by facilitating dispute resolution.
But there is a darker side to the encroachment of automation into the justice system. Susie Alegre, a human rights lawyer and author of Freedom to Think, sees significant risks in the ease with which LLMs produce untrue ‘facts’ and in the way bias in the data sets used to train AI systems can become embedded in those systems. In Alegre’s view, the belief that human supervision will mitigate these risks is misplaced: the UK’s Horizon scandal provides a stark illustration of humans failing to prevent serial injustices flowing from the failure of automated systems.
Louise Hooper, a barrister and expert in AI ethics, sees benefit in the automation of administrative legal tasks, but risk in the over-hasty adoption of these technologies for substantive legal work, particularly in areas of practice involving issues of personal sensitivity such as family law. When it comes to the application of AI to substantive legal issues, she questions the ability of AI systems to embed fairness in their operation.
There is no specific regulation of lawyers’ use of AI in the UK at present. Deployment and experimentation take place within existing regulatory frameworks. The Bar Standards Board have confirmed that AI is ‘on [their] radar.’ The Bar Council’s Information and Technology Panel is developing guidance on the use of ChatGPT and generative AI.
Use of AI engages issues across the Core Duties in the Code of Conduct, from our duties to the court and our clients, acting with honesty and integrity, and maintaining independence, to maintaining public confidence, protecting confidentiality, providing a competent standard of work and the duty not to discriminate. The use of generative AI raises stark issues around accuracy, data protection, client confidentiality and IP rights.** Its encroachment into the work of the Bar may be unavoidable but users must be alert to the wide range of potential risks.
The technologies discussed above are in their infancy and the power and capacities of AI systems are growing exponentially. There can be little doubt that AI will impact the working lives of barristers, how we do our work, what work we do, the nature of the disputes we are engaged in, and perhaps even how many of us there are. How quickly and how fundamentally those changes will be felt remains to be seen.