Should healthcare practitioners be liable where treatment assisted by AI goes wrong? Robert Kellar KC explores the ways in which AI is reshaping the practice of medicine and clinical negligence law
Artificial intelligence promises to revolutionise the way in which healthcare is provided. In the future, the exercise of clinical judgement will no longer be the exclusive preserve of human beings. Indeed, this trend is already well under way. Among many other uses, AI is already being developed to predict cancer from mammograms, to monitor skin moles for signs of disease and to perform invasive surgery autonomously.
The disruption caused by these innovations is unlikely to be confined to the practice of medicine. It will also affect the practice of medical law. Some of the possible implications for clinical negligence litigation are discussed below.
At what point will healthcare providers have a positive duty to use AI to provide care? It may be argued in future that the advantages of using AI solutions are so stark that it is irresponsible, or even illogical, not to use them. Alternatively, from the perspective of informed consent, it might be argued that patients should (at least) be counselled as to the benefits of deploying AI for diagnostic or surgical purposes. The strength of these arguments will turn upon the speed of uptake of AI in different areas of medicine. Guidance given by organisations like NICE and the Royal Colleges is also likely to be important.
From the perspective of healthcare defendants, the imposition of a duty to use AI might well raise resourcing issues. New technology can be expensive. Difficult judgements will need to be made about the allocation of limited budgets to maximum advantage. It may be argued that decisions whether or not to allocate scarce resources to AI are not justiciable.
An unspoken assumption of the law of negligence is that people are fallible. To be human is to err. Therefore, perfect clinical judgement is not expected from doctors. Liability is imposed only where a doctor has failed to take ‘reasonable care’. On the other hand, the promise of AI is that machines do not suffer from the same imperfections. Their memory and processing speed are far superior. They do not experience workplace fatigue. They can perceive patterns in data that would be invisible to the human eye or mind. This raises some interesting questions about the standard of care to be expected.
What happens if, for example, an AI system makes recommendations that would not be supported by a responsible body of practitioners? At first blush the answer looks simple: guidance from a computer is not a substitute for the exercise of clinical judgement. A doctor should always make decisions that accord with a ‘responsible body’ of practitioners.
However, early adopters might argue that the very purpose of AI is to expose flaws in conventional medical wisdom. For example, by analysing big data, AI might identify concerning patterns in breast imaging that most radiologists would not consider anomalous. Or it might identify drug combinations for cancer that leading oncologists would consider counterintuitive. One could argue that it would be wrong to deprive patients of the benefits of such insights. Perhaps the answer is that patients should be given the right to choose whether to accept virtual or corporeal advice. At some point, the courts will have to grapple with this tension between conventional wisdom and computing power.
A related question is whether AI will change the way in which the standard of care itself is conceptualised. To be defensible at common law, medical practice must align with a ‘responsible body of medical practitioners’ (Bolam) and be capable of withstanding ‘logical analysis’ (Bolitho). It is not easy to apply that test to guidance provided by a non-human intelligence whose reasoning may not be readily intelligible to human beings.
Should healthcare practitioners be liable where treatment assisted by AI goes wrong? On a conventional analysis, the answer to that question looks straightforward. AI could be viewed as just another technological tool used by doctors to deliver treatment. Alternatively, where AI assists in the decision-making process, one could draw an analogy with a consultant who is supervising recommendations made by a junior doctor. However, these analogies may not be apposite for two reasons.
First, AI-enabled systems might in future provide treatment autonomously. For example, a team at Johns Hopkins University has developed a robot that has performed laparoscopic surgery on the tissue of a pig without the guiding hand of a human: the Smart Tissue Autonomous Robot (STAR).
Second, there is the ‘black box’ problem. We see the input and the output, but what happens in between can be a mystery. It may be impossible for a human to understand in real time why an AI system is making any particular decision or recommendation. This may be due to the amount and complexity of the data being processed, the speed of processing, or the fact that AI does not use natural human language to parse data.
Accordingly, the courts might well conclude that it is unfair to make individual healthcare practitioners responsible for the real time operation of AI. Putting the matter another way, the use of AI may be considered more analogous to making a referral to a specialist than to overseeing a junior doctor.
If individual practitioners are not responsible, the courts may look for ways to make healthcare institutions liable. There could be different ways to achieve this. One alternative would be to impose a conventional duty of care to ensure that ‘equipment’ is functioning properly. Such duties might include duties to audit, test and maintain AI systems in line with standards imposed by manufacturers and regulatory bodies such as the Medicines and Healthcare products Regulatory Agency (MHRA).
Alternatively, Parliament might consider it necessary to impose a form of strict liability on healthcare providers for harms caused by medical AI. This is because reliance upon pure fault-based liability places an unreasonable burden upon claimants. Where matters go wrong, the ‘black box’ problem makes it very difficult for a claimant to pinpoint how an error has arisen and who (if anybody) might be responsible for it.
The policy justification for imposing strict liability is arguably similar to that for imposing vicarious liability. Employers already bear the financial risk of harm inherent in the provision of healthcare by their employees. Similarly, healthcare institutions should bear the financial risk of harm from deploying AI to provide medical care. Institutions are also better able to insure against the risk of harm where AI goes wrong.
If strict liability is not imposed, the need to establish fault will surely disrupt the way in which lawyers litigate clinical negligence claims. Errors could arise from the conduct of a wide range of actors for a wide range of reasons. Those responsible for mishaps might include software developers, data inputters, manufacturers, maintenance engineers and clinical technicians. The requirements for disclosure and expert evidence in such claims would bear little resemblance to those in a conventional clinical negligence claim.
Such claims are also likely to require solicitors, barristers, and judges with new kinds of expertise. As AI forges new frontiers in healthcare, it is also likely to reshape the contours of clinical negligence law. Like the medical profession, the legal profession and judiciary will need to prepare and adapt.