Can you sue for artificial intelligence medical malpractice?
The simple answer is yes. And, no.
Medicine is a science. But while some pure sciences can produce near-perfect results, medicine remains imprecise because human error enters the mix. In an effort to reduce that error, many medical facilities are beginning to integrate artificial intelligence into diagnosing and treating medical conditions.
Medical errors are considered the third-leading cause of death in the US, while in the UK, one out of every six patients experiences a medical misdiagnosis. When a doctor makes a medical error, you might be able to sue for malpractice. But what happens if artificial intelligence makes a mistake?
Artificial intelligence is expanding what is medically possible. More and more, health care facilities are using AI to identify abnormalities in imaging, such as head CT scans, or to review health records and pull relevant, actionable information from a patient’s file faster and more accurately than their human counterparts could.
What Is Artificial Intelligence (AI)?
There are many definitions of the term “artificial intelligence” (AI), but John McCarthy of Stanford University defined it in a way that is arguably the most fitting and one of the most widely accepted. He described AI as the science and engineering of making intelligent machines and computer programs.
AI is a catch-all term for machines and programs capable of performing tasks that usually require human intelligence or intervention. In its simplest form, AI is a problem-solving collection of systems that includes data sets, machine learning, algorithms, and deductive reasoning.
AI might seem like something out of a Tom Cruise movie, but from Siri to self-driving cars, you don’t have to look hard to find instances of AI in everyday life. Common examples include:
- The airline industry has been using AI autopilot programs for decades
- Your spam filter uses AI to sort incoming emails into appropriate folders
- Banks and financial institutions use AI for mobile check deposits, allowing users to take a photo of a check to be cashed
- Social media giants such as Facebook, Twitter, and Instagram use AI to offer suggestions of people or groups to follow or join
AI and Healthcare
AI is poised to significantly reduce mistakes made in health care and can already be found in wide use worldwide.
Oxford’s John Radcliffe Hospital has an AI system capable of outperforming cardiologists in examining heart scans to diagnose a heart attack, allowing patients to receive a diagnosis earlier than ever before.
Chinese ophthalmologists are using AI to diagnose a rare eye condition responsible for roughly 10% of childhood blindness with accuracy comparable to human doctors. Stanford researchers are also using AI to diagnose specific lung cancers.
AI is also proving beneficial in various other medical and diagnostic areas, including ALS, dementia, musculoskeletal injuries, cardiovascular issues, cancer, dermatology, telehealth, and even acting as virtual nursing assistants.
What Happens When AI Does Not Work Correctly?
The impact of an AI failure varies with how the AI is being used.
If an autopilot fails to operate correctly, the pilots continually monitoring its functions can quickly make any necessary corrections, often without passengers noticing the inconvenience.
If your bank’s AI malfunctions, bank tellers or other authorized employees can step in and manually process the deposit.
If AI makes a mistake suggesting a new person to follow on your favorite social media platform, unfollowing is an easy remedy that is little more than a nuisance.
But if AI makes a diagnostic error or fails to perform a task accurately, the consequences for a patient’s life can be devastating.
Can AI Be Held Liable for Medical Malpractice?
While the potential for AI is awe-inspiring, it will fail to hit the mark now and then. And when there is an error, who is the responsible party for potential artificial intelligence malpractice?
While everyone seems to agree that patient safety is a priority, AI’s accountability is somewhat ambiguous. When AI is used as a decision aid to complement doctors in diagnosis, not replace them, liability remains focused on the health care provider who used it.
If diagnosticians, such as radiologists, use AI to aid diagnosis by highlighting abnormalities on scans, they would likely still be accountable for the final interpretation. If, for example, a radiologist failed to catch an abnormality such as cancer or pneumonia on a patient’s image or scan, you could potentially be eligible to file a medical misdiagnosis malpractice claim.
Diagnosticians who disagree with their AI assistant might find themselves with increased liability. If the hospital’s AI highlights a lung nodule on a chest radiograph that the radiologist overlooks and fails to note in their report, the patient could receive a misdiagnosis. The diagnostician could potentially be liable both for overlooking the cancer and for ignoring the AI’s interpretation of the imaging.
Currently, the implications for physicians and the health care systems that employ them fall into three areas:
- Physicians could be liable for failing to evaluate AI recommendations; and if they do follow the AI recommendations but fail to meet the required standard of care, they could be liable as well.
- Physicians could face liability for choosing to implement an inappropriate, flawed, or malfunctioning AI system in their practice.
- Physicians could be liable if they work for or consult with AI developers and errors are found in the core algorithms.
However, in all cases, medical malpractice requires the claimant to prove that a licensed physician’s deviation from the standard of care resulted in injury. To date, AI has not been licensed to practice medicine, which adds another question mark to the scope of its liability. Courts have habitually considered it impossible to hold machines legally liable, since they are not legal persons.
At Grover Lewis Johnson, we have built our medical malpractice team on 25 years of malpractice experience and a single, focused mission rooted in compassion.
If you have experienced an injury you believe was caused by artificial intelligence medical malpractice, call us today and schedule a free consultation.