
Medicine depends on advancing technology. It helps cure diseases, develop vaccines and deliver other life-saving measures that keep us healthy. The most exciting addition to the healthcare sphere is Artificial Intelligence, otherwise known as AI. AI is essentially a machine's brain: it can be programmed to follow an algorithm, perform certain tasks and interact with humans. You probably even have one in your home if you own a Google or Alexa device, and if you have a smartphone, it's in there as well. 

The possibilities of AI in medical science are virtually unlimited, testing boundaries that were impossible to breach before. It can catch an abnormality in a CT scan, pull the most vital information from electronic health records, lead to better patient understanding and so much more. But as it is a new technology with no real precedent, what happens when AI malfunctions and results in serious injury to the patient? Can an artificial brain lead to medical malpractice, and if so, can it be sued? 

Total Lack of Precedent Means Uncertainty

One example discusses a radiology scenario. A hospital decides to use AI instead of human radiologists to read an x-ray because it's cheaper and more efficient. The AI performs the task without incident but fails to detect the presence of pneumonia. The patient goes into septic shock and cannot be revived. Who is liable in this scenario?

The answer is that we don’t really know yet. No major cases on the subject have been decided, so there is no precedent. Figuring out liability will surely take many cases and much legislation over time. Liability could fall on the hospital, the physician or the AI’s creator – or a combination of these – depending on the specifics of the case. 

But AI is only a machine being operated by a medical professional, and a machine cannot be sued for medical malpractice. It is a tool used by a person, not a person itself, and therefore is not licensed to practice medicine. If the physician’s actions meet the definition of malpractice, the tools he used are irrelevant. If his reliance on those tools causes him to be careless or fall beneath the standard of care, it’s still malpractice, because patient care is ultimately his responsibility – not the AI’s, at least not yet. As algorithms learn and grow more intelligent, they become less likely to make mistakes, and AI may eventually replace certain technicians or physicians entirely. If that happens, it could shield human physicians from liability. Only time will tell. 

You could sue the manufacturer of the AI tool in a product liability action, especially if the AI’s failure to function correctly directly led to your harm. Still, a court would have to decide whether the fault lies in the software or in the product itself, and you generally cannot sue over intangible software. This may change as our world adapts to more and more AI usage. 

Who is Regulating AI in Healthcare?

The answer to this, currently, is no one. The U.S. Food and Drug Administration (FDA) will likely play a part if and when regulation kicks in. Still, AI may be developed entirely within a hospital system and therefore fall outside federal regulation. And because AI technology is both brand new and moving into everyday applications far faster than anticipated, exactly how to ensure its quality is still unclear. The FDA’s normal regulation process for medical devices doesn’t accommodate AI.  

Certain FDA-approved devices that use AI are already on the market, but the algorithms themselves have not yet been regulated. When they are, the FDA could categorize them either as medical devices or as software as a medical device, and the regulations will differ accordingly. In 2019 the FDA began drafting risk-based recommendations for clinical decision support software. Its focus will be on how explainable an algorithm’s process is, so users understand what the algorithms are doing and can adapt appropriately as they learn.

Also in 2019, the FDA released a paper describing one possible approach in which manufacturers would commit to transparency on pre-market algorithm development, continual performance monitoring and regular updates about any implemented changes. Being able to track AI programs from creation onward would go a long way toward advancing helpful technology while keeping patients safe.  

There’s no doubt that artificial intelligence will be an integral part of the future of healthcare. Whether it’s involved in your care or not, you deserve justice if you or a loved one is a victim of medical malpractice. D’Arcy Johnson Day has seasoned attorneys who have been helping patient victims for many years. Give us a call at 866-327-2952, or get in touch with us online, and we can help you determine the best legal options.
