As we continue to embrace technology and machine learning to improve care, it is essential to address issues as they arise and advocate for adjustments that allow us to evolve. One such area that requires further scrutiny is physician liability and artificial intelligence (AI). AI is by no means flawless: it relies on machine learning from existing and often incomplete data, and its algorithms may be faulty; nevertheless, it may also prove superior to current modalities.
Generally, to avoid malpractice liability, physicians must provide care at the level of a competent physician within the same specialty, taking into account available resources. But how does AI change this calculus? Does it modify the definition of the standard of care, that is, what others in the specialty would do? Is it itself considered an available resource?
If AI suggests a different approach to treatment, following it may mean a physician is no longer adhering to the standard of care. If no adverse outcome occurs, there is no liability. But what if the new approach reduces the chance of a negative outcome, yet one occurs anyway? Had the health care provider followed the conventional path, the same result might well have occurred; now that a deviation has taken place, is the provider responsible, or the technology?
This situation has the potential to discourage the use of AI, which in turn limits the machine learning needed to improve future outcomes. Unfortunately, problems can arise in the opposite direction as well. What if a provider does not use AI, although the prominent and respected academic center in her community does? Is the standard of care now different? To be clear, this conundrum is not new; historically, any progress has required deviating from the status quo.
In a litigious environment, we become paralyzed and progress is stifled. We must solve for these unintended consequences of rules designed to protect patient rights. The answer may lie in rules specific to AI, or it may require tackling the underlying complexity of tort reform. Regardless of the solution, it is imperative to be aware of these uncharted waters. We must also examine outcomes achieved with AI using scientific methodology, so we can learn whether it enhances care, and thus becomes the standard, or adds no value. Let us not lose sight of patient protections, yet also avoid allowing our legal system to prevent enhancements in care.