Will Diagnostic Artificial Intelligence Ever Become a Real Thing?

October 10, 2022

The desire to improve diagnostic accuracy, efficiency, and safety is always on the minds of clinicians. Suddenly, artificial intelligence (AI) arrives, and everyone believes we have found nirvana and a solution to all our problems. Unfortunately, despite multiple peaks of excitement, AI has still failed to meet our expectations.

When we delve into the reasons behind diagnostic accuracy, it's crucial first to examine how clinicians come to a diagnosis. For instance, one issue that may impact accuracy is the reality that we learn from experts. During our educational training, we rely on those training us to have extensive experience in what they are teaching, experience they most likely gained from the teachers who taught them. Unfortunately, we can't measure (with data) the degree of their expertise, as their diagnostic ability has not been quantified against benchmarks. Thus, we may propagate a less-than-ideal model of diagnostic prowess, especially when our learning comes from a single trainer or a limited number of trainers.

Another way we learn is by example. When working with AI, this training method requires a definite known answer: for AI to be reproducible and accurate, significant amounts of reliable data must be available for machine learning, and the machine must be proficient in detecting subtleties that humans would miss. Since such data sets do not exist, we must rely on human-designated labels, which brings the true correctness of any diagnosis into question, because once again we are relying on a human to provide the answer. On the flip side, there is the question of clinicians trusting a machine to obtain accurate diagnostic information. The machine is not at risk, but the clinician is, and ultimately the patient is, if there are errors. Unfortunately, this line of thinking can prevent us from moving forward; nonetheless, it is a natural and emotional response.
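To make the "learning by example" point concrete, here is a minimal, purely illustrative sketch of supervised learning (Python with scikit-learn on synthetic data; every feature, label, and number is a hypothetical placeholder, not a clinical dataset). The point it demonstrates is the one above: the model can only learn to agree with the human-provided labels, so any noise or bias in those labels is inherited wholesale.

```python
# Minimal sketch of supervised "learning by example": the model is only as
# good as the human-provided labels it is trained on. Features and labels
# here are synthetic placeholders, not real clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "cases": each row is a patient, each column a measured finding.
X = rng.normal(size=(500, 4))

# The "ground truth" is itself a human-designated label, so any noise or
# bias in the labelers is baked into everything the model learns.
human_labels = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, human_labels, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# "Accuracy" here only measures agreement with the human labels,
# not agreement with the underlying true diagnosis.
print("agreement with human labels:", accuracy_score(y_test, model.predict(X_test)))
```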

Learning from experience is the third path, and reinforcement learning demonstrates how ability can improve through it. Over time, the more cases one sees and diagnoses wrongly, the more one will eventually interpret correctly. For AI, a game-playing approach can lead to improved ability. But the key to reinforcement learning is that the best results depend on simulating cases and conducting essentially unlimited experiments with clear outcomes. Once again, that is an extremely tenuous proposition when dealing with human lives and decision-making, especially when contemplating a machine-generated aid that requires experimentation before it becomes accurate.
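As a rough illustration of why reinforcement learning leans so heavily on simulation, consider the toy sketch below (the environment, states, and rewards are all hypothetical stand-ins, not a clinical simulator). The agent improves only because it can run tens of thousands of simulated cases, each with an immediate, unambiguous right-or-wrong outcome; that is exactly the experimentation we cannot afford on real patients.

```python
# Toy sketch of "learning by simulated experience": a reinforcement-learning
# agent improves only because it can replay unlimited simulated cases with a
# clear right/wrong outcome. The environment is a deliberately artificial
# stand-in, not a clinical simulator.
import random

random.seed(0)

ACTIONS = [0, 1]           # two candidate diagnoses
STATES = [0, 1, 2]         # discretized "test result" observed by the agent
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate

# Q[state][action]: learned estimate of how good each diagnosis is per finding.
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}

def simulate_case():
    """One simulated patient: a hidden condition plus a noisy observable finding."""
    condition = random.choice([0, 1])
    if random.random() < 0.8:
        finding = condition * 2   # readings 0 and 2 usually point to the condition
    else:
        finding = 1               # ambiguous reading
    return condition, finding

for episode in range(20_000):
    condition, state = simulate_case()
    # Epsilon-greedy: mostly exploit current knowledge, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[state][a])
    reward = 1.0 if action == condition else -1.0   # clear, immediate outcome
    # One-step Q-learning update (no future state in this single-step task).
    Q[state][action] += ALPHA * (reward - Q[state][action])

for s in STATES:
    print(f"finding={s}: preferred diagnosis {max(ACTIONS, key=lambda a: Q[s][a])}")
```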

The diagnosis process requires human interactions, judgments, and social systems. Even when carried out with the utmost understanding by well-experienced, highly trained clinicians, no present system is correct one hundred percent of the time. Therefore, when we consider utilizing AI, we tend to hit a wall. AI requires significant information over time to create a "standard," or correct answer. Unfortunately, since the diagnosis process does not let us know whether a diagnosis is genuinely accurate, AI learning from all of that information will not lead to a known truth.

We know machine learning can be an aid, but it is fraught with the same issues that all clinicians have encountered. Hence, we must first overcome our natural tendency to unabashedly assume that the diagnosis we proclaim is correct. We must further understand that we need help to improve what is best for those we serve. Finally, we must not view AI as a replacement tool; instead, we must view it as an enhancer of our abilities, one that helps us think of diagnoses we might have missed and, when necessary, obtain an effective second opinion in real time.