Algorithms in medicine have shown many potential benefits to both doctors and patients, but regulating them is a difficult task. Clearer guidelines from the FDA, among other measures, could specify the requirements algorithms must meet and could lead to an uptick in clinically deployed algorithms. A proper understanding of algorithms’ limitations, together with adequate knowledge of clinical data on the part of programmers, is key to creating algorithms usable in the medical setting. Here, Dr. Joel Arun Sursas discusses the regulatory and algorithmic limitations of implementing AI in medicine.
The Future is Now
It seems the future may already be here: a world where a patient might see their doctor on a computer before seeing them in person. Thanks to advances in artificial intelligence, it seems we are moving into a Brave New World of medicine.
But we are not quite there yet.
You might be wondering whether a robot doctor will deliver your baby in the near future. But despite the advancement of technology, regulatory concerns still need to be addressed before AI is fully integrated and welcomed into the healthcare world.
Open-mindedness and patience will be critical to AI’s future in the healthcare industry. People’s fears tend to make them panic over the idea of AI entering their lives, especially in the medical field, where physical health is at stake. However, AI algorithms are designed to streamline data processing and make it easier for your doctor to take care of you, not to replace your doctor altogether.
AI’s Role in Healthcare
We know that AI can help with more menial tasks in a healthcare setting, but it’s hard to imagine automated surgeries and other complex medical procedures completed by AI.
Advanced surgeries can involve quick thinking, adjustments, and constant monitoring of a patient’s variable bodily functions. Until AI can keep up with these demands and make decisions on the fly, it is unlikely to see any surgical action for quite some time.
This is one area where algorithm limitations come into play. If the FDA releases complete and universal guidelines and requirements, the result could be more clinically deployed algorithms.
In addition to facing many obstacles to FDA approval, AI algorithms could also struggle to earn patients’ trust and acceptance. Since AI is a complicated concept to grasp, patients who do not clearly understand it, or how it can benefit their care, may not be willing to accept its help.
Correct decisions made by AI are a function of the quality and structure of the input data. If any of that data is wrong or misleading, the algorithm can return misleading results.
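This "garbage in, garbage out" effect can be illustrated with a deliberately tiny sketch. The classifier, the invented "risk score" values, and the mistyped record below are all hypothetical, purely for demonstration; real clinical models are far more complex, but the failure mode is the same:

```python
# Toy illustration (not a real clinical model): a nearest-mean
# classifier is fit once on clean records and once on records
# containing a single data-entry error. All numbers are invented.

def fit_threshold(records):
    """Fit a midpoint threshold between the two class means."""
    pos = [x for x, label in records if label == 1]
    neg = [x for x, label in records if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(records, threshold):
    """Fraction of records the rule (score >= threshold -> 1) gets right."""
    correct = sum((x >= threshold) == bool(label) for x, label in records)
    return correct / len(records)

# Clean data: (risk score, true label).
clean = [(2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (8, 1)]
t_clean = fit_threshold(clean)   # midpoint of means 3 and 7 -> 5.0

# Same patients, but one score mistyped at entry: 6 became 60.
dirty = [(2, 0), (3, 0), (4, 0), (60, 1), (7, 1), (8, 1)]
t_dirty = fit_threshold(dirty)   # pulled up to 14.0 by the bad value

print(accuracy(clean, t_clean))  # the clean fit classifies everyone correctly
print(accuracy(clean, t_dirty))  # the corrupted fit misses half the patients
```

A single bad record shifts the learned threshold enough to misclassify patients whose data was recorded correctly, which is why the error may not surface until the model is already in use.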
This creates a precarious situation: algorithm creators may not discover that the data is incorrect until it is too late, after their algorithm has already contributed to a medical error. That could cost not only patient well-being but also a great deal of money for hospitals facing potential lawsuits.
While human error is also a variable in the medical field, patients seem more bothered by a machine’s potential mistakes than by those of a traditional human doctor.
A proper understanding of algorithms’ limitations by people working in healthcare, and adequate knowledge of clinical data by programmers, is vital to creating reliable algorithms that can be used in the clinic. How long it will take to reach this understanding is unknown, and it will largely depend on the timing of regulatory measures set forth by the FDA.
We are still a long way from algorithms operating independently in clinics, especially with no clear guidance from the FDA or other federal agencies.
For the public to accept algorithms supplementing doctors in different tasks, we need to define the qualities an algorithm must have to be deemed accurate. We also need to address potential sources of error in an algorithm’s decision-making process and be transparent about where an algorithm thrives and where it fails.
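One concrete way to go beyond a single "accuracy" number is to report how an algorithm performs on sick and healthy patients separately, using sensitivity and specificity. The sketch below uses invented counts from a hypothetical screening program, purely to show the kind of transparency the text calls for:

```python
# Minimal sketch of transparent performance reporting: sensitivity
# (sick patients caught) and specificity (healthy patients cleared)
# alongside overall accuracy. All counts are hypothetical.

def confusion_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, and accuracy from a confusion matrix."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall correct fraction
    return sensitivity, specificity, accuracy

# Hypothetical screening of 1000 patients, 50 of whom are truly ill.
sens, spec, acc = confusion_metrics(tp=40, fp=95, tn=855, fn=10)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```

In this invented example the overall accuracy is close to 90%, yet one in five truly ill patients is missed. Reporting all three numbers makes that trade-off visible, whereas a single accuracy figure hides it.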
The challenges are distinct and will take a lot to overcome. Still, the effort is worthwhile if it increases medicine’s accuracy and efficiency as we rely more on telemedicine and virtual visits in the wake of the COVID-19 pandemic.
About Joel Arun Sursas:
Joel Arun Sursas holds a Bachelor's Degree in Medicine and a Bachelor's Degree in Surgery from the National University of Singapore, and is continuing his education with a Certificate in Safety, Quality, Informatics and Leadership from Harvard Medical School and a Master's in Applied Health Science Informatics from Johns Hopkins University (both expected in 2020). His technical skills include SPSS, RevMan, and Python. Dr. Joel Arun Sursas' most recent engagement is with Biorithm, a medical-device start-up where he serves as Head of Clinical Affairs, working to take fetal surveillance out of the hospital and into the home and to revolutionize obstetric practice globally.