Collaborative healthcare industry regulation can unlock the potential of AI to greatly improve administrative and clinical aspects of the medical field.
Artificial intelligence (AI) is changing nearly every aspect of life, from education to lifestyle management, and the healthcare industry is no exception. According to a New England Journal of Medicine (NEJM) AI study, the US Food and Drug Administration (FDA) has approved more than 500 medical AI devices, and the authors of a JAMA article note that medical AI experts foresee a new phase of healthcare in which advanced AI capabilities deliver substantial benefits to the medical community. For example, one article in the journal Nature describes a new AI system that provides sepsis warnings. Another Nature article covers an AI cardiac function assessment tool whose findings match those of human imaging technicians.
According to the authors of a Production and Operations Management article, the healthcare industry can work to secure reliable AI models by promoting the four pillars of bringing AI into healthcare productivity: physician buy-in, patient acceptance, professional investment, and payer support. An American Journal of Ophthalmology article notes, for instance, the importance of ensuring patient well-being and considering bioethical principles. If physicians and patients lack confidence in the health outcomes AI delivers and in how AI navigates bioethical questions, neither group is likely to serve as a pillar of support for AI models.
To begin with, medical AI warrants solid regulatory processes that guide innovators and help safeguard patients. In an effort to retool regulatory frameworks to accommodate new AI innovations, the FDA introduced the Software as a Medical Device pathway. While the pathway is a step forward in incorporating medical AI into the field, it is restricted to software that supports or replaces a physician's clinical work. In other words, the pathway does not cover software that performs other essential tasks, such as providing administrative support or serving as an electronic health record. If regulators adopted a risk-based regulatory framework, they could distinguish between administrative-task tools (lower risk) and clinical-task tools (higher risk) and concentrate their limited resources on higher-risk scenarios.
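To make the risk-based distinction concrete, the sketch below shows one hypothetical way an intake process could tag submitted AI tools by intended use and route only the higher-risk clinical tools to in-depth review. The categories, tier names, and the review_tier function are illustrative assumptions, not part of any existing FDA pathway.

```python
# A minimal sketch, assuming a hypothetical risk-based intake process:
# tools are tagged by intended use, and only higher-risk clinical tools
# are routed to full review. Category and tier names are illustrative.
from dataclasses import dataclass

ADMINISTRATIVE_USES = {"scheduling", "billing", "documentation", "record_keeping"}
CLINICAL_USES = {"diagnosis", "triage", "treatment_recommendation", "monitoring"}

@dataclass
class AiToolSubmission:
    name: str
    intended_use: str

def review_tier(tool: AiToolSubmission) -> str:
    """Assign a review tier based on intended use (lower risk = lighter review)."""
    if tool.intended_use in CLINICAL_USES:
        return "full_review"          # higher risk: concentrate reviewer resources here
    if tool.intended_use in ADMINISTRATIVE_USES:
        return "streamlined_review"   # lower risk: lighter-touch oversight
    return "manual_classification"    # unknown use: escalate to a human reviewer

print(review_tier(AiToolSubmission("sepsis_alert", "monitoring")))   # full_review
print(review_tier(AiToolSubmission("claims_coder", "billing")))      # streamlined_review
```

The point of the sketch is simply that an explicit, auditable rule of this kind lets a regulator spend most of its attention where patient risk is greatest.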
According to an NEJM article, another crucial element of implementing cutting-edge AI technologies in the medical field is recognizing the need for oversight within the healthcare industry. For example, dataset shift (when an AI system's performance degrades because the data it encounters in deployment differs from the data it was developed on) can lead to anything from small discrepancies to dire outcomes. One way to address this issue is through health AI assurance labs, public-private partnerships created to ensure the fair, safe, and effective deployment of AI.
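As an illustration of what monitoring for dataset shift can look like in practice, the sketch below compares the distribution of each input feature seen in deployment against the distribution the model was trained on and flags features that have drifted. The feature names, threshold, and simulated data are hypothetical, and the two-sample Kolmogorov-Smirnov test is only one of several drift checks an assurance lab might run.

```python
# A minimal sketch of dataset-shift monitoring: compare each feature's
# live (deployment) distribution against its training distribution and
# flag significant differences. Data and feature names are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

def detect_dataset_shift(train_features: np.ndarray,
                         live_features: np.ndarray,
                         feature_names: list[str],
                         p_threshold: float = 0.01) -> list[str]:
    """Return names of features whose live distribution differs
    significantly from the training distribution (two-sample KS test)."""
    drifted = []
    for i, name in enumerate(feature_names):
        _, p_value = ks_2samp(train_features[:, i], live_features[:, i])
        if p_value < p_threshold:
            drifted.append(name)
    return drifted

# Simulate a shift in one vital-sign feature (illustrative numbers only).
rng = np.random.default_rng(0)
train = rng.normal(loc=[98.6, 80.0], scale=[0.7, 10.0], size=(5000, 2))
live = rng.normal(loc=[98.6, 95.0], scale=[0.7, 10.0], size=(1000, 2))  # heart rate drifted
print(detect_dataset_shift(train, live, ["temperature_f", "heart_rate_bpm"]))
# -> ['heart_rate_bpm']
```

A check like this does not fix dataset shift on its own, but it gives developers and oversight bodies an early, quantitative signal that a deployed model may be operating outside the conditions it was built for.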
If members of the healthcare industry collaborate in their efforts to create safe and effective regulation, medical AI has the potential to significantly improve the administrative and clinical aspects of the medical field.