The transactional, manual prior authorization (PA) process, which sometimes still involves that long-anticipated relic of the past, the fax machine, creates needless impasses for healthcare providers, payors, and patients. To address this issue, the healthcare sector has invested in technology capable of integrating with electronic systems. The benefit of such technology is clear, but it remains unclear whether patients should have a say in how their personal health information (PHI) is communicated during the PA process.
In its current administrative form, the PA process is correlated with lowered quality of practice management and patient care. The American Medical Association (AMA) defines PA as a requirement that providers obtain approval from health insurance payors before performing medical services for patients. According to a 2022 AMA survey, 88% of 1,001 providers associated PA with a considerable burden on their practices, and 89% reported that PA has a negative effect on clinical outcomes. The process is also time-consuming: providers surveyed reported dedicating an estimated average of two days per week to PA.
According to Terri Gatchel-Schmidt, vice president of consulting at SYNERGEN Health, one solution for alleviating the PA burden is robotic process automation (RPA). RPA, as a MuleSoft blog post notes, is technology that uses software bots to handle repetitious, laborious manual tasks, particularly in workstreams essential to an enterprise's operation. For example, RPA can centralize otherwise fragmented workflows, decrease human error, and increase tactical efficiency and revenue operations. RPA alone may still require manual work at certain stages of PA, which is one reason the healthcare arena stands to benefit from technology combining RPA with artificial intelligence (AI). Put the two together, and the PA process might just be ripe to become hands-off.
Healthcare entities that integrate AI/RPA for PA must ensure the technology adheres to HIPAA. The same applies to companies that sell AI/RPA technology. The HIPAA Privacy Rule requires the protection of PHI, including by health plans and in electronic data transmission. In this way, patients can expect that any AI/RPA technology providers and payors use to handle sensitive information is HIPAA-compliant. Since HIPAA already requires healthcare entities to undertake protective measures, patients' input on how their data are transmitted for PA may be considered irrelevant.
However, data breaches give reason for concern. In 2021, cyber attackers stole more than 1.1 million patient records. According to a Healthcare Dive article, the healthcare industry’s overall cybersecurity infrastructure is lagging, a partial byproduct of resource constraints. Amid the healthcare ecosystem’s growing vulnerability to breaches, ChatGPT mania spurs two types of investment in AI: from backers of AI companies and hackers of healthcare organizations. If the healthcare industry’s cybersecurity measures are not enough to safeguard against the proliferation and sophistication of cyberattacks, then entities adopting AI/RPA technology for PA should consider patient approval for the communication of their personal data.
Then again, maybe not. The HIPAA Privacy Rule doesn't specify provisions for protecting PHI entered into AI technologies. What the rule does stipulate is that healthcare entities are not obligated to obtain patient consent for interoperability and the electronic exchange of PHI. Certain states and entities, however, have adopted bills, policies, regulations, or statutes requiring opt-in or opt-out consent from patients. The George Washington University Milken Institute School of Public Health compiled a list of those states and entities, but its last update was in 2016.
Furthermore, while not a law but no less of a standard, the fourth principle of the AMA's Principles of Medical Ethics states, "A physician shall respect the rights of patients, colleagues, and other health professionals, and shall safeguard patient privacy within the constraints of the law." While the HIPAA Privacy Rule holds that forgoing patient consent is legal, that doesn't mean it's ethical, especially when, as a HealthITSecurity article notes, HIPAA is outdated.
Even in the absence of compliance regulations that have evolved to meet the times, providers and payors can still consider obtaining patient consent for the use of AI/RPA technology in PA. Already, providers require written authorization to take and post photos of patients on social media. If patient consent for PA is unrealistic because those who opt out would revert to manual hassles, with the correlated fiscal and qualitative strains on practices and patients, then some form of notice might be a worthy option to pursue. Providers could notify patients that their practice uses AI/RPA technology for PA and explain its implications. Such a measure could serve to strengthen trust and accountability between providers and patients.