According to many forecasts, AI is likely to take over 20-50% of jobs over the next decade or two, and significantly disrupt almost every other job. Some specialties lie at the cutting edge of this transformation, while others will perhaps see only small changes in the short term. Some white-collar jobs and specialties will become unrecognizable or vanish entirely, while others could soar by putting AI tools in the hands of specialists.
Artificial Intelligence (AI) is probably going to disrupt many occupations and tasks, but the media often sketches unlikely or fanciful scenarios that leave us more mystified than informed. Sometimes this is because the articles offer vague details that are disconnected from the flourish of the headline. Often, a headline saying something like “Machines Replace Radiologists” sits above details that seem to have little to do with the real processes involved. In this blog I will rub the crystal ball on the likely disruption to my own job and, from that, perhaps describe something more down-to-earth and practical, and hopefully give you a sense of where this might apply in your own specialty.
For several years, my job has entailed a lot of interviewing. Collecting stakeholder experiences and insights is a core part of monitoring & evaluation (M&E) and quality improvement, and interviews are a rich source of qualitative data. Qualitative data helps us to understand how a new healthcare policy, technology, or workflow is panning out, and what opportunities exist for improvement. My interview participants vary significantly: depending on the project, I may interview janitorial staff for one, Utilization Management nurses for another, and hospital pentad or program office directors for a third. It might seem, therefore, that my job is somewhat immune to AI encroachment.
However, although there is a lot of variation in the participants and subjects, the process, the structure, and much of the content from my side are somewhat repetitive – and anything repetitive is fair game for AI.
For example, I always start by creating a new project in my Qualitative Data Analysis (QDA) tool (I use MAXQDA 2018 Analytics Pro Portable [VERBI Software, 2017]), starting a logbook, and identifying candidate participants and extracting their contact details. I pretty much always contact the candidates by email to introduce myself and invite them to participate in interviews, along with an explanation of the project.
The contact process is pretty repetitive, and could be handled by an AI. In fact, it would be heaven to offload interview scheduling to a machine. From experience, I have found it best to offer three interview slots upfront in the introduction email, but to allow the participant to counteroffer one of their own, all while avoiding double-booking myself or stacking too many back-to-back interviews. This part is a nightmare: trying to fit 20-60 one-hour interviews into my calendar is hard enough to juggle manually, but accommodating the three alternative options and the counteroffers is the pits. An AI could do this part so much better than I could, juggling selections, counteroffers, and unused slots, all while tracking who needed a second or third email bump before they replied.
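To make the juggling concrete, here is a minimal sketch of that slot-offering logic, assuming a one-hour interview length and a 30-minute buffer between sessions. The function names, calendar data, and dates are purely illustrative – this is not any real scheduling tool, just the shape of the problem.

```python
from datetime import datetime, timedelta

INTERVIEW_LENGTH = timedelta(hours=1)
BUFFER = timedelta(minutes=30)          # breathing room so sessions aren't back-to-back

def clashes(slot, booked):
    """True if a proposed slot overlaps, or sits too close to, an existing booking."""
    start, end = slot, slot + INTERVIEW_LENGTH
    for b_start in booked:
        b_end = b_start + INTERVIEW_LENGTH
        if start < b_end + BUFFER and b_start < end + BUFFER:
            return True
    return False

def propose_slots(candidates, booked, count=3):
    """Return up to `count` slots to offer in the invitation email."""
    offers = []
    for slot in sorted(candidates):
        if not clashes(slot, booked):
            offers.append(slot)
            booked = booked + [slot]    # hold the slot so the offers don't collide with each other
        if len(offers) == count:
            break
    return offers

# Example: one existing booking at 9:00, candidate slots every two hours after 10:00
booked = [datetime(2019, 3, 4, 9, 0)]
candidates = [datetime(2019, 3, 4, 10, 0) + timedelta(hours=i) for i in range(0, 30, 2)]
print(propose_slots(candidates, booked))
```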
So, sending out invitations, setting up interview appointments, and updating me on booking progress would be a cinch for AI. It could highlight those who are lost to contact, provide summarized rejection details, and show me response trends or clusters. Perhaps rejections clustered around specific kinds of participants or times – perhaps surgeons needed three emails before replying, nurses picked afternoons, and bed Czars tended to refuse?
However, this is hardly “disruptive”, and that’s not where the AI would stop.
The interviews themselves are fairly consistent. I always start by thanking the participant, introducing myself and the intent of the M&E project, and going over the expectations of confidentiality and anonymity, the non-attributional nature of the process, and their right to refuse. I always introduce the broad sweep of the project, saying what policy, technology, or workflow we are assessing and how the results will be used.
The semi-structured questions also stay consistent – I always start by asking participants to describe their role and involvement with the thing we are assessing, then move on to what they think has worked well thus far and what they expect to continue working well. Next, I probe what isn’t working well and what risks or issues they foresee. I usually then ask them to walk me through a typical day in their life related to the thing we are assessing, and to nominate anything they found surprising, frustrating, or confusing.
If the subject of the project is an implementation lifecycle, I typically ask for their input on each stage of the lifecycle model – from requirements development, through implementation, customization, and go-live, to ongoing support, enhancement request processes, and end-of-life preplanning. This is, for example, how I know that clinicians are seldom involved in picking the Electronic Health Record (EHR) system they wind up using, and that almost nobody thinks of planning for obsolescence.
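Written down as data, the recurring guide is small enough to hand to a machine. Here is a rough sketch of what that might look like, with the question wording paraphrased from above and the field names entirely my own invention:

```python
# Illustrative only: the recurring semi-structured guide expressed as plain data
# that an interviewing AI could walk through.
INTERVIEW_GUIDE = {
    "preamble": [
        "Thank the participant and introduce myself and the M&E project.",
        "Explain confidentiality, anonymity, non-attribution, and the right to refuse.",
        "Describe the policy, technology, or workflow being assessed and how results will be used.",
    ],
    "core_questions": [
        "Describe your role and involvement with the thing we are assessing.",
        "What has worked well so far, and what do you expect to keep working well?",
        "What isn't working well, and what future risks or issues do you expect?",
        "Walk me through a typical day in your life related to the thing we are assessing.",
        "Was there anything you found surprising, frustrating, or confusing?",
    ],
    # Only asked when the project covers an implementation lifecycle
    "lifecycle_probes": [
        "requirements development",
        "implementation and customization",
        "go-live",
        "ongoing support and enhancement requests",
        "end-of-life preplanning",
    ],
}
```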
AI could do all this.
Once I had picked the depth of questioning required and which phases of the lifecycle I was interested in, the AI could explain, question, and capture a transcript just as easily as I could. That alone is pretty disruptive of my role, but it’s not all. Going back to the participant contact and scheduling part: if the AI were doing the basic interviewing, the schedule would no longer have to give me a break between sessions, or limit the number per day, time of day, or week. In fact, the AI would have no need to avoid clashes or over-runs – it could hold two sessions independently and simultaneously; it could hold five, ten, or fifty interviews at the same time if needed. It wouldn’t care that the branch facility in Hawaii could only be interviewed on Friday afternoons, or that the Nurse Manager brought three other people along and wanted to extend the call to 90 minutes.
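As a toy illustration of that point, a machine interviewer has no reason to serialize its sessions at all. The sketch below fans out fifty placeholder interviews concurrently, with ask_questions standing in for whatever would actually drive a session – none of these names are real tooling.

```python
import asyncio

async def ask_questions(participant):
    # Placeholder for a full one-hour interview session
    await asyncio.sleep(1)
    return f"transcript for {participant}"

async def run_all(participants):
    # Run every interview session at the same time
    return await asyncio.gather(*(ask_questions(p) for p in participants))

transcripts = asyncio.run(run_all([f"participant-{i}" for i in range(50)]))
print(len(transcripts), "interviews completed in parallel")
```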
That thought blows out my data-collection construct somewhat. While I want to continue interviewing until saturation – the point at which no new elements emerge in the interviews – in practice I have often limited data collection to match the predetermined project timeframes. With AI, the sampling and interviewing could be far larger, and interviews could run into the hundreds or thousands if desired, without adding to the project length or cost.
The AI would conduct the basic interviews from my structured questions, and then prepare the QDA for me – transcribe interviews, associate speakers with text segments, import the transcripts into the QDA tool, and fire up initial coding and analysis. I would want it to auto-code the most frequent phrases used in the transcripts, create a phrase word-cloud, and auto-code the questions as codes in the transcripts. I would want it to run a sentiment analysis on the answers and the top phrases, and highlight variation and commonality between different kinds of participants by role or demographics.
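As a stand-in for that first pass, here is a rough sketch of the kind of auto-coding and sentiment scoring I mean, using plain Python and NLTK’s VADER analyzer (assuming nltk and its vader_lexicon data are installed). A real QDA pipeline would be far more sophisticated, and this does not reflect how MAXQDA actually works internally.

```python
from collections import Counter
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk + vader_lexicon

def top_phrases(transcript, n=10):
    """Return the n most frequent two-word phrases in a transcript."""
    words = transcript.lower().split()
    bigrams = [" ".join(pair) for pair in zip(words, words[1:])]
    return Counter(bigrams).most_common(n)

def score_answers(answers):
    """Attach a compound sentiment score (-1 to +1) to each answer."""
    analyzer = SentimentIntensityAnalyzer()
    return [(a, analyzer.polarity_scores(a)["compound"]) for a in answers]

# Toy example answers, not real interview data
answers = [
    "The go-live was chaotic and the training was frustrating.",
    "Ongoing support has been excellent and the team is responsive.",
]
print(top_phrases(" ".join(answers)))
print(score_answers(answers))
```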
So, what would I do in future, once the AI ate up all this work?
I would be the man in the little box behind the curtain for the scheduling and initial interviews and analysis, but would step out to conduct follow-up interviews on interesting, unexpected, or emergent themes, or to clarify ambiguity. I would still have to do the bulk of the coding and analysis, but would be painting on a far larger and more finely-woven canvas.
Will the AI circle back in a decade and take that role from me too? I don’t know, but maybe in ten years (or five), I will write another blog like this one, telling you what my job is, rather than the AI’s.