The following is a summary of “Using ChatGPT to Provide Patient-Specific Answers to Parental Questions in the PICU,” published in the October 2024 issue of Pediatrics by Hunter et al.
The use of artificial intelligence in healthcare settings, such as the pediatric intensive care unit (PICU), raises questions about its effectiveness in communicating with families.
Researchers conducted a prospective study to evaluate ChatGPT’s ability to provide patient-specific answers to parental questions in the PICU.
They generated assessments and plans for three PICU patients with respiratory failure, septic shock, and status epilepticus, pairing each with eight typical parental questions. ChatGPT was prompted with these materials, and six PICU physicians rated the answers for accuracy (1–6), completeness (yes/no), empathy (1–6), and understandability (Patient Education Materials Assessment Tool [PEMAT], 0%–100%; Flesch–Kincaid grade level). Statistical comparisons were conducted using the Kruskal-Wallis and Fisher’s exact tests.
The results showed that all answers incorporated patient-specific details, which were used for reasoning in 59% of sentences. Responses had high accuracy (median 5.0 [IQR, 4.0–6.0]), empathy (median 5.0 [IQR, 5.0–6.0]), completeness (97% of questions), and understandability (median PEMAT score 100% [IQR, 87.5–100]; Flesch–Kincaid grade level 8.7). Only 4 of 144 reviewer scores for accuracy were below 4 of 6, and no response was judged likely to cause harm. Accuracy, completeness, empathy, and understandability did not differ among scenarios or question types. Reviewer agreement was fair for accuracy, substantial for empathy, and almost perfect for understandability.
They concluded that ChatGPT effectively used patient-specific information to deliver high-quality responses to parental questions in PICU clinical scenarios.