Photo Credit: Vadym Pastukh
The following is a summary of “Employing the Artificial Intelligence Object Detection Tool YOLOv8 for Real-Time Pain Detection: A Feasibility Study,” published in the November 2024 issue of Pain by Cascella et al.
Researchers conducted a retrospective study to explore using computer vision (CV) to detect pain through facial expressions as an alternative to traditional subjective pain assessment methods.
They applied the YOLOv8 real-time object detection model to analyze facial expressions indicative of pain. Drawing on 4 pain datasets, a dataset of facial images expressing pain was compiled, with each image labeled for the presence of specific pain-associated Action Units (AUs), such as AU4, AU6, AU7, AU9, AU10, and AU43, following the Prkachin and Solomon Pain Intensity (PSPI) scoring method. Images with a PSPI score higher than 2 were classified as expressing pain. For accurate labeling, the open-source tool makesense.ai was used. The dataset was divided into training and testing subsets containing both pain and no-pain images, and the YOLOv8 model was trained over 10 epochs, iteratively improving its performance. The model's efficacy was evaluated using precision, recall, mean Average Precision (mAP), and F1 score metrics.
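The PSPI rule used for labeling combines the intensities of the listed AUs as PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43. A minimal sketch of that labeling step in Python, assuming AU intensities are already available from an AU coder as a dictionary (the function and variable names here are illustrative, not from the study):

```python
def pspi_score(au):
    """Prkachin and Solomon Pain Intensity:
    PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43.
    AU4/6/7/9/10 are coded 0-5; AU43 (eyes closed) is 0 or 1."""
    return (au["AU4"]
            + max(au["AU6"], au["AU7"])
            + max(au["AU9"], au["AU10"])
            + au["AU43"])

def label_image(au, threshold=2):
    """Binary label used in the study: PSPI > 2 counts as pain."""
    return "pain" if pspi_score(au) > threshold else "no_pain"

# Example: brow lowering (AU4=2), orbit tightening (AU7=3), lip raise (AU10=1)
sample = {"AU4": 2, "AU6": 1, "AU7": 3, "AU9": 0, "AU10": 1, "AU43": 0}
print(pspi_score(sample))   # → 6
print(label_image(sample))  # → pain
```

The same binary labels would then drive the bounding-box annotation (e.g., in makesense.ai) before training the detector.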
The results showed a mAP of 0.893 at a threshold of 0.5 for the combined classes. Precision for “pain” and “no pain” detection was 0.868 and 0.919, respectively, and F1 scores for “pain,” “no pain,” and “all classes” peaked at 0.80. The model’s performance was further validated on the Delaware dataset and in a real-world setting.
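The F1 score reported above is the harmonic mean of precision and recall, so it balances the two; a small illustration (the summary reports precision and peak F1 but not recall, so the inputs below are hypothetical):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical values for illustration only; recall is not given in the summary.
print(f1_score(0.8, 0.8))  # → 0.8
```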
Investigators concluded that real-time CV models show potential for pain detection, despite current limitations, and suggested further research to improve generalizability and clinical integration.