Spatio-temporal assessment of pain intensity through facial transformation-based representation learning
Abstract
Pain is difficult to assess due to its subjective and multidimensional nature, which encompasses intensity, duration, and location. However, the ability to assess pain objectively and reliably is crucial both for adequate pain management and for diagnosing the underlying medical cause. To this end, in this thesis, we propose a video-based approach for the automatic measurement of self-reported pain. The proposed method aims to learn an efficient facial representation by exploiting the transformation of one subject's facial expression into that of another subject within a similar pain group. We also explore the effect of leveraging self-reported pain scales, i.e., the Visual Analog Scale (VAS), the Sensory Scale (SEN), and the Affective Motivational Scale (AFF), as well as the Observer Pain Intensity (OPI), on the reliable assessment of pain intensity. Specifically, a convolutional autoencoder network is proposed to learn the facial transformation between subjects. The autoencoder's optimized weights are then used to initialize a spatio-temporal network architecture, which is further optimized by minimizing the mean absolute error of the estimates for each of these scales while maximizing the consistency between them. The reliability of the proposed method is evaluated on the benchmark database for video-based pain measurement, the UNBC-McMaster Pain Archive. Despite the challenging nature of this problem, the obtained results show that the proposed method improves upon the state of the art, and that automated assessment of pain severity is feasible as a supportive tool for the quantitative assessment of pain in clinical settings.
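The training objective sketched in the abstract (per-scale mean absolute error plus a cross-scale consistency term) can be illustrated as follows. This is a minimal, hypothetical sketch: the pairwise-difference consistency penalty, the normalization ranges, and the weight `lam` are illustrative assumptions, not the thesis's exact formulation.

```python
def mae(preds, targets):
    """Mean absolute error between two equal-length score sequences."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def combined_loss(preds_by_scale, targets_by_scale, lam=0.1):
    """Sum of per-scale MAE plus a consistency penalty.

    preds_by_scale / targets_by_scale: dict mapping a scale name
    (e.g. "VAS", "SEN", "AFF", "OPI") to a list of scores, one per video.
    The consistency penalty below is an assumption for illustration: the
    mean absolute difference between the normalized predictions of each
    pair of scales, so estimates for the same video should agree.
    """
    # Scale ranges assumed for illustration (VAS is 0-10; OPI is 0-5
    # in the UNBC-McMaster annotations; SEN/AFF ranges are assumed).
    ranges = {"VAS": 10.0, "SEN": 10.0, "AFF": 10.0, "OPI": 5.0}

    scales = list(preds_by_scale)
    loss = sum(mae(preds_by_scale[s], targets_by_scale[s]) for s in scales)

    # Cross-scale consistency: average pairwise MAE of normalized predictions.
    penalty, pairs = 0.0, 0
    for i, a in enumerate(scales):
        for b in scales[i + 1:]:
            na = [p / ranges[a] for p in preds_by_scale[a]]
            nb = [p / ranges[b] for p in preds_by_scale[b]]
            penalty += mae(na, nb)
            pairs += 1
    return loss + lam * penalty / max(pairs, 1)
```

In this formulation, perfect per-scale estimates that also agree across scales yield a loss of zero, while disagreement between scales is penalized even when each scale's own error is small.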