dc.contributor.author: Erekat, Diyala [en_US]
dc.contributor.author: Hammal, Z. [en_US]
dc.contributor.author: Siddiqui, M. [en_US]
dc.contributor.author: Dibeklioğlu, Hamdi [en_US]
dc.coverage.spatial: Netherlands [en_US]
dc.date.accessioned: 2021-01-27T11:14:04Z
dc.date.available: 2021-01-27T11:14:04Z
dc.date.issued: 2020
dc.identifier.isbn: 9781450380027
dc.identifier.uri: http://hdl.handle.net/11693/54924
dc.description: Date of Conference: 25–29 October 2020 [en_US]
dc.description: Conference name: 2020 International Conference on Multimodal Interaction, ICMI 2020 [en_US]
dc.description.abstract: The standard clinical assessment of pain is limited primarily to self-reported pain or clinician impression. While the self-reported measurement of pain is useful, in some circumstances it cannot be obtained. Automatic facial expression analysis has emerged as a potential solution for an objective, reliable, and valid measurement of pain. In this study, we propose a video-based approach for the automatic measurement of self-reported pain and observer pain intensity. To this end, we explore the added value of three self-reported pain scales, i.e., the Visual Analog Scale (VAS), the Sensory Scale (SEN), and the Affective Motivational Scale (AFF), as well as the Observer Pain Intensity (OPI) rating, for a reliable assessment of pain intensity from facial expression. Using a spatio-temporal Convolutional Neural Network - Recurrent Neural Network (CNN-RNN) architecture, we propose to jointly minimize the mean absolute error of pain score estimation for each of these scales while maximizing the consistency between them. The reliability of the proposed method is evaluated on the benchmark database for pain measurement from videos, namely, the UNBC-McMaster Pain Archive. Our results show that enforcing consistency between the self-reported pain intensity scores collected using different pain scales enhances the quality of predictions and improves the state of the art in automatic self-reported pain estimation. The obtained results suggest that automatic assessment of self-reported pain intensity from videos is feasible and could be used as a complementary instrument to unburden caregivers, especially for vulnerable populations that need constant monitoring. (See the illustrative loss sketch after this record.) [en_US]
dc.language.iso: English [en_US]
dc.source.title: Enforcing Multilabel Consistency for Automatic Spatio-Temporal Assessment of Shoulder Pain Intensity [en_US]
dc.relation.isversionof: https://dx.doi.org/10.1145/3395035.3425190 [en_US]
dc.subject: Pain [en_US]
dc.subject: Facial expression [en_US]
dc.subject: Dynamics [en_US]
dc.subject: Visual analogue scale [en_US]
dc.subject: Observer pain intensity [en_US]
dc.subject: Convolutional neural network [en_US]
dc.subject: Recurrent neural network [en_US]
dc.title: Enforcing multilabel consistency for automatic spatio-temporal assessment of shoulder pain intensity [en_US]
dc.type: Conference Paper [en_US]
dc.department: Department of Computer Engineering [en_US]
dc.citation.spage: 156 [en_US]
dc.citation.epage: 164 [en_US]
dc.identifier.doi: 10.1145/3395035.3425190 [en_US]
dc.publisher: Association for Computing Machinery [en_US]
dc.contributor.bilkentauthor: Erekat, Diyala
dc.contributor.bilkentauthor: Dibeklioğlu, Hamdi
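
Illustrative loss sketch. The abstract describes jointly minimizing the mean absolute error of pain score estimation for each scale (VAS, SEN, AFF, OPI) while maximizing the consistency between them. The snippet below is a minimal, hypothetical PyTorch-style sketch of that idea, not the authors' implementation: it combines a per-scale MAE term with a simple penalty on pairwise disagreement between normalized predictions. The class name, the consistency weight, and the scale maxima are assumptions made for illustration; the paper's exact architecture and consistency term may differ.

```python
import torch
import torch.nn as nn


class MultiScalePainLoss(nn.Module):
    """Hypothetical joint loss: per-scale MAE plus a penalty that encourages
    the predictions of the different pain scales to agree with one another."""

    def __init__(self, consistency_weight=0.5, scale_max=None):
        super().__init__()
        self.consistency_weight = consistency_weight
        # Assumed maxima, used only to bring predictions to a comparable
        # [0, 1] range; the real ranges should come from the dataset protocol.
        self.scale_max = scale_max or {"VAS": 10.0, "SEN": 10.0,
                                       "AFF": 10.0, "OPI": 5.0}
        self.mae = nn.L1Loss()

    def forward(self, preds, targets):
        # preds / targets: dicts mapping scale name -> tensor of shape (batch,)
        mae_loss = sum(self.mae(preds[k], targets[k]) for k in preds) / len(preds)

        # Normalize each prediction to [0, 1] and penalize pairwise disagreement,
        # pushing the per-scale outputs toward mutually consistent scores.
        normed = [preds[k] / self.scale_max[k] for k in preds]
        consistency, pairs = 0.0, 0
        for i in range(len(normed)):
            for j in range(i + 1, len(normed)):
                consistency = consistency + (normed[i] - normed[j]).abs().mean()
                pairs += 1
        consistency = consistency / max(pairs, 1)

        return mae_loss + self.consistency_weight * consistency


if __name__ == "__main__":
    # Toy check with random video-level scores for a batch of 4 clips.
    loss_fn = MultiScalePainLoss()
    preds = {k: torch.rand(4) * m for k, m in loss_fn.scale_max.items()}
    targets = {k: torch.rand(4) * m for k, m in loss_fn.scale_max.items()}
    print(loss_fn(preds, targets))
```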

