Detecting COVID-19 from respiratory sound recordings with transformers

buir.contributor.author: Aytekin, İdil
buir.contributor.author: Dalmaz, Onat
buir.contributor.author: Sarıtaş, Emine Ü.
buir.contributor.author: Çukur, Tolga
buir.contributor.orcid: Çukur, Tolga | 0000-0002-2296-851X
dc.citation.epage: 9
dc.citation.spage: 1
dc.citation.volumeNumber: 12033
dc.contributor.author: Aytekin, İdil
dc.contributor.author: Dalmaz, Onat
dc.contributor.author: Ankishan, Haydar
dc.contributor.author: Sarıtaş, Emine Ü.
dc.contributor.author: Bağcı, Ulaş
dc.contributor.author: Çukur, Tolga
dc.contributor.author: Çelik, Haydar
dc.coverage.spatial: United States
dc.date.accessioned: 2023-02-20T08:58:28Z
dc.date.available: 2023-02-20T08:58:28Z
dc.date.issued: 2022-04-04
dc.department: Department of Electrical and Electronics Engineering
dc.description.abstract: Auscultation is an established technique in the clinical assessment of respiratory disorders. It is safe and inexpensive, but diagnosing a disease with a stethoscope during hospital or office visits requires expertise. Moreover, some clinical scenarios call for continuous monitoring and automated analysis of respiratory sounds to pre-screen and monitor diseases such as the rapidly spreading COVID-19. Recent studies suggest that audio recordings of bodily sounds captured by mobile devices may carry features that help distinguish patients with COVID-19 from healthy controls. Here, we propose a novel deep learning technique to automatically detect COVID-19 patients from brief audio recordings of their cough and breathing sounds. The proposed technique first extracts spectrogram features of respiratory recordings, and then classifies disease state via a hierarchical vision transformer architecture. Demonstrations are provided on a crowdsourced database of respiratory sounds from COVID-19 patients and healthy controls. The proposed transformer model is compared against alternative methods based on state-of-the-art convolutional and transformer architectures, as well as traditional machine-learning classifiers. Our results indicate that the proposed model achieves performance on par with or superior to competing methods. In particular, the proposed technique distinguishes COVID-19 patients from healthy subjects with over 94% AUC.
dc.identifier.doi: 10.1117/12.2611490
dc.identifier.issn: 1605-7422
dc.identifier.uri: http://hdl.handle.net/11693/111544
dc.language.iso: English
dc.publisher: SPIE - International Society for Optical Engineering
dc.relation.isversionof: https://doi.org/10.1117/12.2611490
dc.source.title: Progress in Biomedical Optics and Imaging
dc.subject: COVID-19
dc.subject: Respiratory
dc.subject: Sound
dc.subject: Breathing
dc.subject: Cough
dc.subject: Transformer
dc.title: Detecting COVID-19 from respiratory sound recordings with transformers
dc.type: Conference Paper
Files
Original bundle:
- Detecting_COVID-19_from_respiratory_sound_recordings_with_transformers.pdf (1.1 MB, Adobe Portable Document Format)
License bundle:
- license.txt (1.69 KB; item-specific license agreed upon at submission)
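The abstract describes a two-stage pipeline: extract spectrogram features from a brief respiratory recording, then classify disease state with a hierarchical vision transformer. The sketch below illustrates only the first, feature-extraction stage, using `scipy.signal.spectrogram` as a stand-in; the paper's actual feature parameters and classifier are not given in this record, so the sample rate, window size, and the synthetic waveform here are all assumptions for illustration.

```python
# Hypothetical sketch of the preprocessing step described in the abstract:
# turn a 1-D audio waveform into a log-magnitude spectrogram image that a
# vision transformer could consume. Not the authors' actual pipeline.
import numpy as np
from scipy import signal


def audio_to_spectrogram(waveform, sample_rate=16000, n_fft=512):
    """Compute a log-magnitude spectrogram from a 1-D audio waveform."""
    freqs, times, spec = signal.spectrogram(
        waveform, fs=sample_rate, nperseg=n_fft, noverlap=n_fft // 2
    )
    # Log scaling compresses the dynamic range, a standard step for
    # audio features fed to image-style classifiers.
    return np.log(spec + 1e-10)


# One second of synthetic noise stands in for a cough/breathing recording.
wave = np.random.randn(16000).astype(np.float32)
spec = audio_to_spectrogram(wave)
print(spec.shape)  # (frequency bins, time frames)
```

The resulting 2-D array can then be treated as a single-channel image and passed to whatever classifier is used; the hierarchical transformer stage itself is not sketched here because its architecture is not specified in this record.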