Detecting COVID-19 from respiratory sound recordings with transformers
buir.contributor.author | Aytekin, İdil | |
buir.contributor.author | Dalmaz, Onat | |
buir.contributor.author | Sarıtaş, Emine Ü. | |
buir.contributor.author | Çukur, Tolga | |
buir.contributor.orcid | Çukur, Tolga|0000-0002-2296-851X | |
dc.citation.epage | 9 | en_US |
dc.citation.spage | 1 | en_US |
dc.citation.volumeNumber | 12033 | en_US |
dc.contributor.author | Aytekin, İdil | |
dc.contributor.author | Dalmaz, Onat | |
dc.contributor.author | Ankishan, Haydar | |
dc.contributor.author | Sarıtaş, Emine Ü. | |
dc.contributor.author | Bağcı, Ulaş | |
dc.contributor.author | Çukur, Tolga | |
dc.contributor.author | Çelik, Haydar | |
dc.coverage.spatial | United States | en_US |
dc.date.accessioned | 2023-02-20T08:58:28Z | |
dc.date.available | 2023-02-20T08:58:28Z | |
dc.date.issued | 2022-04-04 | |
dc.department | Department of Electrical and Electronics Engineering | en_US |
dc.description.abstract | Auscultation is an established technique in the clinical assessment of respiratory disorders. It is safe and inexpensive, but it requires expertise and is typically performed with a stethoscope during hospital or office visits. Some clinical scenarios instead call for continuous monitoring and automated analysis of respiratory sounds to pre-screen and track diseases such as the rapidly spreading COVID-19. Recent studies suggest that audio recordings of bodily sounds captured by mobile devices may carry features that help distinguish patients with COVID-19 from healthy controls. Here, we propose a novel deep learning technique to automatically detect COVID-19 based on brief audio recordings of cough and breathing sounds. The proposed technique first extracts spectrogram features from the respiratory recordings, and then classifies disease state via a hierarchical vision transformer architecture. Demonstrations are provided on a crowdsourced database of respiratory sounds from COVID-19 patients and healthy controls. The proposed transformer model is compared against alternative methods based on state-of-the-art convolutional and transformer architectures, as well as traditional machine-learning classifiers. Our results indicate that the proposed model achieves performance on par with or superior to competing methods. In particular, the proposed technique distinguishes COVID-19 patients from healthy subjects with over 94% AUC. | en_US |
dc.identifier.doi | 10.1117/12.2611490 | en_US |
dc.identifier.issn | 1605-7422 | |
dc.identifier.uri | http://hdl.handle.net/11693/111544 | |
dc.language.iso | English | en_US |
dc.publisher | S P I E - International Society for Optical Engineering | en_US |
dc.relation.isversionof | https://doi.org/10.1117/12.2611490 | en_US |
dc.source.title | Progress in Biomedical Optics and Imaging | en_US |
dc.subject | COVID-19 | en_US |
dc.subject | Respiratory | en_US |
dc.subject | Sound | en_US |
dc.subject | Breathing | en_US |
dc.subject | Cough | en_US |
dc.subject | Transformer | en_US |
dc.title | Detecting COVID-19 from respiratory sound recordings with transformers | en_US |
dc.type | Conference Paper | en_US |
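The abstract above describes a two-stage pipeline: spectrogram feature extraction from brief cough/breathing recordings, followed by classification with a hierarchical vision transformer. The snippet below is a minimal sketch of that kind of pipeline, not the authors' code: the file name, sampling rate, spectrogram settings, and the choice of a Swin-style backbone from torchvision are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation): log-mel spectrogram features
# from a respiratory recording, classified with a hierarchical (Swin-style)
# vision transformer. File name, sampling rate, and model choice are assumed.
import librosa
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import swin_t

# 1) Load a brief respiratory recording and compute a log-mel spectrogram.
audio, sr = librosa.load("cough_sample.wav", sr=16000)            # hypothetical file
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)                    # (n_mels, time)

# 2) Normalize and resize the spectrogram to the backbone's input size,
#    replicating it across 3 channels so a stock image model accepts it.
x = torch.tensor(log_mel, dtype=torch.float32)
x = (x - x.mean()) / (x.std() + 1e-8)
x = x.unsqueeze(0).unsqueeze(0)                                    # (1, 1, n_mels, time)
x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
x = x.repeat(1, 3, 1, 1)                                           # (1, 3, 224, 224)

# 3) Hierarchical vision transformer with a binary head (COVID-19 vs. healthy).
model = swin_t(weights=None, num_classes=2)
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(x), dim=-1)
print("class probabilities (untrained demo):", probs.squeeze().tolist())
```

In the paper's setting this classifier would be trained on labeled crowdsourced recordings; the untrained forward pass here only illustrates the data flow from waveform to spectrogram to transformer logits.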
Files
Original bundle
- Name: Detecting_COVID-19_from_respiratory_sound_recordings_with_transformers.pdf
- Size: 1.1 MB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.69 KB
- Description: Item-specific license agreed upon to submission