
      Detecting COVID-19 from respiratory sound recordings with transformers

Author(s): Aytekin, İdil; Dalmaz, Onat; Ankishan, Haydar; Sarıtaş, Emine Ü.; Bağcı, Ulaş; Çukur, Tolga; Çelik, Haydar
Date: 2022-04-04
Source Title: Progress in Biomedical Optics and Imaging
Print ISSN: 1605-7422
Publisher: SPIE - International Society for Optical Engineering
Volume: 12033
Pages: 1 - 9
Language: English
Type: Conference Paper
      Abstract
Auscultation is an established technique in the clinical assessment of symptoms of respiratory disorders. It is safe and inexpensive, but diagnosing a disease with a stethoscope during hospital or office visits requires expertise. However, some clinical scenarios call for continuous monitoring and automated analysis of respiratory sounds to pre-screen and monitor diseases such as the rapidly spreading COVID-19. Recent studies suggest that audio recordings of bodily sounds captured by mobile devices may carry features helpful for distinguishing patients with COVID-19 from healthy controls. Here, we propose a novel deep learning technique to automatically detect COVID-19 patients based on brief audio recordings of their cough and breathing sounds. The proposed technique first extracts spectrogram features of respiratory recordings and then classifies disease state via a hierarchical vision transformer architecture. Demonstrations are provided on a crowdsourced database of respiratory sounds from COVID-19 patients and healthy controls. The proposed transformer model is compared against alternative methods based on state-of-the-art convolutional and transformer architectures, as well as traditional machine-learning classifiers. Our results indicate that the proposed model achieves performance on par with or superior to competing methods; in particular, the proposed technique can distinguish COVID-19 patients from healthy subjects with over 94% AUC.
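
The pipeline the abstract describes (a spectrogram front end followed by a hierarchical vision transformer classifier) can be illustrated with a minimal sketch. The file name, sample rate, spectrogram settings, and the off-the-shelf Swin-T backbone below are illustrative assumptions standing in for the paper's actual model and configuration, not the authors' implementation.

```python
import librosa
import numpy as np
import torch
from torchvision.models import swin_t

# 1) Load a brief cough/breathing recording and compute a log-mel spectrogram.
#    File name, sample rate, and n_mels are illustrative assumptions.
waveform, sr = librosa.load("cough_recording.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)

# 2) Normalize and resize the spectrogram to the 3 x 224 x 224 input the backbone expects.
spec = torch.tensor(log_mel, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
spec = torch.nn.functional.interpolate(spec, size=(224, 224), mode="bilinear", align_corners=False)
spec = (spec - spec.mean()) / (spec.std() + 1e-6)
spec = spec.repeat(1, 3, 1, 1)  # replicate the single channel to mimic RGB input

# 3) Hierarchical vision transformer with a binary (COVID-19 vs. healthy) classification head.
model = swin_t(weights=None)
model.head = torch.nn.Linear(model.head.in_features, 2)
model.eval()

with torch.no_grad():
    logits = model(spec)
prob_covid = torch.softmax(logits, dim=1)[0, 1].item()
print(f"Estimated P(COVID-19) = {prob_covid:.3f}")
```

In practice such a backbone would first be fine-tuned on labeled cough and breathing recordings; the untrained probability printed here is only a placeholder for the workflow.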
Keywords: COVID-19; Respiratory; Sound; Breathing; Cough; Transformer
Permalink: http://hdl.handle.net/11693/111544
Published Version (please cite this version): https://doi.org/10.1117/12.2611490
Collections: Department of Electrical and Electronics Engineering