Title: Heart sound segmentation using signal processing methods
Author: Şahin, Devrim
Type: Thesis
Date issued: 2015
Date available: 2016-07-01
URI: http://hdl.handle.net/11693/30041
Note: Cataloged from PDF version of article.
Physical description: viii, 64 leaves, charts
Language: English
Access: info:eu-repo/semantics/openAccess
Keywords: Heart sound; Segmentation; Fourier; Wavelet transform
ID: B150940

Abstract: Heart murmurs are pathological heart sounds that originate from blood flowing with abnormal turbulence due to physiological defects of the heart, and they are the prime indicator of many heart-related diseases. Murmurs can be diagnosed via auscultation, that is, by listening with a stethoscope. However, manual detection and classification of murmurs require clinical expertise and are highly prone to misclassification. Although automated classification algorithms exist for this purpose, they depend heavily on feature extraction from 'segmented' heart sound waveforms. Segmentation in this context refers to detecting and splitting cardiac cycles. The heart sound signal, however, is not stationary and typically has a low signal-to-noise ratio, which makes it very difficult to segment using no information other than the signal itself. Most commercial systems require an external electrocardiography (ECG) signal to locate the S1 and S2 peaks, but ECG equipment is not as widely available as stethoscopes. Although algorithms that segment using the sound alone exist, a proper comparison of these algorithms on a common dataset is missing. We propose several modifications to many of these algorithms, as well as an evaluation method that allows a unified comparison of all these approaches. We have tested each combination of algorithms on a real dataset [1], which also provides manual annotations as ground truth. We also propose an ensemble of several methods, together with a heuristic for deciding which algorithm's output to use. Whereas the tested algorithms achieve up to 62% accuracy, our ensemble method achieves a 75% success rate. Finally, we created a tool named UpBeat that enables manual segmentation of heart sounds and construction of a ground-truth dataset. UpBeat is a starting medium for auscultation segmentation, time-domain feature extraction, and evaluation; it offers automatic segmentation capabilities as well as a minimalistic drag-and-drop interface for manual annotation of S1 and S2 peaks.
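
Illustrative sketch only (not the ensemble heuristic described in the thesis): one simple way to combine S1/S2 peak times proposed by several segmentation algorithms is majority agreement within a tolerance window. The function name, input format, and tolerance value below are assumptions made for this example.

```python
import numpy as np

def fuse_peak_candidates(candidate_lists, tolerance_s=0.05):
    """Fuse peak times (in seconds) proposed by several segmenters.

    candidate_lists: one 1-D array of detected peak times per algorithm.
    A fused peak is kept when at least half of the algorithms place a
    peak within tolerance_s of it (simple majority vote).
    """
    candidate_lists = [np.asarray(p, dtype=float) for p in candidate_lists]
    all_peaks = np.sort(np.concatenate(candidate_lists))
    fused = []
    for t in all_peaks:
        votes = sum(np.any(np.abs(peaks - t) <= tolerance_s)
                    for peaks in candidate_lists)
        # keep the peak if a majority agrees and it is not a near-duplicate
        if votes >= len(candidate_lists) / 2.0 and (not fused or t - fused[-1] > tolerance_s):
            fused.append(t)
    return np.array(fused)
```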
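
Similarly, a tolerance-based comparison against manual annotations (a hedged sketch, not necessarily the unified evaluation method of the thesis) could greedily match detected peaks to ground-truth peaks and report precision, recall, and F1; the 50 ms tolerance is an assumed value.

```python
import numpy as np

def score_segmentation(detected, annotated, tolerance_s=0.05):
    """Greedy one-to-one matching of detected peak times (seconds)
    against ground-truth annotations; returns precision, recall, F1."""
    detected = np.sort(np.asarray(detected, dtype=float))
    annotated = np.sort(np.asarray(annotated, dtype=float))
    used = np.zeros(len(annotated), dtype=bool)
    true_pos = 0
    for t in detected:
        diffs = np.abs(annotated - t)
        diffs[used] = np.inf  # each annotation can be matched only once
        if len(diffs) and diffs.min() <= tolerance_s:
            used[int(np.argmin(diffs))] = True
            true_pos += 1
    precision = true_pos / max(len(detected), 1)
    recall = true_pos / max(len(annotated), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
```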