Authors: Pippa, E.; Zacharaki, E. I.; Özdemir, A. T.; Barshan, Billur; Megalooikonomou, V.
Date accessioned: 2019-02-21
Date available: 2019-02-21
Date issued: 2018
ISBN: 9781450364331
Handle: http://hdl.handle.net/11693/50329
Date of Conference: July 09 - 12, 2018

Abstract: The aim of this paper is to investigate feature extraction and the fusion of information across a number of sensors in different spatial locations to classify temporal events. Although common feature-level fusion captures spatial dependencies across sensors, the significant increase in feature-vector dimensionality prevents learning classification models from the small number of samples usually available in practice. In decision-level fusion, on the other hand, sensor-specific classification models are trained and subsequently integrated to reach a combined decision. Recent work has shown that decision-level fusion with a global (common for all sensors) classification model is more appropriate for generalized events that show a (weak or strong) manifestation across all sensors. Although we can hypothesize that the choice of scheme depends on the event type (generalized vs. focal/local), prior work does not provide enough evidence to guide the choice of fusion scheme. Thus, in this work we compare the three data fusion schemes for the classification of generalized and non-generalized events using two case scenarios: (i) classification of paroxysmal events based on EEG patterns, and (ii) classification of falls and activities of daily living (ADLs) from multiple sensors. The results support our hypothesis that feature-level fusion is more beneficial for the characterization of heterogeneous data (given an adequate number of samples), while sensor-independent classifiers should be selected in the case of generalized manifestation patterns.

Language: English
Keywords: Classification; Decision-level fusion; Feature-level fusion; Multi-dimensional time series; Pattern analysis
Title: Global vs local classification models for multi-sensor data fusion
Type: Conference Paper
DOI: 10.1145/3200947.3201034
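The three fusion schemes contrasted in the abstract can be illustrated on toy data. The sketch below is hypothetical and not the paper's actual method or datasets: it uses synthetic multi-sensor samples and a minimal nearest-centroid classifier as a stand-in for any per-sensor or fused model, just to show where the fusion happens in each scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 3 sensors, 4 features each, two classes whose
# means differ at every sensor (a "generalized" event manifestation).
n_sensors, n_feat, n_per_class = 3, 4, 30
X0 = rng.normal(0.0, 1.0, (n_per_class, n_sensors, n_feat))
X1 = rng.normal(1.5, 1.0, (n_per_class, n_sensors, n_feat))
X = np.concatenate([X0, X1])                 # shape (60, 3, 4)
y = np.array([0] * n_per_class + [1] * n_per_class)

def fit_centroids(X2d, labels):
    # One centroid per class; a stand-in for any trainable classifier.
    return np.stack([X2d[labels == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X2d):
    # Assign each sample to the class of its nearest centroid.
    d = np.linalg.norm(X2d[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# (i) Feature-level fusion: concatenate all sensors' features into one
# long vector, then train a single model on it.
Xcat = X.reshape(len(X), -1)                 # shape (60, 12)
pred_feat = predict(fit_centroids(Xcat, y), Xcat)

# (ii) Decision-level fusion, local (sensor-specific) models: one
# classifier per sensor; decisions combined by majority vote.
votes = np.stack([
    predict(fit_centroids(X[:, s, :], y), X[:, s, :])
    for s in range(n_sensors)
])
pred_local = (votes.mean(axis=0) >= 0.5).astype(int)

# (iii) Decision-level fusion, global model: one classifier shared by
# all sensors, trained on the pooled per-sensor samples; its per-sensor
# decisions are again combined by majority vote.
Xpool = X.reshape(-1, n_feat)                # shape (180, 4)
ypool = np.repeat(y, n_sensors)
global_centroids = fit_centroids(Xpool, ypool)
gvotes = np.stack([
    predict(global_centroids, X[:, s, :]) for s in range(n_sensors)
])
pred_global = (gvotes.mean(axis=0) >= 0.5).astype(int)

for name, pred in [("feature-level", pred_feat),
                   ("decision-level/local", pred_local),
                   ("decision-level/global", pred_global)]:
    print(f"{name} accuracy: {(pred == y).mean():.2f}")
```

On this easily separable toy set all three schemes score highly; the trade-off the paper studies (dimensionality vs. sample size for feature-level fusion, sensor-specific vs. global models for decision-level fusion) only emerges on realistic data such as the EEG and fall-detection scenarios described above.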