Global vs local classification models for multi-sensor data fusion
SETN '18 Proceedings of the 10th Hellenic Conference on Artificial Intelligence
Article No. 43, pp. 1–5
The aim of this paper is to investigate feature extraction and the fusion of information across a number of sensors at different spatial locations in order to classify temporal events. Although common feature-level fusion captures spatial dependencies across sensors, the resulting increase in feature-vector dimensionality makes it difficult to learn classification models from the small number of samples usually available in practice. In decision-level fusion, on the other hand, sensor-specific classification models are trained and subsequently integrated to reach a combined decision. Recent work has shown that decision-level fusion with a global (common to all sensors) classification model is more appropriate for generalized events that show a (weak or strong) manifestation across all sensors. Although we can hypothesize that the choice of scheme depends on the event type (generalized vs. focal/local), prior work does not provide enough evidence to guide the choice of fusion scheme. Thus, in this work we compare three data fusion schemes for the classification of generalized and non-generalized events using two case scenarios: (i) classification of paroxysmal events based on EEG patterns and (ii) classification of falls and activities of daily living (ADLs) from multiple sensors. The results support our hypothesis that feature-level fusion is more beneficial for the characterization of heterogeneous data (given an adequate number of samples), while sensor-independent classifiers should be selected in the case of generalized manifestation patterns.
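The three schemes contrasted in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the two-sensor toy data and the nearest-centroid classifier are hypothetical stand-ins, chosen only to show where the fusion happens in each scheme.

```python
# Hypothetical sketch of the three fusion schemes; the toy data and the
# nearest-centroid classifier are illustrative, not from the paper.

def centroid_fit(X, y):
    """Return per-class mean vectors (a toy stand-in for a real classifier)."""
    classes = sorted(set(y))
    return {c: [sum(col) / len(col)
                for col in zip(*[x for x, lbl in zip(X, y) if lbl == c])]
            for c in classes}

def centroid_predict(model, x):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    return min(model, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, model[c])))

# Toy two-sensor training set: each sample has features from both sensors.
sensor1 = [[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [1.1, 1.0]]
sensor2 = [[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 1.1]]
labels  = [0, 0, 1, 1]

# (a) Feature-level fusion: concatenate per-sensor features into one long
# vector; captures cross-sensor dependencies but inflates dimensionality.
X_concat = [a + b for a, b in zip(sensor1, sensor2)]
m_feat = centroid_fit(X_concat, labels)

# (b) Decision-level fusion with local models: one classifier per sensor,
# decisions combined afterwards (here, a trivial vote with sensor-1 tie-break).
m1 = centroid_fit(sensor1, labels)
m2 = centroid_fit(sensor2, labels)

def vote(x1, x2):
    p1, p2 = centroid_predict(m1, x1), centroid_predict(m2, x2)
    return p1 if p1 == p2 else p1

# (c) Decision-level fusion with a global model: one classifier shared by all
# sensors, trained on pooled per-sensor samples (suited to generalized events
# that manifest across every sensor).
m_glob = centroid_fit(sensor1 + sensor2, labels + labels)

test_1, test_2 = [1.05, 1.05], [1.0, 1.05]
print(centroid_predict(m_feat, test_1 + test_2))  # feature-level decision
print(vote(test_1, test_2))                       # local decision-level
print(centroid_predict(m_glob, test_1))           # global decision-level
```

The structural difference is where integration occurs: scheme (a) fuses before training, while (b) and (c) fuse after per-sensor prediction, differing only in whether the classifier is sensor-specific or shared.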
Keywords: Multi-dimensional time series