Browsing by Subject "Decision tree"
Now showing 1 - 3 of 3
Item Open Access
Comparative study on classifying human activities with miniature inertial and magnetic sensors (Elsevier, 2010)
Altun, K.; Barshan, B.; Tunçel, O.
This paper provides a comparative study of different techniques for classifying human activities using body-worn miniature inertial and magnetic sensors. The classification techniques implemented and compared in this study are: Bayesian decision making (BDM), a rule-based algorithm (RBA) or decision tree, the least-squares method (LSM), the k-nearest neighbor algorithm (k-NN), dynamic time warping (DTW), support vector machines (SVM), and artificial neural networks (ANN). Human activities are classified using five sensor units worn on the chest, the arms, and the legs. Each sensor unit comprises a tri-axial gyroscope, a tri-axial accelerometer, and a tri-axial magnetometer. A feature set extracted from the raw sensor data using principal component analysis (PCA) is used in the classification process. A performance comparison of the classification techniques is provided in terms of their correct differentiation rates, confusion matrices, and computational cost, as well as their pre-processing, training, and storage requirements. Three different cross-validation techniques are employed to validate the classifiers. The results indicate that, in general, BDM achieves the highest correct classification rate at relatively small computational cost.

Item Open Access
Is better nuclear weapon detection capability justified? (Walter de Gruyter GmbH, 2011)
Bakir, N. O.; Von Winterfeldt, D.
In this paper, we present a decision tree model for evaluating the next generation of radiation portal technology (Advanced Spectroscopic Portals, or ASPs) for non-intrusively scanning containers entering the United States for nuclear or radiological weapons. ASPs are compared against the current design of portal monitors (plastic scintillators, or PVTs). We consider five alternative deployment strategies: 1) exclusive deployment of ASPs, replacing all PVTs currently deployed at U.S. ports of entry; 2) sequential deployment of ASPs with PVTs, installing ASPs in all secondary and some primary inspection areas; 3) sequential deployment of ASPs with PVTs, installing ASPs in secondary inspection areas only; 4) exclusive deployment of PVTs; and 5) stopping the deployment of new portal monitors and continuing inspections with the current capacity. The baseline solution recommends a hybrid strategy that supports the deployment of the new portal monitor design for secondary inspections and the current design for primary inspections. However, this solution is very sensitive to the probability of an attack attempt, the type of weapon shipped through ports of entry, the probability of successful detonation, the detection probabilities, and the extra deterrence that each alternative may provide. We also illustrate that the list of most significant parameters depends heavily on the dollar equivalent of the overall consequences and the probability of an attack attempt. For low-probability, low-consequence scenarios, false-alarm-related parameters are more significant. Our extensive exploratory analysis shows that for most parametric combinations, continued deployment of portal monitors is recommended. Exclusive deployment of ASPs is optimal under high-risk scenarios. However, we also show that if ASPs fail to improve detection capability, then the extra benefits they offer in reducing false alarms may not justify their mass deployment. © 2011 Berkeley Electronic Press. All rights reserved.
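As a hedged illustration of the kind of comparison described in the first item above (Altun, Barshan, and Tunçel), the sketch below runs several classifiers on PCA-reduced features with cross-validation. The synthetic data, the specific classifiers, and all parameter values are placeholders, not the study's actual sensor data or settings.

```python
# Illustrative sketch only: synthetic data stands in for the body-worn
# sensor recordings; classifiers and parameters are not the paper's exact ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Stand-in for features computed from five sensor units (gyro/accel/magnetometer).
X, y = make_classification(n_samples=1000, n_features=90, n_informative=30,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=7),
    "SVM": SVC(kernel="rbf", C=1.0),
    "ANN": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),
    "Gaussian naive Bayes (BDM stand-in)": GaussianNB(),
    "Decision tree (RBA-like)": DecisionTreeClassifier(max_depth=10, random_state=0),
}

for name, clf in classifiers.items():
    # PCA reduces the raw feature vector before classification, as in the study.
    pipe = make_pipeline(PCA(n_components=30), clf)
    scores = cross_val_score(pipe, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name:38s} mean accuracy = {scores.mean():.3f}")
```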
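The second item (Bakir and von Winterfeldt) evaluates deployment alternatives with a decision tree. The sketch below shows only the generic expected-cost calculation such a model performs; every alternative name, probability, and cost figure is a hypothetical placeholder, not a value from the paper.

```python
# Hypothetical expected-cost comparison over deployment alternatives.
# All numbers below are illustrative placeholders, not the paper's estimates.
alternatives = {
    "ASP only":         {"detect": 0.80, "equip_cost": 3.0e9, "false_alarm_cost": 0.5e9},
    "ASP + PVT hybrid": {"detect": 0.70, "equip_cost": 2.0e9, "false_alarm_cost": 0.8e9},
    "PVT only":         {"detect": 0.60, "equip_cost": 1.0e9, "false_alarm_cost": 1.2e9},
    "Status quo":       {"detect": 0.50, "equip_cost": 0.0,   "false_alarm_cost": 1.2e9},
}

p_attempt = 1e-2           # probability of an attack attempt (hypothetical)
p_detonate = 0.5           # probability of successful detonation if undetected
attack_consequence = 1e12  # dollar-equivalent consequence of a successful attack

def expected_cost(alt):
    # Chance nodes: attempt -> missed by detection -> successful detonation.
    p_miss = 1.0 - alt["detect"]
    expected_attack_loss = p_attempt * p_miss * p_detonate * attack_consequence
    return alt["equip_cost"] + alt["false_alarm_cost"] + expected_attack_loss

for name, alt in alternatives.items():
    print(f"{name:16s} expected cost = ${expected_cost(alt) / 1e9:.2f}B")

best = min(alternatives, key=lambda n: expected_cost(alternatives[n]))
print("Lowest expected cost:", best)
```

Sensitivity analysis of the kind reported in the paper amounts to sweeping parameters such as p_attempt or attack_consequence and observing when the recommended alternative changes.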
Item Open Access
Online learning under adverse settings (2015-05)
Özkan, Hüseyin
We present novel solutions for contemporary real-life applications that generate data at unforeseen rates and in unpredictable forms, including non-stationarity, corruptions, missing/mixed attributes, and high dimensionality. In particular, we introduce novel algorithms for online learning, where the observations are received sequentially and processed only once without being stored, under adverse settings: i) no or limited assumptions can be made about the data source, ii) the observations can be corrupted, and iii) the data must be processed at extremely fast rates. The introduced algorithms are highly effective and efficient, carry strong mathematical guarantees, and are shown, through comprehensive real-life experiments, to significantly outperform competing methods under such adverse conditions. We develop a novel, highly dynamic ensemble method that makes no stochastic assumptions on the data source. The presented method is asymptotically guaranteed to perform as well as (i.e., to be competitive against) the best expert in the ensemble, where the best expert itself is specifically designed to continuously improve over time in a completely data-adaptive manner. In addition, our algorithm achieves significantly superior modeling power (and hence significantly superior prediction performance) through a hierarchical and self-organizing approach, while mitigating overtraining issues by combining (taking finite unions of) low-complexity methods. In contrast, state-of-the-art ensemble techniques depend heavily on static and unstructured expert ensembles. In this regard, we rigorously resolve the resulting issues, such as over-sensitivity to source statistics and the incompatibility between modeling power and computational load/precision. Our results hold uniformly for every possible input stream, in the deterministic sense, regardless of stationary or non-stationary source statistics. Furthermore, we directly address data corruptions by developing novel, versatile imputation methods, and we thoroughly demonstrate that anomaly detection, in addition to being an important learning problem in its own right, is extremely effective for corruption detection and imputation purposes. To that end, for the first time in the literature, we develop an online implementation of the Neyman-Pearson characterization of anomalies in stationary or non-stationary fast-streaming temporal data. The introduced anomaly detection algorithm maximizes the detection power at a specified, controllable, constant false alarm rate with no parameter tuning, in a truly online manner. Our algorithms can process any streaming data at extremely fast rates without requiring a training phase or a priori information, while bearing strong performance guarantees. Through extensive experiments on real and synthetic benchmark data sets, we also show that our algorithms significantly outperform the state of the art, as well as the most recently proposed techniques in the literature, with remarkable adaptation capabilities to non-stationarity.
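As a hedged illustration of the "compete against the best expert" guarantee mentioned in the thesis abstract above, the following sketch implements a textbook exponentially weighted average forecaster over a fixed expert set on a toy non-stationary stream. The thesis's actual method is hierarchical and self-organizing; this generic version does not capture that, and the experts, learning rate, and data are invented for illustration.

```python
import numpy as np

# Generic exponentially weighted average forecaster under squared loss.
# A standard stand-in, not the thesis's hierarchical, self-organizing ensemble.
def online_ensemble(expert_preds, targets, eta=0.5):
    n_experts, T = expert_preds.shape
    weights = np.ones(n_experts) / n_experts
    predictions = np.empty(T)
    for t in range(T):
        predictions[t] = weights @ expert_preds[:, t]    # weighted prediction
        losses = (expert_preds[:, t] - targets[t]) ** 2  # per-expert squared loss
        weights *= np.exp(-eta * losses)                 # exponential update
        weights /= weights.sum()                         # renormalize
    return predictions

# Toy non-stationary stream: the best expert changes halfway through.
rng = np.random.default_rng(0)
T = 2000
targets = np.concatenate([np.ones(T // 2), -np.ones(T // 2)]) + 0.1 * rng.standard_normal(T)
experts = np.vstack([np.ones(T), -np.ones(T), np.zeros(T)])
preds = online_ensemble(experts, targets)
print("ensemble mean squared error:", np.mean((preds - targets) ** 2))
```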
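The abstract also describes an online anomaly detector that operates at a constant false alarm rate. The sketch below conveys only the basic idea of maintaining a score threshold online so that the long-run flag rate tracks a target rate, using a simple stochastic quantile update as a stand-in; the thesis's Neyman-Pearson construction is more involved, and the scores and parameters here are invented.

```python
import numpy as np

# Minimal stand-in for constant-false-alarm-rate online detection:
# track the (1 - alpha)-quantile of a scalar anomaly score with a
# stochastic quantile update, and flag scores above the running threshold.
def online_cfar(scores, alpha=0.05, step=0.01):
    threshold = 0.0
    flags = np.zeros(len(scores), dtype=bool)
    for t, s in enumerate(scores):
        flags[t] = s > threshold
        # Raise the threshold when a flag fires, lower it otherwise,
        # so the empirical flag rate drifts toward alpha.
        threshold += step * ((1.0 if flags[t] else 0.0) - alpha)
    return flags

rng = np.random.default_rng(1)
scores = rng.standard_normal(10000)  # nominal (non-anomalous) scores
scores[::500] += 6.0                 # inject occasional large anomalies
flags = online_cfar(scores, alpha=0.01)
print("empirical flag rate:", flags.mean())
```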