
      Sequential outlier detection based on incremental decision trees

Author(s): Gökçesu, Kaan; Neyshabouri, Mohammadreza Mohaghegh; Gökçesu, Hakan; Kozat, Süleyman Serdar
Date: 2019
Source Title: IEEE Transactions on Signal Processing
Print ISSN: 1053-587X
Electronic ISSN: 1941-0476
Publisher: IEEE
Volume: 67
Issue: 4
Pages: 993-1005
Language: English
Type: Article
Item Usage Stats: 138 views, 180 downloads
      Abstract
We introduce an online outlier detection algorithm to detect outliers in a sequentially observed data stream. For this purpose, we use a two-stage filtering and hedging approach. In the first stage, we construct a multimodal probability density function to model the normal samples. In the second stage, given a new observation, we label it as an anomaly if the value of the aforementioned density function at the newly observed point is below a specified threshold. In order to construct our multimodal density function, we use an incremental decision tree to construct a set of subspaces of the observation space. We train a single-component density function of the exponential family using the observations that fall inside each subspace represented on the tree. These single-component density functions are then adaptively combined to produce our multimodal density function, which is shown to achieve the performance of the best convex combination of the density functions defined on the subspaces. As we observe more samples, our tree grows and produces more subspaces. As a result, our modeling power increases over time while mitigating overfitting issues. In order to choose the threshold level used to label the observations, we use an adaptive thresholding scheme. We show that our adaptive threshold level achieves the performance of the optimal fixed threshold level chosen in hindsight with knowledge of the observation labels. Our algorithm provides significant performance improvements over the state of the art in a wide set of experiments involving both synthetic and real data.
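The abstract above describes a two-stage filter-and-threshold scheme. The sketch below is only a toy illustration of that idea, not the authors' algorithm: it replaces the incremental decision tree and the adaptive mixture of exponential-family densities with a single running Gaussian, and the paper's adaptive thresholding with a simple online correction. The class name StreamingDetector, the default threshold, and the learning_rate parameter are illustrative assumptions.

```python
# Toy sketch of the two-stage "model the normal data, then threshold the density" idea.
# NOT the authors' algorithm: a single running Gaussian stands in for the tree-based
# mixture of exponential-family densities, and a simple online update stands in for
# the adaptive thresholding scheme.

import math


class StreamingDetector:
    def __init__(self, threshold=-3.0, learning_rate=0.05):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0              # sum of squared deviations (Welford's method)
        self.threshold = threshold  # log-density level below which a point is flagged
        self.lr = learning_rate

    def log_density(self, x):
        # Log-density of a univariate Gaussian fitted to the samples seen so far.
        var = self.m2 / self.n if self.n > 1 else 1.0
        var = max(var, 1e-8)
        return -0.5 * math.log(2 * math.pi * var) - (x - self.mean) ** 2 / (2 * var)

    def update_model(self, x):
        # Stage 1: incrementally refine the density model of the observed stream.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def score(self, x, label=None):
        # Stage 2: flag x as an outlier if its log-density falls below the threshold.
        is_outlier = self.log_density(x) < self.threshold
        if label is not None:
            # Optional feedback (label: 1 = outlier, 0 = normal): nudge the threshold
            # to reduce mistakes; a crude stand-in for the paper's adaptive threshold.
            error = (1 if is_outlier else 0) - label
            self.threshold -= self.lr * error
        self.update_model(x)
        return is_outlier


if __name__ == "__main__":
    import random

    random.seed(0)
    detector = StreamingDetector()
    stream = [random.gauss(0.0, 1.0) for _ in range(200)] + [8.0]  # last point is an obvious outlier
    flags = [detector.score(x) for x in stream]
    print("flagged last point as outlier:", flags[-1])
```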
Keywords: Anomaly detection, Exponential family, Online learning, Mixture-of-experts
Permalink: http://hdl.handle.net/11693/53071
Published Version (Please cite this version): https://doi.org/10.1109/TSP.2018.2887406
Collections: Department of Electrical and Electronics Engineering