Browsing by Subject "Mixture-of-experts"
Now showing 1 - 2 of 2
Item Open Access
Online density estimation of nonstationary sources using exponential family of distributions (Institute of Electrical and Electronics Engineers Inc., 2018)
Gokcesu, K.; Kozat, Süleyman Serdar
We investigate online probability density estimation (or learning) of nonstationary (and memoryless) sources using the exponential family of distributions. To this end, we introduce a truly sequential algorithm that achieves Hannan-consistent log-loss regret against the true probability distribution without requiring any information about the observation sequence (e.g., the time horizon T or the drift C of the underlying distribution) to optimize its parameters. Our results are guaranteed to hold in an individual-sequence manner. Our log-loss performance with respect to the true probability density has a regret bound of O(√(CT)), where C is the total change (drift) in the natural parameters of the underlying distribution. To achieve this, we design a set of probability density estimators with exponentially quantized learning rates and merge them using a mixture-of-experts approach. Hence, we achieve this square-root regret with computational complexity only logarithmic in the time horizon, so our algorithm can be used efficiently in big-data applications. Beyond the regret bounds, we demonstrate through synthetic and real-life experiments substantial performance gains over state-of-the-art probability density estimation algorithms in the literature.

Item Open Access
Sequential outlier detection based on incremental decision trees (IEEE, 2019)
Gökçesu, Kaan; Neyshabouri, Mohammadreza Mohaghegh; Gökçesu, Hakan; Kozat, Süleyman Serdar
We introduce an online outlier detection algorithm that detects outliers in a sequentially observed data stream. For this purpose, we use a two-stage filtering and hedging approach. In the first stage, we construct a multimodal probability density function to model the normal samples. In the second stage, given a new observation, we label it as an anomaly if the value of the aforementioned density function at the newly observed point is below a specified threshold. To construct our multimodal density function, we use an incremental decision tree to build a set of subspaces of the observation space. We train a single-component density function from the exponential family on the observations that fall inside each subspace represented on the tree. These single-component density functions are then adaptively combined to produce our multimodal density function, which is shown to achieve the performance of the best convex combination of the density functions defined on the subspaces. As we observe more samples, our tree grows and produces more subspaces; as a result, our modeling power increases over time while overfitting is mitigated. To choose the threshold level for labeling the observations, we use an adaptive thresholding scheme, and we show that this adaptive threshold achieves the performance of the optimal fixed threshold chosen with the observation labels known in hindsight. Our algorithm provides significant performance improvements over the state of the art in a wide set of experiments involving both synthetic and real data.
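As an illustration of the mixture-of-experts construction described in the first abstract, the following is a minimal sketch: several online density estimators, each run with an exponentially quantized learning rate, are merged by exponential weighting on their log-loss. The one-dimensional Gaussian model, the update rules, and the names (GaussianExpert, run_mixture, num_experts) are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

class GaussianExpert:
    """Online Gaussian density estimator updated with a fixed learning rate (illustrative)."""
    def __init__(self, lr):
        self.lr = lr
        self.mean, self.var = 0.0, 1.0

    def log_density(self, x):
        return -0.5 * np.log(2 * np.pi * self.var) - (x - self.mean) ** 2 / (2 * self.var)

    def update(self, x):
        # Stochastic-gradient-style update of the mean and variance.
        self.mean += self.lr * (x - self.mean)
        self.var = max(self.var + self.lr * ((x - self.mean) ** 2 - self.var), 1e-6)

def run_mixture(stream, num_experts=8):
    # Exponentially quantized learning rates 2^-1, 2^-2, ...: only a logarithmic
    # number of experts is kept, mirroring the complexity claim in the abstract.
    experts = [GaussianExpert(lr=2.0 ** -(k + 1)) for k in range(num_experts)]
    log_weights = np.zeros(num_experts)  # uniform prior over experts
    total_log_loss = 0.0
    for x in stream:
        log_probs = np.array([e.log_density(x) for e in experts])
        # Mixture prediction: weight each expert's density by its current weight.
        w = np.exp(log_weights - log_weights.max())
        w /= w.sum()
        mix_log_prob = np.log(np.dot(w, np.exp(log_probs - log_probs.max()))) + log_probs.max()
        total_log_loss -= mix_log_prob
        # Exponential weighting on log-loss: reweight experts by their likelihood on x.
        log_weights += log_probs
        for e in experts:
            e.update(x)
    return total_log_loss
```

For example, run_mixture(np.random.randn(1000)) returns the cumulative log-loss of the mixture on a standard-normal stream; on a drifting stream, experts with larger learning rates receive more weight.

The second abstract's two-stage filtering and hedging idea can likewise be sketched roughly as below: a partition of the observation space stands in for the incremental decision tree, one running exponential-family (Gaussian) fit is kept per region, and a point is flagged as an outlier when its modeled density falls below a threshold. The fixed one-level split, the fixed threshold, and the names (RegionDensity, detect) are placeholders; the paper's adaptive combination and adaptive thresholding are not reproduced here.

```python
import numpy as np

class RegionDensity:
    """Running Gaussian fit for the samples that fall into one region of the partition."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 1.0

    def update(self, x):
        # Welford-style running mean and scatter.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def density(self, x):
        var = max(self.m2 / max(self.n, 1), 1e-6)
        return np.exp(-(x - self.mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def detect(stream, split=0.0, threshold=1e-3):
    """Yield (x, is_outlier) pairs for a 1-D stream; the threshold is a fixed placeholder."""
    regions = {"left": RegionDensity(), "right": RegionDensity()}
    for x in stream:
        key = "left" if x < split else "right"
        # Hedging stage: low modeled density at x => label the point as an outlier.
        yield x, regions[key].density(x) < threshold
        # Filtering stage: update the region's density estimate with the new sample.
        regions[key].update(x)
```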