Browsing by Subject "Exponential family"
Now showing 1 - 5 of 5
Item Open Access
Differential entropy of the conditional expectation under additive Gaussian noise (Institute of Electrical and Electronics Engineers, 2022)
Atalik, Arda; Köse, Alper; Gastpar, Michael
The conditional mean is a fundamental and important quantity whose applications include the theories of estimation and rate-distortion. It is also notoriously difficult to work with. This paper establishes novel bounds on the differential entropy of the conditional mean in the case of finite-variance input signals and additive Gaussian noise. The main result is a new lower bound in terms of the differential entropies of the input signal and the noisy observation. The main results are also extended to the vector Gaussian channel and to the natural exponential family. Various other properties, such as upper bounds, asymptotics, Taylor series expansions, and a connection to Fisher information, are obtained. Two applications of the lower bound, to remote-source coding and the CEO problem, are discussed.

Item Open Access
Estimating distributions varying in time in a universal manner (IEEE, 2017)
Gökçesu, Kaan; Manış, Eren; Kurt, Ali Emirhan; Yar, Ersin
We investigate the estimation of distributions with time-varying parameters. We introduce an algorithm that achieves the optimal negative log-likelihood performance against the true probability distribution. We achieve this optimal regret performance without any knowledge of the total change in the parameters of the true distribution. Our results are guaranteed to hold in an individual sequence manner, i.e., we make no assumptions on the underlying sequences.
Apart from the regret bounds, through synthetic and real-life experiments we demonstrate substantial performance gains over the state-of-the-art probability density estimation algorithms in the literature.

Item Open Access
Online anomaly detection in case of limited feedback with accurate distribution learning (IEEE, 2017)
Marivani, Iman; Kari, Dariush; Kurt, Ali Emirhan; Manış, Eren
We propose a high-performance algorithm for sequential anomaly detection. The proposed algorithm runs sequentially over data streams, accurately estimates the nominal distribution using the exponential family, and declares an anomaly when the likelihood assigned to the current observation falls below a threshold. We use the estimated nominal distribution to assign a likelihood to the current observation and employ limited feedback from the end user to adjust the threshold. The high performance of our algorithm stems from accurate estimation of the nominal distribution, which we achieve by preventing anomalous data from corrupting the update process. Our method is generic in the sense that it can operate successfully over a wide range of data distributions. We demonstrate the performance of our algorithm against the state of the art over time-varying distributions.

Item Open Access
Online density estimation of nonstationary sources using exponential family of distributions (Institute of Electrical and Electronics Engineers Inc., 2018)
Gökçesu, Kaan; Kozat, Süleyman Serdar
We investigate online probability density estimation (or learning) of nonstationary (and memoryless) sources using the exponential family of distributions. To this end, we introduce a truly sequential algorithm that achieves Hannan-consistent log-loss regret performance against the true probability distribution without requiring any information about the observation sequence (e.g., the time horizon T or the total drift of the underlying distribution C) to optimize its parameters.
Our results are guaranteed to hold in an individual sequence manner. Our log-loss performance with respect to the true probability density has a regret bound of O(√(CT)), where C is the total change (drift) in the natural parameters of the underlying distribution. To achieve this, we design a variety of probability density estimators with exponentially quantized learning rates and merge them using a mixture-of-experts approach. Hence, we achieve this square-root regret with computational complexity only logarithmic in the time horizon, so our algorithm can be used efficiently in big data applications. Apart from the regret bounds, through synthetic and real-life experiments we demonstrate substantial performance gains over the state-of-the-art probability density estimation algorithms in the literature.

Item Open Access
Sequential outlier detection based on incremental decision trees (IEEE, 2019)
Gökçesu, Kaan; Neyshabouri, Mohammadreza Mohaghegh; Gökçesu, Hakan; Kozat, Süleyman Serdar
We introduce an online outlier detection algorithm for sequentially observed data streams. For this purpose, we use a two-stage filtering and hedging approach. In the first stage, we construct a multimodal probability density function to model the normal samples. In the second stage, given a new observation, we label it as an anomaly if the value of the density function at the newly observed point is below a specified threshold. To construct the multimodal density function, we use an incremental decision tree to build a set of subspaces of the observation space, and we train a single-component density function from the exponential family on the observations that fall inside each subspace represented on the tree.
These single-component density functions are then adaptively combined to produce our multimodal density function, which is shown to achieve the performance of the best convex combination of the density functions defined on the subspaces. As we observe more samples, the tree grows and produces more subspaces; as a result, our modeling power increases over time while mitigating overfitting. To choose the threshold level used to label the observations, we use an adaptive thresholding scheme, and we show that our adaptive threshold achieves the performance of the optimal fixed threshold chosen in hindsight with knowledge of the observation labels. Our algorithm provides significant performance improvements over the state of the art in a wide set of experiments involving both synthetic and real data.
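As a worked illustration of the first entry above ("Differential entropy of the conditional expectation under additive Gaussian noise"): in the special case of a Gaussian input, everything is available in closed form. For X ~ N(0, σ²) and Y = X + Z with Z ~ N(0, 1), the conditional mean is E[X|Y] = (σ²/(σ²+1))·Y, itself Gaussian, so its differential entropy follows from the scaling identity h(aY) = h(Y) + log|a|. This is a minimal sketch of that textbook special case, not the paper's general lower bound; the variance value is our own illustrative choice.

```python
import math

def h_gauss(var):
    # differential entropy of N(0, var) in nats: 0.5 * log(2*pi*e*var)
    return 0.5 * math.log(2 * math.pi * math.e * var)

sigma2 = 4.0            # input variance (illustrative value, not from the paper)
var_y = sigma2 + 1.0    # Y = X + Z with Z ~ N(0, 1)
a = sigma2 / var_y      # E[X|Y] = a * Y for jointly Gaussian (X, Y)

h_x = h_gauss(sigma2)
h_y = h_gauss(var_y)
h_cond_mean = h_gauss(a * a * var_y)   # Var(aY) = a^2 * Var(Y)

# scaling identity: h(aY) = h(Y) + log|a|
assert abs(h_cond_mean - (h_y + math.log(a))) < 1e-12
```

Note that h(E[X|Y]) < h(X) here, since the conditional mean has variance σ⁴/(σ²+1) < σ²; the paper's contribution is bounding h(E[X|Y]) in the general finite-variance case, where no closed form exists.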
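The anomaly-detection entry above ("Online anomaly detection in case of limited feedback…") combines three ingredients: a sequentially estimated nominal density, a likelihood threshold, and feedback-driven threshold adjustment. A hedged toy sketch of that general idea, assuming a single Gaussian nominal density (the class name, constants, and update rules here are our own, not the paper's algorithm):

```python
import math

class LikelihoodAnomalyDetector:
    # Illustrative sketch: track a running Gaussian as the nominal density,
    # flag points whose likelihood falls below a threshold, and update the
    # density only with points deemed nominal.
    def __init__(self, threshold=0.05, lr=0.1):
        self.mean, self.var = 0.0, 1.0
        self.threshold, self.lr = threshold, lr

    def likelihood(self, x):
        # Gaussian density at x under the current nominal estimate
        return math.exp(-(x - self.mean) ** 2 / (2 * self.var)) / math.sqrt(2 * math.pi * self.var)

    def step(self, x):
        is_anomaly = self.likelihood(x) < self.threshold
        if not is_anomaly:
            # anomalous data never reaches these updates, so it cannot
            # corrupt the nominal-distribution estimate
            self.mean += self.lr * (x - self.mean)
            self.var = max(self.var + self.lr * ((x - self.mean) ** 2 - self.var), 1e-6)
        return is_anomaly

    def feedback(self, was_false_alarm):
        # limited end-user feedback adjusts the threshold multiplicatively
        self.threshold *= 0.9 if was_false_alarm else 1.1
```

Feeding nominal samples near zero keeps the density estimate tight, so a far outlier such as x = 8 receives negligible likelihood and is flagged; a user report of a false alarm then lowers the threshold for future decisions.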
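The fourth entry above ("Online density estimation of nonstationary sources…") merges estimators with exponentially quantized learning rates via a mixture of experts. A hedged toy sketch of that general idea, assuming unit-variance Gaussian experts that only track the mean (our own simplification, not the paper's estimator): expert i uses learning rate 2^-i, so logarithmically many experts span the useful range, and experts are mixed with weights exponential in their accumulated log-likelihood.

```python
import math

class GaussianExpert:
    def __init__(self, eta):
        self.eta, self.mean = eta, 0.0

    def density(self, x):
        # unit-variance Gaussian centered at the current mean estimate
        return math.exp(-(x - self.mean) ** 2 / 2) / math.sqrt(2 * math.pi)

    def update(self, x):
        self.mean += self.eta * (x - self.mean)   # gradient-style step on log-loss

class MixtureEstimator:
    def __init__(self, num_experts=8):
        # exponentially quantized learning rates: eta_i = 2^-i
        self.experts = [GaussianExpert(2.0 ** -i) for i in range(num_experts)]
        self.log_weights = [0.0] * num_experts

    def predict(self, x):
        # weights are exponential in each expert's accumulated log-likelihood
        # (normalized via the max for numerical stability)
        m = max(self.log_weights)
        ws = [math.exp(lw - m) for lw in self.log_weights]
        z = sum(ws)
        return sum(w * e.density(x) for w, e in zip(ws, self.experts)) / z

    def update(self, x):
        for i, e in enumerate(self.experts):
            # score each expert prequentially, then let it adapt
            self.log_weights[i] += math.log(e.density(x) + 1e-300)
            e.update(x)
```

On a slowly drifting stream, fast experts track the moving mean while slow experts win on stationary stretches; the mixture assigns most weight to whichever quantized rate best matches the current drift, which is the mechanism behind the O(√(CT)) regret claimed in the abstract.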