Browsing by Author "Kari, Dariush"
Now showing 1 - 5 of 5
Item Open Access
Big data signal processing using boosted RLS algorithm (IEEE, 2016)
Civek, Burak Cevat; Kari, Dariush; Delibalta, İ.; Kozat, Süleyman Serdar
We propose an efficient method for high-dimensional data regression. To this end, we use a least mean squares (LMS) filter followed by a recursive least squares (RLS) filter and combine them via the boosting notion extensively used in the machine learning literature. Moreover, we provide a novel approach in which the RLS filter is updated randomly in order to reduce the computational complexity without significantly sacrificing performance. In the proposed algorithm, after the LMS filter produces an estimate, the algorithm decides, based on the error made at this step, whether or not to update the RLS filter. Since we avoid updating the RLS filter for the entire data sequence, the computational complexity is significantly reduced. The error performance and computation time of our algorithm are demonstrated for a highly realistic scenario.

Item Open Access
Boosted adaptive filters (2017-07)
Kari, Dariush
We investigate boosted online regression and propose a novel family of regression algorithms with strong theoretical bounds. In addition, we implement several variants of the proposed generic algorithm. We specifically provide theoretical bounds for the performance of our proposed algorithms that hold in a strong mathematical sense. We achieve guaranteed performance improvement over conventional online regression methods without any statistical assumptions on the desired data or feature vectors. We demonstrate an intrinsic relationship, in terms of boosting, between the adaptive mixture-of-experts and data reuse algorithms. Furthermore, we introduce a boosting algorithm based on random updates that is significantly faster than conventional boosting methods and other variants of our proposed algorithms while achieving an enhanced performance gain.
Hence, the random updates method is specifically applicable to fast, high-dimensional streaming data. Specifically, we investigate recursive least squares (RLS)-based and least mean squares (LMS)-based linear regression algorithms in a mixture-of-experts setting, and provide several variants of these well-known adaptation methods. Moreover, we extend the proposed algorithms to other filters; specifically, we investigate the effect of the proposed algorithms on piecewise linear filters. Furthermore, we provide theoretical bounds for the computational complexity of our proposed algorithms. We demonstrate substantial performance gains in terms of mean square error over the constituent filters through an extensive set of benchmark real data sets and simulated examples.

Item Open Access
Boosted adaptive filters (Elsevier, 2018)
Kari, Dariush; Mirza, Ali H.; Khan, Farhan; Özkan, H.; Kozat, Süleyman Serdar
We introduce the boosting notion of machine learning to the adaptive signal processing literature. In our framework, we have several adaptive filtering algorithms, i.e., the weak learners, that run in parallel on a common task such as equalization, classification, regression or filtering. We specifically provide theoretical bounds for the performance improvement of our proposed algorithms over conventional adaptive filtering methods under some widely used statistical assumptions. We demonstrate an intrinsic relationship, in terms of boosting, between the adaptive mixture-of-experts and data reuse algorithms. Additionally, we introduce a boosting algorithm based on random updates that is significantly faster than conventional boosting methods and other variants of our proposed algorithms while achieving an enhanced performance gain. Hence, the random updates method is specifically applicable to fast, high-dimensional streaming data.
Specifically, we investigate recursive least squares-based and least mean squares-based linear and piecewise-linear regression algorithms in a mixture-of-experts setting and provide several variants of these well-known adaptation methods. Furthermore, we provide theoretical bounds for the computational complexity of our proposed algorithms. We demonstrate substantial performance gains in terms of mean squared error over the base learners through an extensive set of benchmark real data sets and simulated examples.

Item Open Access
Boosted LMS-based piecewise linear adaptive filters (IEEE, 2016)
Kari, Dariush; Marivani, Iman; Delibalta, İ.; Kozat, Süleyman Serdar
We introduce the boosting notion, extensively used in different machine learning applications, to the adaptive signal processing literature and implement several different adaptive filtering algorithms. In this framework, we have several adaptive constituent filters that run in parallel. For each newly received input vector and observation pair, each filter adapts itself based on the performance of the other adaptive filters in the mixture on the current data pair. These relative updates provide the boosting effect, such that the filters in the mixture learn different attributes of the data, providing diversity. The outputs of these constituent filters are then combined using adaptive mixture approaches. We provide computational complexity bounds for the boosted adaptive filters. The introduced methods demonstrate improvement over the performance of conventional adaptive filtering algorithms due to the boosting effect.

Item Open Access
Online anomaly detection in case of limited feedback with accurate distribution learning (IEEE, 2017)
Marivani, Iman; Kari, Dariush; Kurt, Ali Emirhan; Manış, Eren
We propose a high-performance algorithm for sequential anomaly detection.
The proposed algorithm sequentially runs over data streams, accurately estimates the nominal distribution using the exponential family, and declares an anomaly when the likelihood assigned to the current observation is less than a threshold. We use the estimated nominal distribution to assign a likelihood to the current observation and employ limited feedback from the end user to adjust the threshold. The high performance of our algorithm is due to accurate estimation of the nominal distribution, which we achieve by preventing anomalous data from corrupting the update process. Our method is generic in the sense that it can operate successfully over a wide range of data distributions. We demonstrate the performance of our algorithm, relative to the state of the art, over time-varying distributions.
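The LMS-then-RLS scheme with conditional updates, described in the boosted RLS abstracts above, can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: class names, the fixed error threshold `eps`, and the simple fixed 50/50 mixture of the two filter outputs are all assumptions made here for clarity.

```python
import numpy as np

class LMS:
    """Least mean squares filter: cheap stochastic-gradient updates."""
    def __init__(self, dim, mu=0.01):
        self.w = np.zeros(dim)
        self.mu = mu

    def predict(self, x):
        return self.w @ x

    def update(self, x, d):
        e = d - self.predict(x)      # instantaneous error
        self.w += self.mu * e * x    # gradient step
        return e

class RLS:
    """Recursive least squares filter: fast convergence, O(dim^2) per update."""
    def __init__(self, dim, lam=0.99, delta=1.0):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) / delta  # inverse-correlation estimate
        self.lam = lam                # forgetting factor

    def predict(self, x):
        return self.w @ x

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)  # gain vector
        self.w += k * (d - self.predict(x))
        self.P = (self.P - np.outer(k, Px)) / self.lam

def boosted_regression(X, d, eps=0.1):
    """LMS always updates; the costly RLS update runs only when the
    LMS error on the current sample is large, reducing complexity."""
    lms, rls = LMS(X.shape[1]), RLS(X.shape[1])
    preds = np.empty(len(d))
    for t, (x, dt) in enumerate(zip(X, d)):
        preds[t] = 0.5 * (lms.predict(x) + rls.predict(x))  # fixed mixture
        e_lms = lms.update(x, dt)
        if abs(e_lms) > eps:         # conditional (sparse) RLS update
            rls.update(x, dt)
    return preds
```

Because the RLS update is skipped whenever the LMS error is already small, most steady-state steps cost only the O(dim) LMS work, which is the source of the complexity reduction the abstracts describe.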
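The sequential anomaly detection scheme in the last abstract above can be sketched as follows, using a Gaussian (a member of the exponential family) as the nominal model. This is an illustrative sketch, not the authors' exact algorithm: the class name, the exponentially weighted parameter updates, and the additive threshold adjustment from feedback are assumptions chosen here for simplicity.

```python
import numpy as np

class OnlineAnomalyDetector:
    """Flags a sample as anomalous when its log-likelihood under the
    estimated nominal Gaussian falls below a threshold; flagged samples
    are excluded from the updates so they cannot corrupt the estimate."""
    def __init__(self, tau=-5.0, eta=0.1, alpha=0.05):
        self.mu, self.var = 0.0, 1.0  # nominal Gaussian parameters
        self.tau = tau                 # log-likelihood threshold
        self.eta = eta                 # threshold adaptation rate
        self.alpha = alpha             # parameter update rate

    def loglik(self, x):
        return -0.5 * (np.log(2 * np.pi * self.var)
                       + (x - self.mu) ** 2 / self.var)

    def step(self, x, feedback=None):
        is_anomaly = self.loglik(x) < self.tau
        if not is_anomaly:
            # update the nominal distribution only with nominal-looking data
            self.mu += self.alpha * (x - self.mu)
            self.var += self.alpha * ((x - self.mu) ** 2 - self.var)
        if feedback is not None:
            # limited feedback: tighten tau on a missed anomaly,
            # relax it on a false alarm
            if feedback and not is_anomaly:
                self.tau += self.eta
            elif not feedback and is_anomaly:
                self.tau -= self.eta
        return is_anomaly
```

The feedback argument is optional, reflecting the limited-feedback setting: the threshold only moves on the occasional samples the end user actually labels, while the nominal-distribution estimate adapts on every unflagged sample.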