Browsing by Subject "Piecewise linear"
Now showing 1 - 6 of 6
Item Open Access
Adaptive and efficient nonlinear channel equalization for underwater acoustic communication (Elsevier B.V., 2017)
Kari, D.; Vanli, N. D.; Kozat, S. S.
We investigate underwater acoustic (UWA) channel equalization and introduce hierarchical and adaptive nonlinear (piecewise linear) channel equalization algorithms that are highly efficient and provide significantly improved bit error rate (BER) performance. Because conventional nonlinear equalizers are highly complex and linear equalizers perform poorly on difficult underwater acoustic channels, we employ piecewise linear equalizers. To achieve the performance of the best piecewise linear model, we use a tree structure to hierarchically partition the space of the received signal. Furthermore, the equalization algorithm should be completely adaptive, since the highly non-stationary nature of the underwater medium causes the optimal mean squared error (MSE) equalizer, as well as the best piecewise linear equalizer, to change over time. To this end, we introduce an adaptive piecewise linear equalization algorithm that not only adapts the linear equalizer in each region but also learns the complete hierarchical structure, with a computational complexity only polynomial in the number of nodes of the tree. Furthermore, our algorithm is constructed to directly minimize the final squared error without introducing any ad hoc parameters. We demonstrate the performance of our algorithms through highly realistic experiments performed on practical field data as well as accurately simulated underwater acoustic channels. © 2017 Elsevier B.V.

Item Open Access
Boosted LMS-based piecewise linear adaptive filters (IEEE, 2016)
Kari, Dariush; Marivani, Iman; Delibalta, İ.; Kozat, Süleyman Serdar
We introduce the boosting notion, used extensively in different machine learning applications, to the adaptive signal processing literature and implement several different adaptive filtering algorithms.
In this framework, several adaptive constituent filters run in parallel. For each newly received input vector and observation pair, each filter adapts itself based on the performance of the other adaptive filters in the mixture on the current data pair. These relative updates provide the boosting effect, so that the filters in the mixture learn different attributes of the data, providing diversity. The outputs of these constituent filters are then combined using adaptive mixture approaches. We provide computational complexity bounds for the boosted adaptive filters. The introduced methods improve the performance of conventional adaptive filtering algorithms thanks to the boosting effect.

Item Open Access
Competitive and online piecewise linear classification (IEEE, 2013)
Özkan, Hüseyin; Donmez, M. A.; Pelvan, O. S.; Akman, A.; Kozat, Süleyman S.
In this paper, we study the binary classification problem in machine learning and introduce a novel classification algorithm based on the 'Context Tree Weighting Method'. The introduced algorithm incrementally learns a classification model through sequential updates over the course of a given data stream, i.e., each data point is processed only once and forgotten after the classifier is updated, and asymptotically achieves the performance of the best piecewise linear classifiers defined by the 'context tree'. Since the computational complexity is only linear in the depth of the context tree, our algorithm is highly scalable and appropriate for real-time processing. We present experimental results on several benchmark data sets and demonstrate that our method provides significant computational improvement in both the test (5 ∼ 35×) and training phases (40 ∼ 1000×), while achieving high classification accuracy in comparison to the SVM with an RBF kernel. © 2013 IEEE.

Item Open Access
Highly efficient nonlinear regression for big data with lexicographical splitting (Springer London, 2017)
Neyshabouri, M. M.; Demir, O.; Delibalta, I.; Kozat, S. S.
This paper considers the problem of online piecewise linear regression for big data applications. We introduce an algorithm which sequentially achieves the performance of the best piecewise linear (affine) model with the optimal partition of the space of the regressor vectors in an individual sequence manner. To this end, our algorithm constructs a class of 2^D sequential piecewise linear models over a set of partitions of the regressor space and efficiently combines them in the mixture-of-experts setting. We show that the algorithm is highly efficient, with a computational complexity of only O(mD^2), where m is the dimension of the regressor vectors. This efficiency is achieved by compactly representing all of the 2^D models using a "lexicographical splitting graph." We analyze the performance of our algorithm without any statistical assumptions, i.e., our results are guaranteed to hold. Furthermore, we demonstrate the effectiveness of our algorithm on well-known data sets in the machine learning literature at a fraction of the computational complexity of the state of the art.

Item Open Access
Linear MMSE-optimal turbo equalization using context trees (IEEE, 2013)
Kim, K.; Kalantarova, N.; Kozat, S. S.; Singer, A. C.
Formulations of the turbo equalization approach to iterative equalization and decoding vary greatly when channel knowledge is either partially or completely unknown. Maximum a posteriori probability (MAP) and minimum mean-square error (MMSE) approaches leverage channel knowledge to make explicit use of soft information (priors over the transmitted data bits) in a manner that is distinctly nonlinear, appearing either in a trellis formulation (MAP) or inside an inverted matrix (MMSE).
To date, nearly all adaptive turbo equalization methods either estimate the channel or use a direct adaptation equalizer in which estimates of the transmitted data are formed from an expressly linear function of the received data and the soft information, with the latter formulation being most common. We study a class of direct adaptation turbo equalizers that are both adaptive and nonlinear functions of the soft information from the decoder. We introduce piecewise linear models based on context trees that can adaptively approximate the nonlinear dependence of the equalizer on the soft information, choosing both the partition regions and the locally linear equalizer coefficients in each region independently, with computational complexity that remains of the order of a traditional direct adaptive linear equalizer. This approach is guaranteed to asymptotically achieve the performance of the best piecewise linear equalizer, and we quantify the MSE performance of the resulting algorithm and the convergence of its MSE to that of the linear minimum MSE estimator as the depth of the context tree and the data length increase.

Item Open Access
Twice-universal piecewise linear regression via infinite depth context trees (IEEE, 2015)
Vanlı, Nuri Denizcan; Sayın, Muhammed O.; Göze, T.; Kozat, Süleyman Serdar
We investigate the problem of sequential piecewise linear regression from a competitive framework. For an arbitrary and unknown data length n, we first introduce a method to partition the regressor space. In particular, we present a recursive method that divides the regressor space into O(n) disjoint regions, which can result in approximately 1.5^n different piecewise linear models on the regressor space. For each region, we introduce a universal linear regressor that performs nearly as well as the best linear regressor whose parameters are set non-causally.
We then use an infinite depth context tree to represent all piecewise linear models and introduce a universal algorithm that achieves the performance of the best piecewise linear model that can be selected in hindsight. In this sense, the introduced algorithm is twice-universal, in that it sequentially achieves the performance of the best model that uses the optimal regression parameters. Our algorithm achieves this performance with a computational complexity upper bounded by O(n) in the worst case and O(log(n)) under certain regularity conditions. We provide an explicit description of the algorithm as well as upper bounds on the regret with respect to the best nonlinear and piecewise linear models, and demonstrate the performance of the algorithm through simulations.
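A common thread across the items above is the same core technique: partition the regressor space with a tree and run an adaptive linear (LMS-style) filter in each region, so the combined filter is piecewise linear. The following is a minimal, hypothetical sketch of that idea, not the implementation of any of the listed papers: it compares a single global LMS filter against a depth-1 piecewise variant on data generated from a two-region linear model. All names (`run_lms`, `region_fn`) and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_lms(X, d, region_fn, n_regions, mu=0.05):
    """Region-wise LMS: one adaptive linear filter per region of the
    regressor space; region_fn maps a regressor vector to a region index."""
    n_taps = X.shape[1]
    W = np.zeros((n_regions, n_taps))      # one coefficient vector per region
    preds = np.empty(len(d))
    for t, (x, d_t) in enumerate(zip(X, d)):
        r = region_fn(x)                   # pick the active region
        y = W[r] @ x                       # predict with that region's filter
        preds[t] = y
        W[r] += mu * (d_t - y) * x         # standard LMS update, active region only
    return preds

# Synthetic data from a two-region (piecewise linear) model:
# the true filter depends on the sign of the first regressor coordinate.
n, m = 5000, 4
X = rng.standard_normal((n, m))
w_pos, w_neg = rng.standard_normal(m), rng.standard_normal(m)
d = np.where(X[:, 0] > 0, X @ w_pos, X @ w_neg) + 0.1 * rng.standard_normal(n)

linear_pred = run_lms(X, d, lambda x: 0, n_regions=1)                 # one global filter
piecewise_pred = run_lms(X, d, lambda x: int(x[0] > 0), n_regions=2)  # depth-1 tree partition

mse = lambda p: float(np.mean((d - p) ** 2))
print(f"linear MSE:    {mse(linear_pred):.4f}")
print(f"piecewise MSE: {mse(piecewise_pred):.4f}")
```

When the data truly come from a region-dependent linear model, the piecewise filter attains a much lower MSE than any single linear filter; the papers above extend this basic idea with hierarchical tree partitions, adaptive region boundaries, and mixture-of-experts combination of the exponentially many models a depth-D tree defines.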