Browsing by Subject "Nonlinear regression"
Now showing 1 - 11 of 11
Item Open Access
Bağlam ağaçları ile ardışık doğrusal olmayan bağlanım [Sequential nonlinear regression with context trees] (IEEE, 2014-04) Vanlı, N. Denizcan; Kozat, Süleyman S.
In this paper, the sequential nonlinear regression problem is studied and an efficient learning algorithm based on context trees is presented. To this end, the regression space is partitioned and the resulting regions are represented by a context tree. Using independent regression algorithms in each region, the predictions of all nonlinear models that can be represented by the context tree are adaptively combined by this algorithm, whose computational complexity is linear in the number of nodes of the context tree. The performance bounds of the proposed algorithm are analyzed without making any statistical assumptions on the data. In addition, a numerical example is provided to illustrate the theoretical results.

Item Open Access
A comprehensive approach to universal piecewise nonlinear regression based on trees (IEEE, 2014) Vanli, N. D.; Kozat, S. S.
In this paper, we investigate adaptive nonlinear regression and introduce tree-based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper bounds in an individual sequence manner. We use a tree notion in order to partition the space of regressors in a nested structure. The introduced algorithms adapt not only their regression functions but also the complete tree structure while achieving the performance of the 'best' linear mixture of a doubly exponential number of partitions, with a computational complexity only polynomial in the number of nodes of the tree. While constructing these algorithms, we also avoid using any artificial 'weighting' of models (with highly data-dependent parameters) and, instead, directly minimize the final regression error, which is the ultimate performance goal. The introduced methods are generic such that they can readily incorporate different tree construction methods, such as random trees, in their framework and can use different regressor or partitioning functions, as demonstrated in the paper.

Item Open Access
Highly efficient hierarchical online nonlinear regression using second order methods (Elsevier B.V., 2017) Civek, B. C.; Delibalta, I.; Kozat, S. S.
We introduce highly efficient online nonlinear regression algorithms that are suitable for real-life applications. We process the data in a truly online manner such that no storage is needed, i.e., the data is discarded after being used. For nonlinear modeling we use a hierarchical piecewise linear approach based on the notion of decision trees, where the space of the regressor vectors is adaptively partitioned based on the performance. For the first time in the literature, we learn both the piecewise linear partitioning of the regressor space and the linear models in each region using highly effective second order methods, i.e., Newton-Raphson methods. Hence, we avoid the well-known overfitting issues by using piecewise linear models; however, since both the region boundaries and the linear models in each region are trained using second order methods, we achieve substantial performance gains compared to the state of the art. We demonstrate our gains over well-known benchmark data sets and provide performance results in an individual sequence manner that are guaranteed to hold without any statistical assumptions. Hence, the introduced algorithms address the computational complexity issues widely encountered in real-life applications while providing superior guaranteed performance in a strong deterministic sense.
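The tree-based piecewise linear modeling that the three items above build on can be illustrated with a short sketch. This is only a minimal, hypothetical illustration, not the algorithms from these papers: it fixes a depth-2 partition of the regressor space and trains one linear model per region with a plain first-order online update, whereas the cited works additionally adapt the partition itself and combine all subtrees of the context or decision tree. The class name `TreePiecewiseLinearRegressor` and all parameter choices are illustrative assumptions.

```python
import numpy as np

class TreePiecewiseLinearRegressor:
    """Minimal sketch: a fixed binary partition of the regressor space,
    one linear model per leaf region, trained online with SGD.
    (The cited papers additionally adapt the partition and combine
    all subtrees represented by a context/decision tree.)"""

    def __init__(self, dim, depth=2, lr=0.01):
        self.depth = depth
        self.lr = lr
        rng = np.random.default_rng(0)
        # One separating hyperplane per internal node (fixed here).
        self.hyperplanes = rng.standard_normal((2 ** depth - 1, dim))
        # One linear model per leaf region.
        self.weights = np.zeros((2 ** depth, dim))

    def _leaf_index(self, x):
        node = 0
        for _ in range(self.depth):
            go_right = float(self.hyperplanes[node] @ x) >= 0.0
            node = 2 * node + 1 + int(go_right)
        return node - (2 ** self.depth - 1)  # map to a leaf id in [0, 2^depth)

    def predict(self, x):
        return float(self.weights[self._leaf_index(x)] @ x)

    def update(self, x, y):
        """One online step: predict, then SGD on the active region's model."""
        leaf = self._leaf_index(x)
        err = y - float(self.weights[leaf] @ x)
        self.weights[leaf] += self.lr * err * x
        return err

# Toy usage on a synthetic piecewise-linear target.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reg = TreePiecewiseLinearRegressor(dim=2)
    for _ in range(5000):
        x = rng.standard_normal(2)
        y = (2.0 * x[0] if x[0] >= 0 else -1.0 * x[1]) + 0.1 * rng.standard_normal()
        reg.update(x, y)
```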
Item Open Access
Highly efficient nonlinear regression for big data with lexicographical splitting (Springer London, 2017) Neyshabouri, M. M.; Demir, O.; Delibalta, I.; Kozat, S. S.
This paper considers the problem of online piecewise linear regression for big data applications. We introduce an algorithm which sequentially achieves the performance of the best piecewise linear (affine) model with the optimal partition of the space of the regressor vectors in an individual sequence manner. To this end, our algorithm constructs a class of 2^D sequential piecewise linear models over a set of partitions of the regressor space and efficiently combines them in the mixture-of-experts setting. We show that the algorithm is highly efficient, with a computational complexity of only O(mD^2), where m is the dimension of the regressor vectors. This efficient computational complexity is achieved by representing all of the 2^D models using a "lexicographical splitting graph." We analyze the performance of our algorithm without any statistical assumptions, i.e., our results are guaranteed to hold. Furthermore, we demonstrate the effectiveness of our algorithm on well-known data sets from the machine learning literature at a fraction of the computational complexity of the state of the art.

Item Open Access
Markovian RNN: an adaptive time series prediction network with HMM-based switching for nonstationary environments (Institute of Electrical and Electronics Engineers Inc., 2023-02-01) İlhan, Fatih; Karaahmetoğlu, Oğuzhan; Balaban, İ.; Kozat, Süleyman Serdar
We investigate nonlinear regression for nonstationary sequential data. In most real-life applications, such as business domains including finance, retail, energy, and economy, time series data exhibit nonstationarity due to the temporally varying dynamics of the underlying system. We introduce a novel recurrent neural network (RNN) architecture, which adaptively switches between internal regimes in a Markovian way to model the nonstationary nature of the given data. Our model, Markovian RNN, employs a hidden Markov model (HMM) for regime transitions, where each regime controls the hidden state transitions of the recurrent cell independently. We jointly optimize the whole network in an end-to-end fashion. We demonstrate significant performance gains compared to conventional methods such as Markov switching ARIMA, RNN variants, and recent statistical and deep learning-based methods through an extensive set of experiments with synthetic and real-life datasets. We also interpret the inferred parameters and regime belief values to analyze the underlying dynamics of the given sequences.

Item Open Access
Markovian RNN: an adaptive time series prediction network with HMM-based switching for nonstationary environments (Institute of Electrical and Electronics Engineers, 2021-08-09) İlhan, Fatih; Karaahmetoğlu, Oğuzhan; Balaban, İ.; Kozat, Süleyman Serdar
We investigate nonlinear regression for nonstationary sequential data. In most real-life applications, such as business domains including finance, retail, energy, and economy, time series data exhibit nonstationarity due to the temporally varying dynamics of the underlying system. We introduce a novel recurrent neural network (RNN) architecture, which adaptively switches between internal regimes in a Markovian way to model the nonstationary nature of the given data. Our model, Markovian RNN, employs a hidden Markov model (HMM) for regime transitions, where each regime controls the hidden state transitions of the recurrent cell independently. We jointly optimize the whole network in an end-to-end fashion. We demonstrate significant performance gains compared to conventional methods such as Markov switching ARIMA, RNN variants, and recent statistical and deep learning-based methods through an extensive set of experiments with synthetic and real-life datasets. We also interpret the inferred parameters and regime belief values to analyze the underlying dynamics of the given sequences.
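A rough sketch of the regime-switching idea described in the two Markovian RNN records above is given below, under simplifying assumptions: K regimes, each with its own recurrent weights, a fixed HMM transition matrix, and a Gaussian likelihood on the previous prediction error for the belief update. The published model learns all of these parameters end-to-end; here they are taken as given, and the function name `markovian_rnn_step` is hypothetical.

```python
import numpy as np

def markovian_rnn_step(x_t, y_prev, h, belief, params):
    """One forward step of a simplified HMM-switching recurrent cell.

    Sketch only: each of the K regimes proposes its own hidden state
    update; the cell's hidden state is the belief-weighted mixture of
    these updates, and the regime belief is propagated with the HMM
    transition matrix P and a Gaussian likelihood of the previous
    prediction error. All parameters in `params` are assumed given.
    """
    W_h, W_x, w_out, P, sigma = (params[k] for k in ("W_h", "W_x", "w_out", "P", "sigma"))

    # Per-regime candidate hidden states and predictions.
    h_k = np.tanh(W_h @ h + W_x @ x_t)   # shape (K, hidden_dim)
    y_k = h_k @ w_out                    # shape (K,)

    # Likelihood of the previous observation under each regime's prediction.
    lik = np.exp(-0.5 * ((y_prev - y_k) / sigma) ** 2) + 1e-12

    # HMM belief propagation: transition step, then observation update.
    belief = (P.T @ belief) * lik
    belief /= belief.sum()

    # Belief-weighted hidden state and final prediction.
    h_new = belief @ h_k
    y_hat = float(h_new @ w_out)
    return y_hat, h_new, belief
```

In this sketch `W_h` has shape (K, hidden_dim, hidden_dim), `W_x` has shape (K, hidden_dim, input_dim), `w_out` has shape (hidden_dim,), and `P` is a K-by-K row-stochastic transition matrix; these shapes are assumptions made for the illustration.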
Item Open Access
Nonlinear regression using second order methods (IEEE, 2016) Civek, Burak Cevat; Delibalta, İ.; Kozat, Süleyman Serdar
We present a highly efficient algorithm for the online nonlinear regression problem. We process only the currently available data and do not reuse it; hence, there is no need for storage. For the nonlinear regression, we use piecewise linear modeling, where the regression space is partitioned into several regions and a linear model is fit to each region. For the first time in the literature, we use second order methods, e.g., Newton-Raphson methods, and adaptively train both the region boundaries and the corresponding linear models. Therefore, we overcome the well-known overfitting and underfitting problems. The proposed algorithm provides a substantial improvement in performance compared to the state of the art.

Item Open Access
Nonstationary time series prediction with Markovian switching recurrent neural networks (2021-07) İlhan, Fatih
We investigate nonlinear prediction for nonstationary time series. In most real-life scenarios, such as finance, retail, energy, and economy applications, time series data exhibit nonstationarity due to the temporally varying dynamics of the underlying system. This situation makes time series prediction challenging in nonstationary environments. We introduce a novel recurrent neural network (RNN) architecture, which adaptively switches between internal regimes in a Markovian way to model the nonstationary nature of the given data. Our model, Markovian RNN, employs a hidden Markov model (HMM) for regime transitions, where each regime controls the hidden state transitions of the recurrent cell independently. We jointly optimize the whole network in an end-to-end fashion. We demonstrate significant performance gains compared to conventional methods such as Markov switching ARIMA, RNN variants, and recent statistical and deep learning-based methods through an extensive set of experiments with synthetic and real-life datasets. We also interpret the inferred parameters and regime belief values to analyze the underlying dynamics of the given sequences.

Item Open Access
Online distributed nonlinear regression via neural networks (IEEE, 2017) Ergen, Tolga; Kozat, Süleyman Serdar
In this paper, we study the nonlinear regression problem in a network of nodes and introduce long short-term memory (LSTM) based algorithms. In order to learn the parameters of the LSTM architecture in an online manner, we put the LSTM equations into a nonlinear state space form and then introduce our distributed particle filtering (DPF) based training algorithm. Our training algorithm asymptotically achieves the optimal training performance. In our simulations, we illustrate the performance improvement achieved by the introduced algorithm with respect to conventional methods.
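For intuition on the second-order (Newton-Raphson type) online updates referenced in the items above, the following minimal sketch trains a single linear model under squared loss, where the Newton-type step reduces to recursive least squares with a Sherman-Morrison update of the inverse Hessian estimate. The cited papers go further and also train the region boundaries of a piecewise model this way; the class name and the initialization constant here are illustrative assumptions.

```python
import numpy as np

class OnlineNewtonLinearModel:
    """Minimal sketch of a second-order online update for a linear
    regressor under squared loss. In this setting the Newton-type step
    coincides with recursive least squares: the inverse Hessian
    estimate A_inv is updated with the Sherman-Morrison formula so no
    matrix inversion is needed per step. (This sketch covers a single
    linear model only, not the piecewise structure of the papers.)"""

    def __init__(self, dim, delta=1.0):
        self.w = np.zeros(dim)
        self.A_inv = np.eye(dim) / delta   # inverse of the regularized Hessian estimate

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        err = y - self.predict(x)
        Ax = self.A_inv @ x
        # Sherman-Morrison rank-one update of the inverse Hessian.
        self.A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        # Newton-type parameter step: inverse-Hessian-scaled gradient.
        self.w += (self.A_inv @ x) * err
        return err
```

The Sherman-Morrison update keeps each step at O(m^2) in the regressor dimension m, which is what makes second-order training practical in an online setting.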
Item Open Access
Piecewise nonlinear regression via decision adaptive trees (IEEE, 2014-09) Vanlı, N. Denizcan; Sayın, Muhammed O.; Ergüt, S.; Kozat, Süleyman S.
We investigate the problem of adaptive nonlinear regression and introduce tree-based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper bounds in an individual sequence manner. We partition the regressor space using hyperplanes in a nested structure according to the notion of a tree. In this manner, we introduce an adaptive nonlinear regression algorithm that not only adapts the regressor of each partition but also learns the complete tree structure with a computational complexity only polynomial in the number of nodes of the tree. Our algorithm is constructed to directly minimize the final regression error without introducing any ad hoc parameters. Moreover, our method can be readily incorporated with any tree construction method, as demonstrated in the paper.

Item Open Access
Variable selection in regression using maximal correlation and distance correlation (Taylor and Francis Ltd., 2015) Yenigün, C. D.; Rizzo, M. L.
In most regression problems, the first task is to select the most influential predictors explaining the response and to remove the others from the model. These problems are usually referred to as variable selection problems in the statistical literature. Numerous methods have been proposed in this field, most of which address linear models. In this study we propose two variable selection criteria for regression based on two powerful dependence measures, maximal correlation and distance correlation. We focus on these two measures since they fully or partially satisfy the Rényi postulates for dependence measures, and thus they are able to detect nonlinear dependence structures. Therefore, our methods are appropriate for linear as well as nonlinear regression models. Both methods are easy to implement and perform well. We illustrate the performance of the proposed methods via simulations and compare them with two benchmark methods, stepwise Akaike information criterion and lasso. In several cases with linear dependence, all four methods turned out to be comparable. In the presence of nonlinear or uncorrelated dependencies, we observed that our proposed methods may be favourable. An application of the proposed methods to a real financial data set is also provided.
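As a rough illustration of the distance-correlation-based screening idea in the last item, the sketch below computes the sample distance correlation from its double-centered distance matrices and ranks predictors by their dependence with the response. This is a simple filter-style ranking, not the authors' full selection procedure, and the function names are hypothetical; the maximal-correlation criterion from the same paper would require a separate estimator and is not sketched here.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D samples, computed
    directly from the double-centered pairwise distance matrices."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    a = np.abs(x - x.T)                   # pairwise distances within x
    b = np.abs(y - y.T)                   # pairwise distances within y
    A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
    dcov2 = (A * B).mean()                # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

def rank_predictors(X, y):
    """Rank the columns of X by their distance correlation with y
    (a filter-style screening, not the paper's full procedure)."""
    scores = [distance_correlation(X[:, j], y) for j in range(X.shape[1])]
    order = sorted(range(X.shape[1]), key=lambda j: scores[j], reverse=True)
    return order, scores
```

A nonlinear but uncorrelated pair (for example, y depending on x only through x squared) would receive a near-zero Pearson correlation yet a clearly positive distance correlation, which is what makes this measure useful for detecting nonlinear dependence.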