Browsing by Subject "Boosting"
Now showing 1 - 5 of 5
Item Open Access: Big data signal processing using boosted RLS algorithm (IEEE, 2016)
Civek, Burak Cevat; Kari, Dariush; Delibalta, İ.; Kozat, Süleyman Serdar

We propose an efficient method for high-dimensional data regression. To this end, we use a least mean squares (LMS) filter followed by a recursive least squares (RLS) filter and combine them via the boosting notion used extensively in the machine learning literature. Moreover, we provide a novel approach in which the RLS filter is updated randomly in order to reduce the computational complexity without sacrificing much performance. In the proposed algorithm, after the LMS filter produces an estimate, the algorithm decides, based on the error made at this step, whether or not to update the RLS filter. Since the RLS filter is not updated for every sample of the data sequence, the computational complexity is significantly reduced. The error performance and computation time of our algorithm are demonstrated for a highly realistic scenario.

Item Open Access: Escaping local optima in a class of multi-agent distributed optimization problems: a boosting function approach (IEEE, 2014)
Sun, X.; Cassandras, C. G.; Gökbayrak, Kaan

We address the problem of multiple local optima commonly arising in optimization problems for multi-agent systems, where objective functions are nonlinear and nonconvex. For the class of coverage control problems, we propose a systematic approach for escaping a local optimum, rather than randomly perturbing controllable variables away from it. We show that the objective function for these problems can be decomposed to facilitate the evaluation of the local partial derivative of each node in the system and to provide insights into its structure.
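The gated LMS-then-RLS scheme in the boosted RLS item above can be sketched minimally as follows. The specific gating rule (a fixed error threshold), the forgetting factor, and all parameter values are illustrative assumptions, not the paper's exact randomized scheme:

```python
import numpy as np

def boosted_rls(X, y, mu=0.01, lam=0.99, err_threshold=0.5):
    """Sketch: an LMS filter whose per-sample error gates a conditional
    RLS update. Skipping the RLS update when the LMS error is small is
    the complexity-reduction idea described in the abstract; the simple
    threshold rule here is a hypothetical stand-in."""
    n, d = X.shape
    w_lms = np.zeros(d)
    w_rls = np.zeros(d)
    P = np.eye(d)                      # inverse correlation matrix for RLS
    y_hat = np.zeros(n)
    for t in range(n):
        x = X[t]
        # LMS stage: estimate and stochastic-gradient update
        e_lms = y[t] - w_lms @ x
        w_lms += mu * e_lms * x
        # RLS stage is updated only when the LMS error is large
        if abs(e_lms) > err_threshold:
            Px = P @ x
            k = Px / (lam + x @ Px)    # RLS gain vector
            w_rls += k * (y[t] - w_rls @ x)
            P = (P - np.outer(k, Px)) / lam
        y_hat[t] = w_rls @ x
    return y_hat, w_lms, w_rls
```

On stationary data the LMS error shrinks over time, so the RLS update fires mostly during the transient, which is where the computational savings come from.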
This structure is exploited by defining "boosting functions" applied to the aforementioned local partial derivative at an equilibrium point, where its value is zero, so as to transform it in a way that induces nodes to explore poorly covered areas of the mission space until a new equilibrium point is reached. The proposed boosting process ensures that, at its conclusion, the objective function is no worse than its pre-boosting value; a global optimum, however, cannot be guaranteed. We define three families of boosting functions with different properties and provide simulation results illustrating how this approach improves the solutions obtained for this class of distributed optimization problems.

Item Open Access: Low complexity efficient online learning algorithms using LSTM networks (2018-12)
Mirza, Ali Hassan

In this thesis, we implement efficient online learning algorithms using Long Short-Term Memory (LSTM) networks with low time and computational complexity. In Chapter 2, we investigate efficient covariance information-based online learning using LSTM networks, known as Co-LSTM networks. We incorporate the covariance information into the LSTM gating structure and propose various efficient models. We reduce the computational complexity by applying the Weight Matrix Factorization (WMF) trick and derive additive gradient-based updates. In Chapter 3, we give a practical application of network intrusion detection using Co-LSTM networks. In Chapter 4, we propose a boosted binary version of Tree-LSTM networks, which we call BBT-LSTM networks. We introduce the depth and windowing factor into the N-ary Tree-LSTM networks, where each LSTM node is split binarily and the whole tree architecture grows in a balanced manner.
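The boosting-function idea from the coverage-control item above can be illustrated with a one-dimensional toy: run gradient ascent to an equilibrium, apply a boosted exploratory step away from it, re-converge, and keep the new point only if the objective is no worse. The fixed-perturbation boost used here is a simplified stand-in for the paper's boosting-function families, and the objective is purely illustrative:

```python
import numpy as np

def objective(p):
    # Illustrative nonconvex objective with two local maxima (at ~1 and ~-2)
    return np.exp(-(p - 1.0) ** 2) + 2.0 * np.exp(-(p + 2.0) ** 2)

def gradient(p, h=1e-6):
    return (objective(p + h) - objective(p - h)) / (2 * h)

def boosted_ascent(p0, step=0.1, boost=5.0, tol=1e-5, iters=2000):
    """Sketch: gradient ascent until equilibrium (gradient ~ 0), then a
    boosted exploratory jump in each direction; a candidate is accepted
    only if the re-converged objective is no worse than before, mirroring
    the paper's guarantee on the pre-boosting value."""
    p = p0
    for _ in range(iters):
        g = gradient(p)
        if abs(g) < tol:                       # equilibrium reached: boost
            for direction in (+1.0, -1.0):
                q = p + boost * direction      # boosted exploratory step
                for _ in range(iters):         # re-converge from the jump
                    gq = gradient(q)
                    if abs(gq) < tol:
                        break
                    q += step * gq
                if objective(q) >= objective(p):  # never accept a worse value
                    p = q
            return p
        p += step * g
    return p
```

Started near the poorer local maximum at p = 1, the boost escapes to the better one near p = -2; a jump into a flat region is rejected because it would lower the objective.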
In order to reduce the computational complexity of the BBT-LSTM networks, we apply the WMF trick, replace the regular multiplication operator with an energy-efficient operator, and finally introduce a slicing operation on the BBT-LSTM network weight matrices. In Chapter 5, we propose another low-complexity LSTM network based on a minimum number of hops over the input data sequence. We study two methods to select an appropriate value of the hopping distance. Through an extensive set of experiments on real-life data sets, we demonstrate the significant performance gains of the proposed algorithms at the end of each chapter.

Item Open Access: Online boosting algorithm for regression with additive and multiplicative updates (IEEE, 2018-05)
Mirza, Ali H.

In this paper, we propose a boosted regression algorithm in an online framework. We form a linear combination of the estimated outputs of the weak learners and weigh each estimated output differently by introducing ensemble coefficients. We then update the ensemble coefficients using both additive and multiplicative updates, along with stochastic gradient updates of the regression weight coefficients. We make the proposed algorithm robust by introducing two critical factors: a significance factor and a penalty factor. These two factors play a crucial role in the gradient updates of the regression weight coefficients and in increasing the regression performance. The proposed algorithm is guaranteed to converge, with a regret bound that decays exponentially in the number of weak learners. We then demonstrate the performance of the proposed algorithm on both synthetic and real-life data sets.

Item Open Access: Online nonlinear modeling for big data applications (2017-12)
Khan, Farhan

We investigate online nonlinear learning for several real-life adaptive signal processing and machine learning applications involving big data, and introduce algorithms that are both efficient and effective.
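The ensemble structure in the online boosting regression item above (weak learners combined through ensemble coefficients, updated alongside stochastic gradient steps on the learners themselves) might be sketched as below. This sketch shows only the multiplicative (exponentiated-gradient) coefficient update; the paper's additive update and its significance and penalty factors are omitted, and the random feature masks and all step sizes are illustrative assumptions:

```python
import numpy as np

def online_boosted_regression(X, y, n_learners=5, mu=0.05, eta=0.5):
    """Sketch: a pool of linear weak learners, each trained by stochastic
    gradient descent on its own error, combined through ensemble
    coefficients refined by a multiplicative exponentiated-gradient step."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    # each weak learner sees a random feature subset (hypothetical choice)
    masks = [rng.random(d) < 0.7 for _ in range(n_learners)]
    W = np.zeros((n_learners, d))
    c = np.full(n_learners, 1.0 / n_learners)   # ensemble coefficients
    y_hat = np.zeros(n)
    for t in range(n):
        x = X[t]
        preds = np.array([W[m] @ (x * masks[m]) for m in range(n_learners)])
        y_hat[t] = c @ preds                    # weighted ensemble estimate
        err = y[t] - y_hat[t]
        # multiplicative update: reweight learners toward the error direction
        c = c * np.exp(np.clip(eta * err * preds, -10.0, 10.0))
        c = c / c.sum()
        # stochastic gradient update of each weak learner's own weights
        for m in range(n_learners):
            W[m] += mu * (y[t] - preds[m]) * (x * masks[m])
    return y_hat, c
```

Normalizing the coefficients after the multiplicative step keeps the ensemble a convex combination of the weak learners, which is what makes the exponentiated-gradient view apply.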
We present novel solutions for learning from data that is generated at high speed and/or has high dimensionality in a non-stationary environment and needs to be processed on the fly. We specifically focus on problems arising from adverse real-life conditions in a big data perspective. We propose online algorithms that are robust against non-stationarities and corruptions in the data. We emphasize that our proposed algorithms are universally applicable to several real-life applications regardless of the complexities involved, including high dimensionality, time-varying statistics, data structures, and abrupt changes. To this end, we introduce a highly robust hierarchical trees algorithm for online nonlinear learning in a high-dimensional setting where the data lies on a time-varying manifold. We escape the curse of dimensionality by tracking the subspace of the underlying manifold and using the projections of the original high-dimensional regressor space onto the underlying manifold as the modified regressor vectors for modeling the nonlinear system. With the proposed algorithm, we reduce the computational complexity to the order of the depth of the tree and the memory requirement to linear in the intrinsic dimension of the manifold. We demonstrate significant performance gains in terms of mean square error over other state-of-the-art techniques through simulated as well as real data. We then consider real-life applications of online nonlinear modeling, such as network intrusion detection, customer churn analysis, and channel estimation for underwater acoustic communication. We propose sequential and online learning methods that achieve significant performance, in terms of detection accuracy, compared to state-of-the-art techniques. We specifically introduce structured and deep learning methods to develop robust learning algorithms.
Furthermore, we improve the performance of our proposed online nonlinear learning models by introducing mixture-of-experts methods and the concept of boosting. The proposed algorithms achieve significant performance gains over state-of-the-art methods, with significantly reduced computational complexity and storage requirements under real-life conditions.
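The mixture-of-experts idea mentioned in the last item can be sketched with a standard exponentially weighted combination of parallel online experts; here the experts are LMS filters with different step sizes, which is an illustrative choice rather than the thesis's actual expert construction:

```python
import numpy as np

def mixture_of_experts(X, y, experts_mu=(0.001, 0.01, 0.1), eta=0.1):
    """Sketch: several online linear experts run in parallel; their
    predictions are combined with exponentially weighted averaging, so
    the mixture tracks the best expert without knowing it in advance."""
    n, d = X.shape
    K = len(experts_mu)
    W = np.zeros((K, d))
    logw = np.zeros(K)                 # log-weights, for numerical stability
    y_hat = np.zeros(n)
    for t in range(n):
        x = X[t]
        preds = W @ x
        p = np.exp(logw - logw.max())  # normalized mixture weights
        p /= p.sum()
        y_hat[t] = p @ preds           # exponentially weighted combination
        # penalize each expert's weight by its own squared loss
        logw -= eta * (y[t] - preds) ** 2
        # each expert performs its own LMS update, independently of p
        for k, mu in enumerate(experts_mu):
            W[k] += mu * (y[t] - preds[k]) * x
    return y_hat
```

Because the weights decay exponentially in each expert's cumulative loss, the mixture's predictions quickly concentrate on the best-tuned expert, which is the usual appeal of this combination scheme in non-stationary settings.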