Browsing by Author "Vanli, N. D."
Now showing 1 - 12 of 12
Item Open Access
Adaptive and efficient nonlinear channel equalization for underwater acoustic communication (Elsevier B.V., 2017)
Kari, D.; Vanli, N. D.; Kozat, S. S.
We investigate underwater acoustic (UWA) channel equalization and introduce hierarchical and adaptive nonlinear (piecewise linear) channel equalization algorithms that are highly efficient and provide significantly improved bit error rate (BER) performance. Due to the high complexity of conventional nonlinear equalizers and the poor performance of linear ones, we employ piecewise linear equalizers to equalize highly difficult underwater acoustic channels. To achieve the performance of the best piecewise linear model, we use a tree structure to hierarchically partition the space of the received signal. Furthermore, the equalization algorithm should be completely adaptive since, due to the highly non-stationary nature of the underwater medium, the optimal mean squared error (MSE) equalizer as well as the best piecewise linear equalizer changes in time. To this end, we introduce an adaptive piecewise linear equalization algorithm that not only adapts the linear equalizer at each region but also learns the complete hierarchical structure, with a computational complexity only polynomial in the number of nodes of the tree. Furthermore, our algorithm is constructed to directly minimize the final squared error without introducing any ad hoc parameters. We demonstrate the performance of our algorithms through highly realistic experiments performed on practical field data as well as accurately simulated underwater acoustic channels. © 2017 Elsevier B.V.

Item Open Access
A comprehensive approach to universal piecewise nonlinear regression based on trees (IEEE, 2014)
Vanli, N. D.; Kozat, S. S.
In this paper, we investigate adaptive nonlinear regression and introduce tree-based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper bounds in an individual sequence manner. We use a tree notion in order to partition the space of regressors in a nested structure. The introduced algorithms adapt not only their regression functions but also the complete tree structure while achieving the performance of the 'best' linear mixture of a doubly exponential number of partitions, with a computational complexity only polynomial in the number of nodes of the tree. While constructing these algorithms, we also avoid using any artificial 'weighting' of models (with highly data dependent parameters) and, instead, directly minimize the final regression error, which is the ultimate performance goal. The introduced methods are generic, such that they can readily incorporate different tree construction methods, such as random trees, in their framework and can use different regressor or partitioning functions, as demonstrated in the paper.
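As a minimal illustration of the piecewise linear modeling used in the two items above (a sketch under simplifying assumptions, not the authors' full algorithms), the following snippet runs a depth-1 partition: a fixed hyperplane splits the regressor space into two regions, each with its own LMS-updated linear filter. The papers' algorithms additionally learn the partition itself and combine a doubly exponential number of tree prunings with only polynomial complexity.

```python
import numpy as np

def piecewise_linear_lms(X, d, mu=0.01, seed=0):
    """Depth-1 piecewise linear adaptive filter (illustrative sketch only).

    A fixed hyperplane splits the regressor space into two regions, each
    with its own LMS-updated linear filter. The papers' algorithms also
    adapt the partition itself and combine all prunings of a deeper tree."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    s = rng.standard_normal(m)       # separating direction (adapted in the papers)
    w = np.zeros((2, m))             # one linear filter per region
    d_hat = np.zeros(n)
    for t in range(n):
        x = X[t]
        r = int(x @ s >= 0)          # region indicator for this regressor
        d_hat[t] = w[r] @ x          # region-specific linear prediction
        e = d[t] - d_hat[t]
        w[r] += mu * e * x           # LMS update of the active region only
    return d_hat, w
```

With a deeper tree, each leaf defines a region and every pruning of the tree defines a partition; the papers show how to adaptively track the best such partition without an exhaustive search.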
Item Open Access
Growth optimal investment in discrete-time markets with proportional transaction costs (Elsevier Inc., 2016)
Vanli, N. D.; Tunc, S.; Donmez, M. A.; Kozat, S. S.
We investigate how and when to diversify capital over assets, i.e., the portfolio selection problem, from a signal processing perspective. To this end, we first construct portfolios that achieve the optimal expected growth in i.i.d. discrete-time two-asset markets under proportional transaction costs. We then extend our analysis to cover markets having more than two stocks. The market is modeled by a sequence of price relative vectors with arbitrary discrete distributions, which can also be used to approximate a wide class of continuous distributions. To achieve the optimal growth, we use threshold portfolios, where we introduce a recursive update to calculate the expected wealth. We then demonstrate that, under the threshold rebalancing framework, the achievable set of portfolios elegantly forms an irreducible Markov chain under mild technical conditions. We evaluate the corresponding stationary distribution of this Markov chain, which provides a natural and efficient method to calculate the cumulative expected wealth. Subsequently, the corresponding parameters are optimized, yielding the growth optimal portfolio under proportional transaction costs in i.i.d. discrete-time two-asset markets. We also solve the well-known optimal portfolio selection problem in discrete-time markets constructed by sampling continuous-time Brownian markets. For the case where the underlying discrete distributions of the price relative vectors are unknown, we provide a maximum likelihood estimator that is also incorporated in the optimization framework in our simulations.

Item Open Access
A Novel Family of Adaptive Filtering Algorithms Based on the Logarithmic Cost (IEEE, 2014-09-01)
Sayin, M. O.; Vanli, N. D.; Kozat, S. S.
We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines the higher- and lower-order measures of the error into a single continuous update based on the error amount. We introduce important members of this family, such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms, that improve the convergence performance of the conventional algorithms. However, our approach and analysis are generic, such that they cover other well-known cost functions, as described in the paper. The LMLS algorithm achieves comparable convergence performance with the least mean fourth (LMF) algorithm and extends the stability bound on the step size. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interferences and outperforms the sign algorithm (SA). We analyze the transient, steady-state, and tracking performance of the introduced algorithms and demonstrate the match between the theoretical analyses and simulation results. We show the extended stability bound of the LMLS algorithm and analyze the robustness of the LLAD algorithm against impulsive interferences. Finally, we demonstrate the performance of our algorithms in different scenarios through numerical examples.
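To give a flavor of the logarithmic-cost family described above (a hedged sketch with the family's design parameter fixed to 1; consult the paper for the exact update forms), the stochastic-gradient steps below interpolate between lower- and higher-order error measures exactly as the abstract describes:

```python
import numpy as np

def lmls_update(w, x, d, mu=0.05):
    """One LMLS-style step: the error nonlinearity e**3/(1 + e**2) acts like
    LMF (e**3) for small errors and like LMS (e) for large errors, combining
    fast convergence with an extended stability range."""
    e = d - w @ x
    return w + mu * (e**3 / (1.0 + e**2)) * x

def llad_update(w, x, d, mu=0.05):
    """One LLAD-style step: the error nonlinearity e/(1 + |e|) acts like
    LMS (e) for small errors and like the sign algorithm (sign(e)) for
    large, e.g. impulsive, errors, which gives robustness to outliers."""
    e = d - w @ x
    return w + mu * (e / (1.0 + abs(e))) * x
```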
Item Open Access
Online classification via self-organizing space partitioning (Institute of Electrical and Electronics Engineers Inc., 2016)
Ozkan, H.; Vanli, N. D.; Kozat, S. S.
The authors study online supervised learning under the empirical zero-one loss and introduce a novel classification algorithm with strong theoretical guarantees. The proposed method is a highly dynamic self-organizing decision tree structure, which adaptively partitions the feature space into small regions and combines (takes the union of) the local simple classification models specialized in those regions. The authors' approach sequentially and directly minimizes the cumulative loss by jointly learning the optimal feature space partitioning and the corresponding individual partition-region classifiers. They mitigate overtraining issues by using basic linear classifiers at each region, while providing superior modeling power through hierarchical and data-adaptive models. The computational complexity of the introduced algorithm scales linearly with the dimensionality of the feature space and the depth of the tree. The algorithm can be applied to any streaming data without requiring a training phase or a priori information, hence processing data on-the-fly and then discarding it. Therefore, the introduced algorithm is especially suitable for applications requiring sequential data processing at large scales/high rates. The authors present a comprehensive experimental study in stationary and nonstationary environments. In these experiments, their algorithm is compared with the state-of-the-art methods over well-known benchmark datasets and shown to be highly superior computationally. The proposed algorithm significantly outperforms the competing methods in the stationary settings and demonstrates remarkable adaptation capabilities to nonstationarity in the presence of drifting concepts and abrupt/sudden concept changes. © 1991-2012 IEEE.

Item Open Access
Optimum Power Allocation for Average Power Constrained Jammers in the Presence of Non-Gaussian Noise (Institute of Electrical and Electronics Engineers, 2012-08)
Bayram, S.; Vanli, N. D.; Dulek, B.; Sezer, I.; Gezici, Sinan
We study the problem of determining the optimum power allocation policy for an average power constrained jammer operating over an arbitrary additive noise channel, where the aim is to minimize the detection probability of an instantaneously and fully adaptive receiver employing the Neyman-Pearson (NP) criterion. We show that the optimum jamming performance can be achieved via power randomization between at most two different power levels. We also provide sufficient conditions for the improvability and nonimprovability of the jamming performance via power randomization in comparison to a fixed power jamming scheme. Numerical examples are presented to illustrate the theoretical results.
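The structural result above, that randomizing between at most two power levels suffices, can be illustrated numerically. The sketch below uses a synthetic, non-convex detection probability curve (an assumption for illustration only, not taken from the paper) and searches for a two-level time-sharing scheme that beats any fixed power under the same average power budget:

```python
import numpy as np

def pd(p):
    """Synthetic, non-convex detection probability vs. jammer power
    (purely illustrative; any smooth non-convex curve shows the effect)."""
    return 0.9 / (1.0 + np.exp(8.0 * (p - 0.5)))

p_avg = 0.5                                # average power budget
grid = np.linspace(0.0, 1.0, 201)

best = (pd(p_avg), p_avg, p_avg, 1.0)      # fixed-power baseline
for p1 in grid:
    for p2 in grid:
        if p1 < p_avg < p2:                # mix so the power budget is met exactly
            lam = (p2 - p_avg) / (p2 - p1)
            val = lam * pd(p1) + (1 - lam) * pd(p2)
            if val < best[0]:
                best = (val, p1, p2, lam)

val, p1, p2, lam = best
print(f"fixed power p={p_avg:.2f}: P_D = {pd(p_avg):.4f}")
print(f"randomized (lam={lam:.2f}) between p={p1:.2f} and p={p2:.2f}: P_D = {val:.4f}")
```

Geometrically, a two-point randomization achieves the lower convex envelope of the detection probability curve, so an improvement over fixed power is possible exactly where that envelope lies strictly below the curve.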
Item Open Access
Optimum power randomization for the minimization of outage probability (IEEE, 2013)
Dulek, B.; Vanli, N. D.; Gezici, Sinan; Varshney, P. K.
The optimum power randomization problem is studied to minimize the outage probability in flat block-fading Gaussian channels under an average transmit power constraint and in the presence of channel distribution information at the transmitter. When the probability density function of the channel power gain is continuously differentiable with a finite second moment, it is shown that the outage probability curve is a nonincreasing function of the normalized transmit power with at least one inflection point, and the total number of inflection points is odd. Based on this result, it is proved that the optimum power transmission strategy involves randomization between at most two power levels. In the case of a single inflection point, the optimum strategy simplifies to on-off signaling for weak transmitters. Through analytical and numerical discussions, it is shown that the proposed framework can be adapted to a wide variety of scenarios, including log-normal shadowing, diversity combining over Rayleigh fading channels, Nakagami-m fading, spectrum sharing, and jamming applications. We also show that power randomization does not necessarily improve the outage performance when the finite second moment assumption is violated by the power distribution of the fading. © 2013 IEEE.

Item Open Access
Robust least squares methods under bounded data uncertainties (Academic Press, 2015)
Vanli, N. D.; Donmez, M. A.; Kozat, S. S.
We study the problem of estimating an unknown deterministic signal that is observed through an unknown deterministic data matrix under additive noise. In particular, we present a minimax optimization framework for least squares problems in which the estimator has imperfect data matrix and output vector information. We define the performance of an estimator relative to that of the optimal least squares (LS) estimator tuned to the underlying unknown data matrix and output vector; this relative performance is the regret of the estimator. We then introduce an efficient robust LS estimation approach that minimizes this regret for the worst possible data matrix and output vector, where we refrain from any structural assumptions on the data. We demonstrate that minimizing this worst-case regret can be cast as a semi-definite programming (SDP) problem. We then consider the regularized and structured LS problems and present novel robust estimation methods by demonstrating that these problems can also be cast as SDP problems. We illustrate the merits of the proposed algorithms with respect to well-known alternatives in the literature through our simulations.
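The paper's regret-minimizing estimators are obtained by solving SDPs. As a simpler relative (the classical worst-case residual formulation of robust LS, not the regret formulation above), the following sketch solves min_x max over ||dA|| <= rho of ||(A + dA)x - b||, which is known to reduce to a norm-regularized convex program; cvxpy is used here only as one convenient solver:

```python
import numpy as np
import cvxpy as cp

def robust_ls(A, b, rho):
    """Classical bounded-uncertainty robust LS (worst-case residual form):
        min_x  max_{||dA||_2 <= rho}  ||(A + dA) x - b||_2,
    which is known to equal  min_x ||A x - b||_2 + rho * ||x||_2.
    A simpler relative of the regret-based SDP formulations in the paper."""
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm(A @ x - b, 2) + rho * cp.norm(x, 2))).solve()
    return x.value

# Toy usage with synthetic data:
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = A @ np.ones(5) + 0.1 * rng.standard_normal(30)
print(robust_ls(A, b, rho=0.5))
```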
Item Open Access
Sequential nonlinear learning for distributed multiagent systems via extreme learning machines (Institute of Electrical and Electronics Engineers Inc., 2017)
Vanli, N. D.; Sayin, M. O.; Delibalta, I.; Kozat, S. S.
We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data revealed to it. The aim of the multiagent system, on the other hand, is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for applications involving big data. © 2016 IEEE.

Item Open Access
Sequential prediction over hierarchical structures (Institute of Electrical and Electronics Engineers Inc., 2016)
Vanli, N. D.; Gokcesu, K.; Sayin, M. O.; Yildiz, H.; Kozat, S. S.
We study sequential compound decision problems in the context of sequential prediction of real-valued sequences. In particular, we consider finite state (FS) predictors that are constructed based on a hierarchical structure, such as the order-preserving patterns of the sequence history. We define hierarchical equivalence classes by tying certain models at a hierarchy level in a recursive manner in order to mitigate undertraining problems. These equivalence classes defined on a hierarchical structure are then used to construct a super-exponential number of sequential FS predictors based on their combinations and permutations. We then introduce truly sequential algorithms with computational complexity only linear in the pattern length that 1) asymptotically achieve the performance of the best FS predictor or the best linear combination of all the FS predictors in an individual sequence manner, without any stochastic assumptions, over any data length n, under a wide range of loss functions; and 2) achieve the mean square error of the best linear combination of all FS filters or predictors in the steady state for certain nonstationary models. We illustrate the superior convergence and tracking capabilities of our algorithm with respect to several state-of-the-art methods in the literature through simulations over synthetic and real benchmark data. © 1991-2012 IEEE.

Item Open Access
Stochastic subgradient algorithms for strongly convex optimization over distributed networks (IEEE Computer Society, 2017)
Sayin, M. O.; Vanli, N. D.; Kozat, S. S.; Başar, T.
We study diffusion- and consensus-based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is available only at a different node, and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on stochastic subgradient descent (SSD) updates. We use a carefully designed time-dependent weighted averaging of the SSD iterates, which yields a convergence rate of O(N√N / ((1 − σ)T)) after T gradient updates for each node on a network of N nodes, where 0 ≤ σ < 1 denotes the second largest singular value of the communication matrix. This rate of convergence matches the performance lower bound up to constant terms. Similar to the SSD algorithm, the computational complexity of the proposed algorithm also scales linearly with the dimensionality of the data. Furthermore, the communication load of the proposed method is the same as that of the SSD algorithm. Thus, the proposed algorithm is highly efficient in terms of complexity and communication load. We illustrate the merits of the algorithm with respect to the state-of-the-art methods over benchmark real-life data sets. © 2017 IEEE.
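A minimal sketch of the kind of consensus-based SSD iteration with time-dependent weighted averaging described in the last item above; the mixing matrix, 1/t step sizes, and weights w_t = t below are illustrative choices, not necessarily those analyzed in the paper:

```python
import numpy as np

def distributed_ssd(subgrads, W, dim, T, mu0=1.0):
    """Consensus-based stochastic subgradient descent with time-dependent
    weighted averaging of the iterates (illustrative sketch).

    subgrads : list of N callables; subgrads[i](x) returns a stochastic
               subgradient of node i's local strongly convex objective at x.
    W        : (N, N) doubly stochastic mixing matrix of the network.
    Returns the weighted averages of the iterates, one row per node."""
    N = len(subgrads)
    X = np.zeros((N, dim))                  # current iterate at each node
    X_avg = np.zeros((N, dim))              # time-weighted running averages
    wsum = 0.0
    for t in range(1, T + 1):
        G = np.stack([subgrads[i](X[i]) for i in range(N)])
        X = W @ X - (mu0 / t) * G           # mix with neighbors, then descend
        wsum += t                           # illustrative weights w_t = t
        X_avg += (t / wsum) * (X - X_avg)   # running weighted average
    return X_avg
```

For instance, W can be taken as a doubly stochastic weight matrix on the communication graph (e.g., Metropolis weights, an assumed choice here); the σ in the stated rate is the second largest singular value of this matrix.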
Item Open Access
A unified approach to universal prediction: Generalized upper and lower bounds (Institute of Electrical and Electronics Engineers Inc., 2015)
Vanli, N. D.; Kozat, S. S.
We study sequential prediction of real-valued, arbitrary, and unknown sequences under the squared error loss, as well as the best parametric predictor out of a large, continuous class of predictors. Inspired by recent results from computational learning theory, we refrain from any statistical assumptions and define the performance with respect to the class of general parametric predictors. In particular, we present generic lower and upper bounds on this relative performance by transforming the prediction task into a parameter learning problem. We first introduce the lower bounds on this relative performance in the mixture of experts framework, where we show that for any sequential algorithm, there always exists a sequence for which the performance of the sequential algorithm is lower bounded by zero. We then introduce a sequential learning algorithm to predict such arbitrary and unknown sequences, and calculate upper bounds on its total squared prediction error for every bounded sequence. We further show that in some scenarios we achieve matching lower and upper bounds, demonstrating that our algorithms are optimal in a strong minimax sense such that their performances cannot be improved further. As an interesting result, we also prove that for the worst-case scenario, the performance of randomized output algorithms can be achieved by sequential algorithms, so that randomized output algorithms do not improve the performance. © 2012 IEEE.
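Under the squared error loss, the relative performance studied in this last item can be written as a regret against the best predictor in the parametric class; a generic form of this quantity (notation illustrative, not taken verbatim from the paper) is:

```latex
% Regret of a sequential predictor \hat{x}_t relative to a parametric
% class \{ f_\theta : \theta \in \Theta \} under the squared error loss
% (generic form; notation is illustrative, not taken from the paper):
R_n \;\triangleq\; \sum_{t=1}^{n} \left( x_t - \hat{x}_t \right)^2
  \;-\; \inf_{\theta \in \Theta} \sum_{t=1}^{n}
        \left( x_t - f_\theta(x_1, \ldots, x_{t-1}) \right)^2 .
```

In these terms, the lower bound result states that for any sequential algorithm there exists a sequence on which this regret is at least zero, i.e., the parametric class cannot be beaten uniformly, while the upper bounds control R_n for every bounded sequence.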