Browsing by Subject "Quantization"
Now showing 1 - 12 of 12
Item Open Access
Compression of images in CFA format (IEEE, 2006)
Cüce, Halil İbrahim; Çetin, A. Enis; Davey, M. K.
In this paper, images in Color Filter Array (CFA) format are compressed without converting them to full-RGB color images. Green pixels are extracted from the CFA image data, placed in a rectangular array, and compressed using a transform-based method without estimating the corresponding luminance values. In addition, two sets of color difference (or chrominance) coefficients are obtained corresponding to the red and blue pixels of the CFA data, and they are also compressed using a transform-based method. The proposed method produces better PSNR values compared to the standard approach of bilinear interpolation followed by compression.

Item Open Access
CRLB based optimal noise enhanced parameter estimation using quantized observations (IEEE, 2010-02-22)
Balkan, G. O.; Gezici, Sinan
In this letter, optimal additive noise is characterized for parameter estimation based on quantized observations. First, the optimal probability distribution of the noise that should be added to the observations is formulated in terms of a Cramer-Rao lower bound (CRLB) minimization problem. Then, it is proven that the optimal additive noise can be represented by a constant signal level, which means that randomization of additive signal levels is not needed for CRLB minimization. In addition, the results are extended to the cases in which there exists prior information about the unknown parameter and the aim is to minimize the Bayesian CRLB (BCRLB). Finally, a numerical example is presented to explain the theoretical results.

Item Open Access
The design of finite-state machines for quantization using simulated annealing (IEEE, 1993)
Kuruoğlu, Ercan Engin; Ayanoğlu, E.
In this paper, the combinatorial optimization algorithm known as simulated annealing is used for the optimization of the trellis structure, or the next-state map, of the decoder finite-state machine in trellis waveform coding. The generalized Lloyd algorithm, which finds the optimum codebook, is incorporated into simulated annealing. Comparison of simulation results with previous work in the literature shows that this combined method yields coding systems with good performance.
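The combination described in the entry above is straightforward to sketch at toy scale. The snippet below is only a minimal illustration, not the paper's trellis waveform coder: it anneals the next-state map of a small finite-state scalar quantizer, re-fits the codebook with a Lloyd-style mean update after each perturbation, and uses greedy per-sample encoding in place of the Viterbi search normally used in trellis coding; all sizes and the cooling schedule are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
S, B = 4, 2                               # states, branches (bits) per state
x = rng.standard_normal(2000)             # toy training sequence

def encode(seq, next_state, codebook):
    """Greedy 1-bit-per-sample encoding; returns distortion and branch usage."""
    state, sq_err, used = 0, 0.0, []
    for s in seq:
        bit = int(np.argmin((codebook[state] - s) ** 2))
        sq_err += (codebook[state, bit] - s) ** 2
        used.append((state, bit))
        state = next_state[state, bit]
    return sq_err / len(seq), used

def lloyd_step(seq, used, codebook):
    """Re-fit each codeword to the mean of the samples mapped onto it."""
    new = codebook.copy()
    for st in range(S):
        for b in range(B):
            sel = [s for s, u in zip(seq, used) if u == (st, b)]
            if sel:
                new[st, b] = np.mean(sel)
    return new

next_state = rng.integers(0, S, size=(S, B))          # initial next-state map
codebook = rng.standard_normal((S, B))
dist, used = encode(x, next_state, codebook)

T = 1.0
for _ in range(150):                                  # simulated annealing loop
    cand = next_state.copy()
    cand[rng.integers(S), rng.integers(B)] = rng.integers(S)   # perturb one transition
    cand_cb = lloyd_step(x, encode(x, cand, codebook)[1], codebook)
    cand_dist, _ = encode(x, cand, cand_cb)
    if cand_dist < dist or rng.random() < np.exp(-(cand_dist - dist) / T):
        next_state, codebook, dist = cand, cand_cb, cand_dist
    T *= 0.97                                         # geometric cooling

print(f"final MSE of the toy finite-state quantizer: {dist:.4f}")
```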
Item Open Access
Federated learning and distributed inference over wireless channels (2023-11)
Tegin, Büşra
In an era marked by massive connectivity and a growing number of connected devices, we have gained unprecedented access to a wealth of information, enhancing the reliability and precision of intelligent systems and enabling the development of learning algorithms that are more capable than ever. However, this proliferation of data also introduces new challenges for centralized learning algorithms for the training and inference processes of these intelligent systems due to increased traffic loads and the necessity of substantial computational resources. Consequently, the introduction of federated learning (FL) and distributed inference systems has become essential. Both FL and distributed inference necessitate communication within the network, specifically the transmission of model updates and intermediate features. This has led to a significant emphasis on their utilization over wireless channels, underscoring the pivotal role of wireless communications in this context. In pursuit of a practical implementation of federated learning over wireless fading channels, we direct our focus towards cost-effective solutions, accounting for hardware-induced distortions. We consider a blind transmitter scenario, wherein distributed workers operate without access to channel state information (CSI). Meanwhile, the parameter server (PS) employs multiple antennas to align received signals. To mitigate the increased power consumption and hardware cost, we leverage complex-valued, low-resolution digital-to-analog converters (DACs) at the transmitter and analog-to-digital converters (ADCs) at the PS. Through a combination of theoretical analysis and numerical demonstrations, we establish that federated learning systems can effectively operate over fading channels, even in the presence of low-resolution ADCs and DACs. As another aspect of practical implementation, we investigate federated learning with over-the-air aggregation over time-varying wireless channels. In this scenario, workers transmit their local gradients over channels that undergo time variations, stemming from factors such as worker or PS mobility and other transmission medium fluctuations. These channel variations introduce inter-carrier interference (ICI), which can notably degrade the system performance, particularly in cases of rapidly varying channels. We examine the effects of the channel time variations on FL with over-the-air aggregation, and show that the resulting undesired interference terms have only limited destructive effects, which do not prevent the convergence of the distributed learning algorithm. Focusing on the distributed inference concept, we also consider a multi-sensor wireless inference system. In this configuration, several sensors with constrained computational capacities observe common phenomena and engage in collaborative inference efforts alongside a central device. Given the inherent limitations on the computational capabilities of the sensors, the features extracted from the front part of the network are transmitted to an edge device, which necessitates sensor fusion for the intermediate features. We propose Lp-norm inspired and LogSumExp approximations for the maximum operation as a sensor fusion method, resulting in the acquisition of transformation-invariant features that also enable bandwidth-efficient feature transmission. As a further enhancement of the proposed method, we introduce a learnable sensor fusion technique inspired by the Lp-norm. This technique incorporates a trainable parameter, providing the flexibility to customize the sensor fusion according to the unique network and sensor distribution characteristics. We show that by encompassing a spectrum of behaviors, this approach enhances the adaptability of the system and contributes to its overall performance improvement.
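The Lp-norm-inspired and LogSumExp fusion rules mentioned above are easy to illustrate in isolation. The sketch below assumes non-negative feature maps stacked along a sensor axis and shows that both smooth approximations approach the elementwise maximum as their parameters grow; the thesis's exact normalization, the learnable-parameter variant, and the over-the-air transmission model are not reproduced here.

```python
import numpy as np

def lp_fusion(features, p=8.0):
    """Lp-norm-inspired fusion across sensors (axis 0); approaches the
    elementwise max as p grows. Assumes non-negative features."""
    return np.mean(features ** p, axis=0) ** (1.0 / p)

def logsumexp_fusion(features, t=20.0):
    """LogSumExp fusion across sensors; approaches the elementwise max as t grows."""
    m = features.max(axis=0)                          # subtract max for stability
    return m + np.log(np.exp(t * (features - m)).sum(axis=0)) / t

rng = np.random.default_rng(1)
feats = rng.random((5, 16))                           # 5 sensors, 16-dim features
true_max = feats.max(axis=0)

print(np.abs(lp_fusion(feats) - true_max).max())        # shrinks as p increases
print(np.abs(logsumexp_fusion(feats) - true_max).max()) # shrinks as t increases
```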
Item Open Access
Noise enhanced parameter estimation using quantized observations (2010)
Balkan, Gökçe Osman
In this thesis, optimal additive noise is characterized for both single and multiple parameter estimation based on quantized observations. In both cases, first, the optimal probability distribution of the noise that should be added to the observations is formulated in terms of a Cramer-Rao lower bound (CRLB) minimization problem. In the single parameter case, it is proven that the optimal additive “noise” can be represented by a constant signal level, which means that randomization of additive signal levels (equivalently, quantization levels) is not needed for CRLB minimization. In addition, the results are extended to the cases in which there exists prior information about the unknown parameter and the aim is to minimize the Bayesian CRLB (BCRLB). Then, numerical examples are presented to explain the theoretical results. Moreover, the performance obtained via optimal additive noise is compared to the performance of commonly used dither signals. Furthermore, mean-squared error (MSE) performances of maximum likelihood (ML) and maximum a-posteriori probability (MAP) estimates are investigated in the presence and absence of additive noise. In the multiple parameter case, the form of the optimal random additive noise is derived for CRLB minimization. Next, the theoretical result is supported with a numerical example, where the optimum noise is calculated by using the particle swarm optimization (PSO) algorithm. Finally, the optimal constant noise in the multiple parameter estimation problem in the presence of prior information is discussed.

Item Open Access
Quadratic multi-dimensional signaling games and affine equilibria (Institute of Electrical and Electronics Engineers Inc., 2017)
Sarıtaş, S.; Yüksel, S.; Gezici, Sinan
This paper studies the decentralized quadratic cheap talk and signaling game problems when an encoder and a decoder, viewed as two decision makers, have misaligned objective functions. The main contributions of this study are the extension of Crawford and Sobel's cheap talk formulation to multi-dimensional sources and to noisy channel setups. We consider both (simultaneous) Nash equilibria and (sequential) Stackelberg equilibria. We show that for arbitrary scalar sources, in the presence of misalignment, the quantized nature of all equilibrium policies holds for Nash equilibria, in the sense that all Nash equilibria are equivalent to those achieved by quantized encoder policies. On the other hand, all Stackelberg equilibrium policies are fully informative. For multi-dimensional setups, unlike the scalar case, Nash equilibrium policies may be of non-quantized nature, and even linear. In the noisy setup, a Gaussian source is to be transmitted over an additive Gaussian channel. The goals of the encoder and the decoder are misaligned by a bias term, and the encoder's cost also includes a penalty term on signal power. Conditions for the existence of affine Nash equilibria as well as general informative equilibria are presented. For the noisy setup, the only Stackelberg equilibrium is the linear equilibrium when the variables are scalar. Our findings provide further conditions on when affine policies may be optimal in decentralized multi-criteria control problems and lead to conditions for the presence of active information transmission in strategic environments.
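The quantized ("partition") structure of scalar Nash equilibria is most familiar from the classical Crawford-Sobel example: a source uniform on [0, 1], quadratic costs, and encoder bias b. There, the bin edges of an N-bin equilibrium satisfy a_{k+1} = 2a_k - a_{k-1} + 4b, and an N-bin equilibrium exists only if 2bN(N-1) < 1. The sketch below computes such a partition; it illustrates only this textbook special case, not the arbitrary-source or multi-dimensional results of the paper above.

```python
import numpy as np

def crawford_sobel_partition(b, N):
    """Bin edges of an N-bin cheap-talk equilibrium for a uniform source on
    [0, 1], quadratic costs, and encoder bias b (classical uniform example)."""
    if 2 * b * N * (N - 1) >= 1:
        raise ValueError("no N-bin equilibrium exists for this bias")
    d1 = (1 - 2 * b * N * (N - 1)) / N        # width of the first (smallest) bin
    widths = d1 + 4 * b * np.arange(N)        # successive bins grow by 4b
    return np.concatenate(([0.0], np.cumsum(widths)))

b, N = 0.02, 4
edges = crawford_sobel_partition(b, N)
actions = (edges[:-1] + edges[1:]) / 2        # decoder's best response: bin midpoints
print(np.round(edges, 4))                     # edges[0] = 0, edges[N] = 1
print(np.round(actions, 4))
```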
Item Open Access
Signaling and information games with subjective costs or priors and privacy constraints (2021-08)
Kazıklı, Ertan
We investigate signaling game problems where an encoder and a decoder with misaligned objectives communicate. We consider a variety of setups involving cost criterion mismatch, prior mismatch, and a particular application to privacy problems. We also consider both Nash and Stackelberg solution concepts. First, we extend the classical results on the scalar cheap talk problem which is introduced by Crawford and Sobel. In prior work, it is shown that the encoder must employ a quantization policy under any Nash equilibrium for arbitrary source distributions. We specifically consider sources with a log-concave density and investigate properties of equilibria. For sources with two-sided unbounded support, we prove that, for any finite number of bins, there exists a unique equilibrium. If the source has semi-unbounded support, then there may exist a finite upper bound on the number of bins in equilibrium depending on certain explicit conditions. Moreover, we show that an equilibrium with more bins is more informative by showing that the expected costs of the encoder and the decoder in equilibrium decrease as the number of bins increases. Furthermore, for strictly log-concave sources with two-sided unbounded support, we prove that if the encoder and decoder iteratively compute their best responses starting from a given number of bins, then the resulting policies converge to the unique equilibrium with the corresponding number of bins. Second, we model a privacy problem as a signaling game between an encoder and a decoder. Given a pair of correlated observations modeled as jointly Gaussian random vectors, the encoder aims to hide one of them and convey the other one to the decoder. In contrast, the aim of the decoder is to accurately estimate both of the random vectors. For the resulting signaling game problem, we show that a payoff dominant Nash equilibrium among all admissible policies is attained by a set of explicitly characterized linear policies. We also show that a payoff dominant Nash equilibrium coincides with a Stackelberg equilibrium. Moreover, we formulate the information bottleneck problem within our Stackelberg framework under the mean squared error criterion, where the information bottleneck setup has a further restriction that only one of the parameters is observed at the encoder. We show that the Gaussian information bottleneck problem admits a linear solution which is explicitly characterized. Third, we investigate communications through a Gaussian noise channel between an encoder and a decoder with prior mismatch. Although they consider the same cost function, the induced expected costs as a map of their policies are misaligned due to their prior mismatch. We analyze the resulting signaling game problem under Stackelberg equilibria. We first investigate robustness of equilibria and show that the Stackelberg equilibrium cost of the encoder is upper semi-continuous, under the Wasserstein metric, as the encoder’s prior approaches the decoder’s prior, and it is also lower semi-continuous with Gaussian priors. In addition, we show that the optimality of affine policies for Gaussian signaling no longer holds under prior mismatch. Furthermore, we provide conditions under which there exist informative equilibria under an affine policy restriction. Fourth, we extend Crawford and Sobel’s formulation to a multidimensional source setting. We first provide a set of geometry conditions that decoder actions at a Nash equilibrium have to satisfy considering any multidimensional source. Then, we consider independent and identically distributed sources and characterize necessary and sufficient conditions under which an informative linear equilibrium exists. We observe that these conditions involve the bias vector that leads to misaligned costs. Depending on certain conditions on the bias vector, the existence of linear equilibria may require sources with a Gaussian or a symmetric density. Moreover, we provide a rate-distortion theoretic formulation of the cheap talk problem and obtain achievable rate and distortion pairs for the Gaussian case. Finally, in a communication theoretic setup, we consider modulation classification and symbol decoding problems jointly and propose optimal strategies under various settings. The aim is to decode a sequence of received signals with an unknown modulation scheme. First, the prior probabilities of the candidate modulation schemes are assumed to be known and a formulation is proposed under the Bayesian framework. Second, we address the case when the prior probabilities of the candidate modulation schemes are unknown, and provide a method under the minimax framework. Numerical simulations show that the proposed techniques improve the performance under the employed criteria compared to the conventional techniques in a variety of system configurations.
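The best-response iteration mentioned above for strictly log-concave sources can be sketched for a standard Gaussian source with quadratic costs: the decoder replies with the posterior mean of each bin, the encoder replies with bias-shifted midpoints of neighbouring actions, and the two maps are iterated. The snippet below is a toy illustration under those assumptions (fixed number of bins, scalar source), not the thesis's convergence proof.

```python
import numpy as np
from scipy import stats

def bin_posterior_mean(lo, hi):
    """E[X | lo < X < hi] for a standard Gaussian source."""
    num = stats.norm.pdf(lo) - stats.norm.pdf(hi)
    den = stats.norm.cdf(hi) - stats.norm.cdf(lo)
    return num / den

def best_response_iteration(num_bins=4, bias=0.1, iters=300):
    # start from equiprobable bins of the standard Gaussian source
    edges = stats.norm.ppf(np.linspace(0.0, 1.0, num_bins + 1))
    for _ in range(iters):
        # decoder's best response under quadratic cost: posterior mean per bin
        actions = np.array([bin_posterior_mean(lo, hi)
                            for lo, hi in zip(edges[:-1], edges[1:])])
        # encoder's best response: boundary where x + bias is equidistant
        # from the two neighbouring decoder actions
        edges[1:-1] = (actions[:-1] + actions[1:]) / 2 - bias
    return edges, actions

edges, actions = best_response_iteration()
print(np.round(edges, 3))     # quantization boundaries after the iteration
print(np.round(actions, 3))   # corresponding decoder actions
```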
Item Open Access
Signaling games in networked systems (2018-07)
Sarıtaş, Serkan
We investigate decentralized quadratic cheap talk and signaling game problems when the decision makers (an encoder and a decoder) have misaligned objective functions. We first extend the classical results of Crawford and Sobel on cheap talk to multi-dimensional sources and noisy channel setups, as well as to dynamic (multi-stage) settings. Under each setup, we investigate the equilibria of both Nash (simultaneous-move) and Stackelberg (leader-follower) games. We show that for scalar cheap talk, the quantized nature of Nash equilibrium policies holds for arbitrary sources, whereas Nash equilibria may be of non-quantized nature, and even linear, for multi-dimensional setups. All Stackelberg equilibrium policies are fully informative, unlike the Nash setup. For noisy signaling games, a Gauss-Markov source is to be transmitted over a memoryless additive Gaussian channel. Here, conditions for the existence of affine equilibria, as well as informative equilibria, are presented, and a dynamic programming formulation is obtained for linear equilibria. For all setups, conditions under which equilibria are noninformative are derived through information theoretic bounds. We then provide a different construction for signaling games in view of the presence of inconsistent priors among multiple decision makers, where we focus on binary signaling problems. Here, equilibria are analyzed, a characterization of when informative equilibria exist is provided, and robustness and continuity properties with respect to misalignment are presented under Nash and Stackelberg criteria. Lastly, we provide an analysis on the number of bins at equilibria for the quadratic cheap talk problem under the Gaussian and exponential source assumptions. Our findings reveal drastic differences in signaling behavior under team and game setups and yield a comprehensive analysis on the value of information; i.e., for the decision makers, whether there is an incentive for information hiding or not, which has practical consequences in networked control applications. Furthermore, we provide conditions on when affine policies may be optimal in decentralized multi-criteria control problems and for the presence of active information transmission even in strategic environments. The results also highlight that even when the decision makers have the same objective, the presence of inconsistent priors among the decision makers may lead to a lack of robustness in equilibrium behavior.

Item Open Access
Simulation-based engineering (Springer, 2017)
Çakmakcı, Melih; Sendur, G. K.; Durak, U.; Mittal, S.; Durak, U.; Ören, T.
Engineers, mathematicians, and scientists have always been interested in numerical solutions of real-world problems. The ultimate objective within nearly all engineering projects is to reach a functional design without violating any of the performance, cost, time, and safety constraints while optimizing the design with respect to one of these metrics. A good mathematical model is at the heart of each powerful engineering simulation, being a key component in the design process. In this chapter, we review the role of simulation in the engineering process and the historical development of different approaches, in particular the simulation of machinery and continuum problems, which basically refers to the numerical solution of a set of differential equations with different initial/boundary conditions. Then, an overview of well-known methods to conduct continuum-based simulations within solid mechanics, fluid mechanics, and electromagnetics is given. These methods include FEM, FDM, FVM, BEM, and meshless methods. Also, a summary of multi-scale and multi-physics-based approaches is given with various examples. With the constantly increasing demands of the modern age challenging the engineering development process, the future of simulations in the field holds great promise, possibly with the inclusion of topics from other emerging fields. As technology matures and the quest for multi-functional systems with much higher performance increases, the complexity of problems that demand numerical methods also increases. As a result, large-scale effective computing continues to evolve, allowing for efficient and practical performance evaluation and novel designs, and hence enhancing our understanding of the physics within highly complex systems.
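As a minimal, self-contained example of the finite-difference method (FDM) named above, the sketch below solves a 1D steady-state problem u''(x) = f(x) on [0, 1] with fixed boundary values using second-order central differences; it is meant only to show the "differential equation plus boundary conditions becomes a linear system" pattern, not an engineering-grade solver.

```python
import numpy as np

def fdm_poisson_1d(f, n=50, u0=0.0, u1=0.0):
    """Solve u''(x) = f(x) on [0, 1] with u(0) = u0, u(1) = u1
    using second-order central differences on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                    # interior grid points
    # tridiagonal discrete second-derivative operator
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    b = f(x).astype(float)
    b[0] -= u0 / h**2                                 # fold boundary values into the RHS
    b[-1] -= u1 / h**2
    return x, np.linalg.solve(A, b)

# manufactured problem: u'' = -pi^2 sin(pi x), exact solution u = sin(pi x)
x, u = fdm_poisson_1d(lambda x: -np.pi**2 * np.sin(np.pi * x))
print(np.abs(u - np.sin(np.pi * x)).max())            # discretization error, O(h^2)
```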
Item Open Access
Super-resolution using multiple quantized images (IEEE, 2010)
Özçelikkale, Ayça; Akar, G. B.; Özaktas, Haldun M.
In this paper, we study the effect of limited amplitude resolution (pixel depth) in the super-resolution problem. The problem we address differs from the standard super-resolution problem in that amplitude resolution is considered as important as spatial resolution. We study the trade-off between the pixel depth and the spatial resolution of low resolution (LR) images in order to obtain the best visual quality in the reconstructed high resolution (HR) image. The proposed framework reveals great flexibility in terms of pixel depth and the number of LR images in the super-resolution problem, and demonstrates that it is possible to obtain target visual qualities with different measurement scenarios, including images with different amplitude and spatial resolutions.

Item Open Access
Terabits-per-second throughput for polar codes (IEEE, 2019-09)
Süral, Altuğ; Sezer, E. Göksu; Ertuğrul, Yiğit; Arıkan, Orhan; Arıkan, Erdal
By using a Majority Logic (MJL) aided Successive Cancellation (SC) decoding algorithm, an architecture and a specific implementation for high-throughput polar coding are proposed. The SC-MJL algorithm exploits the low-complexity nature of SC decoding and the low-latency property of MJL. In order to reduce the complexity of SC-MJL decoding, an adaptive quantization scheme is developed within a 1-5 bit range for the internal log-likelihood ratios (LLRs). The bit allocation is based on maximizing the mutual information between the input and output LLRs of the quantizer. This scheme causes a negligible performance loss when the code block length is N = 1024 and the number of information bits is K = 854. The decoder is implemented in 45nm ASIC technology using a deeply-pipelined, unrolled hardware architecture with register balancing. The pipeline depth is kept at 40 clock cycles in the ASIC by merging consecutive decoding stages implemented as combinational logic. The ASIC synthesis results show that the SC-MJL decoder achieves 427 Gb/s throughput at 45nm technology. When we scale the implementation results to the 7nm technology node, the throughput reaches 1 Tb/s with under 10 mm² chip area and 0.37 W power dissipation.
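The mutual-information criterion used for the bit allocation above can be illustrated with a much simpler stand-in: under a Gaussian LLR model for BPSK over AWGN, score a symmetric uniform LLR quantizer by the mutual information between the transmitted bit and the quantized LLR, and keep the step size that maximizes it. This is only a toy version of the idea (a single uniform quantizer, and I(bit; quantized LLR) rather than the paper's input/output-LLR mutual information); the per-stage 1-5 bit allocation inside the SC-MJL decoder is not reproduced.

```python
import numpy as np
from scipy import stats

def quantized_llr_mutual_info(step, bits, snr_db):
    """I(B; Q(L)) in bits for a symmetric uniform quantizer applied to the LLR of
    BPSK over AWGN, using the Gaussian LLR model L | b ~ N(+/- mu, 2 * mu)."""
    sigma2 = 10 ** (-snr_db / 10)
    mu = 2.0 / sigma2
    levels = 2 ** bits
    inner = step * (np.arange(1, levels) - levels / 2)      # symmetric thresholds
    edges = np.concatenate(([-np.inf], inner, [np.inf]))
    p0 = np.diff(stats.norm.cdf(edges, loc=+mu, scale=np.sqrt(2 * mu)))
    p1 = np.diff(stats.norm.cdf(edges, loc=-mu, scale=np.sqrt(2 * mu)))
    p = 0.5 * (p0 + p1)
    info = 0.0
    for pb in (p0, p1):                                     # equiprobable bits
        mask = pb > 0
        info += 0.5 * np.sum(pb[mask] * np.log2(pb[mask] / p[mask]))
    return info

steps = np.linspace(0.1, 5.0, 100)
best = max(steps, key=lambda s: quantized_llr_mutual_info(s, bits=3, snr_db=0.0))
print(best, quantized_llr_mutual_info(best, bits=3, snr_db=0.0))
```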
Item Open Access
Understanding how orthogonality of parameters improves quantization of neural networks (IEEE, 2022-05-10)
Eryılmaz, Şükrü Burç; Dündar, Ayşegül
We analyze why the orthogonality penalty improves quantization in deep neural networks. Using results from perturbation theory as well as extensive experiments with Resnet50, Resnet101, and VGG19 models, we show mathematically and experimentally that the improved quantization accuracy resulting from the orthogonality constraint stems primarily from reduced condition numbers (the ratio of the largest to the smallest singular value of the weight matrices), more so than from reduced spectral norms, in contrast to the explanations in previous literature. We also show that the orthogonality penalty improves quantization even in the presence of a state-of-the-art quantized retraining method. Our results show that, when the orthogonality penalty is used with quantized retraining, the ImageNet Top-5 accuracy loss from 4- to 8-bit quantization is reduced by up to 7% for Resnet50 and up to 10% for Resnet101, compared to quantized retraining with no orthogonality penalty.
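The two quantities the paper reasons about are easy to compute for any model. The PyTorch sketch below reports the condition number (largest over smallest singular value) of each weight matrix and one common form of the soft orthogonality penalty, the Frobenius norm of the smaller Gram matrix minus the identity, to be added to the task loss with a weight of the user's choosing; the paper's exact penalty and training schedule may differ, and the small model here is purely illustrative.

```python
import torch

def condition_numbers(model):
    """Ratio of largest to smallest singular value for each >=2-D weight tensor
    (convolution kernels are flattened to out_channels x rest)."""
    out = {}
    for name, w in model.named_parameters():
        if w.dim() < 2:
            continue
        s = torch.linalg.svdvals(w.reshape(w.shape[0], -1))
        out[name] = (s.max() / s.min()).item()
    return out

def orthogonality_penalty(model):
    """Soft orthogonality regularizer summed over weight matrices: it pushes
    singular values toward 1 and hence condition numbers toward their minimum."""
    penalty = 0.0
    for w in model.parameters():
        if w.dim() < 2:
            continue
        mat = w.reshape(w.shape[0], -1)
        if mat.shape[0] > mat.shape[1]:           # work with the smaller Gram matrix
            mat = mat.t()
        gram = mat @ mat.t()
        eye = torch.eye(gram.shape[0], device=gram.device)
        penalty = penalty + ((gram - eye) ** 2).sum()
    return penalty

# usage sketch: total_loss = task_loss + lam * orthogonality_penalty(model)
model = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 10))
print(condition_numbers(model))
print(orthogonality_penalty(model).item())
```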