Browsing by Subject "Codes"
Now showing 1 - 3 of 3
Item (Open Access)
Lower bounds to moments of list size (IEEE, 1990)
Arıkan, Erdal
Summary form only given. The list-size random variable L for a block code is defined as the number of incorrect messages that appear to a maximum-likelihood decoder to be at least as likely as the true message. Lower bounds to the moments of L have been obtained. For sequential decoding, the results imply that the t-th moment of computation is unbounded at rates above a certain value for all t ≥ 0, settling a long-standing open problem.

Item (Open Access)
Rate-distortion optimized layered stereoscopic video streaming with raptor codes (IEEE, 2007)
Tan, A. Serdar; Aksay, A.; Bilen, C.; Bozdağı-Akar, G.; Arıkan, Erdal
A near-optimal streaming system for stereoscopic video is proposed. First, the stereoscopic video is separated into three layers, and an approximate analytical model of the rate-distortion (RD) curve of each layer is calculated from a sufficient number of rate and distortion samples. The analytical modeling accounts for the interdependency of the defined layers. The analytical models are then used to derive the optimal source encoding rates for a given channel bandwidth. The distortion caused by losing a NAL unit from each layer is estimated in order to minimize the average distortion of a single NAL-unit loss; the minimization is performed over the protection rates allocated to each layer. Raptor codes are used as the error-protection scheme owing to their novelty and suitability for video transmission, and the layers are protected unequally according to the parity ratios allocated to them.
A comparison of the proposed scheme with two other protection-allocation schemes is provided via simulations to assess the resulting stereoscopic video quality.

Item (Open Access)
StyleRes: transforming the residuals for real image editing with StyleGAN (IEEE, 2023-07-22)
Pehlivan, Hamza; Dalva, Yusuf; Dündar, Aysegül
We present a novel image inversion framework and a training pipeline that achieve high-fidelity image inversion together with high-quality attribute editing. Inverting real images into StyleGAN's latent space is an extensively studied problem, yet the trade-off between reconstruction fidelity and editing quality remains an open challenge: low-rate latent spaces lack the expressive power for high-fidelity reconstruction, while high-rate latent spaces degrade editing quality. To achieve high-fidelity inversion, we learn residual features in higher-rate latent codes that the lower-rate codes could not encode, which preserves image details in the reconstruction. To achieve high-quality editing, we learn how to transform these residual features so that they adapt to manipulations of the latent codes. We train the framework to extract and transform residual features via a novel architecture pipeline and cycle-consistency losses. We run extensive experiments and compare our method with state-of-the-art inversion methods; quantitative metrics and visual comparisons show significant improvements. Code: https://github.com/hamzapehlivan/StyleRes
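
The list-size random variable defined in the first abstract above ("Lower bounds to moments of list size") can be illustrated with a toy computation. The code below is a minimal sketch under assumptions not taken from the paper: a hypothetical two-codeword block code and a binary symmetric channel; the paper itself derives lower bounds analytically, not by enumeration.

```python
# Toy illustration of the list-size random variable L: the number of
# incorrect codewords that a maximum-likelihood decoder finds at least
# as likely as the true one. Code and channel are hypothetical.

def bsc_likelihood(codeword, received, p=0.1):
    """Likelihood of `received` given `codeword` over a BSC with flip prob p."""
    flips = sum(c != r for c, r in zip(codeword, received))
    return (p ** flips) * ((1 - p) ** (len(codeword) - flips))

def list_size(code, true_word, received, p=0.1):
    """L = number of incorrect codewords at least as likely as the true one."""
    true_like = bsc_likelihood(true_word, received, p)
    return sum(1 for c in code
               if c != true_word and bsc_likelihood(c, received, p) >= true_like)

# A toy length-3 binary code with two codewords.
code = [(0, 0, 0), (1, 1, 1)]
# Received word equals the transmitted codeword: no incorrect codeword
# is as likely, so L = 0.
print(list_size(code, (0, 0, 0), (0, 0, 0)))  # 0
# Two bit flips make the wrong codeword more likely, so L = 1.
print(list_size(code, (0, 0, 0), (1, 1, 0)))  # 1
```

Averaging `list_size` over channel realizations would estimate the moments of L that the paper bounds from below.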
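
The optimization described in the second abstract above (minimizing average distortion over the protection rates allocated to each layer) can be sketched with a toy model. Everything here is an illustrative assumption: the per-layer distortion values, the parity budget, and the 0/1 erasure approximation stand in for the paper's analytical RD models and actual Raptor codes.

```python
from itertools import product

# Toy unequal-error-protection allocation across three layers.
# d[i] is a hypothetical distortion incurred when layer i is lost;
# loss_prob is an idealized 0/1 approximation of a fountain/erasure code.

def loss_prob(parity_ratio, channel_loss=0.1):
    # A layer survives iff its parity overhead covers the channel erasures.
    return 0.0 if parity_ratio >= channel_loss else 1.0

def expected_distortion(parities, d):
    return sum(loss_prob(p) * di for p, di in zip(parities, d))

def best_allocation(d, budget=0.2, step=0.05):
    """Brute-force search over parity ratios summing to the total budget."""
    grid = [i * step for i in range(int(budget / step) + 1)]
    best = None
    for alloc in product(grid, repeat=len(d)):
        if abs(sum(alloc) - budget) < 1e-9:
            cost = expected_distortion(alloc, d)
            if best is None or cost < best[0]:
                best = (cost, alloc)
    return best

# The base layer hurts most when lost, so it should receive the most
# protection; with this budget only two layers can be fully protected.
d = [10.0, 3.0, 1.0]
print(best_allocation(d))  # (1.0, (0.1, 0.1, 0.0))
```

The search protects the two costliest layers and leaves the least important one unprotected, mirroring the unequal-protection idea in the abstract; the paper replaces this brute-force toy with an analytical minimization.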