Title: Refining 3D human texture estimation from a single image
Authors: Altındiş, Said Fahri; Meric, Adil; Dalva, Yusuf; Güdükbay, Uğur; Dündar, Ayşegül
Type: Article
Date issued: 2024-12
Date available: 2025-02-27
ISSN: 0162-8828
e-ISSN: 1939-3539
DOI: 10.1109/TPAMI.2024.3456817
Handle: https://hdl.handle.net/11693/116880
Language: English
Rights: CC BY-NC-ND 4.0 DEED (Attribution-NonCommercial-NoDerivatives 4.0 International), https://creativecommons.org/licenses/by-nc-nd/4.0/
Keywords: Texture estimation; Deformable convolution; Uncertainty estimation

Abstract: Estimating 3D human texture from a single image is essential in graphics and vision. It requires learning a mapping function from input images of humans with diverse poses into the parametric (uv) space and reasonably hallucinating invisible parts. To achieve high-quality 3D human texture estimation, we propose a framework that adaptively samples the input via a deformable convolution whose offsets are learned by a deep neural network. Additionally, we describe a novel cycle consistency loss that improves view generalization. We further propose to train our framework with an uncertainty-based pixel-level image reconstruction loss, which enhances color fidelity. We compare our method against state-of-the-art approaches and show significant qualitative and quantitative improvements.
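Note: the abstract names two concrete mechanisms, adaptive sampling through a deformable convolution with network-predicted offsets, and an uncertainty-weighted pixel-level reconstruction loss. The PyTorch sketch below is illustrative only, not the authors' implementation; the names AdaptiveSampler and uncertainty_recon_loss and the Laplace-style form of the loss are assumptions made here, while torchvision.ops.DeformConv2d is an existing API.

```python
# Hypothetical sketch of the two mechanisms named in the abstract;
# not the paper's released code.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class AdaptiveSampler(nn.Module):
    """Deformable convolution whose sampling offsets are predicted
    by a small learned network (hypothetical module)."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # Predict 2 offset values (dy, dx) per kernel tap.
        self.offset_net = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_net(x)      # (N, 2*k*k, H, W)
        return self.deform(x, offsets)    # samples x at the learned locations


def uncertainty_recon_loss(pred: torch.Tensor,
                           target: torch.Tensor,
                           log_b: torch.Tensor) -> torch.Tensor:
    """Pixel-level reconstruction loss weighted by a predicted per-pixel
    uncertainty. A common aleatoric (Laplace) formulation is assumed here;
    the paper's exact form may differ:
        L = |target - pred| / b + log b,  with b = exp(log_b)."""
    b = log_b.exp()
    return (torch.abs(target - pred) / b + log_b).mean()


if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)          # toy input feature map
    feats = AdaptiveSampler(3, 16)(x)      # (1, 16, 64, 64)

    pred = torch.rand(1, 3, 256, 256)      # estimated uv texture (toy)
    target = torch.rand(1, 3, 256, 256)    # reference texture (toy)
    log_b = torch.zeros(1, 1, 256, 256)    # predicted per-pixel log-scale
    loss = uncertainty_recon_loss(pred, target, log_b)
    print(feats.shape, loss.item())
```

The log b term penalizes the network for claiming high uncertainty everywhere, so it down-weights the loss only at pixels that are genuinely hard to reconstruct (e.g., occluded regions of the uv map).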