Three-dimensional human texture estimation learning from multi-view images
Abstract
In graphics and vision, estimating a 3D human texture from a single image is an important task. It requires learning a mapping from input images of humans in arbitrary poses to the parametric (UV) texture space, while also inferring the appearance of parts that are not visible in the input. To improve the quality of 3D human texture estimation, our study introduces a framework that uses deformable convolution for adaptive input sampling, with the sampling offsets learned by a deep neural network. In addition, we propose a cycle consistency loss that substantially improves view generalization. The framework is further refined with an uncertainty-based, pixel-level image reconstruction loss that improves color fidelity. In comprehensive comparisons with state-of-the-art methods, our approach achieves notable qualitative and quantitative improvements.
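To make two of the components named above more concrete, the following is a minimal, hypothetical PyTorch sketch: a deformable-convolution block whose sampling offsets are predicted by a small convolutional head, and an uncertainty-weighted (heteroscedastic) pixel-level L1 reconstruction loss. The names (OffsetDeformBlock, uncertainty_recon_loss), the offset-head design, and the exact loss weighting are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch, not the authors' released code: adaptive input sampling via
# deformable convolution with learned offsets, plus an uncertainty-weighted
# pixel-level reconstruction loss in the style of heteroscedastic regression.
import torch
import torch.nn as nn
import torchvision.ops as ops


class OffsetDeformBlock(nn.Module):
    """Deformable convolution whose sampling offsets are predicted by a small CNN head."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # One (dy, dx) offset per kernel tap and output location.
        self.offset_head = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_head(x)                     # (B, 2*k*k, H, W)
        return ops.deform_conv2d(
            x, offsets, self.weight, self.bias, padding=self.k // 2
        )


def uncertainty_recon_loss(pred_rgb: torch.Tensor,
                           target_rgb: torch.Tensor,
                           log_var: torch.Tensor) -> torch.Tensor:
    """Per-pixel L1 error down-weighted by predicted variance; the log-variance
    term discourages the network from declaring every pixel uncertain."""
    l1 = (pred_rgb - target_rgb).abs().mean(dim=1, keepdim=True)  # (B, 1, H, W)
    return (l1 * torch.exp(-log_var) + log_var).mean()


if __name__ == "__main__":
    block = OffsetDeformBlock(in_ch=3, out_ch=8)
    img = torch.rand(2, 3, 64, 64)
    feats = block(img)                                    # (2, 8, 64, 64)
    # Pretend the texture network also predicts RGB and a per-pixel log-variance map.
    pred = torch.rand(2, 3, 64, 64, requires_grad=True)
    log_var = torch.zeros(2, 1, 64, 64, requires_grad=True)
    loss = uncertainty_recon_loss(pred, img, log_var)
    loss.backward()
    print(feats.shape, loss.item())
```

Under this assumed formulation, pixels where the network predicts high variance contribute less to the reconstruction term, which is one common way an uncertainty-based, pixel-level reconstruction loss can improve color accuracy.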