Three-dimensional reconstruction and editing from single images with generative models
buir.advisor | Boral, Ayşegül Dündar | |
dc.contributor.author | Bilecen, Bahri Batuhan | |
dc.date.accessioned | 2025-05-22T10:39:48Z | |
dc.date.available | 2025-05-22T10:39:48Z | |
dc.date.copyright | 2025-05 | |
dc.date.issued | 2025-05 | |
dc.date.submitted | 2025-05-21 | |
dc.description | Cataloged from PDF version of article. | |
dc.description | Includes bibliographical references (leaves 79-93). | |
dc.description.abstract | Advancements in generative networks have significantly improved visual synthesis, particularly in three-dimensional (3D) applications. However, key challenges remain in achieving high-fidelity 3D reconstruction, preserving identity in 3D stylization, and enabling reference-based edits with 3D consistency. This thesis addresses these gaps through three interconnected studies. First, a framework for high-fidelity 3D head reconstruction from single images is introduced, leveraging dual-encoder GAN inversion to reconstruct full 360-degree heads. By integrating an occlusion-aware triplane discriminator, this approach ensures seamless blending of visible and occluded regions, surpassing existing methods in realism and structural accuracy. Next, an identity-preserving 3D head stylization method is developed to balance artistic transformation with facial identity retention. Through multi-view score distillation and likelihood distillation, this technique enhances stylization diversity while maintaining subject-specific features, outperforming prior diffusion-to-GAN adaptation strategies. Finally, a single-image, reference-based 3D-aware image editing method extends these advancements by enabling precise, high-quality edits using triplane representations. By incorporating automatic feature localization, spatial disentanglement, and fusion learning, this work achieves state-of-the-art performance in 3D-consistent, 2D reference-guided edits across various domains. Together, these contributions advance the field of 3D-aware generative modeling, providing robust solutions for reconstruction, stylization, and editing with greater fidelity, consistency, and control. | |
dc.description.statementofresponsibility | by Bahri Batuhan Bilecen | |
dc.format.extent | xiv, 93 leaves : color illustrations, charts ; 30 cm. | |
dc.identifier.itemid | B134847 | |
dc.identifier.uri | https://hdl.handle.net/11693/117125 | |
dc.language.iso | English | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.subject | 3D reconstruction | |
dc.subject | 3D editing | |
dc.subject | 3D stylization | |
dc.subject | Generation from single images | |
dc.title | Three-dimensional reconstruction and editing from single images with generative models | |
dc.title.alternative | Üretken modellerle tekli görsellerden üç-boyutlu yeniden yapılandırma ve düzenleme | |
dc.type | Thesis | |
thesis.degree.discipline | Computer Engineering | |
thesis.degree.grantor | Bilkent University | |
thesis.degree.level | Master's | |
thesis.degree.name | MS (Master of Science) |