Warping the residuals for image editing with StyleGAN

buir.contributor.author: Yıldırım, Ahmet Burak
buir.contributor.author: Pehlivan, Hamza
buir.contributor.author: Dündar, Ayşegül
buir.contributor.orcid: Yıldırım, Ahmet Burak|0000-0003-3312-4280
dc.contributor.author: Yıldırım, Ahmet Burak
dc.contributor.author: Pehlivan, Hamza
dc.contributor.author: Dündar, Ayşegül
dc.date.accessioned: 2025-02-27T12:14:45Z
dc.date.available: 2025-02-27T12:14:45Z
dc.date.issued: 2024-11-18
dc.department: Department of Computer Engineering
dc.description.abstract: StyleGAN models show editing capabilities via their semantically interpretable latent organizations, which require successful GAN inversion methods to edit real images. Many works have been proposed for inverting images into StyleGAN's latent space. However, their results either suffer from low fidelity to the input image or from poor editing quality, especially for edits that require large transformations. This is because low bit-rate latent spaces lose many image details due to the information bottleneck, even though they provide an editable space. On the other hand, higher bit-rate latent spaces can pass all the image details to StyleGAN for perfect reconstruction of images but suffer from poor editing quality. In this work, we present a novel image inversion architecture that extracts high-rate latent features and includes a flow estimation module to warp these features to adapt them to edits. This is because edits often involve spatial changes in the image, such as adjustments to pose or smile. Thus, high-rate latent features must be accurately repositioned to match their new locations in the edited image space. We achieve this by employing flow estimation to determine the necessary spatial adjustments, followed by warping the features to align them correctly in the edited image. Specifically, we estimate the flows from the StyleGAN features of the edited and unedited latent codes. By estimating the high-rate features and warping them for edits, we achieve both high fidelity to the input image and high-quality edits. We run extensive experiments and compare our method with state-of-the-art inversion methods. Quantitative metrics and visual comparisons show significant improvements.
dc.identifier.doi: 10.1007/s11263-024-02301-6
dc.identifier.eissn: 1573-1405
dc.identifier.issn: 0920-5691
dc.identifier.uri: https://hdl.handle.net/11693/116939
dc.language.iso: English
dc.publisher: Springer New York LLC
dc.relation.isversionof: https://dx.doi.org/10.1007/s11263-024-02301-6
dc.rights: CC BY 4.0 Deed (Attribution 4.0 International)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source.title: International Journal of Computer Vision
dc.subject: GAN inversion
dc.subject: Image editing
dc.subject: Generative adversarial networks
dc.title: Warping the residuals for image editing with StyleGAN
dc.type: Article
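
The abstract describes the core mechanism: a dense flow field is estimated from StyleGAN feature maps of the unedited and edited latent codes, and the encoder's high-rate residual features are warped with that flow so they line up with the edited image before being passed to the generator. The snippet below is a minimal, hypothetical sketch of that warping step in PyTorch; the FlowEstimator architecture, the warp_residuals helper, the tensor shapes, and all names are illustrative assumptions rather than the authors' implementation. Only the flow-then-warp pattern is taken from the abstract.

```python
# Illustrative sketch only: estimate a flow field from unedited/edited generator
# features, then warp high-rate residual features with it. Shapes and module
# designs are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FlowEstimator(nn.Module):
    """Predicts a per-pixel 2-D flow field from unedited and edited feature maps (hypothetical)."""

    def __init__(self, feat_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_channels, 128, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 2, 3, padding=1),  # (dx, dy) offsets in pixels
        )

    def forward(self, feat_orig: torch.Tensor, feat_edit: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([feat_orig, feat_edit], dim=1))


def warp_residuals(residuals: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp high-rate residual features with the estimated flow via bilinear resampling."""
    n, _, h, w = residuals.shape
    # Base sampling grid in normalized [-1, 1] coordinates, as expected by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=residuals.device),
        torch.linspace(-1, 1, w, device=residuals.device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and displace the grid.
    offsets = torch.stack(
        (flow[:, 0] * 2.0 / max(w - 1, 1), flow[:, 1] * 2.0 / max(h - 1, 1)), dim=-1
    )
    return F.grid_sample(residuals, base_grid + offsets, mode="bilinear", align_corners=True)


# Usage sketch: generator features of the unedited and edited latent codes drive the
# flow; the inversion encoder's residual features are realigned to the edited image.
flow_net = FlowEstimator(feat_channels=512)
feat_orig = torch.randn(1, 512, 64, 64)   # StyleGAN features for the unedited latent code
feat_edit = torch.randn(1, 512, 64, 64)   # StyleGAN features for the edited latent code
residuals = torch.randn(1, 512, 64, 64)   # high-rate features from the inversion encoder
warped = warp_residuals(residuals, flow_net(feat_orig, feat_edit))
```

Because grid_sample performs differentiable bilinear resampling, a flow module of this kind could in principle be trained end-to-end alongside the inversion encoder.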

Files

Original bundle

Name: Warping_the_residuals_for_image_editing_with_StyleGAN.pdf
Size: 4.31 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission