VecGAN: Image-to-Image Translation with Interpretable Latent Directions
Date
2022-10-21
Source Title
Computer Vision – ECCV 2022
Print ISSN
0302-9743
Volume
13676
Pages
153-169
Language
English
Type
Article
Abstract
We propose VecGAN, an image-to-image translation framework for facial attribute editing with interpretable latent directions. The facial attribute editing task faces the challenges of editing an attribute precisely, with controllable strength, while preserving the other attributes of an image. To this end, we design attribute editing by latent space factorization: for each attribute, we learn a linear direction that is orthogonal to the directions of the other attributes. The other component is the controllable strength of the change, a scalar value. In our framework, this scalar can be either sampled or encoded from a reference image by projection. Our work is inspired by latent space factorization works on fixed pretrained GANs. However, while those models cannot be trained end-to-end and struggle to edit encoded images precisely, VecGAN is trained end-to-end for the image translation task and successfully edits an attribute while preserving the others. Our extensive experiments show that VecGAN achieves significant improvements over the state of the art for both local and global edits.
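To make the factorization described in the abstract concrete, the following is a minimal PyTorch sketch of editing along orthogonal latent directions. The `LatentEditor` class, its linear encoder/decoder stand-ins, and the QR-based orthogonalization are illustrative assumptions rather than the paper's actual architecture or training procedure; the sketch only shows how an edit moves a latent code along one attribute direction with a scalar strength that is either sampled or obtained from a reference image by projection.

```python
import torch
import torch.nn as nn


class LatentEditor(nn.Module):
    """Toy illustration of attribute editing via orthogonal latent directions.

    The real VecGAN uses convolutional translation networks; the linear
    encoder/decoder below are stand-ins so the sketch runs on its own.
    """

    def __init__(self, latent_dim: int = 64, n_attributes: int = 5):
        super().__init__()
        # Stand-in encoder/decoder operating on flattened 3x32x32 "images".
        self.encode = nn.Linear(3 * 32 * 32, latent_dim)
        self.decode = nn.Linear(latent_dim, 3 * 32 * 32)
        # One learnable direction per attribute; orthogonality is enforced
        # here by re-orthogonalizing with QR (an assumption; the paper's
        # exact mechanism for learning orthogonal directions may differ).
        self.directions = nn.Parameter(torch.randn(n_attributes, latent_dim))

    def orthogonal_directions(self) -> torch.Tensor:
        # QR decomposition yields orthonormal rows spanning the learned directions.
        q, _ = torch.linalg.qr(self.directions.t())
        return q.t()  # shape (n_attributes, latent_dim)

    def edit(self, image: torch.Tensor, attr: int, alpha: torch.Tensor) -> torch.Tensor:
        """Shift attribute `attr` by scalar strength `alpha` and decode."""
        z = self.encode(image.flatten(1))            # latent code, (B, latent_dim)
        d = self.orthogonal_directions()[attr]       # direction for this attribute
        z_edit = z + alpha.unsqueeze(-1) * d         # move along the direction
        return self.decode(z_edit).view_as(image)

    def reference_strength(self, reference: torch.Tensor, attr: int) -> torch.Tensor:
        """Recover the scalar strength of a reference image by projection."""
        z_ref = self.encode(reference.flatten(1))
        d = self.orthogonal_directions()[attr]
        return z_ref @ d                             # one scalar per batch element


# Usage: latent-guided edit (sampled strength) vs. reference-guided edit (projected strength).
editor = LatentEditor()
img = torch.rand(2, 3, 32, 32)
ref = torch.rand(2, 3, 32, 32)

alpha_sampled = torch.randn(2)                       # sampled strength
out_latent_guided = editor.edit(img, attr=0, alpha=alpha_sampled)

alpha_ref = editor.reference_strength(ref, attr=0)   # strength encoded from a reference image
out_reference_guided = editor.edit(img, attr=0, alpha=alpha_ref)
```

Because the directions are kept orthogonal, moving the latent code along one attribute's direction leaves its projections onto the other attribute directions unchanged, which is the mechanism the abstract credits for editing one attribute while preserving the rest.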
Keywords
Image translation
Generative adversarial networks
Latent space manipulation
Face attribute editing