Author: Dalva, Yusuf
Dates: 2023-07-07; 2023-06; 2023-06-20
URI: https://hdl.handle.net/11693/112376
Note: Cataloged from PDF version of article. Includes bibliographical references (leaves 50-55).

Abstract: We propose an image-to-image translation framework for facial attribute editing with disentangled, interpretable latent directions. Facial attribute editing faces two challenges: editing a target attribute with controllable strength, and keeping attribute representations disentangled so that other attributes are preserved during edits. Toward this goal, inspired by work on latent space factorization of fixed pretrained GANs, we design attribute editing via latent space factorization and, for each attribute, learn a linear direction that is orthogonal to the others. We train these directions with orthogonality constraints and disentanglement losses. To project images into semantically organized latent spaces, we use an encoder-decoder architecture with attention-based skip connections. We compare extensively with previous image-to-image translation algorithms and with methods that edit via pretrained GANs. Our experiments show that our method significantly improves over the state of the art.

Physical description: xii, 64 leaves : color illustrations, photography, charts ; 30 cm.
Language: English
Rights: info:eu-repo/semantics/openAccess
Keywords: Image-to-image translation; Generative adversarial networks; Latent space manipulation; Face attribute editing
Title: Image-to-image translation for face attribute editing with disentangled latent directions
Alternate title (Turkish): Ayrıştırılmış örtülü vektörlerle yüz özelliklerini düzenleme için resimden resime çeviri
Type: Thesis
ID: B162133
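The abstract mentions training per-attribute linear directions under an orthogonality constraint. As a minimal sketch of one plausible form of such a penalty (the function name and exact formulation are illustrative assumptions, not the thesis's actual loss), the sum of squared pairwise cosine similarities between the learned directions is zero exactly when they are mutually orthogonal:

```python
def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    # Euclidean length of a vector.
    return dot(u, u) ** 0.5

def orthogonality_loss(directions):
    """Sum of squared pairwise cosine similarities between latent
    directions (one direction per attribute, given as a list of
    vectors). The loss is zero iff all directions are mutually
    orthogonal, so minimizing it pushes the directions apart.
    """
    # Normalize each direction so only the angles between them matter.
    unit = [[x / norm(d) for x in d] for d in directions]
    loss = 0.0
    for i in range(len(unit)):
        for j in range(len(unit)):
            if i != j:
                loss += dot(unit[i], unit[j]) ** 2
    return loss

# Two orthogonal directions incur zero penalty:
print(orthogonality_loss([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))  # → 0.0
```

In practice such a term would be added to the disentanglement and reconstruction losses and minimized jointly over the direction parameters by gradient descent.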