Learning portrait drawing with unsupervised parts

buir.contributor.author: Taşdemir, Burak
buir.contributor.author: Eldenk, Doğaç
buir.contributor.author: Dündar, Ayşegül
buir.contributor.orcid: Taşdemir, Burak|0009-0006-0593-6096
buir.contributor.orcid: Dündar, Ayşegül|0000-0003-2014-6325
dc.citation.epage: 14
dc.citation.spage: 1
dc.contributor.author: Taşdemir, Burak
dc.contributor.author: Gudukbay, M. G.
dc.contributor.author: Eldenk, Doğaç
dc.contributor.author: Meric, A.
dc.contributor.author: Dündar, Ayşegül
dc.date.accessioned: 2024-03-11T07:57:52Z
dc.date.available: 2024-03-11T07:57:52Z
dc.date.issued: 2023-11-01
dc.department: Department of Computer Engineering
dc.description.abstract: Translating face photos into portrait drawings takes hours for a skilled artist, which makes automatic generation desirable. Portrait drawing is a difficult image translation task with its own unique challenges: it requires emphasizing the key features of a face while ignoring many of its details. An image translator should therefore be able to detect facial features and output images that preserve the selected content of the photo. In this work, we propose a method for portrait drawing that learns only from unpaired data with no additional labels. Through unsupervised feature learning, our method shows good domain generalization behavior. Our first contribution is an image translation architecture that combines a high-level understanding of images via unsupervised parts with the identity-preserving behavior of shallow networks. Our second contribution is a novel asymmetric pose-based cycle consistency loss. This loss relaxes the standard cycle consistency constraint, which requires an input image to be reconstructed after being transformed to a portrait and back. However, in going from an RGB image to a portrait, information loss is expected (e.g., colors, background). The cycle consistency constraint tries to prevent exactly this loss; applied in this scenario, it yields a translation network that embeds the overall information of the RGB image into the portrait and causes artifacts in the portrait images. Our proposed loss solves this issue. Lastly, we run extensive experiments on both in-domain and out-of-domain images and compare our method with state-of-the-art approaches. We show significant improvements, both quantitative and qualitative, on three datasets.
dc.description.provenance: Made available in DSpace on 2024-03-11T07:57:52Z (GMT). No. of bitstreams: 1; s11263-023-01927-2 (2).pdf: 3217271 bytes, checksum: 61d95ea66600ef3f5fb5a8e9e73f8e48 (MD5). Previous issue date: 2023-11-01
dc.identifier.doi: 10.1007/s11263-023-01927-2
dc.identifier.eissn: 1573-1405
dc.identifier.issn: 0920-5691
dc.identifier.uri: https://hdl.handle.net/11693/114480
dc.language.iso: English
dc.publisher: Springer New York LLC
dc.relation.isversionof: https://dx.doi.org/10.1007/s11263-023-01927-2
dc.rights: CC BY 4.0 Deed (Attribution 4.0 International)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source.title: International Journal of Computer Vision
dc.subject: Portrait drawing
dc.subject: Unsupervised part segmentations
dc.subject: Unpaired image translation
dc.subject: Cycle consistency adversarial networks
dc.title: Learning portrait drawing with unsupervised parts
dc.type: Article
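
The abstract contrasts the paper's asymmetric pose-based loss with the standard cycle consistency loss that forces full reconstruction of the input photo. A minimal sketch of that standard loss follows; the function name and toy arrays are illustrative, not taken from the paper.

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """L1 cycle-consistency loss: mean |F(G(x)) - x|.

    Here G maps photo -> portrait and F maps portrait -> photo.
    Penalizing every pixel of the reconstruction pushes G to embed
    photo-specific details (colors, background) into the portrait,
    which is the behavior the paper's asymmetric loss relaxes.
    """
    return np.abs(x_reconstructed - x).mean()

# Toy 2x2 "photo" and an imperfect round-trip reconstruction.
photo = np.array([[0.0, 1.0], [0.5, 0.25]])
recon = np.array([[0.0, 0.5], [0.5, 0.25]])
print(cycle_consistency_loss(photo, recon))  # one pixel off by 0.5 -> 0.125
```

A perfect reconstruction gives a loss of zero, which is exactly what full cycle consistency demands even though a portrait cannot carry all of the photo's information.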

Files

Original bundle

Name: Learning_Portrait_Drawing_with_Unsupervised_Parts.pdf
Size: 3.07 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.01 KB
Format: Item-specific license agreed upon to submission