Unsupervised disentanglement of pose, appearance and background from images and videos

buir.contributor.author: Dündar, Ayşegül
dc.contributor.author: Dündar, Ayşegül
dc.contributor.author: Shih, K. J.
dc.contributor.author: Garg, A.
dc.contributor.author: Pottorf, R.
dc.contributor.author: Tao, A.
dc.contributor.author: Catanzaro, B.
dc.date.accessioned: 2022-01-31T10:52:29Z
dc.date.available: 2022-01-31T10:52:29Z
dc.date.issued: 2021-01-29
dc.department: Department of Computer Engineering
dc.description: (Early Access)
dc.description.abstract: Unsupervised landmark learning is the task of learning semantic keypoint-like representations without the use of expensive keypoint-level annotations. A popular approach is to factorize an image into a pose and an appearance data stream, then to reconstruct the image from the factorized components. The pose representation should capture a set of consistent and tightly localized landmarks in order to facilitate reconstruction of the input image. Ultimately, we wish for our learned landmarks to focus on the foreground object of interest. However, reconstructing the entire image forces the model to allocate landmarks to the background as well. Using a motion-based foreground assumption, this work explores the effects of factorizing the reconstruction task into separate foreground and background reconstructions in an unsupervised way, allowing the model to condition only the foreground reconstruction on the unsupervised landmarks. Our experiments demonstrate that the proposed factorization yields landmarks that are focused on the foreground object of interest, as measured against ground-truth foreground masks. Furthermore, the rendered background quality also improves, since ill-suited landmarks are no longer forced to model this content. We demonstrate this improvement via improved image fidelity in a video-prediction task. Code is available at https://github.com/NVIDIA/UnsupervisedLandmarkLearning
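The factorized reconstruction described in the abstract can be sketched roughly as follows: landmarks define a soft foreground mask, the foreground render is conditioned on those landmarks, and the background is rendered separately, landmark-free. This is a minimal NumPy illustration of that compositing idea, not the paper's actual model; all function names and the flat stand-in renders are hypothetical.

```python
import numpy as np

def gaussian_heatmap(center, size, sigma=2.0):
    """Render a 2D Gaussian heatmap around a single landmark coordinate."""
    ys, xs = np.mgrid[0:size, 0:size]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def composite_reconstruction(fg, bg, mask):
    """Blend a landmark-conditioned foreground with a landmark-free background."""
    return mask * fg + (1.0 - mask) * bg

# Toy example: two landmarks drive the foreground support.
size = 16
landmarks = [(4, 4), (10, 12)]
heatmaps = np.stack([gaussian_heatmap(c, size) for c in landmarks])
mask = np.clip(heatmaps.sum(axis=0), 0.0, 1.0)  # soft foreground mask

fg = np.full((size, size), 0.8)  # stand-in foreground render (landmark-conditioned)
bg = np.full((size, size), 0.2)  # stand-in background render (no landmarks)
recon = composite_reconstruction(fg, bg, mask)
# recon equals the foreground value near landmarks, the background value elsewhere.
```

Because only the masked foreground depends on the landmarks, nothing pushes landmarks toward background content, which is the intuition behind the improved foreground focus reported in the paper.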
dc.identifier.doi: 10.1109/TPAMI.2021.3055560
dc.identifier.eissn: 1939-3539
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/11693/76909
dc.language.iso: English
dc.publisher: IEEE
dc.relation.isversionof: https://doi.org/10.1109/TPAMI.2021.3055560
dc.source.title: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.subject: Unsupervised landmarks
dc.subject: Keypoints
dc.subject: Foreground-background separation
dc.subject: Video prediction
dc.title: Unsupervised disentanglement of pose, appearance and background from images and videos
dc.type: Article

Files

Original bundle
Name: Unsupervised_disentanglement_of_pose,_appearance_and_background_from_images_and_videos.pdf
Size: 6.67 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.69 KB
Description: Item-specific license agreed upon to submission