dc.contributor.advisor | Çapın, Tolga | |
dc.contributor.author | Yaz, İlker O. | |
dc.date.accessioned | 2016-01-08T18:25:32Z | |
dc.date.available | 2016-01-08T18:25:32Z | |
dc.date.issued | 2013 | |
dc.identifier.uri | http://hdl.handle.net/11693/15850 | |
dc.description | Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. | en_US |
dc.description | Thesis (Master's) -- Bilkent University, 2013. | en_US |
dc.description | Includes bibliographical references (leaves 51-55). | en_US |
dc.description.abstract | Animation of mesh models can be accomplished in many ways, including character
animation with skinned skeletons, deformable models, or physics-based simulation.
Generating animations with all of these techniques is time-consuming
and laborious for novice users; however, adapting the wide range of already available
human motion capture data might simplify the process significantly. This thesis
presents a method for retargeting human motion to arbitrary 3D mesh models
with as little user interaction as possible. Traditional motion retargeting systems
try to preserve the original motion as is while satisfying several motion constraints.
In our approach, we use a few pose-to-pose examples provided by the user to
extract the desired semantics behind the retargeting process, not limiting the transfer
to be only literal. Hence, mesh models that have different structures and/or
motion semantics from the humanoid skeleton become possible targets. Also, considering
that mesh models are widely available without any additional structure
(e.g., a skeleton), our method does not require such a structure, instead providing
a built-in surface-based deformation system. Since deformation for animation
purposes can require more than rigid behaviour, we augment existing rigid deformation
approaches to provide volume-preserving and cartoon-like deformation.
To demonstrate the results of our approach, we retarget several motion capture
sequences to three well-known models, and also investigate how automatic retargeting
methods developed for humanoid models work on our models. | en_US |
dc.description.statementofresponsibility | Yaz, İlker O. | en_US |
dc.format.extent | xii, 55 leaves, illustrations, tables | en_US |
dc.language.iso | English | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Mesh deformation | en_US |
dc.subject | Volume preservation | en_US |
dc.subject | Animation | en_US |
dc.subject | Motion retargeting | en_US |
dc.subject.lcc | T385 .Y39 2013 | en_US |
dc.subject.lcsh | Computer animation. | en_US |
dc.subject.lcsh | Computer graphics. | en_US |
dc.subject.lcsh | Body, Human--Digital techniques. | en_US |
dc.subject.lcsh | Human locomotion--Computer simulation. | en_US |
dc.subject.lcsh | Image processing--Digital techniques. | en_US |
dc.subject.lcsh | Three-dimensional display systems. | en_US |
dc.title | Example based retargeting human motion to arbitrary mesh models | en_US |
dc.type | Thesis | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.publisher | Bilkent University | en_US |
dc.description.degree | M.S. | en_US |
dc.identifier.itemid | B135590 | |