
dc.contributor.advisor	Çapın, Tolga
dc.contributor.author	Kabak, Mustafa
dc.date.accessioned	2016-01-08T18:14:14Z
dc.date.available	2016-01-08T18:14:14Z
dc.date.issued	2010
dc.identifier.uri	http://hdl.handle.net/11693/15151
dc.description	Ankara : The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2010.	en_US
dc.description	Thesis (Master's) -- Bilkent University, 2010.	en_US
dc.description	Includes bibliographical references (leaves 56-57).	en_US
dc.description.abstract	Placing cameras to view an animation that takes place in a virtual 3D environment is a difficult task. Correctly placing an object in space and orienting it, and furthermore, animating it to follow the action in the scene, is an activity that requires considerable expertise. Approaches to automating this activity to various degrees have been proposed in the literature. Some of these approaches make restrictive assumptions about the nature of the animation and the scene they visualize, and can therefore be used only under limited conditions. While some approaches require a lot of attention from the user, others fail to give the user sufficient means to affect the camera placement. We propose a novel abstraction called Task for implementing camera placement functionality. Tasks strike a balance between ease of use and ability to control the output by enabling users to easily guide camera placement without dealing with low-level geometric constructs. Users can utilize tasks to control camera placement in terms of high-level, understandable notions like objects, their relations, and impressions on viewers while designing video presentations of 3D animations. Our framework of camera placement automation reconciles the demands brought by different tasks, and provides tasks with common low-level geometric foundations. The flexibility and extensibility of the framework facilitates its use with diverse 3D scenes and visual variety in its output.	en_US
dc.description.statementofresponsibility	Kabak, Mustafa	en_US
dc.format.extent	x, 57 leaves, illustrations	en_US
dc.language.iso	English	en_US
dc.rights	info:eu-repo/semantics/openAccess	en_US
dc.subject	Camera planning	en_US
dc.subject	Autonomous cinematography	en_US
dc.subject	Task-level interaction	en_US
dc.subject.lcc	TR850 .K33 2010	en_US
dc.subject.lcsh	Cinematography.	en_US
dc.subject.lcsh	Cameras.	en_US
dc.subject.lcsh	Computer vision.	en_US
dc.subject.lcsh	Computer graphics.	en_US
dc.subject.lcsh	Three-dimensional display systems.	en_US
dc.subject.lcsh	Virtual reality.	en_US
dc.subject.lcsh	Electronic surveillance.	en_US
dc.title	Task-based automatic camera placement	en_US
dc.type	Thesis	en_US
dc.department	Department of Computer Engineering	en_US
dc.publisher	Bilkent University	en_US
dc.description.degree	M.S.	en_US
dc.identifier.itemid	B122810

