Browsing by Subject "Cameras."
Now showing 1 - 2 of 2
Item (Open Access)
Camera-based 3D interaction for handheld devices (2010)
Pekin, Tacettin Sercan
Handheld devices are an important part of daily life, and interacting with them is unavoidable. Today's user interfaces are mostly adapted from desktop computers, which makes handheld devices difficult to use. However, processing power, new sensing technologies, and cameras are already available on mobile devices, making it possible to develop systems that communicate through different modalities. This thesis proposes novel approaches, including finger detection, finger tracking, and object motion analysis, to allow efficient interaction with mobile devices. The result is a new interface between users and mobile devices that enables direct manipulation of objects without any extra hardware. The interaction method maps the motion of an object moving in front of the camera (such as a finger or a predefined marker) into a virtual space to achieve manipulation. For finger detection, a new method is created based on how mobile devices are held and on the structure of the thumb; a fast two-dimensional color-based scene analysis method is applied to solve the problem. For finger tracking, a new method is created based on the movement ergonomics of the thumb while the device is held in the hand. Extracting three-dimensional movement from two-dimensional RGB data is a central part of this work. A new 3D pointer representation and pointer image are created for 3D input and interaction with 3D scenes, achieving direct manipulation at low cost.

Item (Open Access)
Task-based automatic camera placement (2010)
Kabak, Mustafa
Placing cameras to view an animation that takes place in a virtual 3D environment is a difficult task. Correctly placing an object in space and orienting it, and furthermore animating it to follow the action in the scene, is an activity that requires considerable expertise. Approaches to automating this activity to various degrees have been proposed in the literature. Some of these approaches make restrictive assumptions about the nature of the animation and the scene they visualize, and therefore can be used only under limited conditions. While some approaches require a lot of attention from the user, others fail to give the user sufficient means to affect the camera placement. We propose a novel abstraction called Task for implementing camera placement functionality. Tasks strike a balance between ease of use and control over the output by enabling users to guide camera placement without dealing with low-level geometric constructs. Users can utilize tasks to control camera placement in terms of high-level, understandable notions like objects, their relations, and impressions on viewers while designing video presentations of 3D animations. Our framework of camera placement automation reconciles the demands brought by different tasks, and provides tasks with common low-level geometric foundations. The flexibility and extensibility of the framework facilitate its use with diverse 3D scenes and visual variety in its output.
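The first listing above mentions a fast two-dimensional color-based scene analysis step for locating the thumb in the camera image, but the abstract does not give the details. The following is only a minimal illustrative sketch, assuming an OpenCV pipeline with a rough HSV skin-tone threshold; the threshold values, the morphology step, and the centroid extraction are assumptions for demonstration, not the thesis's actual method.

```python
# Illustrative sketch only: a generic color-based finger/thumb detector.
# The HSV skin-tone band and the use of OpenCV are assumptions; the
# thesis's actual scene-analysis method is not reproduced here.
import cv2
import numpy as np


def detect_finger_tip(frame_bgr):
    """Return the (x, y) centroid of the largest skin-colored region, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-tone band in HSV; would need per-device / per-user tuning.
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
    # Remove small speckles before looking for the dominant blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```

In a real mobile pipeline the detected 2D position would then be tracked over frames and mapped into the 3D interaction space, as the abstract describes.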
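The second listing describes a Task abstraction that lets users guide camera placement through high-level notions while a framework reconciles the demands of different tasks. The sketch below is a hypothetical reading of that idea: the class names, the weighted-sum reconciliation, and the candidate-pose search are invented for illustration and do not mirror the framework described in the thesis.

```python
# Hypothetical sketch of a task-style camera-placement abstraction.
# All names and the scoring scheme are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class CameraPose:
    position: Vec3
    look_at: Vec3


@dataclass
class Task:
    """A high-level request (e.g. 'keep object A visible'), scored per candidate pose."""
    name: str
    weight: float
    score: Callable[[CameraPose], float]  # higher means the task is better satisfied


def place_camera(tasks: Sequence[Task], candidates: Sequence[CameraPose]) -> CameraPose:
    """Pick the candidate pose that best reconciles all tasks via a weighted sum."""
    return max(candidates, key=lambda pose: sum(t.weight * t.score(pose) for t in tasks))
```

The point of such an abstraction is that users specify tasks in terms of objects and desired impressions, while the low-level geometric reasoning (here reduced to a toy scoring search) stays inside the framework.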