Browsing by Subject "Virtual reality."
Now showing 1 - 8 of 8
Item Open Access
The fluid experience of space: physical body in virtual spaces over an analysis of Osmose (2003)
Varinlioğlu, Güzden
With the birth of virtual reality, the body was repressed and transformed into representation in technological virtuality, and cyberspace came to be defined as a space experienced by a mind separated from the body. Through this transformation into 'simulacra', the dystopian world of Neuromancer became the model for later works. Char Davies's Osmose, however, uses virtual reality technology to expand the boundaries of technological virtuality to include a de-technologized virtuality: the virtuality of nature. In its use of virtual reality technology, Davies's interpretation of cyberspace is transgressive in its notions of body and space. Starting from the definition of the virtuality of nature, my aim is to analyze the virtuality of water, which helps the thesis criticize the technology per se and propose an 'other' relation of space and body in this newly created environment: water space. Through the direct 'contact' of the body, water space becomes united with the element, dissolving the boundaries of the object/subject and inside/outside splits. Drawing parallels between water and imagination, and between virtuality and freedom, this thesis proposes a look at the notion of cyberspace through water.

Item Open Access
Model-based camera tracking for augmented reality (2014)
Aman, Aytek
Augmented reality (AR) is the enhancement of real scenes with virtual entities. It is used to enhance user experience and interaction in various ways. Educational applications, architectural visualizations, military training scenarios, and pure entertainment-based applications are often enhanced by augmented reality to provide a more immersive and interactive experience for the users. With hand-held devices getting more powerful and cheaper, such applications are becoming very popular.
To provide natural AR experiences, the extrinsic camera parameters (position and rotation) must be calculated in an accurate, robust, and efficient way so that virtual entities can be overlaid onto the real environment correctly. Estimating extrinsic camera parameters in real time is a challenging task. In most camera tracking frameworks, visual tracking serves as the main method for estimating the camera pose. In visual tracking systems, keypoint and edge features are often used for pose estimation. For richly textured environments, keypoint-based methods work quite well and are heavily used. Edge-based tracking, on the other hand, is preferable when the environment is rich in geometry but has little or no visible texture. Pose estimation in edge-based tracking systems generally depends on control points assigned on the model edges. For accurate tracking, the visibility of these control points must be determined correctly, yet determining control point visibility is a computationally expensive process. We propose a method to reduce the computational cost of edge-based tracking by preprocessing the visibility information of the control points. For that purpose, we use persistent control points, which are generated in world space during a preprocessing step. Additionally, we use a more accurate adaptive projection algorithm for persistent control points to provide a more uniform control point distribution in screen space. We test our camera tracker in different environments to show the effectiveness and performance of the proposed algorithm. The preprocessed visibility information enables constant-time calculation of control point visibility while preserving the accuracy of the tracker. We demonstrate a sample AR application with user interaction to present our AR framework, which is developed for a commercially available and widely used game engine.

Item Open Access
A perceptional model to understand immersion (2009)
Alper, C. Armağan
The aim of this study is to offer a new model for the concept of immersion based on the process of perception in humans. The motivation behind the study is that although the concept of immersion points to an important experience both in various media and in daily life, current approaches to the concept do not provide the analytical framework needed for understanding it. This study offers a cognitive, parameter-based model grounded in the theory of perception put forward by Henri Bergson in his book Matter and Memory. The model makes possible the analysis of the concept of immersion in terms of various cognitive and physical factors.

Item Open Access
Perceptually driven stereoscopic camera control in 3D virtual environments (2013)
Kevinç, Elif Bengü
The notion of depth and how we perceive it have long been studied in the fields of psychology, physiology, and even art. Human visual perception grasps the spatial layout of the outside world by using visual depth cues. Binocular disparity, among these depth cues, is based on the separation between the two different views observed by the two eyes. The concept of disparity constitutes the basis of stereoscopic vision. Emerging technologies try to replicate the principles of binocular disparity in order to provide the illusion of 3D and stereoscopic vision. However, the complexity of applying the underlying principles of 3D perception has confronted researchers with the problem of wrongly produced stereoscopic content. Giving a realistic yet comfortable 3D experience is still a great challenge. In this work, we present a camera control mechanism: a novel approach for disparity control and a model for path generation. We address the challenges of stereoscopic 3D production by presenting a comfortable viewing experience to users.
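The binocular-disparity geometry this abstract builds on can be sketched numerically. The snippet below is a minimal illustration of the standard parallel-camera, shifted-sensor stereo model, not the thesis's optimization method; the function name and parameter choices are hypothetical.

```python
def screen_disparity(depth, interaxial, focal_length, convergence):
    """Horizontal image disparity of a point at distance `depth` for a
    stereo rig with camera separation `interaxial`, focal length
    `focal_length`, and zero-parallax (convergence) distance `convergence`.
    All quantities are in consistent units; by this sign convention,
    points behind the convergence plane get positive disparity."""
    return interaxial * focal_length * (1.0 / convergence - 1.0 / depth)

# A point exactly on the convergence plane has zero disparity,
# so it appears at the screen plane in stereoscopic viewing.
print(screen_disparity(depth=2.0, interaxial=0.065, focal_length=0.05, convergence=2.0))  # 0.0
```

Disparity-control methods of the kind the abstract describes typically adjust the interaxial separation and convergence distance so that the disparities of important scene objects stay within a comfortable range.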
Our disparity system approaches the accommodation/convergence conflict problem, the best-known cause of visual fatigue in stereo systems, by taking objects' importance into consideration. Stereo camera parameters are calculated automatically through an optimization process. In the second part of our control mechanism, the camera path is constructed for a given 3D environment and its scene elements. Moving around the important regions of objects is a desired scene-exploration task; in this respect, object saliencies are used for viewpoint selection around scene elements. The path structure is generated using linked Bézier curves, which ensures that the path passes through the predetermined viewpoints. Although there is a considerable amount of research in the field of stereo creation, we believe that approaching the problem from the aspect of scene content provides a uniquely promising experience. We validate our assumption with user studies in which our method is compared with two existing disparity control models. The study results show that our method yields superior results in quality, depth, and comfort.

Item Open Access
Task-based automatic camera placement (2010)
Kabak, Mustafa
Placing cameras to view an animation that takes place in a virtual 3D environment is a difficult task. Correctly placing an object in space, orienting it, and, furthermore, animating it to follow the action in the scene is an activity that requires considerable expertise. Approaches to automating this activity to various degrees have been proposed in the literature. Some of these approaches make restrictive assumptions about the nature of the animation and the scene they visualize, and can therefore be used only under limited conditions. While some approaches require a lot of attention from the user, others fail to give the user sufficient means to affect the camera placement. We propose a novel abstraction called Task for implementing camera placement functionality.
Tasks strike a balance between ease of use and the ability to control the output by enabling users to guide camera placement easily without dealing with low-level geometric constructs. Users can utilize tasks to control camera placement in terms of high-level, understandable notions like objects, their relations, and impressions on viewers while designing video presentations of 3D animations. Our framework for camera placement automation reconciles the demands brought by different tasks and provides tasks with common low-level geometric foundations. The flexibility and extensibility of the framework facilitate its use with diverse 3D scenes and allow visual variety in its output.

Item Open Access
Use of virtual environments in interior design education: a case study with VRML (1999)
Sagun, Aysu
Communicating spatial thought and visual perception is not an easy process. However, introducing virtual worlds into the communication and visual perception of complex concepts and information makes the process easier. The observer using Virtual Reality (VR) applications navigates within an information environment that increases data awareness and understanding with the help of specific effects such as immersion, presence, and interactivity (Dagit, 1993). These capabilities of Virtual Environments (VE) can readily be used in design education, just as designers and their clients use them during the design phase. In this way, students gain flexibility in their education, since following lectures is no longer restricted to a particular time and place. Within this context, this thesis investigates the benefits of VEs in interior design education using web-based communication. The application was prepared using the Virtual Reality Modelling Language (VRML), a language for describing multi-participant interactive simulations, in other words VEs, networked through the Internet.
In the study, the logic of how VRML works is explained with various examples. The design process and navigation in a VRML world are experienced. Additionally, virtual libraries of textures, materials, and furniture are prepared. The research concludes with the construction of a sample extension-course design for the senior Interior Architecture course Modular Interior Systems, using VRML.

Item Open Access
Virtual realities and real virtualities (2002)
Telhan, Orkan
This study endeavors to explicate different conceptions of virtuality in relation to the concept of technology. Departing from the popular conceptions of virtuality discussed within the framework of digital technologies, the study aims to elaborate on the subject in different contexts, where the nature of virtuality is not confined to a specific definition but expanded across all its different considerations. The nature of the relation between virtuality and reality is discussed under the influence of a number of complementary conceptions introduced by G. Deleuze and H. Bergson.

Item Open Access
Virtual sculpting with advanced gestural interface (2013)
Kılıboz, Nurettin Çağrı
In this study, we propose a virtual reality application that can be used to design preliminary/conceptual models, similar to real-world clay sculpting. The proposed system makes use of an innovative gestural interface that enhances the experience of human-computer interaction. The gestural interface employs advanced motion-capture hardware, namely a data glove and a six-degrees-of-freedom position tracker, instead of classical input devices like a keyboard or mouse. The design process takes place in a virtual environment that contains a volumetric deformable model, design tools, and a virtual hand driven by the data glove and the tracker. Users manipulate the design tools and the deformable model via the virtual hand.
The model is deformed by stuffing material (voxels) into it or carving material out of it, either with the help of the tools or directly with the virtual hand. The virtual sculpting system also includes a volumetric force-feedback indicator that provides visual aid. We also offer a mouse-like interaction approach in which users can still interact with conventional graphical user interface items, such as buttons, using the data glove and tracker. Users can also control the application with gestural commands, thanks to our real-time, trajectory-based dynamic gesture recognition algorithm. The gesture recognition technique exploits a fast learning mechanism that does not require extensive training data to teach gestures to the system. For recognition, gestures are represented as ordered sequences of directional movements in 2D. In the learning phase, sample gesture data are filtered and processed to create gesture recognizers, which are essentially finite-state-machine sequence recognizers. These recognizers achieve real-time gesture recognition without the need to specify gesture start and end points. The results of the conducted user study show that the proposed method is very promising in terms of gesture detection and recognition performance (73% accuracy) in a stream of motion. Additionally, the user attitude survey indicates that the gestural interface is useful and satisfactory. One of the novel aspects of the proposed approach is that it gives users the freedom to create gesture commands according to their preferences for selected tasks. Thus, the presented gesture recognition approach makes human-computer interaction more intuitive and user specific.
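The recognition scheme this abstract describes (gestures as ordered sequences of 2D directional movements, matched by finite-state-machine sequence recognizers over an unsegmented motion stream) can be illustrated with a simplified sketch. This is not the thesis's implementation: the 4-direction quantization, the reset rules, and all names are assumptions made for the illustration.

```python
import math

DIRS = "RULD"  # right, up, left, down (coarse 4-direction quantization)

def quantize(dx, dy):
    """Map a 2D displacement to its nearest principal direction symbol."""
    angle = math.atan2(dy, dx)
    return DIRS[int(round(angle / (math.pi / 2))) % 4]

class GestureFSM:
    """Finite-state recognizer for one gesture, fed a stream of
    direction symbols with no explicit start/end segmentation."""
    def __init__(self, name, pattern):
        self.name, self.pattern, self.state = name, pattern, 0

    def feed(self, symbol):
        if symbol == self.pattern[self.state]:
            self.state += 1                 # advance on the expected move
            if self.state == len(self.pattern):
                self.state = 0              # full pattern seen: report and reset
                return True
        elif symbol == self.pattern[0]:
            self.state = 1                  # stream may restart the gesture
        else:
            self.state = 0                  # unrelated move: reset
        return False

def recognize(points, fsms):
    """Quantize a trajectory of (x, y) points and run it through the FSMs,
    collecting the names of gestures recognized along the way."""
    hits = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        sym = quantize(x1 - x0, y1 - y0)
        for fsm in fsms:
            if fsm.feed(sym):
                hits.append(fsm.name)
    return hits

# An "L-shape" gesture (down, then right) found inside a raw trajectory:
path = [(0, 0), (0, -1), (0, -2), (1, -2), (2, -2)]
print(recognize(path, [GestureFSM("L-shape", "DR")]))  # ['L-shape']
```

Because each recognizer advances symbol by symbol and resets on mismatches, it scans a continuous motion stream without segmentation, in the spirit of the approach summarized above; "learning" a gesture here amounts to recording its direction sequence, which mirrors why no extensive training data is required.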