Model-based camera tracking for augmented reality
Please cite this item using this persistent URL: http://hdl.handle.net/11693/18331
Augmented reality (AR) is the enhancement of real scenes with virtual entities. It is used to enrich user experience and interaction in various ways. Educational applications, architectural visualizations, military training scenarios, and pure entertainment applications are often enhanced with augmented reality to provide a more immersive and interactive experience for users. As hand-held devices become more powerful and affordable, such applications are growing increasingly popular. To provide natural AR experiences, the extrinsic camera parameters (position and rotation) must be computed accurately, robustly, and efficiently so that virtual entities can be overlaid onto the real environment correctly. Estimating extrinsic camera parameters in real time is a challenging task. In most camera tracking frameworks, visual tracking serves as the main method for estimating the camera pose. Visual tracking systems typically use keypoint and edge features for pose estimation. For richly textured environments, keypoint-based methods work quite well and are heavily used. Edge-based tracking, on the other hand, is preferable when the environment is rich in geometry but has little or no visible texture. Pose estimation in edge-based tracking systems generally depends on control points assigned to the model edges. For accurate tracking, the visibility of these control points must be determined correctly. Determining control point visibility is a computationally expensive process. We propose a method that reduces the computational cost of edge-based tracking by preprocessing the visibility information of the control points. For that purpose, we use persistent control points, which are generated in world space during a preprocessing step. Additionally, we apply a more accurate adaptive projection algorithm to the persistent control points to achieve a more uniform control point distribution in screen space.
We test our camera tracker in different environments to demonstrate the effectiveness and performance of the proposed algorithm. The preprocessed visibility information enables constant-time calculation of control point visibility while preserving the accuracy of the tracker. We also present a sample AR application with user interaction to showcase our AR framework, which is developed for a commercially available and widely used game engine.
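The constant-time visibility idea described above can be sketched as follows. This is only an illustrative toy, not the thesis implementation: the viewpoint-cell grid, the distance-based stand-in for the offline occlusion test, and all names (`CONTROL_POINTS`, `precompute`, `visible_points`) are assumptions made for the example. The point is the structure: an expensive per-cell visibility test runs once offline, and the tracker answers each runtime query with a single table lookup.

```python
import math

# Hypothetical control points on model edges (world-space positions).
CONTROL_POINTS = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 4.0, 0.0)]

CELL_SIZE = 1.0  # assumed viewpoint-space grid resolution


def cell_of(camera_pos):
    """Map a camera position to its discrete viewpoint cell."""
    return tuple(int(math.floor(c / CELL_SIZE)) for c in camera_pos)


def offline_visibility_test(cell, point):
    """Stand-in for the expensive offline occlusion test (in a real
    system this would ray-cast against the model geometry). Here a
    control point simply counts as visible when it lies within 2.5
    units of the cell center."""
    center = tuple((c + 0.5) * CELL_SIZE for c in cell)
    return math.dist(center, point) < 2.5


def precompute(cells):
    """Preprocessing step: store, per viewpoint cell, the index set of
    visible control points."""
    return {
        cell: {i for i, p in enumerate(CONTROL_POINTS)
               if offline_visibility_test(cell, p)}
        for cell in cells
    }


def visible_points(table, camera_pos):
    """Runtime query at tracking rate: a constant-time table lookup
    instead of a per-frame occlusion computation."""
    return table.get(cell_of(camera_pos), set())
```

Usage: `precompute` is run once over the viewpoint cells reachable by the camera; at tracking time, each frame calls only `visible_points`, so the per-frame cost is independent of scene complexity.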
Embargo Lift Date: 2016-08-28
Showing items related by title, author, creator and subject.
Toklu, C.; Tekalp, A.M.; Erdem, A.T. (1997) In this paper, we describe a method for temporal tracking of video objects in video clips. We employ a 2D triangular mesh to represent each video object, which allows us to describe the motion of the object by the displacements ...
Aksay, A.; Temizel, A.; Çetin, A.E. (2007) In recent years, the number of surveillance cameras deployed has increased significantly. However, it is important that these cameras function as intended and capture meaningful data. Offenders resort to techniques ...
Urfalioglu, O.; Thormählen, T.; Broszio, H.; Mikulastik, P.; Cetin, A. E. (Elsevier, 2011-01-04) In general, feature points and camera parameters can only be estimated with limited accuracy due to noisy images. In the case of collinear feature points, it is possible to benefit from this geometrical regularity by correcting ...