Browsing by Subject "Mobile devices"
Now showing 1 - 7 of 7
Item Open Access
3D thumbnails for mobile media browser interface with autostereoscopic displays (Springer, 2010-01)
Gündoğdu, R. Bertan; Yiğit, Yeliz; Çapin, Tolga
In this paper, we focus on the problem of how to visualize and browse 3D videos and 3D images in a media browser application running on a 3D-enabled mobile device with an autostereoscopic display. We propose a 3D thumbnail representation format and an algorithm for automatic 3D thumbnail generation from 3D video-plus-depth content. We then present different 3D user interface layout schemes for 3D thumbnails and discuss these layouts with a focus on their usability and ergonomics. © 2010 Springer-Verlag Berlin Heidelberg.

Item Open Access
Camera-based virtual environment interaction on mobile devices (Springer, 2006-11)
Çapin, Tolga; Haro, A.; Setlur, V.; Wilkinson, S.
Mobile virtual environments, with real-time 3D and 2D graphics, are now possible on smartphones and other camera-enabled devices. Using computer vision, the camera sensor can be treated as an input modality by analyzing the incoming live video. We present our tracking algorithm and several mobile virtual environment and gaming prototypes, including a 3D first-person shooter, a 2D puzzle game, and a simple action game. Camera-based interaction provides a user experience that is not possible through traditional means and maximizes the use of the limited display size. © Springer-Verlag Berlin Heidelberg 2006.

Item Open Access
A color-based face tracking algorithm for enhancing interaction with mobile devices (Springer, 2010-05)
Bulbul, A.; Cipiloglu, Z.; Capin, T.
A color-based face tracking algorithm is proposed for use as a human-computer interaction tool on mobile devices. The solution provides a natural means of interaction, enabling a motion parallax effect in applications. The algorithm considers the characteristics of mobile use: constrained computational resources and varying environmental conditions. The solution is based on color comparisons and works on images gathered from the front camera of a device. In addition to color comparisons, the coherency of the facial pixels is considered in the algorithm. Several applications that use the face position to determine the viewpoint in a virtual scene or to browse large images are also demonstrated in this work. The accuracy of the system is tested under different environmental conditions, such as lighting and background, and its performance is measured on different types of mobile devices. According to these measurements, the system allows for accurate (7% RMS error) face tracking in real time (20-100 fps). © Springer-Verlag 2010.
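A minimal sketch of the color-comparison idea described in the abstract above is given below, in generic Python/NumPy. The skin-color thresholds, the search-window size, and the helper names are illustrative assumptions for this sketch, not the algorithm published in the paper.

```python
import numpy as np

# Illustrative skin-color bounds in normalized RGB (assumed thresholds,
# not values taken from the paper).
R_MIN, R_MAX = 0.35, 0.55
G_MIN, G_MAX = 0.28, 0.40

def skin_mask(frame):
    """Mark pixels whose normalized RGB falls inside the assumed skin range."""
    rgb = frame.astype(np.float64)
    total = rgb.sum(axis=2, keepdims=True) + 1e-9
    r, g = (rgb / total)[..., 0], (rgb / total)[..., 1]
    return (r > R_MIN) & (r < R_MAX) & (g > G_MIN) & (g < G_MAX)

def track_face(frame, prev_center, search_radius=60):
    """Estimate the 2D face position as the centroid of skin-colored pixels,
    restricted to a window around the previous position so that only
    spatially coherent pixels contribute and the per-frame cost stays low."""
    mask = skin_mask(frame)
    ys, xs = np.nonzero(mask)
    if prev_center is not None and xs.size:
        keep = (np.abs(xs - prev_center[0]) < search_radius) & \
               (np.abs(ys - prev_center[1]) < search_radius)
        xs, ys = xs[keep], ys[keep]
    if xs.size == 0:
        return prev_center          # lost: keep the last known position
    return int(xs.mean()), int(ys.mean())

if __name__ == "__main__":
    # Synthetic 240x320 frame with a skin-like patch standing in for a face.
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    frame[80:140, 120:180] = (200, 140, 110)   # roughly skin-toned block
    print(track_face(frame, prev_center=(150, 110)))
```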
Item Open Access
Enhanced user performance in an image gallery application with a mobile autostereoscopic touch display (Elsevier, 2014)
Sassi, A.; Pöyhönen, P.; Jakonen, S.; Suomi, S.; Capin, T.; Häkkinen, J.
In this study, we explored how stereoscopic depth affects performance and user experience on a mobile device with an autostereoscopic touch display. Participants conducted a visual search task with an image gallery application on three layouts with different depth ranges. The task completion times were recorded, and the participants were asked to rate their experiences. The results revealed that a mild depth effect facilitated image search, whereas too great a depth slowed search times and decreased user-experience ratings. © 2014 Elsevier B.V. All rights reserved.

Item Open Access
A face tracking algorithm for user interaction in mobile devices (IEEE, 2009-09)
Bülbül, Abdullah; Çipiloğlu, Zeynep; Çapin, Tolga
A new face tracking algorithm, and a human-computer interaction technique based on this algorithm, are proposed for use on mobile devices. The face tracking algorithm considers the limitations of the mobile use case: constrained computational resources and varying environmental conditions. The solution is based on color comparisons and works on images gathered from the front camera of a device. The face tracking system outputs a 2D face position that can be used for controlling different applications. Two such applications are also presented in this work; the first uses the face position to determine the viewpoint, and the second enables an intuitive way of browsing large images. © 2009 IEEE.

Item Open Access
Integrating social features into mobile local search (Elsevier Inc., 2016)
Kahveci, B.; Altıngövde, İ. S.; Ulusoy, Özgür
As the availability of Internet access on mobile devices grows year after year, users have been able to make use of search services while on the go. Location information on these devices has enabled mobile users to use local search services to access various types of location-related information easily. Mobile local search is inherently different from general web search: it focuses on local businesses and points of interest instead of general web pages, and it finds relevant results by evaluating different ranking features. It also depends strongly on several contextual factors, such as time, weather, and location. In previous studies, rankings and mobile user context have been investigated with only a small set of features. We developed a mobile local search application, Gezinio, and collected a data set of local search queries with novel social features. We also built ranking models to re-rank search results. We show that social features can improve the performance of machine-learned ranking models with respect to a baseline that ranks results solely by their distance to the user. Furthermore, we find that a feature that is important for ranking results of one query category may not be as useful for other categories.

Item Open Access
Mobile image search using multi-query images (IEEE, 2015)
Çalışır, Fatih; Bastan, M.; Güdükbay, Uğur; Ulusoy, Özgür
Recent advances in mobile device technology have turned mobile phones into powerful devices with high-resolution cameras and fast processing capabilities. Offering more user interaction potential than regular PCs, mobile devices with cameras can enable richer content-based object image queries: the user can capture multiple images of the query object from different viewing angles and at different scales, thereby providing much more information about the object to improve retrieval accuracy. The goal of this paper is to improve mobile image retrieval performance using multiple query images. To this end, we use the well-known bag-of-visual-words approach to represent the images and employ early and late fusion strategies to utilize the information in the multiple query images. With extensive experiments on an object image dataset with a single object per image, we show that multi-image queries yield higher average precision than single-image queries. © 2015 IEEE.
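As an illustration of the fusion idea mentioned in the last abstract, the sketch below shows, in generic Python/NumPy, how multiple query images represented as bag-of-visual-words histograms might be combined: early fusion merges the query histograms before matching, while late fusion matches each query separately and then merges the resulting scores. The histogram values, the `cosine_similarity` helper, and the averaging/maximum choices are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-visual-words histograms."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def early_fusion_scores(query_hists, db_hists):
    """Early fusion: merge the query histograms (here by averaging)
    into a single histogram, then score every database image once."""
    merged = np.mean(query_hists, axis=0)
    return [cosine_similarity(merged, h) for h in db_hists]

def late_fusion_scores(query_hists, db_hists):
    """Late fusion: score each query image separately against the database,
    then combine the per-query scores (here by taking the maximum)."""
    per_query = np.array([[cosine_similarity(q, h) for h in db_hists]
                          for q in query_hists])
    return per_query.max(axis=0).tolist()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab_size = 8                           # toy visual vocabulary
    queries = rng.random((3, vocab_size))    # 3 views of the same object
    database = rng.random((5, vocab_size))   # 5 database images

    print("early fusion:", early_fusion_scores(queries, database))
    print("late fusion: ", late_fusion_scores(queries, database))
```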