Browsing by Subject "Gesture recognition"
Now showing 1 - 12 of 12
Item Open Access
A hand gesture recognition technique for human-computer interaction (Academic Press, 2015)
Kılıboz, N. Ç.; Güdükbay, Uğur
We propose an approach to recognize trajectory-based dynamic hand gestures in real time for human-computer interaction (HCI). We also introduce a fast learning mechanism that does not require extensive training data to teach gestures to the system. We use a six-degrees-of-freedom position tracker to collect trajectory data and represent gestures as ordered sequences of directional movements in 2D. In the learning phase, sample gesture data are filtered and processed to create gesture recognizers, which are essentially finite-state machine sequence recognizers. These recognizers perform online gesture recognition without requiring gesture start and end positions to be specified. A user study shows that the proposed method is promising in terms of gesture detection and recognition performance (73% accuracy) in a stream of motion, and the accompanying user attitude survey indicates that the gestural interface is useful and satisfactory. A novel aspect of the approach is that it gives users the freedom to create gesture commands according to their preferences for selected tasks, making the HCI process more intuitive and user specific.

Item Open Access
Histogram of oriented rectangles: a new pose descriptor for human action recognition (Elsevier BV, 2009-09-02)
İkizler, N.; Duygulu, P.
Most approaches to human action recognition build complex models that require extensive parameter estimation and computation time. In this study, we show that human actions can be represented simply by pose, without modeling the complexity of the dynamics. Based on this idea, we propose a novel pose descriptor, the Histogram of Oriented Rectangles (HOR), for representing and recognizing human actions in videos. We represent each human pose in an action sequence by oriented rectangular patches extracted over the human silhouette, and we then form spatial oriented histograms to represent the distribution of these rectangular patches. We use several matching strategies to carry the information from the spatial domain described by the HOR descriptor to the temporal domain: (i) nearest-neighbor classification, which recognizes actions by matching the descriptors of each frame; (ii) global histogramming, which extends the Motion Energy Image of Bobick and Davis to rectangular patches; (iii) a classifier-based approach using Support Vector Machines; and (iv) an adaptation of Dynamic Time Warping to the temporal representation of the HOR descriptor. For cases where the pose descriptor alone is not sufficiently discriminative, such as differentiating "jogging" from "running", we also incorporate a simple velocity descriptor as a prior to the pose-based classification step. We test our system with different configurations on two commonly used action datasets, the Weizmann dataset and the KTH dataset. Our method outperforms other methods on the Weizmann dataset with a perfect accuracy of 100% and is comparable to other methods on the KTH dataset with a success rate close to 90%. These results show that a simple and compact representation can achieve robust recognition of human actions compared to complex representations. © 2009 Elsevier B.V. All rights reserved.
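To make the spatial-orientation histogramming idea above more concrete, here is a minimal Python sketch of a HOR-style descriptor. It assumes the oriented rectangular patches have already been extracted from the silhouette (the extraction step is not reproduced), and the grid size, orientation binning, and normalized coordinates are illustrative choices rather than the authors' configuration.

```python
def hor_descriptor(rectangles, grid_size=3, num_orient_bins=4,
                   bounds=(0.0, 0.0, 1.0, 1.0)):
    """Minimal HOR-style descriptor sketch: split the silhouette box into a
    grid_size x grid_size spatial grid and, for every cell, histogram the
    orientations of the rectangular patches whose centres fall in that cell.

    rectangles -- list of (cx, cy, angle_degrees) patch centres/orientations
    bounds     -- (xmin, ymin, xmax, ymax) of the normalized silhouette box
    """
    xmin, ymin, xmax, ymax = bounds
    hist = [[[0] * num_orient_bins for _ in range(grid_size)]
            for _ in range(grid_size)]
    for cx, cy, angle in rectangles:
        col = min(int((cx - xmin) / (xmax - xmin) * grid_size), grid_size - 1)
        row = min(int((cy - ymin) / (ymax - ymin) * grid_size), grid_size - 1)
        o = int((angle % 180.0) / (180.0 / num_orient_bins)) % num_orient_bins
        hist[row][col][o] += 1
    # flatten to a single feature vector, one value per (cell, orientation) pair
    return [v for row_cells in hist for cell in row_cells for v in cell]

# Toy example: a vertical "torso" patch in the middle and two slanted "arm" patches.
patches = [(0.5, 0.5, 90.0), (0.3, 0.3, 45.0), (0.7, 0.3, 135.0)]
print(hor_descriptor(patches))   # 3 x 3 x 4 = 36-dimensional vector, three non-zero bins
```

Descriptors built this way can then be compared frame by frame (nearest neighbor), pooled globally, fed to an SVM, or aligned with Dynamic Time Warping, as the abstract describes.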
Item Open Access
Human action recognition using distribution of oriented rectangular patches (Springer, 2007-10)
İkizler, Nazlı; Duygulu, Pınar
We describe a "bag-of-rectangles" method for representing and recognizing human actions in videos. In this method, each human pose in an action sequence is represented by oriented rectangular patches extracted over the whole body. Spatial oriented histograms are then formed to represent the distribution of these rectangular patches. To carry the information from the spatial domain described by the bag-of-rectangles descriptor to the temporal domain for action recognition, four methods are proposed: (i) frame-by-frame voting, which recognizes the actions by matching the descriptors of each frame; (ii) global histogramming, which extends the Motion Energy Image of Bobick and Davis to rectangular patches; (iii) a classifier-based approach using SVMs; and (iv) an adaptation of Dynamic Time Warping to the temporal representation of the descriptor. Detailed experiments are carried out on the action dataset of Blank et al. High success rates (100%) show that with a very simple and compact representation we can achieve robust recognition of human actions compared to complex representations. © Springer-Verlag Berlin Heidelberg 2007.

Item Open Access
Human action recognition with line and flow histograms (IEEE, 2008-12)
İkizler, Nazlı; Cinbiş, R. Gökberk; Duygulu, Pınar
We present a compact representation for human action recognition in videos using line and optical flow histograms. We introduce a new shape descriptor based on the distribution of lines fitted to the boundaries of human figures. Using an entropy-based approach, we apply feature selection to condense our feature representation, thus minimizing classification time without degrading accuracy. We also use a compact representation of optical flow for motion information. Using line and flow histograms together with global velocity information, we show that high-accuracy action recognition is possible, even in challenging recording conditions. © 2008 IEEE.

Item Open Access
Classification of hand gestures using two differential PIR sensors and a camera [İki diferansiyel PIR algılayıcı ve bir kamera yardımıyla el hareketlerinin sınıflandırılması] (IEEE, 2014-04)
Erden, Fatih; Bingol, A. S.; Çetin, A. Enis
This paper presents a hand gesture detection and classification system based on two differential pyroelectric infrared (PIR) sensors and a camera. The differential PIR sensor array monitors the observed area for the presence of motion. When motion is detected, the camera is used to decide whether the motion is caused by a hand; if so, the multimodal sensor data are evaluated jointly to determine which of the defined gesture classes the motion belongs to. Skin detection and convex hull-convexity defect computation are used in the camera-based hand gesture detection and classification stage, while the classification of different hand gestures from the PIR sensor signals is performed with the Winner-Take-All (WTA) signature method. The main contribution of this paper is to show that WTA signature codes can be used to classify one-dimensional signals and that gesture recognition results can be improved through multi-sensor fusion.
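The winner-take-all (WTA) signature idea mentioned in the abstract above can be illustrated with a short, self-contained sketch: rank-order codes are computed over random index subsets of a one-dimensional PIR waveform and compared by the number of agreeing positions. The subset size, code length, nearest-neighbour matching, and toy waveforms below are assumptions for illustration, not the paper's parameters or pipeline.

```python
import random

def wta_hash(signal, num_codes=64, window=6, seed=0):
    """Winner-take-all hash of a 1-D signal: for each of num_codes random
    index subsets of size window, store the position of the maximum value.
    The code is rank-based, so it tolerates monotonic amplitude changes."""
    rng = random.Random(seed)          # fixed seed -> identical subsets for every signal
    n = len(signal)
    code = []
    for _ in range(num_codes):
        idx = rng.sample(range(n), window)
        code.append(max(range(window), key=lambda k: signal[idx[k]]))
    return code

def similarity(code_a, code_b):
    """Number of positions where two WTA codes agree."""
    return sum(a == b for a, b in zip(code_a, code_b))

def classify(query_signal, labelled_signals):
    """Nearest-neighbour classification of a waveform by its WTA code."""
    query_code = wta_hash(query_signal)
    return max(labelled_signals,
               key=lambda item: similarity(query_code, wta_hash(item[1])))[0]

# Toy example: two reference PIR-like waveforms and a noisy query.
left_to_right = [0, 1, 3, 7, 4, 1, -2, -6, -3, 0, 0, 0]
right_to_left = [0, -1, -3, -7, -4, -1, 2, 6, 3, 0, 0, 0]
query = [0, 1, 2, 6, 5, 1, -1, -5, -4, 1, 0, 0]
print(classify(query, [("left-to-right", left_to_right),
                       ("right-to-left", right_to_left)]))   # expected: left-to-right
```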
Item Open Access
A line based pose representation for human action recognition (2013)
Baysal, S.; Duygulu, P.
In this paper, we utilize a line-based pose representation to recognize human actions in videos. We represent the pose in each frame by a collection of line-pairs, so that limb and joint movements are better described and the geometrical relationships among the lines forming the human figure are captured. We contribute to the literature by proposing a new method that matches the line-pairs of two poses to compute the similarity between them. Moreover, to encapsulate the global motion information of a pose sequence, we introduce line-flow histograms, which are extracted by matching line segments in consecutive frames. Experimental results on the Weizmann and KTH datasets emphasize the power of our pose representation and show the effectiveness of using pose ordering and line-flow histograms together in grasping the nature of an action and distinguishing one from the others. © 2013 Elsevier B.V. All rights reserved.

Item Open Access
On recognizing actions in still images via multiple features (Springer, Berlin, Heidelberg, 2012)
Şener, Fadime; Bas, C.; Ikizler-Cinbis, N.
We propose a multi-cue approach for recognizing human actions in still images, where relevant object regions are discovered and utilized in a weakly supervised manner. Our approach does not require any explicitly trained object detector or part/attribute annotation. Instead, a multiple instance learning approach is used over sets of object hypotheses in order to represent objects relevant to the actions. We test our method on the extensive Stanford 40 Actions dataset [1] and achieve a significant performance gain compared to the state of the art. Our results show that using multiple object hypotheses within multiple instance learning is effective for human action recognition in still images and that such an object representation is suitable for use in conjunction with other visual features. © 2012 Springer-Verlag.

Item Open Access
Real time hand gesture recognition for computer interaction (IEEE, 2014-04)
Farooq, J.; Ali, Muhaddisa Barat
Hand gesture recognition is a natural and intuitive way to interact with the computer, since interaction can be enriched through the multidimensional use of hand gestures compared to other input methods. The purpose of this paper is to explore three different techniques for hand gesture recognition (HGR) using fingertip detection. A new approach called 'Curvature of Perimeter' is presented, with its application as a virtual mouse. The presented system uses only a webcam and algorithms developed with computer vision and the image and video processing toolboxes of Matlab. © 2014 IEEE.
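The abstract above does not spell out the 'Curvature of Perimeter' algorithm, so the following is only a generic sketch of curvature-based fingertip detection on a hand contour, in the same spirit: sharp contour corners that protrude from the hand centroid are kept as fingertip candidates. The neighbourhood size, angle threshold, and toy contour are illustrative assumptions; a real system would obtain the contour from a segmented webcam frame.

```python
import math

def fingertip_candidates(contour, k=2, angle_threshold_deg=50.0):
    """Curvature-based fingertip candidates on a closed hand contour: a point
    is kept if the angle between the vectors to its k-th neighbours is sharp
    and the point lies farther from the contour centroid than those neighbours."""
    n = len(contour)
    cx = sum(p[0] for p in contour) / n
    cy = sum(p[1] for p in contour) / n
    candidates = []
    for i in range(n):
        p = contour[i]
        a = contour[(i - k) % n]
        b = contour[(i + k) % n]
        v1 = (a[0] - p[0], a[1] - p[1])
        v2 = (b[0] - p[0], b[1] - p[1])
        norm = math.hypot(*v1) * math.hypot(*v2)
        if norm == 0:
            continue
        cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / norm
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        sticks_out = math.dist(p, (cx, cy)) > max(math.dist(a, (cx, cy)),
                                                  math.dist(b, (cx, cy)))
        if angle < angle_threshold_deg and sticks_out:
            candidates.append(p)          # sharp, protruding corner -> possible fingertip
    return candidates

# Toy contour: a hand-like blob with one raised "finger" at (5, 8).
contour = [(0, 0), (2, 0), (4, 0), (4, 4), (5, 8), (6, 4),
           (6, 0), (8, 0), (8, -3), (4, -4), (0, -3)]
print(fingertip_candidates(contour))   # -> [(5, 8)]
```

In a virtual-mouse setting, the detected fingertip position per frame would then be smoothed and mapped to cursor coordinates.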
Item Open Access
Recognition of occupational therapy exercises and detection of compensation mistakes for cerebral palsy (Elsevier, 2020)
Ongun, Mehmet Faruk; Güdükbay, Uğur; Aksoy, Selim
Depth camera-based virtual rehabilitation systems are gaining attention in occupational therapy for cerebral palsy patients. When developing such a system, domain-specific exercise recognition is vital. To design such a gesture recognition method, two obstacles need to be overcome: detecting gestures that do not belong to the defined exercise set, and recognizing the incorrect exercises that patients perform to compensate for their lack of ability. We propose a framework based on hidden Markov models for the recognition of upper extremity functional exercises. We determine critical compensation mistakes, together with restrictions for classifying these mistakes, with the help of occupational therapists. We first eliminate undefined gestures by evaluating two models that produce adaptive threshold values. Then we utilize specific negative models based on feature thresholding and train them for each exercise to detect compensation mistakes. We evaluate our method through various tests in a laboratory environment under the supervision of occupational therapists.

Item Open Access
Recognizing human actions from noisy videos via multiple instance learning (IEEE, 2013)
Şener, Fadime; Samet, Nermin; Duygulu, Pınar; Ikizler-Cinbis, N.
In this work, we study the task of recognizing human actions from noisy videos, examine the effect of noise on recognition performance, and propose a possible solution. The datasets available in the computer vision literature are relatively small and may include noise due to the labeling source. For new and relatively large datasets, the amount of noise is likely to increase, and the performance of traditional instance-based learning methods is likely to decrease. We therefore propose a multiple instance learning-based solution for the case of increased noise. Each video is represented with spatio-temporal features, to which the bag-of-words method is applied. Then, using support vector machines (SVMs), both instance-based learning and multiple instance learning classifiers are constructed and compared. The classification results show that multiple instance learning classifiers perform better than their instance-based counterparts on noisy videos. © 2013 IEEE.

Item Open Access
Recognizing human actions using key poses (IEEE, 2010)
Baysal, Sermetcan; Kurt, Mehmet Can; Duygulu, Pınar
In this paper, we explore the idea of using only pose, without any temporal information, for human action recognition. In contrast to other studies that use complex action representations, we propose a simple method that relies on extracting "key poses" from action sequences. Our contribution is two-fold. First, representing the pose in a frame as a collection of line-pairs, we propose a matching scheme between two frames to compute their similarity. Second, to extract "key poses" for each action, we present an algorithm that selects the most representative and discriminative poses from a set of candidates. Our experimental results on the KTH and Weizmann datasets show that pose information by itself is quite effective in grasping the nature of an action and sufficient to distinguish one action from the others. © 2010 IEEE.

Item Open Access
Vision-based single-stroke character recognition for wearable computing (IEEE, 2001)
Özer, Ö. F.; Özün, O.; Tüzel, C. Ö.; Atalay, V.; Çetin, A. Enis
Compared to traditional tools such as a keyboard or mouse, wearable computing data entry tools offer increased mobility and flexibility. Such tools include touch screens, hand gesture and facial expression recognition, speech recognition, and key systems. We describe a new approach for recognizing characters drawn by hand gestures or by a pointer on a user's forearm, captured by a digital camera. Each character is drawn as a single, isolated stroke using a Graffiti-like alphabet. Our algorithm enables effective and fast character recognition. The resulting character recognition system has potential applications in mobile communication and computing devices such as phones, laptop computers, handheld computers, and personal digital assistants.
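The abstract above leaves the recognition algorithm unspecified, so the sketch below shows one common way to recognize single-stroke, Graffiti-like characters: chain-code the drawn stroke into quantized directions and match it against per-character templates by edit distance. The 8-direction quantization, the two toy templates, and the use of Levenshtein distance are assumptions for illustration, not the paper's method.

```python
import math

def direction_string(points, num_bins=8):
    """Convert a drawn stroke (list of (x, y) points, y pointing up) into a
    sequence of quantized direction codes, one per sampled displacement."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 == x1 and y0 == y1:
            continue
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(round(angle / (2 * math.pi / num_bins))) % num_bins)
    return codes

def edit_distance(a, b):
    """Levenshtein distance between two direction-code sequences (single row)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i                      # prev holds dp[i-1][0]
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,      # deletion
                                     dp[j - 1] + 1,  # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def recognize_character(stroke, templates):
    """Return the template character whose direction string is closest to the stroke."""
    codes = direction_string(stroke)
    return min(templates, key=lambda ch: edit_distance(codes, templates[ch]))

# Toy templates in y-up coordinates: 'L' = down (6) then right (0); 'V' = down-right then up-right.
templates = {"L": [6, 6, 0, 0], "V": [7, 7, 1, 1]}
stroke = [(0, 10), (0, 5), (0, 0), (5, 0), (10, 0)]   # an "L" drawn with four segments
print(recognize_character(stroke, templates))          # -> L
```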