Browsing by Subject "Interest points"
Item (Open Access): Benzer yüzlerin bulunması [Finding similar faces] (IEEE, 2009-04)
Torun, R. Baturalp; Yurdakul, Merve; Duygulu, Pınar
In this paper, we propose a method to match similar faces even though photos taken from different sources on the Internet may differ in scene, illumination, and pose. Interest points are used to recognize faces, and some points are eliminated in order to retain the best-matching point pairs between similar faces. The distance between two matched points is used to define the similarity of the faces. In spite of physical changes due to aging, plastic surgery, make-up, or clothing, experiments show that a given face image can be successfully matched with a similar face within a database composed of celebrity photos. © 2009 IEEE.

Item (Open Access): Interesting faces: a graph-based approach for finding people in news (Elsevier, 2010-05)
Ozkan, D.; Duygulu, P.
In this study, we propose a method for finding people in large news photograph and video collections. Our method exploits the multi-modal nature of these data sets to recognize people and does not require any supervisory input. It first uses the name of the person to populate an initial set of candidate faces. From this set, which is likely to include the faces of other people, it selects the group of most similar faces corresponding to the queried person under a variety of conditions. Our main contribution is to transform the problem of recognizing the faces of the queried person in a set of candidate faces into the problem of finding the most highly connected sub-graph (the densest component) in a graph representing the similarities of faces. We also propose a novel technique for computing the similarities of faces by matching interest points extracted from the faces. The proposed method further allows the classification of new faces without needing to re-build the graph. The experiments are performed on two data sets: thousands of news photographs from Yahoo! News and over 200 news videos from TRECVid2004. The results show that the proposed method provides significant improvements over text-based methods. © 2009 Elsevier Ltd. All rights reserved.
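The second abstract's central idea is to reduce person recognition to finding the densest component in a face-similarity graph. As a rough illustration of that reduction (not the paper's actual implementation), the sketch below uses the standard greedy peeling approximation for the densest subgraph: repeatedly remove the minimum-degree node and keep the intermediate subgraph with the highest average degree. Node labels and the toy edge list are hypothetical; in the paper's setting, nodes would be candidate faces and edges would connect faces whose interest-point similarity is high.

```python
from collections import defaultdict

def densest_component(edges):
    """Greedy 2-approximation to the densest subgraph (Charikar-style peeling).

    Repeatedly delete the minimum-degree node, tracking the node set whose
    edge/node ratio (average degree / 2) is highest along the way.
    `edges` is an iterable of undirected (u, v) pairs.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2  # current edge count
    best_density, best = m / len(nodes), set(nodes)
    while len(nodes) > 1:
        u = min(nodes, key=lambda x: len(adj[x]))  # peel min-degree node
        for v in adj[u]:
            adj[v].discard(u)
        m -= len(adj[u])
        del adj[u]
        nodes.discard(u)
        density = m / len(nodes)
        if density > best_density:
            best_density, best = density, set(nodes)
    return best

# Hypothetical example: a 4-clique of faces of the queried person plus one
# weakly linked distractor face "e"; the clique is the densest component.
clique_plus_pendant = [("a", "b"), ("a", "c"), ("a", "d"),
                       ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
print(densest_component(clique_plus_pendant))  # → {'a', 'b', 'c', 'd'}
```

In the paper's pipeline, this dense group would then serve as the reference model for the queried person, against which new faces can be classified without rebuilding the graph.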