Browsing by Subject "Low-level features"
Now showing 1 - 5 of 5
Item (Open Access): Bilkent University at TRECVID 2005 (National Institute of Standards and Technology, 2005-11)
Aksoy, Selim; Avcı, Akın; Balçık, Erman; Çavuş, Özge; Duygulu, Pınar; Karaman, Zeynep; Kavak, Pınar; Kaynak, Cihan; Küçükayvaz, Emre; Öcalan, Çağdaş; Yıldız, Pınar
We describe our second-time participation in the TRECVID video retrieval evaluation, which includes one high-level feature extraction run, three manual search runs, and one interactive search run. All of these runs used a system trained on the common development collection. Only visual and textual information were used: the visual information consisted of color, texture, and edge-based low-level features, and the textual information consisted of the speech transcript provided in the collection. With the experience gained from our second-time participation, we are in the process of building a system for automatic classification and indexing of video archives.

Item (Open Access): Bilkent University at TRECVID 2006 (National Institute of Standards and Technology, 2006-11)
Aksoy, Selim; Duygulu, Pınar; Akçay, Hüseyin Gökhan; Ataer, Esra; Baştan, Muhammet; Can, Tolga; Çavuş, Özge; Doğrusöz, Emel; Gökalp, Demir; Akaydın, Ateş; Akoğlu, Leman; Angın, Pelin; Cinbiş, R. Gökberk; Gür, Tunay; Ünlü, Mehmet
We describe our third participation in the TRECVID video retrieval evaluation, which includes one high-level feature extraction run, two manual search runs, and one interactive search run. All of these runs used a system trained on the common development collection.
Only visual and textual information were used: the visual information consisted of color, texture, and edge-based low-level features, and the textual information consisted of the speech transcript provided in the collection.

Item (Open Access): Bilkent University at TRECVID 2007 (National Institute of Standards and Technology, 2007)
Aksoy, Selim; Duygulu, Pınar; Aksoy, C.; Aydin, E.; Gunaydin, D.; Hadimli, K.; Koç, L.; Olgun, Y.; Orhan, C.; Yakin, G.
We describe our fourth participation in the TRECVID video retrieval evaluation, which includes two high-level feature extraction runs and one manual search run. All of these runs used a system trained on the common development collection. Only visual information, consisting of color, texture, and edge-based low-level features, was used.

Item (Open Access): Bilkent University Multimedia Database Group at TRECVID 2008 (National Institute of Standards and Technology, 2008-11)
Küçüktunç, Onur; Baştan, Muhammet; Güdükbay, Uğur; Ulusoy, Özgür
Bilkent University Multimedia Database Group (BILMDG) participated in two tasks at TRECVID 2008: content-based copy detection (CBCD) and high-level feature extraction (FE). Mostly MPEG-7 [1] visual features, which are also used as low-level features in our MPEG-7 compliant video database management system, were extracted for these tasks. This paper discusses our approaches to each task.

Item (Open Access): Oscillatory synchronization model of attention to moving objects (Elsevier, 2012)
Yilmaz, O.
The world is a dynamic environment, so it is important for the visual system to be able to deploy attention on moving objects and attentively track them. Psychophysical experiments indicate that processes of both attentional enhancement and inhibition are spatially focused on the moving objects; however, the mechanisms of these processes are unknown.
The studies indicate that the attentional selection of target objects is sustained via a feedforward-feedback loop in the visual cortical hierarchy, and that only the target objects are represented in attention-related areas. We suggest that feedback from the attention-related areas to early visual areas modulates the activity of neurons: it establishes synchronization with respect to a common oscillatory signal for target items via excitatory feedback, and de-synchronization for distractor items via inhibitory feedback. A two-layer computational neural network model with integrate-and-fire neurons is proposed and simulated for simple attentive tracking tasks. Consistent with previous modeling studies, we show that, via temporal tagging of neural activity, distractors can be attentively suppressed from propagating to higher levels. However, the simulations also suggest attentional enhancement of activity for distractors in the first layer, which represents the neural substrate dedicated to low-level feature processing. Inspired by this enhancement mechanism, we developed a feature-based object tracking algorithm with surround processing. Surround processing improved tracking performance by 57% on the PETS 2001 dataset by eliminating target features that are likely to suffer from faulty correspondence assignments. © 2012 Elsevier Ltd.
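The last item's model is built from integrate-and-fire units. As a rough illustration of that building block only (not the paper's two-layer network, whose parameters and connectivity are not reproduced here), a minimal leaky integrate-and-fire neuron can be sketched as follows; all parameter values are illustrative assumptions:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameters (tau_m, v_thresh, etc.) are illustrative assumptions,
# not the values used in the paper's model.

def simulate_lif(i_input, dt=1e-3, tau_m=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate an input-current sequence; return spike times in seconds."""
    v = v_rest
    spikes = []
    for t, i_t in enumerate(i_input):
        # Leaky integration: dv/dt = (-(v - v_rest) + I) / tau_m
        v += dt * (-(v - v_rest) + i_t) / tau_m
        if v >= v_thresh:          # threshold crossing: emit spike, reset
            spikes.append(t * dt)
            v = v_reset
    return spikes

# A constant suprathreshold drive yields regular (tonic) spiking.
spike_times = simulate_lif([1.5] * 1000)   # 1 s of input at dt = 1 ms
```

With constant drive the membrane potential follows the same trajectory after every reset, so the inter-spike intervals are equal; in the paper's setting, excitatory or inhibitory feedback would instead shift each unit's spike timing relative to a common oscillatory signal.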