Browsing by Subject "Low-level features"

Now showing 1 - 5 of 5
  • Bilkent University at TRECVID 2005
    (National Institute of Standards and Technology, 2005-11) Aksoy, Selim; Avcı, Akın; Balçık, Erman; Çavuş, Özge; Duygulu, Pınar; Karaman, Zeynep; Kavak, Pınar; Kaynak, Cihan; Küçükayvaz, Emre; Öcalan, Çağdaş; Yıldız, Pınar
    We describe our second-time participation in the TRECVID video retrieval evaluation, which includes one high-level feature extraction run, three manual search runs, and one interactive search run. All of these runs used a system trained on the common development collection. Only visual and textual information were used: visual information consisted of color, texture, and edge-based low-level features, and textual information consisted of the speech transcript provided with the collection. With the experience gained from our second-time participation, we are in the process of building a system for automatic classification and indexing of video archives. (A sketch of this kind of low-level feature extraction appears after this list.)
  • Bilkent University at TRECVID 2006
    (National Institute of Standards and Technology, 2006-11) Aksoy, Selim; Duygulu, Pınar; Akçay, Hüseyin Gökhan; Ataer, Esra; Baştan, Muhammet; Can, Tolga; Çavuş, Özge; Doğrusöz, Emel; Gökalp, Demir; Akaydın, Ateş; Akoğlu, Leman; Angın, Pelin; Cinbiş, R. Gökberk; Gür, Tunay; Ünlü, Mehmet
    We describe our third participation in the TRECVID video retrieval evaluation, which includes one high-level feature extraction run, two manual search runs, and one interactive search run. All of these runs used a system trained on the common development collection. Only visual and textual information were used: visual information consisted of color, texture, and edge-based low-level features, and textual information consisted of the speech transcript provided with the collection.
  • Bilkent University at TRECVID 2007
    (National Institute of Standards and Technology, 2007) Aksoy, Selim; Duygulu, Pınar; Aksoy, C.; Aydin, E.; Gunaydin, D.; Hadimli, K.; Koç, L.; Olgun, Y.; Orhan, C.; Yakin, G.
    We describe our fourth participation in the TRECVID video retrieval evaluation, which includes two high-level feature extraction runs and one manual search run. All of these runs used a system trained on the common development collection. Only visual information, consisting of color, texture, and edge-based low-level features, was used.
  • Bilkent University Multimedia Database Group at TRECVID 2008
    (National Institute of Standards and Technology, 2008-11) Küçüktunç, Onur; Baştan, Muhammet; Güdükbay, Uğur; Ulusoy, Özgür
    Bilkent University Multimedia Database Group (BILMDG) participated in two tasks at TRECVID 2008: content-based copy detection (CBCD) and high-level feature extraction (FE). For these tasks we extract mostly MPEG-7 [1] visual features, which are also used as low-level features in our MPEG-7-compliant video database management system. This paper discusses our approaches to each task. (A simplified edge-histogram sketch in the spirit of one MPEG-7 visual descriptor appears after this list.)
  • Oscillatory synchronization model of attention to moving objects
    (Elsevier, 2012) Yilmaz, O.
    The world is a dynamic environment; hence, it is important for the visual system to be able to deploy attention on moving objects and attentively track them. Psychophysical experiments indicate that processes of both attentional enhancement and inhibition are spatially focused on the moving objects; however, the mechanisms of these processes are unknown. The studies indicate that the attentional selection of target objects is sustained via a feedforward-feedback loop in the visual cortical hierarchy, and that only the target objects are represented in attention-related areas. We suggest that feedback from the attention-related areas to early visual areas modulates the activity of neurons: it establishes synchronization with respect to a common oscillatory signal for target items via excitatory feedback, and de-synchronization for distractor items via inhibitory feedback. A two-layer computational neural network model with integrate-and-fire neurons is proposed and simulated for simple attentive tracking tasks. Consistent with previous modeling studies, we show that, via temporal tagging of neural activity, distractors can be attentively suppressed from propagating to higher levels. However, the simulations also suggest attentional enhancement of activity for distractors in the first layer, which represents the neural substrate dedicated to low-level feature processing. Inspired by this enhancement mechanism, we developed a feature-based object tracking algorithm with surround processing. Surround processing improved tracking performance by 57% on the PETS 2001 dataset by eliminating target features that are likely to suffer from faulty correspondence assignments. © 2012 Elsevier Ltd. (A toy integrate-and-fire sketch of the synchronization idea appears after this list.)
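
As an illustration of the kind of color and edge low-level features the TRECVID abstracts above describe (a minimal sketch, not any of the groups' actual pipelines; the function names, bin counts, and toy frame are all assumptions for the example), the following computes a joint color histogram and a gradient-based edge histogram for one keyframe:

```python
# Minimal sketch of per-keyframe low-level features: a joint RGB color
# histogram plus a gradient-magnitude edge histogram. Illustrative only;
# bin counts and frame size are arbitrary choices for the example.
import numpy as np

def color_histogram(rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    """Joint RGB histogram, L1-normalized (bins**3 dimensions)."""
    q = (rgb // (256 // bins)).astype(np.int64).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def edge_histogram(gray: np.ndarray, bins: int = 16) -> np.ndarray:
    """Histogram of gradient magnitudes from finite differences."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-9))
    return hist / hist.sum()

# Toy keyframe: a random RGB image standing in for a real video frame.
frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
features = np.concatenate([color_histogram(frame),
                           edge_histogram(frame.mean(axis=2))])
print(features.shape)  # (8**3 + 16,) = (528,)
```

In a retrieval setting, such vectors would typically be compared with a histogram distance (e.g., L1 or chi-squared) or fed to a classifier trained on the development collection.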
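For the MPEG-7 features mentioned in the 2008 entry, one of the standard's visual descriptors is the Edge Histogram Descriptor. Below is a loose, simplified sketch in its spirit (not BILMDG's implementation and not the normative MPEG-7 algorithm; the grid size, orientation binning, and function name are assumptions): the frame is split into a 4 x 4 grid, and each cell contributes a small orientation histogram, giving an 80-dimensional vector.

```python
# Simplified, EHD-like descriptor: 4x4 spatial grid, 5 orientation bins
# per cell, gradient-magnitude weighted. Illustrative only.
import numpy as np

def edge_histogram_descriptor(gray, grid=4, orient_bins=5):
    gy, gx = np.gradient(gray.astype(float))
    angle = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
    mag = np.hypot(gx, gy)
    h, w = gray.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            hist, _ = np.histogram(angle[cell], bins=orient_bins,
                                   range=(0.0, np.pi), weights=mag[cell])
            feats.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(feats)                # grid*grid*orient_bins dims

frame = np.random.rand(144, 176)                # toy grayscale keyframe
print(edge_histogram_descriptor(frame).shape)   # (80,)
```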
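Finally, for the oscillatory synchronization abstract, here is a toy sketch of the temporal-tagging idea (heavily simplified assumptions, not the paper's two-layer model; all constants are illustrative): leaky integrate-and-fire units receive a common 10 Hz oscillation, excitatory for a "target" unit and sign-inverted (inhibitory) for a "distractor" unit, so their spikes lock to different phases.

```python
# Toy leaky integrate-and-fire units driven by a common 10 Hz oscillation.
# The target gets the oscillation as excitatory feedback, the distractor
# gets it inverted, so their spikes phase-lock differently. Illustrative only.
import numpy as np

dt, T = 1e-3, 2.0                       # 1 ms steps, 2 s simulated
steps = int(T / dt)
tau, v_th, v_reset = 20e-3, 1.0, 0.0    # membrane constant, threshold, reset
t = np.arange(steps) * dt
osc = 0.6 * np.sin(2 * np.pi * 10 * t)  # common 10 Hz oscillatory signal

def simulate(feedback_sign, seed):
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, np.zeros(steps, dtype=bool)
    for k in range(steps):
        drive = 1.2 + feedback_sign * osc[k] + 0.3 * rng.standard_normal()
        v += dt / tau * (-v + drive)    # leaky integration toward the drive
        if v >= v_th:
            spikes[k], v = True, v_reset
    return spikes

target = simulate(+1.0, seed=0)         # excitatory oscillatory feedback
distractor = simulate(-1.0, seed=1)     # inhibitory (inverted) feedback

def circular_mean(phases):
    return np.angle(np.exp(1j * phases).mean())

phase = np.mod(2 * np.pi * 10 * t, 2 * np.pi)
print("target mean spike phase    :", circular_mean(phase[target]))
print("distractor mean spike phase:", circular_mean(phase[distractor]))
```

The two printed phases differ by roughly half a cycle, which is the kind of "temporal tag" a downstream readout could use to pass targets and suppress distractors.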
