Authors: Aksoy, Selim; Avcı, Akın; Balçık, Erman; Çavuş, Özge; Duygulu, Pınar; Karaman, Zeynep; Kavak, Pınar; Kaynak, Cihan; Küçükayvaz, Emre; Öcalan, Çağdaş; Yıldız, Pınar
Date accessioned: 2016-02-08
Date available: 2016-02-08
Date issued: 2005-11
Handle: http://hdl.handle.net/11693/27390
Date of Conference: 14-15 November, 2005
Conference name: TREC Video Retrieval Evaluation (TRECVID), 2005
Abstract: We describe our second participation in the TRECVID video retrieval evaluation, which includes one high-level feature extraction run, three manual search runs, and one interactive search run. All of these runs used a system trained on the common development collection. Only visual and textual information were used: the visual information consisted of color, texture, and edge-based low-level features, and the textual information consisted of the speech transcripts provided in the collection. With the experience gained from this second participation, we are in the process of building a system for automatic classification and indexing of video archives.
Language: English
Keywords: Automatic indexing; Feature extraction; Automatic classification; High-level feature extraction; Interactive search; Low-level features; Speech transcripts; Textual information; Video retrieval; Visual information; Image retrieval
Title: Bilkent University at TRECVID 2005
Type: Conference Paper
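
The abstract mentions color-based low-level visual features among the descriptors used. As a purely illustrative sketch (not the authors' actual pipeline, whose details are not given here), a simple normalized RGB color histogram of the kind commonly used as a low-level color feature could be computed as follows; the function name and bin count are assumptions for illustration only:

```python
# Hypothetical sketch of a low-level color feature: a normalized joint RGB
# histogram. This is NOT the authors' implementation, only an example of the
# general technique the abstract refers to.

def color_histogram(pixels, bins_per_channel=4):
    """Quantize each RGB channel into bins_per_channel bins and return the
    normalized joint histogram as a flat list of length bins_per_channel**3."""
    hist = [0.0] * (bins_per_channel ** 3)
    step = 256 // bins_per_channel  # width of each quantization bin
    for r, g, b in pixels:
        # Flatten the 3-D bin coordinates (r, g, b) into a single index.
        idx = ((r // step) * bins_per_channel + (g // step)) * bins_per_channel + (b // step)
        hist[idx] += 1.0
    n = len(pixels)
    return [h / n for h in hist] if n else hist

# Example: a tiny 4-pixel "image" with two reddish and two bluish pixels.
pixels = [(255, 0, 0), (250, 5, 5), (0, 0, 255), (5, 5, 250)]
feature = color_histogram(pixels)  # 64-dimensional feature vector summing to 1
```

Such fixed-length vectors can then be compared with standard distance measures for retrieval, or fed to a classifier for high-level feature (concept) detection.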