Automatic Ranking of Retrieval Systems in Imperfect Environments

dc.citation.epage: 380
dc.citation.spage: 379
dc.contributor.author: Nuray, Rabia
dc.contributor.author: Can, Fazlı
dc.coverage.spatial: Toronto, Canada
dc.date.accessioned: 2016-02-08T11:55:17Z
dc.date.available: 2016-02-08T11:55:17Z
dc.date.issued: 2003-07-08
dc.department: Department of Computer Engineering
dc.description: Date of Conference: July 28 - August 01, 2003
dc.description: Conference name: SIGIR '03 Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
dc.description.abstract: The empirical investigation of the effectiveness of information retrieval (IR) systems requires a test collection, a set of query topics, and a set of relevance judgments made by human assessors for each query. Previous experiments show that differences in human relevance assessments do not affect the relative performance of retrieval systems. Based on this observation, we propose and evaluate a new approach that replaces human relevance judgments with an automatic method. The system ranking produced by our methodology correlates positively and significantly with the ranking obtained from human-based evaluations. In the experiments, we assume a Web-like imperfect environment: the indexing information for all documents is available for ranking, but some documents may be unavailable for retrieval, for example because of document deletions or network problems. Our method of simulating imperfect environments can be used to assess Web search engines and to estimate the effects of network conditions (e.g., network unreliability) on IR system performance.
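
The abstract describes two mechanisms: replacing human relevance judgments with automatically generated ones, and simulating an imperfect environment in which indexed documents may be unavailable at retrieval time. As a rough illustration only, the Python sketch below shows one common way such an automatic evaluation can be assembled: pool the top results of all systems for a query, treat documents returned by several systems as pseudo-relevant, score each system against those pseudo-judgments, and compare the resulting system ranking with a human-based one via Kendall's tau. The function names, pool depth, vote threshold, and precision cutoff are all illustrative assumptions; the paper's actual method (and its deletion model) may differ.

    # Hypothetical sketch of pseudo-relevance-based system ranking.
    # The pooling heuristic, thresholds, and names are illustrative
    # assumptions, not the exact method of the paper.
    from collections import Counter
    from typing import Dict, List, Set

    def pseudo_qrels(runs: Dict[str, List[str]], depth: int = 20,
                     min_votes: int = 2) -> Set[str]:
        """Treat a document as pseudo-relevant if at least min_votes
        systems rank it within their top `depth` results for a query."""
        votes = Counter()
        for ranking in runs.values():
            votes.update(ranking[:depth])
        return {doc for doc, v in votes.items() if v >= min_votes}

    def simulate_deletions(ranking: List[str], deleted: Set[str]) -> List[str]:
        """Imperfect environment: documents remain indexed (so they can
        appear in rankings), but deleted or unreachable ones are dropped
        from the results actually delivered."""
        return [doc for doc in ranking if doc not in deleted]

    def precision_at_k(ranking: List[str], relevant: Set[str], k: int = 10) -> float:
        """Fraction of the top k retrieved documents that are (pseudo-)relevant."""
        return sum(doc in relevant for doc in ranking[:k]) / k

    def rank_systems(runs: Dict[str, List[str]]) -> List[str]:
        """Order systems by precision@10 against the pseudo-judgments."""
        rel = pseudo_qrels(runs)
        scores = {name: precision_at_k(run, rel) for name, run in runs.items()}
        return sorted(scores, key=scores.get, reverse=True)

    def kendall_tau(auto: List[str], human: List[str]) -> float:
        """Kendall's tau between two orderings of the same system names:
        +1 for identical orderings, -1 for exactly reversed ones."""
        pos = {name: i for i, name in enumerate(human)}
        n = len(auto)
        concordant = sum(1 for i in range(n) for j in range(i + 1, n)
                         if pos[auto[i]] < pos[auto[j]])
        pairs = n * (n - 1) // 2
        return (2 * concordant - pairs) / pairs

A full evaluation would average scores over many query topics and use a measure such as mean average precision; precision at 10 merely keeps the sketch short.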
dc.identifier.doi: 10.1145/860435.860510
dc.identifier.uri: http://hdl.handle.net/11693/27506
dc.language.iso: English
dc.publisher: ACM
dc.relation.isversionof: https://doi.org/10.1145/860435.860510
dc.source.title: SIGIR '03 Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
dc.subject: Automatic Performance Evaluation
dc.subject: IR Evaluation
dc.subject: Automation
dc.subject: Computer simulation
dc.subject: Correlation methods
dc.subject: Database systems
dc.subject: Query languages
dc.subject: Search engines
dc.subject: World Wide Web
dc.subject: Information retrieval (IR) evaluation
dc.subject: Information retrieval systems
dc.title: Automatic Ranking of Retrieval Systems in Imperfect Environments
dc.type: Conference Paper

Files

Original bundle
Name: Automatic Ranking of Retrieval Systems in Imperfect Environments.pdf
Size: 124.27 KB
Format: Adobe Portable Document Format
Description: Full printable version