Automatic Ranking of Retrieval Systems in Imperfect Environments
dc.citation.epage | 380 | en_US |
dc.citation.spage | 379 | en_US |
dc.contributor.author | Nuray, Rabia | en_US |
dc.contributor.author | Can, Fazlı | en_US |
dc.coverage.spatial | Toronto, Canada | |
dc.date.accessioned | 2016-02-08T11:55:17Z | |
dc.date.available | 2016-02-08T11:55:17Z | |
dc.date.issued | 2003-07-08 | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.description | Date of Conference: July 28 - August 01, 2003 | |
dc.description | Conference name: SIGIR '03 Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval | |
dc.description.abstract | The empirical investigation of the effectiveness of information retrieval (IR) systems requires a test collection, a set of query topics, and a set of relevance judgments made by human assessors for each query. Previous experiments show that differences in human relevance assessments do not affect the relative performance of retrieval systems. Based on this observation, we propose and evaluate a new approach to replace the human relevance judgments by an automatic method. Ranking of retrieval systems with our methodology correlates positively and significantly with that of human-based evaluations. In the experiments, we assume a Web-like imperfect environment: the indexing information for all documents is available for ranking, but some documents may not be available for retrieval. Such conditions can be due to document deletions or network problems. Our method of simulating imperfect environments can be used for Web search engine assessment and in estimating the effects of network conditions (e.g., network unreliability) on IR system performance. | en_US |
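The abstract outlines the method only at a high level. The following Python sketch is a minimal, illustrative variant of the idea, not the paper's exact algorithm: pseudo-relevance judgments are built by pooling each system's top results and treating documents returned by several systems as relevant, a Web-like imperfect environment is simulated by randomly deleting documents, and the resulting automatic system ranking is compared with a human-based ranking via Kendall's tau. All names and parameters here (POOL_DEPTH, MIN_VOTES, DROP_RATE, the toy runs) are assumptions for illustration.

# Hedged sketch of automatic system ranking with pseudo-relevance
# judgments, under assumed parameter choices.
import random
from collections import Counter

POOL_DEPTH = 10   # top-k documents per system contributing to the pool (assumed)
MIN_VOTES = 2     # docs returned by >= this many systems count as pseudo-relevant (assumed)
DROP_RATE = 0.2   # fraction of documents made unavailable (assumed)

def pseudo_qrels(runs, pool_depth=POOL_DEPTH, min_votes=MIN_VOTES):
    """Build pseudo-relevance judgments for one query by pooling.
    runs maps system name -> ranked list of doc ids. A document is
    judged pseudo-relevant when enough systems rank it in their top k."""
    votes = Counter()
    for ranking in runs.values():
        for doc in ranking[:pool_depth]:
            votes[doc] += 1
    return {doc for doc, v in votes.items() if v >= min_votes}

def simulate_unavailable(runs, drop_rate=DROP_RATE, seed=0):
    """Randomly remove documents to mimic an imperfect environment
    (deleted pages, network failures): indexing info exists, but the
    document cannot be retrieved."""
    rng = random.Random(seed)
    all_docs = {d for r in runs.values() for d in r}
    missing = {d for d in all_docs if rng.random() < drop_rate}
    return {sys: [d for d in r if d not in missing] for sys, r in runs.items()}

def average_precision(ranking, relevant):
    """Standard average precision of one ranked list against a judgment set."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i
    return score / max(len(relevant), 1)

def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two orderings of the same set of systems."""
    pos_a = {s: i for i, s in enumerate(rank_a)}
    pos_b = {s: i for i, s in enumerate(rank_b)}
    systems = list(pos_a)
    concordant = discordant = 0
    for i in range(len(systems)):
        for j in range(i + 1, len(systems)):
            a = pos_a[systems[i]] - pos_a[systems[j]]
            b = pos_b[systems[i]] - pos_b[systems[j]]
            if a * b > 0:
                concordant += 1
            elif a * b < 0:
                discordant += 1
    n_pairs = len(systems) * (len(systems) - 1) / 2
    return (concordant - discordant) / n_pairs

# Toy example: three hypothetical systems, one query, docs d1..d8.
runs = {
    "sysA": ["d1", "d2", "d3", "d4", "d5"],
    "sysB": ["d2", "d1", "d6", "d3", "d7"],
    "sysC": ["d8", "d2", "d1", "d5", "d6"],
}
runs = simulate_unavailable(runs)      # imperfect environment
qrels = pseudo_qrels(runs)             # automatic judgments, no human assessors
auto = sorted(runs, key=lambda s: -average_precision(runs[s], qrels))
human = ["sysA", "sysB", "sysC"]       # stand-in for a human-based ranking
print("automatic ranking:", auto)
print("Kendall tau vs. human ranking:", kendall_tau(auto, human))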
dc.identifier.doi | 10.1145/860435.860510 | en_US |
dc.identifier.uri | http://hdl.handle.net/11693/27506 | en_US |
dc.language.iso | English | en_US |
dc.publisher | ACM | en_US |
dc.relation.isversionof | https://doi.org/10.1145/860435.860510 | |
dc.source.title | SIGIR '03 Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval | en_US |
dc.subject | Automatic Performance Evaluation | en_US |
dc.subject | IR Evaluation | en_US |
dc.subject | Automation | en_US |
dc.subject | Computer simulation | en_US |
dc.subject | Correlation methods | en_US |
dc.subject | Database systems | en_US |
dc.subject | Query languages | en_US |
dc.subject | Search engines | en_US |
dc.subject | World Wide Web | en_US |
dc.subject | Information retrieval (IR) evaluation | en_US |
dc.subject | Information retrieval systems | en_US |
dc.title | Automatic Ranking of Retrieval Systems in Imperfect Environments | en_US |
dc.type | Conference Paper | en_US |
Files
Original bundle
- Name: Automatic Ranking of Retrieval Systems in Imperfect Environments.pdf
- Size: 124.27 KB
- Format: Adobe Portable Document Format
- Description: Full printable version