Automatic performance evaluation of information retrieval systems using data fusion
Abstract
The empirical investigation of the effectiveness of information retrieval systems (search engines) requires a test collection composed of a set of documents, a set of query topics, and a set of relevance judgments indicating which documents are relevant to which topics. Human relevance judgments are expensive and subjective; in addition, databases and user interests change quickly. Hence there is a great need for an automatic way of evaluating the performance of search engines. Furthermore, recent studies show that differences in human relevance assessments do not affect the relative performance of information retrieval systems. Based on these observations, in this thesis we propose using data fusion to replace human relevance judgments, introduce an automatic evaluation method, and provide a comprehensive statistical assessment of it with several Text Retrieval Conference (TREC) systems, which shows that the method's results correlate positively and significantly with the actual human-based evaluations. The major contributions of this thesis are: (1) an automatic information retrieval performance evaluation method that uses data fusion algorithms for the first time in the literature, (2) system selection methods for data fusion aimed at achieving even higher correlation between automatic and human-based results, and (3) several practical implications stemming from the fact that the automatic precision values are strongly correlated with the actual, human-judgment-based precision values of information retrieval systems.
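The abstract only outlines the approach, so the following Python sketch illustrates the general idea of fusion-based pseudo-judgments. It is an assumption-laden illustration, not the thesis's actual method: the Borda-style fusion rule, the pseudo-qrel depth, and precision@n are stand-ins for whichever fusion algorithms, pool depths, and effectiveness measures the thesis actually evaluates.

```python
# Illustrative sketch only: a simple Borda-style fusion rule and
# precision@n are assumed here; they are not necessarily the fusion
# algorithms or measures used in the thesis.

from collections import defaultdict


def fuse_rankings(rankings, depth=100):
    """Fuse per-system ranked document lists for one topic.

    Borda-style rule: a document earns (depth - rank) points from each
    system that retrieves it within the top `depth` positions.
    """
    scores = defaultdict(float)
    for ranked_docs in rankings:
        for rank, doc in enumerate(ranked_docs[:depth]):
            scores[doc] += depth - rank
    return sorted(scores, key=scores.get, reverse=True)


def pseudo_qrels(rankings, k=20, depth=100):
    """Treat the top-k fused documents as pseudo-relevant,
    standing in for human relevance judgments."""
    return set(fuse_rankings(rankings, depth)[:k])


def precision_at_n(ranked_docs, relevant, n=10):
    """Fraction of the top-n retrieved documents judged relevant."""
    return sum(1 for d in ranked_docs[:n] if d in relevant) / n


# Toy example: three systems' rankings for a single topic.
systems = {
    "sysA": ["d1", "d2", "d3", "d7", "d9"],
    "sysB": ["d2", "d1", "d5", "d3", "d8"],
    "sysC": ["d1", "d4", "d2", "d6", "d3"],
}

qrels = pseudo_qrels(list(systems.values()), k=3, depth=5)
for name, run in systems.items():
    print(name, precision_at_n(run, qrels, n=5))

# The resulting automatic precision values would then be rank-correlated
# (e.g., Spearman's rho or Kendall's tau) with precision computed from
# actual human judgments to assess how well the automatic evaluation
# tracks the human-based one.
```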