Author: Nuray, Rabia
Title: Automatic performance evaluation of information retrieval systems using data fusion
Type: Thesis
Date issued: 2003
Date accessioned/available: 2016-07-01
URI: http://hdl.handle.net/11693/29371
Description: Cataloged from PDF version of article.
Physical description: xvi, 95 leaves, graphics, tables, 30 cm
Language: English
Rights: info:eu-repo/semantics/openAccess
Keywords: automatic performance evaluation; TREC; system performance prediction; social welfare functions; information retrieval system; data fusion
LC classification: Z699.A1 N87 2003
LC subject: Information storage and retrieval systems.
Local identifier: BILKUTUPB071875

Abstract:
The empirical investigation of the effectiveness of information retrieval systems (search engines) requires a test collection composed of a set of documents, a set of query topics, and a set of relevance judgments indicating which documents are relevant to which topics. Human relevance judgments are expensive and subjective; moreover, databases and user interests change quickly. There is therefore a great need for an automatic way of evaluating the performance of search engines. Furthermore, recent studies show that differences in human relevance assessments do not affect the relative performance of information retrieval systems. Based on these observations, in this thesis we propose using data fusion to replace human relevance judgments, introduce an automatic evaluation method, and provide a comprehensive statistical assessment of it with several Text REtrieval Conference (TREC) systems, which shows that the method's results correlate positively and significantly with the actual human-based evaluations. The major contributions of this thesis are: (1) an automatic information retrieval performance evaluation method that uses data fusion algorithms for the first time in the literature; (2) system selection methods for data fusion that aim at even higher correlation between automatic and human-based results; (3) several practical implications stemming from the fact that the automatic precision values are strongly correlated with those of actual information retrieval systems.
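To make the abstract's idea concrete, the following is a minimal sketch in Python of the general approach it describes: fuse the ranked result lists of several systems, treat the top of the fused list as pseudo-relevance judgments, and score each system against them. The specifics here are illustrative assumptions, not the thesis's actual configuration: a Borda count stands in for the social-welfare-style fusion named in the keywords, and the pool depth, precision cutoff, and run data are hypothetical placeholders.

from collections import defaultdict

def borda_fuse(ranked_lists, depth=100):
    """Fuse several ranked lists with a Borda count: a document at
    0-based rank r in one list earns (depth - r) points; points are
    summed across lists and documents are re-ranked by total score."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for r, doc in enumerate(ranking[:depth]):
            scores[doc] += depth - r
    return sorted(scores, key=scores.get, reverse=True)

def precision_at_k(ranking, relevant, k=10):
    """Fraction of the top-k retrieved documents that are (pseudo-)relevant."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def auto_evaluate(runs, pool_depth=10, k=10):
    """Score every system against pseudo-qrels built by fusing all runs.

    runs: {system: {topic: [doc_id, ...]}} with lists ranked best-first.
    Returns {system: mean pseudo-precision@k over topics}.
    """
    topics = next(iter(runs.values())).keys()
    # The top pool_depth fused documents per topic serve as the
    # pseudo-relevant set, replacing human relevance judgments.
    pseudo_qrels = {
        t: set(borda_fuse([runs[s][t] for s in runs])[:pool_depth])
        for t in topics
    }
    return {
        s: sum(precision_at_k(runs[s][t], pseudo_qrels[t], k) for t in topics)
           / len(topics)
        for s in runs
    }

if __name__ == "__main__":
    # Tiny hypothetical runs: three systems over two topics.
    runs = {
        "sysA": {"t1": ["d1", "d2", "d3"], "t2": ["d9", "d4", "d5"]},
        "sysB": {"t1": ["d2", "d1", "d7"], "t2": ["d4", "d9", "d6"]},
        "sysC": {"t1": ["d7", "d8", "d2"], "t2": ["d6", "d5", "d4"]},
    }
    for system, score in sorted(auto_evaluate(runs, pool_depth=2, k=3).items(),
                                key=lambda kv: -kv[1]):
        print(f"{system}: pseudo-precision@3 = {score:.3f}")

In the setting the abstract describes, the system ranking produced this way would then be compared (for example, by rank correlation) with the ranking obtained from actual human judgments, which is how the thesis validates the automatic method against TREC results.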