|M.Sc Student|Butman Olga|
|Subject|Query-Performance Prediction Using Minimal Relevance Feedback|
|Department|Department of Industrial Engineering and Management|
|Supervisor|Professor Oren Kurland|
Search engines have become a crucial means for finding information in large digital repositories such as the Web. The core task every search engine must cope with is ad hoc retrieval: finding documents in a corpus (repository) that are relevant to the information need underlying a user's query. However, the effectiveness of ad hoc retrieval can vary significantly from one query to another. Accordingly, there has been much work on devising query-performance prediction approaches that estimate search effectiveness in the absence of relevance judgments. Existing query-performance prediction methods fall into two categories: pre-retrieval and post-retrieval methods. This work focuses on post-retrieval prediction methods.
Post-retrieval prediction methods analyze the result list of top-retrieved documents. In this thesis, we show that if relevance feedback is available for a very few documents at the highest ranks of the list, even if only for the top-ranked one, it can be exploited to dramatically improve prediction quality. Empirical evaluation demonstrates that some state-of-the-art post-retrieval predictors, when employed over only a very few relevant documents, attain prediction quality that significantly exceeds that of their zero-feedback-based counterparts. Furthermore, we show that integrating prediction based on relevant documents with zero-feedback-based prediction is highly effective; specifically, the resulting prediction is much better than using direct estimates of retrieval effectiveness based on the given feedback alone.
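To make the integration idea concrete, the following is a minimal, purely illustrative sketch. It assumes a direct feedback-based estimate (the fraction of the few judged top documents that are relevant) and a zero-feedback post-retrieval predictor score already normalized to [0, 1], and combines them by linear interpolation. The function names, the choice of precision over the judged prefix, and the interpolation weight are all assumptions made for illustration; the thesis's actual estimators and integration scheme are defined in the body of the work.

```python
# Illustrative sketch only: combining minimal relevance feedback with a
# zero-feedback post-retrieval predictor. Names, the precision-based
# estimate, and the linear interpolation are assumptions, not the
# thesis's actual method.

def feedback_estimate(judgments):
    """Direct effectiveness estimate from the few judged top-ranked
    documents: the fraction of them judged relevant (1) vs. not (0)."""
    if not judgments:
        return 0.0
    return sum(judgments) / len(judgments)

def integrated_prediction(zero_feedback_score, judgments, alpha=0.5):
    """Linearly interpolate the feedback-based estimate with a
    zero-feedback predictor score assumed normalized to [0, 1]."""
    return alpha * feedback_estimate(judgments) + (1 - alpha) * zero_feedback_score

# Example: feedback for a single document (the top-ranked one, judged
# relevant) combined with a zero-feedback predictor score of 0.4.
score = integrated_prediction(0.4, [1])  # approximately 0.7
```

Even this toy combination illustrates the abstract's point: the judged prefix supplies a direct relevance signal, while the zero-feedback predictor contributes information about the rest of the result list that the few judgments cannot capture.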