Language models form a successful class of probabilistic models in information retrieval, yet why some methods perform better than others in a particular situation remains poorly understood. In this study we analyze which language model factors influence information retrieval performance. Starting from popular smoothing methods, we review which data features have been used. Document length and a measure of the document's word distribution turn out to be the important factors, in addition to the distinction between estimating the probability of seen and unseen words. We propose a class of parameter-free smoothing methods, of which multiple specific instances are possible. Instead of parameter tuning, however, an analysis of data features should be used to decide upon a specific method. Finally, we discuss some initial experiments.
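The abstract does not spell out the proposed parameter-free methods, but the role of document length in smoothing can be illustrated with standard Dirichlet smoothing, one of the popular methods the study starts from. The sketch below is an assumption-laden illustration, not the paper's method: the function name, the tokenized inputs, and the default `mu` value are all hypothetical. Note how a longer document shifts weight from the collection model to the document's own counts, which is one way document length enters the estimate.

```python
from collections import Counter

def dirichlet_smoothed_prob(word, doc_tokens, collection_tokens, mu=2000.0):
    """Dirichlet-smoothed estimate of p(word | document).

    p(w|d) = (c(w, d) + mu * p(w|C)) / (|d| + mu)

    where c(w, d) is the count of w in the document, |d| the document
    length, and p(w|C) the word's relative frequency in the collection.
    Unseen words (c(w, d) = 0) fall back on the collection probability.
    """
    doc_counts = Counter(doc_tokens)
    coll_counts = Counter(collection_tokens)
    p_collection = coll_counts[word] / len(collection_tokens)
    return (doc_counts[word] + mu * p_collection) / (len(doc_tokens) + mu)
```

For example, with a three-word document `["a", "b", "a"]`, collection `["a", "b", "c", "a"]`, and `mu=1.0`, the estimate for `"a"` is `(2 + 1 * 0.5) / (3 + 1) = 0.625`. The pseudo-count `mu` is exactly the kind of tuning parameter the study's parameter-free class aims to replace with an analysis of data features such as document length.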
Author and article information
Contributors
M. van der Heijden
Conference
Publication date: September 2008
Publication date (Print): September 2008
Pages: 30-37
Affiliations
[0001] Radboud University Nijmegen
[0002] Radboud University Nijmegen, Donders Institute for Brain Cognition and Behavior
[0003] Radboud University Nijmegen, Institute for Computing and Information Science