As various recommender approaches are increasingly considered in e-learning, the need for actual use cases to guide development efforts is growing. We report on our experiences of using non-algorithmic recommender features to recommend additional study materials on an undergraduate course in 2009–2011. The study data come from student e-questionnaire replies and actual click-by-click usage data. Our discussion centres on using a binary (useful/not useful) rating scale (2009–2010) vis-à-vis a five-star rating scale (2011). Using the five-star scale to increase the complexity of the rating decision significantly reduced dishonesty (rating items without viewing them), but at the price of fewer ratings overall and greater difficulty in interpreting the ratings. In addition to explaining how ratings and other factors jointly influenced item selection, we discuss how the two scales (binary and five-star) affect rating behaviour in e-learning and how five-star rating distributions in e-learning relate to those in other domains. Furthermore, we discuss two models of employing non-algorithmic recommender features in e-learning that emerge from our findings: a high-quality approach and a low-cost approach. The findings give the field insight into the actual dynamics of using recommender features in e-learning, and they provide practitioners with actionable information on dishonesty.