Open Access

      Toward Building Conversational Recommender Systems: A Contextual Bandit Approach

      Preprint


          Abstract

Contextual bandit algorithms have gained increasing popularity in recommender systems because they can adapt recommendations by balancing the exploration-exploitation trade-off. Recommender systems equipped with traditional contextual bandit algorithms are usually trained on behavioral feedback (e.g., clicks) from users on items. Learning can be slow because behavioral feedback by nature carries limited information, so extensive exploration has to be performed. To address this problem, we propose conversational recommendation, in which the system occasionally asks the user questions about her interests. We first generalize contextual bandits to leverage not only behavioral feedback (arm-level feedback) but also verbal feedback (users' interest in categories, topics, etc.). We then propose a new UCB-based algorithm and theoretically prove that it reduces the amount of exploration needed during learning. We also design several question-asking strategies to further speed up learning. Experiments on synthetic data, Yelp data, and news recommendation data from Toutiao demonstrate the efficacy of the proposed algorithm.
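To make the idea concrete, below is a minimal sketch of a LinUCB-style contextual bandit extended with occasional category-level questions, in the spirit of the abstract. The class name, the fixed-frequency questioning strategy, and the treatment of verbal feedback as a "pseudo-arm" update are illustrative assumptions on our part, not the paper's actual algorithm.

```python
# Hypothetical sketch: a LinUCB-style bandit that mixes arm-level
# (behavioral) feedback with occasional verbal feedback on categories.
import numpy as np

class ConversationalLinUCB:
    def __init__(self, dim, alpha=1.0, ask_every=10):
        self.alpha = alpha          # width of the UCB exploration bonus
        self.ask_every = ask_every  # ask a category question every k rounds
        self.A = np.eye(dim)        # ridge-regression Gram matrix
        self.b = np.zeros(dim)      # accumulated reward-weighted features
        self.t = 0                  # number of feedback updates so far

    def select(self, arm_features):
        """Pick the arm with the highest upper confidence bound."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b      # current ridge-regression estimate
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
                  for x in arm_features]
        return int(np.argmax(scores))

    def update(self, x, reward):
        """Standard LinUCB update from arm-level (behavioral) feedback."""
        self.A += np.outer(x, x)
        self.b += reward * x
        self.t += 1

    def should_ask(self):
        """Fixed-frequency questioning strategy (one of many possible)."""
        return self.t > 0 and self.t % self.ask_every == 0

    def update_from_question(self, category_feature, liked):
        """Fold verbal feedback on a category into the same linear model,
        treating the category's feature vector as a pseudo-arm."""
        self.update(category_feature, 1.0 if liked else 0.0)

# Illustrative usage: 5-dimensional features, three candidate arms.
bandit = ConversationalLinUCB(dim=5)
arms = [np.random.rand(5) for _ in range(3)]
choice = bandit.select(arms)
bandit.update(arms[choice], reward=1.0)
```

The design intuition matches the abstract's claim: a single category answer constrains the model along an entire feature direction, shrinking the confidence set faster than one click would, which is why verbal feedback can reduce the exploration a UCB algorithm must perform.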


                Author and article information

Date: 04 June 2019
Article: 1906.01219
Record ID: c4b5b95f-947e-4c5e-a757-e2ac72c51c42
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Custom metadata: 12 pages
Categories: cs.LG, cs.IR, stat.ML
Subjects: Information & Library science, Machine learning, Artificial intelligence
