      Is Open Access

      Exploration Bonus for Regret Minimization in Undiscounted Discrete and Continuous Markov Decision Processes

      Preprint


          Abstract

          We introduce and analyse two algorithms for exploration-exploitation in discrete and continuous Markov Decision Processes (MDPs) based on exploration bonuses. SCAL\(^+\) is a variant of SCAL (Fruit et al., 2018) that performs efficient exploration-exploitation in any unknown weakly-communicating MDP for which an upper bound \(C\) on the span of the optimal bias function is known. For an MDP with \(S\) states, \(A\) actions and \(\Gamma \leq S\) possible next states, we prove that SCAL\(^+\) achieves the same theoretical guarantees as SCAL (i.e., a high-probability regret bound of \(\widetilde{O}(C\sqrt{\Gamma SAT})\)), with a much smaller computational complexity. Similarly, C-SCAL\(^+\) exploits an exploration bonus to achieve sublinear regret in any undiscounted MDP with continuous state space. We show that C-SCAL\(^+\) achieves the same regret bound as UCCRL (Ortner and Ryabko, 2012) while being the first implementable algorithm with regret guarantees in this setting. While optimistic algorithms such as UCRL, SCAL or UCCRL maintain a high-confidence set of plausible MDPs around the true unknown MDP, SCAL\(^+\) and C-SCAL\(^+\) leverage an exploration bonus to plan directly on the empirically estimated MDP, and are therefore more computationally efficient.
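          To make the contrast in the last sentence concrete, the sketch below illustrates the general exploration-bonus idea in Python/NumPy: estimate the MDP from counts, add a visit-count-dependent bonus to the empirical rewards, and plan on that single augmented model. This is a minimal illustration under stated assumptions, not the authors' SCAL\(^+\) or C-SCAL\(^+\): the function name bonus_value_iteration, the exact bonus shape, the constants, and the crude truncation of the value span to \(C\) are all illustrative choices standing in for the paper's dedicated span-constrained planning routine.

              # Minimal sketch of exploration-bonus planning on the empirical MDP.
              # NOT the SCAL+ algorithm from the paper: bonus shape, constants and
              # the rough span truncation below are illustrative assumptions only.
              import numpy as np

              def bonus_value_iteration(counts, reward_sum, trans_counts, C,
                                        delta=0.05, tol=1e-6, max_iter=10_000):
                  """Approximate planning on the bonus-augmented empirical MDP.

                  counts[s, a]      -- number of visits to state-action pair (s, a)
                  reward_sum[s, a]  -- cumulated observed reward in (s, a), rewards assumed in [0, 1]
                  trans_counts      -- observed next-state counts, shape (S, A, S)
                  C                 -- known upper bound on the span of the optimal bias
                  """
                  S, A = counts.shape
                  n = np.maximum(counts, 1)
                  r_hat = reward_sum / n                        # empirical mean rewards
                  p_hat = trans_counts / n[:, :, None]          # empirical transition kernel
                  p_hat[counts == 0] = 1.0 / S                  # uniform guess for unvisited pairs

                  # Illustrative exploration bonus: scales with the span bound C and
                  # shrinks as 1/sqrt(n), so rarely visited pairs look more rewarding.
                  bonus = (C + 1.0) * np.sqrt(np.log(2.0 * S * A / delta) / n)
                  r_aug = np.minimum(r_hat + bonus, 1.0 + C)    # bonus-augmented reward

                  # Relative value iteration with a crude cap on the value span
                  # (a stand-in for the span-constrained planning used by SCAL/SCAL+).
                  v = np.zeros(S)
                  for _ in range(max_iter):
                      q = r_aug + np.einsum('sax,x->sa', p_hat, v)
                      v_new = q.max(axis=1)
                      v_new -= v_new.min()                      # renormalise (relative VI)
                      v_new = np.minimum(v_new, C)              # keep the span below C (roughly)
                      if np.abs(v_new - v).max() < tol:
                          v = v_new
                          break
                      v = v_new
                  policy = (r_aug + np.einsum('sax,x->sa', p_hat, v)).argmax(axis=1)
                  return policy, v

          In an optimistic-set approach (UCRL, SCAL, UCCRL) the analogous planning step would optimise over a whole confidence region of plausible MDPs, whereas here a single bonus-augmented model is solved; that is the source of the computational saving the abstract refers to. In a UCRL-style episodic scheme the returned policy would typically be recomputed only occasionally, e.g. whenever the visit count of some state-action pair doubles.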


                Author and article information

                Journal
                Published: 11 December 2018
                Article
                arXiv ID: 1812.04363

                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                Custom metadata
                Categories: cs.LG, stat.ML
                Keywords: Machine learning, Artificial intelligence
