
      Distributed Exploration in Multi-Armed Bandits

      Preprint


          Abstract

          We study exploration in Multi-Armed Bandits in a setting where \(k\) players collaborate in order to identify an \(\epsilon\)-optimal arm. Our motivation comes from recent employment of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players, and the amount of communication between them. In particular, our main result shows that by allowing the \(k\) players to communicate only once, they are able to learn \(\sqrt{k}\) times faster than a single player. That is, distributing learning to \(k\) players gives rise to a factor \(\sqrt{k}\) parallel speed-up. We complement this result with a lower bound showing this is in general the best possible. On the other extreme, we present an algorithm that achieves the ideal factor \(k\) speed-up in learning performance, with communication only logarithmic in \(1/\epsilon\).
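To make the one-round result concrete, below is a minimal Python sketch of such a protocol, under simplifying assumptions that are mine and not the paper's: rewards are Bernoulli, every player spends the same per-arm budget, and the single communication round consists of each player reporting its empirically best arm together with that arm's empirical mean, after which the arm with the highest reported mean is selected. The names (one_round_protocol, explore, pull) and the aggregation rule are illustrative simplifications, not the paper's exact algorithm.

import random

def pull(arm_means, i):
    """Simulate one Bernoulli pull of arm i (stand-in for the real environment)."""
    return 1.0 if random.random() < arm_means[i] else 0.0

def explore(arm_means, budget_per_arm):
    """One player's local phase: sample every arm uniformly and return
    (index of empirically best arm, its empirical mean)."""
    estimates = []
    for i in range(len(arm_means)):
        rewards = [pull(arm_means, i) for _ in range(budget_per_arm)]
        estimates.append(sum(rewards) / budget_per_arm)
    best = max(range(len(estimates)), key=lambda i: estimates[i])
    return best, estimates[best]

def one_round_protocol(arm_means, k, budget_per_arm):
    """k players explore independently, then communicate exactly once:
    each sends a single message (best arm, empirical mean), and the arm
    with the highest reported mean wins. One message per player, as in
    the paper's one-communication-round setting; the voting rule here is
    a simplification."""
    reports = [explore(arm_means, budget_per_arm) for _ in range(k)]
    best_arm, _ = max(reports, key=lambda r: r[1])
    return best_arm

if __name__ == "__main__":
    means = [0.5, 0.55, 0.6, 0.62, 0.7]   # arm 4 is optimal
    print(one_round_protocol(means, k=8, budget_per_arm=200))

In this setup, the paper's result says each player can get away with roughly a factor sqrt(k) less sampling than a single player would need, despite only the one round of communication.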


Most cited references (3)


          MapReduce


            Distributed Learning in Multi-Armed Bandit with Multiple Players

(2010)
We formulate and study a decentralized multi-armed bandit (MAB) problem. There are M distributed players competing for N independent arms. Each arm, when played, offers i.i.d. reward according to a distribution with an unknown parameter. At each time, each player chooses one arm to play without exchanging observations or any information with other players. Players choosing the same arm collide, and, depending on the collision model, either no one receives reward or the colliding players share the reward in an arbitrary way. We show that the minimum system regret of the decentralized MAB grows with time at the same logarithmic order as in the centralized counterpart where players act collectively as a single entity by exchanging observations and making decisions jointly. A decentralized policy is constructed to achieve this optimal order while ensuring fairness among players and without assuming any pre-agreement or information exchange among players. The proposed policy is based on a Time Division Fair Sharing (TDFS) of the M best arms, and its order optimality is proven under a general reward model. Furthermore, the basic structure of the TDFS policy can be used with any order-optimal single-player policy to achieve order optimality in the decentralized setting. We also establish a lower bound on the system regret growth rate for a general class of decentralized policies, to which the proposed policy belongs. This problem finds potential applications in cognitive radio networks, multi-channel communication systems, multi-agent systems, web search and advertising, and social networks.
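The time-division idea behind TDFS can be illustrated with a toy schedule. The sketch below is my illustration, not the authors' code, and it assumes for simplicity that all M players already agree on a ranking of the M best arms (the actual policy must learn this online without any information exchange): offsetting a shared round-robin by each player's index gives collision-free, fair sharing.

def tdfs_arm(player, t, top_arms):
    """Toy Time Division Fair Sharing schedule: player `player` (0-indexed)
    at time step t plays the arm ranked (player + t) mod M among the M
    currently-believed-best arms. Distinct players get distinct ranks at
    every step, so no two players collide, and over any M consecutive
    steps each player visits every top arm once (fairness)."""
    M = len(top_arms)
    return top_arms[(player + t) % M]

# Example: 3 players sharing arms [7, 2, 5] (ranked best to worst).
for t in range(3):
    print(t, [tdfs_arm(p, t, [7, 2, 5]) for p in range(3)])
# step 0 -> players play arms 7, 2, 5; step 1 -> 2, 5, 7; step 2 -> 5, 7, 2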

              Pure Exploration in Multi-armed Bandits Problems


                Author and article information

Published: 04 November 2013
Type: Article (preprint)
arXiv ID: 1311.0800
Record ID: abf09dc5-d85c-4cad-a29d-c92d625e10cb
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

Subject: cs.LG
