
      AlphaZe∗∗: AlphaZero-like baselines for imperfect information games are surprisingly strong

Methods article


          Abstract

          In recent years, deep neural networks for strategy games have made significant progress. AlphaZero-like frameworks which combine Monte-Carlo tree search with reinforcement learning have been successfully applied to numerous games with perfect information. However, they have not been developed for domains where uncertainty and unknowns abound, and are therefore often considered unsuitable due to imperfect observations. Here, we challenge this view and argue that they are a viable alternative for games with imperfect information—a domain currently dominated by heuristic approaches or methods explicitly designed for hidden information, such as oracle-based techniques. To this end, we introduce a novel algorithm based solely on reinforcement learning, called AlphaZe∗∗, which is an AlphaZero-based framework for games with imperfect information. We examine its learning convergence on the games Stratego and DarkHex and show that it is a surprisingly strong baseline, while using a model-based approach: it achieves similar win rates against other Stratego bots like Pipeline Policy Space Response Oracle (P2SRO), while not winning in direct comparison against P2SRO or reaching the much stronger numbers of DeepNash. Compared to heuristics and oracle-based approaches, AlphaZe∗∗ can easily deal with rule changes, e.g., when more information than usual is given, and drastically outperforms other approaches in this respect.
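The abstract contrasts AlphaZe∗∗ with oracle-based and determinization techniques for hidden information; one of the article's keywords, perfect information Monte-Carlo (PIMC), is the classical determinization baseline. As a rough illustration of that baseline idea only (the hooks `legal_actions`, `sample_determinization`, and `evaluate_action` are hypothetical placeholders, not the paper's API), a minimal sketch:

```python
def pimc_action(observation, legal_actions, sample_determinization,
                evaluate_action, n_samples=32):
    """Perfect Information Monte-Carlo: sample complete game states that are
    consistent with the current (imperfect) observation, score every legal
    action in each sampled perfect-information state, and return the action
    with the highest average score."""
    totals = {a: 0.0 for a in legal_actions(observation)}
    for _ in range(n_samples):
        state = sample_determinization(observation)  # guess the hidden part
        for action in totals:
            totals[action] += evaluate_action(state, action)
    return max(totals, key=lambda a: totals[a] / n_samples)
```

The key weakness the paper alludes to is visible in the sketch: every sampled state is evaluated as if it were fully observed, so PIMC cannot reason about information-gathering or bluffing moves.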


Most cited references (40)


          Human-level control through deep reinforcement learning.

          The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
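The temporal-difference update at the heart of the deep Q-network described above can be stated compactly. The sketch below is a generic illustration, not code from the cited paper; `next_q_values` stands in for the network's Q-value outputs on the successor state:

```python
import numpy as np

def dqn_target(reward, next_q_values, gamma=0.99, terminal=False):
    """One-step TD target for deep Q-learning:
    y = r                            if the episode ended,
    y = r + gamma * max_a' Q(s', a') otherwise (bootstrapped estimate)."""
    if terminal:
        return float(reward)
    return float(reward) + gamma * float(np.max(next_q_values))
```

The network is then trained by regressing its prediction Q(s, a) toward this target y, which is what lets a single architecture and hyperparameter set work across all 49 Atari games.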

            Squeeze-and-Excitation Networks


              Mastering the game of Go with deep neural networks and tree search.

              The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
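The search described above balances the value network's estimate of a move against the policy network's prior through a PUCT-style selection rule. A minimal sketch of that score (the exploration constant `c_puct` and the `children` layout are illustrative assumptions, not values from the paper):

```python
import math

def puct_score(q_value, prior, parent_visits, child_visits, c_puct=1.5):
    """AlphaGo-style selection score: exploit the current mean value Q while
    exploring moves the policy network considers promising but rarely tried."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + exploration

def select_child(children, c_puct=1.5):
    """Pick the action with the highest PUCT score.
    `children` maps action -> (q_value, prior, visit_count)."""
    parent_visits = sum(v for (_, _, v) in children.values())
    return max(children,
               key=lambda a: puct_score(children[a][0], children[a][1],
                                        parent_visits, children[a][2], c_puct))
```

As visit counts grow, the exploration term shrinks and selection converges toward the moves with the best simulated values, which is how the search refines the raw policy-network suggestions.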

                Author and article information

Journal
Front. Artif. Intell. (Frontiers in Artificial Intelligence)
Publisher: Frontiers Media S.A.
ISSN: 2624-8212
Published: 12 May 2023
Volume: 6
Article: 1014561
Affiliations
1. Artificial Intelligence and Machine Learning Lab, Technical University of Darmstadt, Darmstadt, Germany
2. Hessian Center for Artificial Intelligence (hessian.AI), Darmstadt, Germany
3. Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany
                Author notes

                Edited by: Gabriele Gianini, University of Milan, Italy

                Reviewed by: Hang Shuai, The University of Tennessee, Knoxville, United States; Wolfgang Konen, Technical University of Cologne, Germany

*Correspondence: Jannis Blüml jannis.blueml@tu-darmstadt.de

                †ORCID: Jannis Blüml orcid.org/0000-0002-9400-0946

                Article
DOI: 10.3389/frai.2023.1014561
PMCID: 10213697
                Copyright © 2023 Blüml, Czech and Kersting.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 08 August 2022
Accepted: 20 April 2023
                Page count
                Figures: 6, Tables: 6, Equations: 8, References: 41, Pages: 18, Words: 14907
                Funding
                Funded by: Hessisches Ministerium für Wissenschaft und Kunst, doi 10.13039/501100003495;
                Award ID: Cluster project "The Third Wave of Artificial Intelligence - 3AI"
                This work was partially funded by the Hessian Ministry of Science and the Arts (HMWK) within the cluster project The Third Wave of Artificial Intelligence—3AI.
                Categories
                Artificial Intelligence
                Methods
                Custom metadata
                Machine Learning and Artificial Intelligence

imperfect information games, deep neural networks, reinforcement learning, alphazero, monte-carlo tree search, perfect information monte-carlo
