
      Predictive learning as a network mechanism for extracting low-dimensional latent space representations

      research-article


          Abstract

          Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task’s low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations, we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.
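The "linear decodability" measure mentioned in the abstract can be illustrated without reproducing the paper's trained network. The sketch below drives a fixed, untrained random recurrent network (an echo-state-style stand-in, not the authors' model) with high-dimensional nonlinear observations of a one-dimensional latent variable on a ring, then ridge-regresses the latent ring coordinates from the hidden states; all sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D latent variable (an angle on a ring) driving high-dimensional observations
T = 2000
theta = np.cumsum(rng.normal(0, 0.1, T))              # latent trajectory
latent = np.stack([np.cos(theta), np.sin(theta)], 1)  # ring embedding of the latent

# Nonlinear, high-dimensional observations of the latent state
W_obs = rng.normal(size=(2, 50))
obs = np.tanh(latent @ W_obs + 0.05 * rng.normal(size=(T, 50)))

# A fixed random RNN driven by the observations (untrained; illustrative only)
N = 200
W_in = rng.normal(0, 0.5, (50, N))
W_rec = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
h = np.zeros(N)
H = np.empty((T, N))
for t in range(T):
    h = np.tanh(obs[t] @ W_in + h @ W_rec)
    H[t] = h

# Linear decodability: ridge-regress the latent ring coordinates from hidden states
lam = 1.0
beta = np.linalg.solve(H.T @ H + lam * np.eye(N), H.T @ latent)
pred = H @ beta
r2 = 1 - ((latent - pred) ** 2).sum() / ((latent - latent.mean(0)) ** 2).sum()
print(f"linear decoding R^2 of latent variables from RNN states: {r2:.3f}")
```

A high R² here only shows that the latent variable is linearly readable from the network state; the paper's point is that *training to predict* shapes representations so that this readout becomes easy and low-dimensional.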

          Summary

          Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.

          Related collections

          Most cited references (51)


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
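The backpropagation step described above can be shown in miniature. This is a generic two-layer network trained on XOR by hand-written backpropagation, not code from the cited paper; layer sizes, learning rate, and seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: XOR, which no single linear layer can represent
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

# Two layers: each computes a representation from the previous layer's output
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)                 # hidden representation
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output

    # backward pass: backpropagation tells each layer how to change its
    # parameters, given the error signal propagated back through later layers
    d_out = (out - y) / len(X)               # dLoss/d(pre-sigmoid), cross-entropy
    d_h = (d_out @ W2.T) * (1 - h ** 2)      # propagate through tanh
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))
```

The hidden layer learns an internal representation in which the two XOR classes become linearly separable, a small-scale version of the multi-level abstraction the abstract describes.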

            Human-level control through deep reinforcement learning.

            The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
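The temporal-difference update at the heart of the deep Q-network can be shown in its simplest, tabular form. This sketch is not a DQN (no neural network, replay buffer, or target network); it runs Q-learning on a hypothetical five-state corridor to isolate the update rule that DQN approximates with a deep network.

```python
import random

random.seed(0)

# Tabular Q-learning on a 5-state corridor with reward only at the right end.
# DQN replaces this lookup table with a deep network trained toward the same
# temporal-difference target.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                        # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

def greedy(s):
    best = max(Q[(s, b)] for b in ACTIONS)
    return random.choice([b for b in ACTIONS if Q[(s, b)] == best])

for episode in range(300):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # TD target: r + gamma * max_a' Q(s', a'), with no bootstrap at the goal
        target = r + (0.0 if s2 == GOAL else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

policy = [greedy(s) for s in range(GOAL)]
print(policy)
```

After training, the greedy policy steps right in every state; the "phasic dopaminergic signal" parallel noted in the abstract corresponds to the TD error `target - Q[(s, a)]`.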

              Reducing the dimensionality of data with neural networks.

              High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
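A minimal version of the bottleneck idea can be written directly. This is a *linear* autoencoder trained by plain gradient descent on synthetic data, a deliberately simplified stand-in: the cited paper's contribution is the layer-wise pretraining of *deep, nonlinear* autoencoders, which this sketch does not implement. All sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data lying near a 2-D subspace of a 10-D space
Z = rng.normal(size=(500, 2))                   # true low-dimensional codes
M = rng.normal(size=(2, 10))
X = Z @ M + 0.01 * rng.normal(size=(500, 10))

# Linear autoencoder (10 -> 2 -> 10) trained on reconstruction error
W_enc = rng.normal(0, 0.1, (10, 2))
W_dec = rng.normal(0, 0.1, (2, 10))
lr = 0.01
for step in range(3000):
    code = X @ W_enc                  # small central layer: the low-dim code
    err = code @ W_dec - X            # reconstruction error
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse = float(((X @ W_enc @ W_dec - X) ** 2).mean())
print(f"reconstruction MSE through the 2-D bottleneck: {mse:.4f}")
```

In the linear case the optimum recovers the principal subspace, so this matches (rather than beats) PCA; the paper's claim that autoencoders outperform PCA relies on the nonlinear, pretrained deep variant.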

                Author and article information

                Contributors
                stefanor@uw.edu
                Journal
                Nat Commun
                Nat Commun
                Nature Communications
                Nature Publishing Group UK (London )
                2041-1723
                3 March 2021
                3 March 2021
                2021
                Volume: 12
                Article number: 1417
                Affiliations
                [1 ]GRID grid.34477.33, ISNI 0000000122986657, University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, ; Seattle, WA USA
                [2 ]GRID grid.34477.33, ISNI 0000000122986657, Department of Applied Mathematics, , University of Washington, ; Seattle, WA USA
                [3 ]GRID grid.14848.31, ISNI 0000 0001 2292 3357, Department of Mathematics and Statistics, , Université de Montréal, ; Montreal, QC Canada
                [4 ]Mila-Quebec Artificial Intelligence Institute, Montreal, QC Canada
                [5 ]Group for Neural Theory, École Normale Supérieure, Paris, France
                [6 ]GRID grid.481554.9, IBM Research AI, ; Yorktown Heights, NY USA
                [7 ]GRID grid.417881.3, Allen Institute for Brain Science, ; Seattle, WA USA
                Author information
                http://orcid.org/0000-0002-3576-9261
                http://orcid.org/0000-0003-2730-7291
                http://orcid.org/0000-0001-6466-2810
                Article
                21696
                DOI: 10.1038/s41467-021-21696-1
                PMCID: 7930246
                PMID: 33658520
                76e421b7-4684-4ffa-b875-d54246ffa04c
                © The Author(s) 2021

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 12 July 2019
                Accepted: 22 January 2021
                Funding
                Funded by: FundRef https://doi.org/10.13039/100008240, Fonds de Recherche du Québec-Société et Culture (FRQSC);
                Award ID: 2019-NC-253251
                Award Recipient :
                Funded by: FundRef https://doi.org/10.13039/501100002790, Natural Sciences and Engineering Research Council of Canada (NSERC);
                Award ID: RGPIN-2018-04821
                Award Recipient :
                Funded by: Swartz Foundation Grant 807150; NSF DMS Grant 1514743
                Categories
                Article

                Keywords
                intelligence, perception, dynamical systems, learning algorithms, network models
