
      Learning-Based Model Predictive Control: Toward Safe Learning in Control


          Abstract

          Recent successes in the field of machine learning, as well as the availability of increased sensing and computational capabilities in modern control systems, have led to a growing interest in learning and data-driven control techniques. Model predictive control (MPC), as the prime methodology for constrained control, offers a significant opportunity to exploit the abundance of data in a reliable manner, particularly while taking safety constraints into account. This review aims at summarizing and categorizing previous research on learning-based MPC, i.e., the integration or combination of MPC with learning methods, for which we consider three main categories. Most of the research addresses learning for automatic improvement of the prediction model from recorded data. There is, however, also an increasing interest in techniques to infer the parameterization of the MPC controller, i.e., the cost and constraints, that lead to the best closed-loop performance. Finally, we discuss concepts that leverage MPC to augment learning-based controllers with constraint satisfaction properties.
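As a purely illustrative companion to the first category above (improving the prediction model from recorded data), the following minimal sketch fits a linear model to recorded transitions by least squares and then solves a constrained finite-horizon MPC problem with the learned matrices. It assumes the cvxpy modeling library; the system, horizon, constraint bound, and weights are hypothetical and not taken from the article.

```python
# Minimal learning-based MPC sketch (illustrative only, not the review's
# algorithm): fit a linear prediction model from recorded data, then solve
# a constrained finite-horizon MPC problem with the learned model.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# "True" system (unknown to the controller), used only to record data.
A_true = np.array([[1.0, 0.1], [0.0, 0.95]])
B_true = np.array([[0.0], [0.1]])

# Record (x_k, u_k, x_{k+1}) transitions under random excitation.
X, U, Xn = [], [], []
x = np.array([1.0, 0.0])
for _ in range(200):
    u = rng.uniform(-1.0, 1.0, size=1)
    x_next = A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next

# Learning step: least-squares fit of x_{k+1} ~ [A B] [x; u].
Z = np.hstack([np.array(X), np.array(U)])              # (200, 3)
Theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]

# MPC with the learned model and an input constraint |u| <= u_max.
N, Q, R, u_max = 15, np.eye(2), np.eye(1), 0.5
x0 = np.array([1.0, 0.0])
xv = cp.Variable((2, N + 1))
uv = cp.Variable((1, N))
cost, constr = 0, [xv[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(xv[:, k], Q) + cp.quad_form(uv[:, k], R)
    constr += [xv[:, k + 1] == A_hat @ xv[:, k] + B_hat @ uv[:, k],
               cp.abs(uv[:, k]) <= u_max]
cp.Problem(cp.Minimize(cost), constr).solve()
print("first planned input:", uv.value[:, 0])
```

In a receding-horizon implementation, only the first planned input would be applied, the problem re-solved at the next sampling instant, and the model refit as new data are recorded.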


Most cited references: 116


          Human-level control through deep reinforcement learning.

          The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
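The deep Q-network itself combines a convolutional network, experience replay, and a target network; those components are omitted in the following minimal tabular sketch, which only illustrates the temporal-difference (Q-learning) update the abstract refers to, on a made-up five-state chain task with hypothetical parameters.

```python
# Minimal tabular Q-learning sketch on a toy 5-state chain MDP.
# Illustrates the temporal-difference update; it is not the deep
# Q-network (no neural network, replay buffer, or target network).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9               # step size, discount factor

def step(s, a):
    """Move along the chain; reward 1 only for reaching the right end."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

for episode in range(500):
    s = 0
    for _ in range(30):
        a = int(rng.integers(n_actions))   # uniform random exploration (off-policy)
        s_next, r = step(s, a)
        # Temporal-difference (Q-learning) update toward the bootstrapped target.
        td_target = r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (td_target - Q[s, a])
        s = s_next

print("greedy policy:", np.argmax(Q, axis=1))  # expect all 1s (move right)
```

The DQN replaces the table Q with a neural network trained toward the same bootstrapped target, with experience replay and a periodically frozen target network added to stabilize learning.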

            Taking the Human Out of the Loop: A Review of Bayesian Optimization


              Constrained model predictive control: Stability and optimality


                Author and article information

Journal
Annual Review of Control, Robotics, and Autonomous Systems (Annu. Rev. Control Robot. Auton. Syst.)
Publisher: Annual Reviews
ISSN: 2573-5144
Published: May 03 2020
Volume: 3
Issue: 1
Pages: 269-296

Affiliations
[1] Institute for Dynamic Systems and Control, ETH Zurich, Zurich 8092, Switzerland

Article
DOI: 10.1146/annurev-control-090419-075625
© 2020

Keywords
Social & Information Networks, Data Structures & Algorithms, Performance, Systems & Control, Robotics, Neural & Evolutionary Computing, Artificial Intelligence
