Echo State Networks trained by Tikhonov least squares are L²(μ) approximators of ergodic dynamical systems

      Preprint


          Abstract

          Echo State Networks (ESNs) are a class of single-layer recurrent neural networks with randomly generated internal weights and a single layer of tuneable outer weights, which are usually trained by regularised linear least squares regression. Remarkably, ESNs still enjoy the universal approximation property despite the training procedure being entirely linear. In this paper, we prove that an ESN trained on a sequence of scalar observations from an ergodic dynamical system (with invariant measure μ) using Tikhonov least squares will approximate future observations of the dynamical system in the L²(μ) norm. We call this the ESN Training Theorem. We demonstrate the theory numerically by training an ESN using Tikhonov least squares on a sequence of scalar observations of the Lorenz system, and compare the invariant measure of these observations with the invariant measure of the future predictions of the autonomous ESN.
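          The procedure the abstract describes — a fixed random reservoir driven by scalar Lorenz observations, with only the outer weights fit by Tikhonov (ridge) least squares, then run autonomously on its own predictions — can be sketched as follows. This is a minimal illustration only: the reservoir size, spectral radius, regularisation strength λ, washout length, and the Euler discretisation of the Lorenz system are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar observations: the x-coordinate of an Euler-discretised Lorenz system.
def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

n_steps = 4000
s = np.array([1.0, 1.0, 1.0])
obs = np.empty(n_steps)
for t in range(n_steps):
    s = lorenz_step(s)
    obs[t] = s[0]

# Fixed random reservoir with update x_{t+1} = tanh(A x_t + C u_t);
# A and C are generated once and never trained.
N = 300
A = rng.normal(size=(N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius below 1
C = 0.1 * rng.normal(size=N)

X = np.zeros((n_steps, N))
x = np.zeros(N)
for t in range(n_steps - 1):
    x = np.tanh(A @ x + C * obs[t])
    X[t + 1] = x  # X[t] summarises the observations obs[0..t-1]

# Tikhonov (ridge) least squares for the outer weights:
# W = argmin ||X W - obs||^2 + lam ||W||^2, in closed form.
washout, lam = 100, 1e-6
Xw, yw = X[washout:], obs[washout:]
W = np.linalg.solve(Xw.T @ Xw + lam * np.eye(N), Xw.T @ yw)

rmse = np.sqrt(np.mean((Xw @ W - yw) ** 2))  # one-step training error

# Autonomous ESN: feed its own prediction back in as the next input.
auto = np.empty(500)
for t in range(500):
    auto[t] = W @ x
    x = np.tanh(A @ x + C * auto[t])
```

          The only training step is the single linear solve for W; everything inside the reservoir stays random and fixed, which is the point the abstract calls "entirely linear" training. The autonomous trajectory `auto` is what one would histogram to compare invariant measures as in the paper's numerical demonstration.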


          Author and article information

          Posted: 14 May 2020 (preprint)
          arXiv: 2005.06967
          Record ID: fe271fb1-8a72-48c7-a77d-8d9046302acd
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
          Comments: 10 pages, 5 figures
          Subjects: cs.LG, math.DS, stat.ML
          Keywords: Differential equations & Dynamical systems; Machine learning; Artificial intelligence
