
      Dreaming neural networks: rigorous results

      Preprint


          Abstract

Recently, a daily routine for associative neural networks has been proposed: the network Hebbian-learns during the awake state (thus behaving as a standard Hopfield model); then, during its sleep state, optimizing information storage, it consolidates pure patterns and removes spurious ones, forcing the synaptic matrix to collapse to the projector one (ultimately approaching the Kanter-Sompolinsky model). This procedure keeps the learning Hebbian-based (a biological must) but, by taking advantage of a (properly stylized) sleep phase, still reaches the maximal critical capacity (for symmetric interactions). So far this emerging picture (as well as the bulk of papers on unlearning techniques) has been supported solely by mathematically challenging routes, mainly replica-trick analysis and numerical simulations: here we rely extensively on Guerra's interpolation techniques developed for neural networks and, in particular, we extend the generalized stochastic-stability approach to this case. Confining our description to the replica-symmetric approximation (where the previous analyses lie), the picture painted by this generalization (and by the previously existing variations on the theme) is entirely confirmed. Further, still relying on Guerra's schemes, we develop a systematic fluctuation analysis to check where ergodicity is broken (an analysis entirely absent in previous investigations). We find that, as long as the network is awake, ergodicity is bounded by the Amit-Gutfreund-Sompolinsky critical line (as it should be), but sleep destroys spin-glass states, extending both the retrieval and the ergodic regions: after an entire sleeping session the only surviving regions are the retrieval and ergodic ones, which allows the network to achieve the perfect-retrieval regime (the number of storable patterns equals the number of neurons in the network).
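
To make the two endpoints of the routine concrete, here is a minimal Python sketch (our illustration, not the paper's interpolation machinery; the sizes, corruption level, and zero-temperature dynamics are arbitrary choices). It builds the Hebbian couplings of the awake/Hopfield phase and the projector couplings of the fully slept/Kanter-Sompolinsky phase, then compares retrieval at load α = P/N = 0.2, above the Amit-Gutfreund-Sompolinsky capacity α_c ≈ 0.138 where Hebbian retrieval typically fails while the projector network still retrieves.

    import numpy as np

    rng = np.random.default_rng(0)

    N, P = 200, 40                         # neurons, patterns: load alpha = P/N = 0.2
    xi = rng.choice([-1, 1], size=(P, N))  # random binary patterns

    # Awake (Hebbian / Hopfield) couplings: J_ij = (1/N) sum_mu xi_i^mu xi_j^mu
    J_hebb = (xi.T @ xi) / N

    # Fully slept (projector / Kanter-Sompolinsky) couplings:
    # J = (1/N) xi^T C^{-1} xi, with C_{mu,nu} = (1/N) sum_i xi_i^mu xi_i^nu.
    C = (xi @ xi.T) / N
    J_proj = xi.T @ np.linalg.inv(C) @ xi / N

    def retrieve(J, state, sweeps=20):
        """Zero-temperature asynchronous dynamics: s_i <- sign(local field)."""
        s = state.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                h = J[i] @ s - J[i, i] * s[i]  # exclude self-coupling
                s[i] = 1 if h >= 0 else -1
        return s

    # Probe: pattern 0 with roughly 15% of its bits flipped.
    probe = xi[0] * rng.choice([1, -1], size=N, p=[0.85, 0.15])
    for name, J in [("Hebbian (awake)", J_hebb), ("projector (slept)", J_proj)]:
        m = retrieve(J, probe) @ xi[0] / N
        print(f"{name}: overlap with pattern 0 = {m:+.2f}")

At this load the Hebbian run should typically relax to a spurious state with a reduced overlap, while the projector run recovers the pattern almost exactly, mirroring the extension of the retrieval region described in the abstract.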


Most cited references (30)


          Spin-glass models of neural networks


Analytic and algorithmic solution of random satisfiability problems

We study the satisfiability of random Boolean expressions built from many clauses with K variables per clause (K-satisfiability). Expressions with a ratio α of clauses to variables less than a threshold α_c are almost always satisfiable, whereas those with a ratio above this threshold are almost always unsatisfiable. We show the existence of an intermediate phase below α_c, where the proliferation of metastable states is responsible for the onset of complexity in search algorithms. We introduce a class of optimization algorithms that can deal with these metastable states; one such algorithm has been tested successfully on the largest existing benchmark of K-satisfiability. (A minimal brute-force illustration of this K-SAT setup follows the reference list.)

              Storing Infinite Numbers of Patterns in a Spin-Glass Model of Neural Networks

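
As promised above, the K-SAT setup described in the random-satisfiability abstract is easy to reproduce at toy scale. The sketch below is our own (the helper names random_ksat and satisfiable are hypothetical, and it brute-forces assignments rather than using the message-passing algorithms that paper introduces): it generates random 3-SAT formulas at a clause-to-variable ratio α and estimates the satisfiable fraction, which should drop sharply near the 3-SAT threshold, believed to lie at α_c ≈ 4.27.

    import itertools
    import random

    random.seed(0)

    def random_ksat(n_vars, n_clauses, k=3):
        """Random K-SAT: each clause draws k distinct variables, each negated
        with probability 1/2; literal +v / -v refers to variable v in 1..n_vars."""
        return [tuple(random.choice([v, -v])
                      for v in random.sample(range(1, n_vars + 1), k))
                for _ in range(n_clauses)]

    def satisfiable(formula, n_vars):
        """Brute-force check over all 2^n assignments (tiny n only)."""
        for bits in itertools.product([False, True], repeat=n_vars):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in formula):
                return True
        return False

    # Estimate P(satisfiable) as a function of alpha = clauses / variables.
    n, trials = 12, 30
    for alpha in (3.0, 4.0, 4.3, 5.0):
        hits = sum(satisfiable(random_ksat(n, int(alpha * n)), n)
                   for _ in range(trials))
        print(f"alpha = {alpha:.1f}: sat fraction ~ {hits / trials:.2f}")

At such small n the transition is heavily smeared by finite-size effects; it sharpens only as n grows.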

                Author and article information

Journal: arXiv (preprint)
Posted: 21 December 2018
Article: 1812.09077
Record ID: 74b4cc87-f1af-4809-bed5-ad2172a742ad
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

Custom metadata: Roma01.Math
arXiv categories: cond-mat.dis-nn, cs.AI, stat.ML
Keywords: Theoretical physics, Machine learning, Artificial intelligence
