      Convergence and Objective Functions of Some Fault/Noise-Injection-Based Online Learning Algorithms for RBF Networks


Most cited references (37)


          Training with Noise is Equivalent to Tikhonov Regularization


            Pruning algorithms-a survey.

            R. Reed (1993)
            A rule of thumb for obtaining good generalization in systems trained by examples is that one should use the smallest system that will fit the data. Unfortunately, it usually is not obvious what size is best; a system that is too small will not be able to learn the data while one that is just big enough may learn very slowly and be very sensitive to initial conditions and learning parameters. This paper is a survey of neural network pruning algorithms. The approach taken by the methods described here is to train a network that is larger than necessary and then remove the parts that are not needed.
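The "train a network that is larger than necessary and then remove the parts that are not needed" strategy can be sketched with simple magnitude-based pruning. This is a minimal illustration, not a method from the survey itself; the function name and the 50% pruning fraction are arbitrary choices for the example, and real pruning algorithms also use sensitivity or saliency measures rather than weight magnitude alone.

```python
import numpy as np

def prune_by_magnitude(weights, fraction=0.5):
    """Zero out the given fraction of smallest-magnitude weights.

    A toy sketch of post-training pruning: start from an oversized
    trained weight matrix and remove the connections that contribute
    least (here, judged by absolute value).
    """
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Example: prune half the weights of a tiny 2x2 layer
w = np.array([[0.01, -2.0],
              [0.50, -0.02]])
print(prune_by_magnitude(w, fraction=0.5))
```

After pruning, the network would normally be retrained briefly so the surviving weights can compensate for the removed connections.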

              Network information criterion-determining the number of hidden units for an artificial neural network model.

              The problem of model selection, or determination of the number of hidden units, can be approached statistically, by generalizing Akaike's information criterion (AIC) to be applicable to unfaithful (i.e., unrealizable) models with general loss criteria including regularization terms. The relation between the training error and the generalization error is studied in terms of the number of the training examples and the complexity of a network which reduces to the number of parameters in the ordinary statistical theory of AIC. This relation leads to a new network information criterion which is useful for selecting the optimal network model based on a given training set.
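Schematically, the resulting criterion trades training error against model complexity. The notation below is simplified from the paper: $N$ is the number of training examples, $\hat{w}$ the trained parameters, $\ell$ the loss, $Q$ the Hessian of the expected loss, and $G$ the covariance of its gradient.

```latex
\mathrm{NIC} \;=\; \frac{1}{N}\sum_{i=1}^{N} \ell\bigl(y_i, f(x_i;\hat{w})\bigr)
\;+\; \frac{1}{N}\,\operatorname{tr}\!\bigl(G\,Q^{-1}\bigr)
```

For a faithful model with log-likelihood loss, $G = Q$ (the Fisher information), so the trace reduces to the number of parameters $k$ and the penalty recovers AIC's $k/N$; for unfaithful or regularized models the trace term gives the effective complexity instead.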

Author and article information

Journal: IEEE Transactions on Neural Networks (IEEE Trans. Neural Netw.)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 1045-9227 (print), 1941-0093 (electronic)
Publication date: June 2010
Volume: 21, Issue: 6, Pages: 938-947
DOI: 10.1109/TNN.2010.2046179
Article ID: d3a5de9c-fa57-46de-bafb-66f3fe92cdc1
© 2010
