      Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets

      Preprint


          Abstract

          Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either type of model can often be transformed into an instance of the other by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary, clarify when each is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.
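
          To make the centered/non-centered distinction concrete, below is a minimal sketch (not the authors' code) for a single Gaussian latent variable. It assumes the standard pathwise construction z = mu + sigma * eps with eps ~ N(0, 1); the function name pathwise_grad and the toy integrand f(z) = z**2 are invented for illustration.

              import numpy as np

              rng = np.random.default_rng(0)

              def pathwise_grad(mu, sigma, f, f_prime, n_samples=100_000):
                  # Centered parameterization (CP): z ~ N(mu, sigma^2); the sample is
                  # not a differentiable function of (mu, sigma).
                  # Differentiable non-centered parameterization (DNCP):
                  # z = mu + sigma * eps with eps ~ N(0, 1) drawn independently of the
                  # parameters, so z is a deterministic, differentiable function of them.
                  eps = rng.standard_normal(n_samples)
                  z = mu + sigma * eps
                  # Monte Carlo estimates of E[f(z)] and its gradients via the chain
                  # rule: dz/dmu = 1 and dz/dsigma = eps.
                  return np.mean(f(z)), np.mean(f_prime(z)), np.mean(f_prime(z) * eps)

              # Toy check with f(z) = z**2: E[f(z)] = mu**2 + sigma**2, so the exact
              # gradients are 2*mu and 2*sigma.
              val, g_mu, g_sigma = pathwise_grad(1.0, 0.5, lambda z: z**2, lambda z: 2.0 * z)
              print(val, g_mu, g_sigma)  # roughly 1.25, 2.0, 1.0

          In the non-centered form the gradient of an expectation becomes an expectation of a gradient, which is what makes efficient gradient-based posterior inference possible.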


          Author and article information

          Article type: Preprint
          arXiv ID: 1402.0480
          Record ID: f0988fff-149d-4214-974c-17dcc38865d5
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
          History: submitted 2014-02-03; last revised 2015-01-22
          Custom metadata: Proceedings of The 31st International Conference on Machine Learning, pp. 1782-1790, 2014
          Subject classes: cs.LG, stat.ML
          Keywords: Machine learning, Artificial intelligence
