
      Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC


      Statistics and Computing

      Springer Nature



Most cited references (17)


          Bayesian Theory


            A survey of cross-validation procedures for model selection

Used to estimate the risk of an estimator or to perform model selection, cross-validation is a widespread strategy because of its simplicity and its apparent universality. Many results exist on the model selection performances of cross-validation procedures. This survey intends to relate these results to the most recent advances of model selection theory, with a particular emphasis on distinguishing empirical statements from rigorous theoretical results. As a conclusion, guidelines are provided for choosing the best cross-validation procedure according to the particular features of the problem at hand.
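The risk-estimation strategy described in this abstract can be sketched in a few lines. This is a minimal illustration, not code from the surveyed paper: `fit` and `predict` are hypothetical caller-supplied callables, and mean squared error stands in for the generic risk.

```python
import numpy as np

def k_fold_cv_risk(X, y, fit, predict, k=5, rng=None):
    """Estimate predictive risk (here, mean squared error) by k-fold
    cross-validation: hold out each fold in turn, fit on the rest,
    and average the held-out errors.

    fit(X_train, y_train) -> model and predict(model, X_test) -> predictions
    are hypothetical interfaces assumed for this sketch.
    """
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(y))          # shuffle before splitting
    folds = np.array_split(idx, k)         # k roughly equal folds
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        pred = predict(model, X[test])
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errs))
```

Model selection then amounts to computing this risk estimate for each candidate model and keeping the smallest; the survey's point is that which k (and which variant of the procedure) is best depends on the problem at hand.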

              Penalized loss functions for Bayesian model comparison.

              The deviance information criterion (DIC) is widely used for Bayesian model comparison, despite the lack of a clear theoretical foundation. DIC is shown to be an approximation to a penalized loss function based on the deviance, with a penalty derived from a cross-validation argument. This approximation is valid only when the effective number of parameters in the model is much smaller than the number of independent observations. In disease mapping, a typical application of DIC, this assumption does not hold and DIC under-penalizes more complex models. Another deviance-based loss function, derived from the same decision-theoretic framework, is applied to mixture models, which have previously been considered an unsuitable application for DIC.

Author and article information

Journal: Statistics and Computing (Stat Comput)
Publisher: Springer Nature
ISSN: 0960-3174 (print); 1573-1375 (electronic)
Published: September 2017 (issue); August 2016 (online)
Volume: 27
Issue: 5
Pages: 1413-1432
DOI: 10.1007/s11222-016-9696-4
ScienceOpen ID: ace2e465-78d1-4bef-8c76-5b10c7ce66e5
© 2017
