
      Bayesian Regression Trees for High-Dimensional Prediction and Variable Selection

      Journal of the American Statistical Association
      Informa UK Limited



          Most cited references (17)


          Bayesian CART Model Search


            BART: Bayesian additive regression trees

            (2010)
            We develop a Bayesian "sum-of-trees" model where each tree is constrained by a regularization prior to be a weak learner, and fitting and inference are accomplished via an iterative Bayesian backfitting MCMC algorithm that generates samples from a posterior. Effectively, BART is a nonparametric Bayesian regression approach which uses dimensionally adaptive random basis elements. Motivated by ensemble methods in general, and boosting algorithms in particular, BART is defined by a statistical model: a prior and a likelihood. This approach enables full posterior inference including point and interval estimates of the unknown regression function as well as the marginal effects of potential predictors. By keeping track of predictor inclusion frequencies, BART can also be used for model-free variable selection. BART's many features are illustrated with a bake-off against competing methods on 42 different data sets, with a simulation experiment and on a drug discovery classification problem.
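The two ideas in this abstract, a sum of shrunken weak-learner trees fit by backfitting on residuals, and variable selection via predictor inclusion frequencies, can be sketched on toy data. This is a simplified greedy backfitting loop with depth-1 stumps, not the actual BART MCMC sampler; all data and parameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends only on x0; x1..x4 are pure-noise predictors.
n, p = 200, 5
X = rng.uniform(size=(n, p))
y = np.where(X[:, 0] > 0.5, 2.0, -2.0) + rng.normal(scale=0.3, size=n)

def fit_stump(X, r):
    """Greedy depth-1 regression tree on residuals r: best (var, split) by SSE."""
    best = (np.inf, 0, 0.5, 0.0, 0.0)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= t
            ml, mr = r[left].mean(), r[~left].mean()
            sse = ((r[left] - ml) ** 2).sum() + ((r[~left] - mr) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, t, ml, mr)
    return best[1:]

m, shrink = 50, 0.1            # many heavily shrunken trees: each a weak learner
residual = y.astype(float)
inclusion = np.zeros(p, dtype=int)
for _ in range(m):
    j, t, ml, mr = fit_stump(X, residual)
    inclusion[j] += 1          # track predictor inclusion frequencies
    residual -= shrink * np.where(X[:, j] <= t, ml, mr)

print(inclusion / m)           # the signal variable x0 should dominate
```

Because each tree is shrunk toward zero, no single tree explains the signal, but the inclusion counts accumulated across the ensemble still concentrate on the predictors that matter, which is the intuition behind BART's model-free variable selection.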

              Optimal predictive model selection

              Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper we show that, for selection among normal linear models, the optimal predictive model is often the median probability model, which is defined as the model consisting of those variables which have overall posterior probability greater than or equal to 1/2 of being in a model. The median probability model often differs from the highest probability model.
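The median probability model described above is simple to compute once posterior inclusion probabilities are in hand, and a small example shows how it can disagree with the highest-probability model. All probabilities below are hypothetical, chosen only to illustrate the definition:

```python
import numpy as np

# Hypothetical posterior inclusion probabilities for five candidate predictors.
incl_prob = np.array([0.92, 0.61, 0.50, 0.35, 0.08])

# Median probability model: keep every variable with inclusion probability >= 1/2.
median_model = np.flatnonzero(incl_prob >= 0.5)
print(median_model)  # -> [0 1 2]

# The median probability model can differ from the highest-probability model.
# Hypothetical posterior over three models on two predictors:
models = {(1,): 0.36, (2,): 0.34, (1, 2): 0.30}
hpm = max(models, key=models.get)                  # (1,) wins outright
p1 = models[(1,)] + models[(1, 2)]                 # P(x1 in model) = 0.66
p2 = models[(2,)] + models[(1, 2)]                 # P(x2 in model) = 0.64
mpm = tuple(v for v, pr in ((1, p1), (2, p2)) if pr >= 0.5)
print(hpm, mpm)  # -> (1,) (1, 2)
```

Here the single most probable model contains only x1, yet both predictors have marginal inclusion probability above 1/2, so the median probability model, which the paper argues is often optimal for prediction under squared error loss, includes both.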

                Author and article information

                Journal: Journal of the American Statistical Association
                Publisher: Informa UK Limited
                ISSN: 0162-1459 (print); 1537-274X (online)
                Published: December 20, 2016
                Pages: 1-11
                Affiliations
                [1 ] Department of Statistics, Florida State University, Tallahassee, FL
                DOI: 10.1080/01621459.2016.1264957
                © 2016
