
      Impact of predictor measurement heterogeneity across settings on the performance of prediction models: A measurement error perspective


          Abstract

          It is widely acknowledged that the predictive performance of clinical prediction models should be studied in patients who were not part of the data from which the model was derived. Out-of-sample performance can be hampered when predictors are measured differently at derivation and external validation. This may occur, for instance, when predictors are measured using different measurement protocols or when tests are produced by different manufacturers. Although such heterogeneity in predictor measurement between derivation and validation data is common, its impact on out-of-sample performance is not well studied. Using analytical and simulation approaches, we examined the out-of-sample performance of prediction models under various scenarios of heterogeneous predictor measurement. These scenarios were defined and clarified using an established taxonomy of measurement error models. The results of our simulations indicate that predictor measurement heterogeneity can induce miscalibration of predictions and can affect discrimination and overall predictive accuracy, to the extent that the prediction model may no longer be considered clinically useful. The measurement error taxonomy was found to be helpful in identifying and predicting the effects of heterogeneous predictor measurement between the settings of prediction model derivation and validation. Our work indicates that homogeneity of measurement strategies across settings is of paramount importance in prediction research.
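          As a rough illustration of the simulation approach described above, the following sketch (in Python; not the authors' code, and all sample sizes, coefficients, and error parameters are illustrative assumptions) derives a logistic prediction model on a cleanly measured continuous predictor and then validates it on data in which the same predictor is measured with a systematic shift and additional random noise, reporting calibration-in-the-large, calibration slope, c-statistic, and Brier score.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2019)

def simulate(n, beta0=-1.0, beta1=1.0, shift=0.0, noise_sd=0.0):
    """Generate outcome Y from the true predictor X, but return the
    measured predictor W = X + shift + noise (additive error model)."""
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
    y = rng.binomial(1, p)
    w = x + shift + rng.normal(scale=noise_sd, size=n)
    return w, y

# Derivation: fit a logistic model on the predictor as measured at derivation.
w_dev, y_dev = simulate(5000)
dev_fit = sm.GLM(y_dev, sm.add_constant(w_dev), family=sm.families.Binomial()).fit()

# Validation: same population, but a different measurement protocol adds a
# systematic shift and extra random error (hypothetical values).
w_val, y_val = simulate(5000, shift=0.5, noise_sd=0.5)
lp = dev_fit.params[0] + dev_fit.params[1] * w_val      # linear predictor
p_val = 1 / (1 + np.exp(-lp))                           # predicted risks

# Out-of-sample performance in the validation data.
citl = sm.GLM(y_val, np.ones_like(lp), family=sm.families.Binomial(),
              offset=lp).fit().params[0]                # calibration-in-the-large
slope = sm.GLM(y_val, sm.add_constant(lp),
               family=sm.families.Binomial()).fit().params[1]  # calibration slope
print("calibration-in-the-large:", round(citl, 3))
print("calibration slope:       ", round(slope, 3))
print("c-statistic (AUC):       ", round(roc_auc_score(y_val, p_val), 3))
print("Brier score:             ", round(np.mean((p_val - y_val) ** 2), 3))

          With the shift and extra noise applied at validation only, the calibration slope typically drops below 1 and the c-statistic and Brier score deteriorate, mirroring the miscalibration and loss of accuracy described in the abstract; setting shift and noise_sd to 0 recovers the homogeneous-measurement case.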

          Related collections

          Most cited references (22)


          A calibration hierarchy for risk models was defined: from utopia to empirical data.

          Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions.
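          For orientation, a hedged sketch of how the lower levels of such a hierarchy are commonly assessed from predicted risks and observed binary outcomes is given below (Python; the data and cut-offs are illustrative assumptions, not material from the cited paper): mean calibration via the observed/expected ratio, weak calibration via the calibration intercept and slope, and moderate calibration via a grouped calibration curve. Strong calibration, the "utopia" level, requires correct risks for every covariate pattern and cannot be verified from predictions and outcomes alone.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.6, size=4000)               # illustrative predicted risks
y = rng.binomial(1, np.clip(1.3 * p - 0.05, 0, 1))  # illustrative observed outcomes
lp = np.log(p / (1 - p))                            # linear predictor (log-odds)

# Mean calibration: ratio of observed to expected events.
oe_ratio = y.mean() / p.mean()

# Weak calibration: intercept with the slope fixed at 1 (offset), and slope.
intercept = sm.GLM(y, np.ones_like(lp), family=sm.families.Binomial(),
                   offset=lp).fit().params[0]
slope = sm.GLM(y, sm.add_constant(lp),
               family=sm.families.Binomial()).fit().params[1]

# Moderate calibration: observed vs mean predicted risk per risk decile
# (a smoothed curve, e.g. loess, is the usual choice; deciles keep this short).
cuts = np.quantile(p, np.linspace(0, 1, 11))[1:-1]
groups = np.digitize(p, cuts)
curve = [(p[groups == g].mean(), y[groups == g].mean()) for g in range(10)]

print("O/E ratio:            ", round(oe_ratio, 3))
print("calibration intercept:", round(intercept, 3))
print("calibration slope:    ", round(slope, 3))
print("decile curve (mean predicted, observed):",
      [(round(a, 2), round(b, 2)) for a, b in curve])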

            Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement

            Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

              External validity of risk models: Use of benchmark values to disentangle a case-mix effect from incorrect coefficients.

              Various performance measures related to calibration and discrimination are available for the assessment of risk models. When the validity of a risk model is assessed in a new population, estimates of the model's performance can be influenced in several ways. The regression coefficients can be incorrect, which indeed results in an invalid model. However, the distribution of patient characteristics (case mix) may also influence the performance of the model. Here the authors consider a number of typical situations that can be encountered in external validation studies. Theoretical relations between differences in development and validation samples and performance measures are studied by simulation. Benchmark values for the performance measures are proposed to disentangle a case-mix effect from incorrect regression coefficients, when interpreting the model's estimated performance in validation samples. The authors demonstrate the use of the benchmark values using data on traumatic brain injury obtained from the International Tirilazad Trial and the North American Tirilazad Trial (1991-1994).
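              One way to construct such a benchmark, sketched below in Python (an illustration under simulated data, not the paper's code; the risk distributions are assumptions chosen to make the contrast visible), is to simulate outcomes in the validation sample from the model's own predicted risks. The resulting c-statistic reflects only the validation case mix under the assumption that the coefficients are correct, so an observed c-statistic well below this benchmark points to incorrect coefficients rather than to case mix.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def benchmark_auc(pred_risk, n_sim=200):
    """Average AUC of pred_risk against outcomes simulated from pred_risk
    itself, i.e. the c-statistic expected if the model's coefficients were
    exactly correct and only the validation case mix were at play."""
    return float(np.mean([
        roc_auc_score(rng.binomial(1, pred_risk), pred_risk)
        for _ in range(n_sim)
    ]))

# Illustrative validation data: a fairly wide distribution of predicted risks,
# but observed outcomes generated from a compressed ("incorrect") risk model.
p_val = rng.uniform(0.1, 0.8, size=3000)
y_val = rng.binomial(1, 0.3 + 0.2 * p_val)

observed = roc_auc_score(y_val, p_val)
expected = benchmark_auc(p_val)
print("observed c-statistic :", round(observed, 3))
print("benchmark c-statistic:", round(expected, 3))
# Here the observed value falls well below the benchmark because the
# outcome-generating risks are compressed relative to the predictions
# (a calibration slope below 1): the coefficients, not the case mix,
# explain the poor discrimination.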

                Author and article information

                Contributors
                K.Luijken@lumc.nl
                Journal
                Stat Med
                10.1002/(ISSN)1097-0258
                SIM
                Statistics in Medicine
                John Wiley and Sons Inc. (Hoboken)
                ISSN: 0277-6715 (print)
                ISSN: 1097-0258 (electronic)
                Published online: 31 May 2019
                Issue date: 15 August 2019
                Volume: 38
                Issue: 18 (doiID: 10.1002/sim.v38.18)
                Pages: 3444-3459
                Affiliations
                [1] Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands
                [2] Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands
                [3] Department of Development and Regeneration, University of Leuven, Leuven, Belgium
                [4] Department of Public Health, Erasmus University Medical Center, Rotterdam, The Netherlands
                Author notes
                [*] K. Luijken, Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, 2333 ZA, The Netherlands.

                Email: K.Luijken@lumc.nl

                Author information
                https://orcid.org/0000-0001-5192-8368
                https://orcid.org/0000-0001-9238-6999
                https://orcid.org/0000-0003-1613-7450
                https://orcid.org/0000-0002-7787-0122
                https://orcid.org/0000-0002-5529-1541
                Article
                SIM8183 sim.8183
                DOI: 10.1002/sim.8183
                PMCID: 6619392
                PMID: 31148207
                © 2019 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

                This is an open access article under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 12 July 2018
                Revised: 02 February 2019
                Accepted: 08 April 2019
                Page count
                Figures: 4, Tables: 5, Pages: 16, Words: 11834
                Funding
                Funded by: Netherlands Organisation for Scientific Research
                Award ID: ZonMW, project 917.16.430
                Funded by: Patient-Centered Outcomes Research Institute (PCORI)
                Award ID: ME-1606-35555
                Categories
                Research Article

                Biostatistics
                Keywords: Brier score, calibration, discrimination, external validation, measurement error, measurement heterogeneity, prediction model
