
      Systematic review of prognostic models in traumatic brain injury


          Abstract

          Background

          Traumatic brain injury (TBI) is a leading cause of death and disability worldwide. The ability to accurately predict patient outcome after TBI has an important role in clinical practice and research. Prognostic models are statistical models that combine two or more items of patient data to predict clinical outcome, and they may improve predictions in TBI patients. Prognostic models for TBI have accumulated for decades, but none is widely used in clinical practice. The objective of this systematic review is to critically assess existing prognostic models for TBI.

          Methods

          PubMed and EMBASE were searched for studies that combined at least two variables to predict any outcome in patients with TBI. Two reviewers independently examined titles and abstracts and assessed whether each report met the pre-defined inclusion criteria.

          Results

          A total of 53 reports, including 102 models, were identified. Almost half (47%) were derived from adult patients. Three quarters of the models included fewer than 500 patients. Most of the models (93%) were derived from populations in high-income countries. Logistic regression was the most common analytical strategy used to derive the models (47%). Regarding the quality of the derivation models (n = 66), only 15% reported less than 10% loss to follow-up, 68% did not justify the rationale for including the predictors, 11% conducted an external validation, and only 19% of the logistic models presented their results in a clinically user-friendly way.

          Conclusion

          Prognostic models are frequently published, but they are developed from small samples of patients, their methodological quality is poor, and they are rarely validated on external populations. Furthermore, they are not clinically practical, as they are not presented to physicians in a user-friendly way. Finally, because only a few are developed using populations from low- and middle-income countries, where most trauma occurs, their generalizability to these settings is limited.
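For non-specialist readers, a logistic prognostic model of the kind counted in the Results combines several predictors into a single predicted probability of outcome. The sketch below is purely illustrative: the predictors (age, Glasgow Coma Scale score, pupil reactivity) are typical of the TBI literature, but the intercept and coefficients are invented, not taken from any model in this review.

```python
import math

def predict_mortality(age, gcs, pupils_reactive):
    """Toy logistic prognostic model, for illustration only.

    The intercept and coefficients are made up; a real model would
    estimate them from patient data by logistic regression.
    """
    # Linear predictor: intercept plus a weighted sum of the predictors.
    lp = -2.0 + 0.04 * age - 0.25 * gcs + (0.0 if pupils_reactive else 0.8)
    # The logistic link maps the linear predictor to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-lp))

# A lower Glasgow Coma Scale score and non-reactive pupils raise the
# predicted risk for an otherwise identical patient.
low_risk = predict_mortality(age=30, gcs=14, pupils_reactive=True)
high_risk = predict_mortality(age=30, gcs=4, pupils_reactive=False)
```

Presenting such a model in a "clinically user-friendly way" typically means converting these coefficients into a score chart or nomogram rather than asking the clinician to evaluate the equation.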

          Related collections

          Most cited references (27)


          Systematic reviews in health care: Assessing the quality of controlled clinical trials.


            Assessing the generalizability of prognostic information.

            Physicians are often asked to make prognostic assessments but often worry that their assessments will prove inaccurate. Prognostic systems were developed to enhance the accuracy of such assessments. This paper describes an approach for evaluating prognostic systems based on the accuracy (calibration and discrimination) and generalizability (reproducibility and transportability) of the system's predictions. Reproducibility is the ability to produce accurate predictions among patients not included in the development of the system but from the same population. Transportability is the ability to produce accurate predictions among patients drawn from a different but plausibly related population. On the basis of the observation that the generalizability of a prognostic system is commonly limited to a single historical period, geographic location, methodologic approach, disease spectrum, or follow-up interval, we describe a working hierarchy of the cumulative generalizability of prognostic systems. This approach is illustrated in a structured review of the Dukes and Jass staging systems for colon and rectal cancer and applied to a young man with colon cancer. Because it treats the development of the system as a "black box" and evaluates only the performance of the predictions, the approach can be applied to any system that generates predicted probabilities. Although the Dukes and Jass staging systems are discrete, the approach can also be applied to systems that generate continuous predictions and, with some modification, to systems that predict over multiple time periods. Like any scientific hypothesis, the generalizability of a prognostic system is established by being tested and being found accurate across increasingly diverse settings. The more numerous and diverse the settings in which the system is tested and found accurate, the more likely it will generalize to an untested setting.
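The two accuracy components named in this abstract, calibration and discrimination, can be computed directly from predicted probabilities and observed outcomes. A minimal sketch with hypothetical data: the c-statistic below is one standard discrimination measure, and comparing mean predicted risk with the observed event rate is a crude "calibration-in-the-large" check.

```python
def c_statistic(preds, outcomes):
    """Discrimination: fraction of (event, non-event) pairs in which the
    event received the higher predicted probability (ties count half)."""
    events = [p for p, y in zip(preds, outcomes) if y == 1]
    nonevents = [p for p, y in zip(preds, outcomes) if y == 0]
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0
               for e in events for n in nonevents)
    return wins / (len(events) * len(nonevents))

def calibration_in_the_large(preds, outcomes):
    """Calibration: mean predicted risk versus observed event rate."""
    return sum(preds) / len(preds), sum(outcomes) / len(outcomes)

# Hypothetical predicted probabilities and observed outcomes (1 = event).
preds = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
outcomes = [1, 1, 0, 1, 0, 0]
c = c_statistic(preds, outcomes)                     # 1.0 would be perfect
mean_pred, obs_rate = calibration_in_the_large(preds, outcomes)
```

Testing a system on patients from new settings, as the hierarchy described above requires, simply means recomputing these quantities on each new population's predictions and outcomes.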

              Clinical prediction rules. A review and suggested modifications of methodological standards.

              Clinical prediction rules are decision-making tools for clinicians, containing variables from the history, physical examination, or simple diagnostic tests. To review the quality of recently published clinical prediction rules and to suggest methodological standards for their development and evaluation. Four general medical journals were manually searched for clinical prediction rules published from 1991 through 1994. Four hundred sixty potentially eligible reports were identified, of which 30 were clinical prediction rules eligible for study. Most methodological standards could only be evaluated in 29 studies. Two investigators independently evaluated the quality of each report using a standard data sheet. Disagreements were resolved by consensus. The mathematical technique used to develop the rule and the results of the rule were described in 100% (29/29) of the reports. All the rules but 1 (97% [28/29]) were felt to be clinically sensible. The outcomes and predictive variables were clearly defined in 83% (24/29) and 59% (17/29) of the reports, respectively. Blind assessment of outcomes and predictive variables occurred in 41% (12/29) and 79% (23/29) of the reports, respectively, and the rules were prospectively validated in 79% (11/14). Reproducibility of predictive variables was assessed in only 3% (1/29) of the reports, and the effect of the rule on clinical use was prospectively measured in only 3% (1/30). Forty-one percent (12/29) of the rules were felt to be easy to use. Although clinical prediction rules comply with some methodological criteria, better compliance is needed for others.

                Author and article information

                Journal
                BMC Med Inform Decis Mak
                BMC Medical Informatics and Decision Making
                BioMed Central (London )
                1472-6947
                2006
                14 November 2006
                Volume: 6
                Article: 38
                Affiliations
                [1 ]Nutrition and Public Health Intervention Research Unit, Epidemiology and Population Health Department, London School of Hygiene & Tropical Medicine, Keppel Street, London WC1E 7HT, UK
                Article
                1472-6947-6-38
                DOI: 10.1186/1472-6947-6-38
                PMC: 1657003
                PMID: 17105661
                Copyright © 2006 Perel et al; licensee BioMed Central Ltd.

                This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 3 August 2006
                Accepted: 14 November 2006
                Categories
                Research Article

                Bioinformatics & Computational biology
