      Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist

      Research article


          Abstract

          Background

          The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5–18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties.

          Methods

          The scoring system was developed based on discussions among experts and testing of the scoring system on 46 articles from a systematic review. Four response options were defined for each COSMIN item (excellent, good, fair, and poor). A quality score per measurement property is obtained by taking the lowest rating of any item in a box (“worst score counts”).
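
          To make the "worst score counts" rule concrete, here is a minimal sketch in Python; it is not code from the paper, and the helper name box_quality_score and the example ratings are illustrative assumptions.

```python
# Ratings ordered from worst to best, as defined for each COSMIN item.
RATING_ORDER = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

def box_quality_score(item_ratings):
    """Quality score for one COSMIN box (one measurement property).

    Under "worst score counts", the box score is the lowest rating
    given to any item in that box.
    """
    return min(item_ratings, key=RATING_ORDER.__getitem__)

# Illustrative example: a box whose items were rated excellent, good, and fair.
assert box_quality_score(["excellent", "good", "fair"]) == "fair"
```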

          Results

          Specific criteria for excellent, good, fair, and poor quality for each COSMIN item are described. In defining the criteria, the “worst score counts” algorithm was taken into consideration. This means that only fatal flaws were defined as poor quality. The scores of the 46 articles show how the scoring system can be used to provide an overview of the methodological quality of studies included in a systematic review of measurement properties.
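
          As an illustration of the kind of overview mentioned above, per-property box scores for several studies can be tabulated as in the sketch below; the study names and ratings are invented placeholders, not data from the review.

```python
RATING_ORDER = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

# Invented item ratings for two studies and two measurement-property boxes.
studies = {
    "Study A": {"reliability": ["good", "excellent"], "content validity": ["fair", "good"]},
    "Study B": {"reliability": ["poor", "good"], "content validity": ["excellent", "excellent"]},
}

# "Worst score counts": each box score is the lowest item rating in that box.
overview = {
    study: {box: min(items, key=RATING_ORDER.__getitem__) for box, items in boxes.items()}
    for study, boxes in studies.items()
}

for study, scores in overview.items():
    print(study, scores)
# Study A {'reliability': 'good', 'content validity': 'fair'}
# Study B {'reliability': 'poor', 'content validity': 'excellent'}
```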

          Conclusions

          Based on experience in testing this scoring system on 46 articles, the COSMIN checklist with the proposed scoring system seems to be a useful tool for assessing the methodological quality of studies included in systematic reviews of measurement properties.


Author and article information

Contributors
+31-20-4448187, +31-20-4448181, cb.terwee@vumc.nl

Journal
Quality of Life Research (Qual Life Res)
Springer Netherlands (Dordrecht)
ISSN: 0962-9343 (print); 1573-2649 (electronic)
Published online: 6 July 2011
Issue: May 2012; Volume 21, Issue 4, pp. 651-657

Affiliations
[1] Department of Epidemiology and Biostatistics and the EMGO Institute for Health and Care Research, VU University Medical Center, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands
[2] Department of Health Sciences and the EMGO Institute for Health and Care Research, Faculty of Earth and Life Sciences, VU University Amsterdam, Amsterdam, The Netherlands
[3] Executive Board of VU University Amsterdam, Amsterdam, The Netherlands

Article
DOI: 10.1007/s11136-011-9960-1
PMC: PMC3323819
PMID: 21732199
History: 25 June 2011
© The Author(s) 2011
© Springer Science+Business Media B.V. 2012

Keywords
Public health; reproducibility of results; psychometrics; systematic review; validation studies; outcome assessment; questionnaire
