      STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration

      Research article

          Abstract

          Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.

          Most cited references (89)

          Improving the quality of reporting of randomized controlled trials. The CONSORT statement.

            Assessing the generalizability of prognostic information.

            Physicians are often asked to make prognostic assessments but often worry that their assessments will prove inaccurate. Prognostic systems were developed to enhance the accuracy of such assessments. This paper describes an approach for evaluating prognostic systems based on the accuracy (calibration and discrimination) and generalizability (reproducibility and transportability) of the system's predictions. Reproducibility is the ability to produce accurate predictions among patients not included in the development of the system but from the same population. Transportability is the ability to produce accurate predictions among patients drawn from a different but plausibly related population. On the basis of the observation that the generalizability of a prognostic system is commonly limited to a single historical period, geographic location, methodologic approach, disease spectrum, or follow-up interval, we describe a working hierarchy of the cumulative generalizability of prognostic systems. This approach is illustrated in a structured review of the Dukes and Jass staging systems for colon and rectal cancer and applied to a young man with colon cancer. Because it treats the development of the system as a "black box" and evaluates only the performance of the predictions, the approach can be applied to any system that generates predicted probabilities. Although the Dukes and Jass staging systems are discrete, the approach can also be applied to systems that generate continuous predictions and, with some modification, to systems that predict over multiple time periods. Like any scientific hypothesis, the generalizability of a prognostic system is established by being tested and being found accurate across increasingly diverse settings. The more numerous and diverse the settings in which the system is tested and found accurate, the more likely it will generalize to an untested setting.
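            The two accuracy components named above, discrimination and calibration, can be checked from nothing more than predicted probabilities and observed outcomes, which is what lets the approach treat the system as a "black box". As a minimal sketch (not taken from the paper; the data and function names are hypothetical), discrimination can be summarized as a c-statistic and calibration-in-the-large as the ratio of observed to expected events:

```python
from itertools import combinations

def c_statistic(predictions, outcomes):
    """Discrimination: chance that a randomly chosen patient with the
    outcome was assigned a higher predicted risk than one without it."""
    pairs = concordant = ties = 0
    for i, j in combinations(range(len(outcomes)), 2):
        if outcomes[i] == outcomes[j]:
            continue  # only case/non-case pairs are informative
        pairs += 1
        case, control = (i, j) if outcomes[i] == 1 else (j, i)
        if predictions[case] > predictions[control]:
            concordant += 1
        elif predictions[case] == predictions[control]:
            ties += 1
    return (concordant + 0.5 * ties) / pairs

def calibration_in_the_large(predictions, outcomes):
    """Calibration: observed events divided by expected (predicted) events."""
    return sum(outcomes) / sum(predictions)

# Hypothetical validation sample: predicted risks and observed outcomes.
risks = [0.1, 0.3, 0.2, 0.8, 0.6, 0.9]
events = [0, 0, 1, 1, 0, 1]
print(f"c-statistic: {c_statistic(risks, events):.2f}")                     # 0.78
print(f"observed/expected: {calibration_in_the_large(risks, events):.2f}")  # 1.03
```

            Because both measures use only the predictions themselves, the same check can be rerun on patients from another period, place or disease spectrum, which is how the hierarchy of reproducibility and transportability described above is probed.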

              Problems of spectrum and bias in evaluating the efficacy of diagnostic tests.

              To determine why many diagnostic tests have proved to be valueless after optimistic introduction into medical practice, we reviewed a series of investigations and identified two major problems that can cause erroneous statistical results for the "sensitivity" and "specificity" indexes of diagnostic efficacy. Unless an appropriately broad spectrum is chosen for the diseased and nondiseased patients who comprise the study population, the diagnostic test may receive falsely high values for its "rule-in" and "rule-out" performances. Unless the interpretation of the test and the establishment of the true diagnosis are done independently, bias may falsely elevate the test's efficacy. Avoidance of these problems might have prevented the early optimism and subsequent disillusionment with the diagnostic value of two selected examples: the carcinoembryonic antigen and nitro-blue tetrazolium tests.
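              The two indexes at issue are simple proportions from a 2x2 table of test results against the true diagnosis. A minimal sketch with hypothetical counts (nothing below comes from the reference itself):

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of diseased patients with a positive test; a highly
    sensitive test's negative result helps rule disease out."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of non-diseased patients with a negative test; a highly
    specific test's positive result helps rule disease in."""
    return true_neg / (true_neg + false_pos)

# Hypothetical study counts.
print(f"sensitivity: {sensitivity(80, 20):.2f}")  # 0.80
print(f"specificity: {specificity(90, 10):.2f}")  # 0.90

# The abstract's caution: these counts -- and therefore both indexes --
# shift if the diseased and non-diseased groups cover too narrow a
# spectrum, or if test reading and final diagnosis are not independent.
```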

                Author and article information

                Journal: BMJ Open
                Publisher: BMJ Publishing Group (BMA House, Tavistock Square, London, WC1H 9JR)
                ISSN: 2044-6055
                Published: 14 November 2016
                Volume 6, Issue 11: e012799
                Affiliations
                [1] Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Centre, University of Amsterdam, Amsterdam, The Netherlands
                [2] Department of Pediatrics, INSERM UMR 1153, Necker Hospital, AP-HP, Paris Descartes University, Paris, France
                [3] Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, UK
                [4] Department of Pathology, University of Virginia School of Medicine, Charlottesville, Virginia, USA
                [5] Department of Biostatistics, Brown University School of Public Health, Providence, Rhode Island, USA
                [6] Cochrane Netherlands, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, University of Utrecht, Utrecht, The Netherlands
                [7] Screening and Diagnostic Test Evaluation Program, School of Public Health, University of Sydney, Sydney, New South Wales, Australia
                [8] Department of Radiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
                [9] Radiology Editorial Office, Boston, Massachusetts, USA
                [10] Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, University of Utrecht, Utrecht, The Netherlands
                [11] Department of Epidemiology and Biostatistics, EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
                Author notes
                [Correspondence to] Professor Patrick M M Bossuyt; p.m.bossuyt@amc.uva.nl

                JFC and DAK contributed equally to this manuscript and share first authorship.

                Article
                Publisher ID: bmjopen-2016-012799
                DOI: 10.1136/bmjopen-2016-012799
                PMC: PMC5128957
                PMID: 28137831
                Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

                This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

                History
                Received: 26 May 2016
                Revised: 3 August 2016
                Accepted: 25 August 2016
                Categories
                Medical Publishing and Peer Review
                Research

                Medicine
                Keywords: reporting quality, sensitivity and specificity, diagnostic accuracy, research waste, peer review, medical publishing
