      Does the Reporting Quality of Diagnostic Test Accuracy Studies, as Defined by STARD 2015, Affect Citation?

      research-article


          Abstract

          Objective

To determine the rate at which diagnostic test accuracy studies published in a general radiology journal adhere to the Standards for Reporting of Diagnostic Accuracy Studies (STARD) 2015, and to explore the relationship between adherence rate and citation rate while avoiding confounding by journal factors.

          Materials and Methods

          All eligible diagnostic test accuracy studies that were published in the Korean Journal of Radiology in 2011–2015 were identified. Five reviewers assessed each article for yes/no compliance with 27 of the 30 STARD 2015 checklist items (items 28, 29, and 30 were excluded). The total STARD score (number of fulfilled STARD items) was calculated. The score of the 15 STARD items that related directly to the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 was also calculated. The number of times each article was cited (as indicated by the Web of Science) after publication until March 2016 and the article exposure time (time in months between publication and March 2016) were extracted.
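The scoring described above is a simple per-item tally. A minimal sketch of that tally follows; note that the fractional scores reported in the Results presumably arise from reconciling or averaging the five reviewers' assessments, whereas this sketch tallies a single binary checklist, and the QUADAS-2-related item numbers below are illustrative placeholders, not the paper's actual list of 15 items:

```python
# Hypothetical set of the 15 QUADAS-2-related STARD item numbers;
# the paper identifies the actual items.
QUADAS2_ITEMS = {5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19}

def stard_scores(checklist: dict[int, bool]) -> tuple[int, int]:
    """Tally yes/no compliance for STARD 2015 items 1-27
    (items 28-30 excluded, as in the study).

    Returns (total STARD score, QUADAS-2-related STARD score).
    """
    total = sum(checklist[i] for i in range(1, 28))
    quadas2 = sum(checklist[i] for i in QUADAS2_ITEMS)
    return total, quadas2
```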

          Results

Sixty-three articles were analyzed. The mean (range) total and QUADAS-2-related STARD scores were 20.0 (14.5–25) and 11.4 (7–15), respectively. The mean (range) citation number was 4 (0–21). Citation number was not significantly associated with either STARD score after accounting for exposure time (total score: correlation coefficient = 0.154, p = 0.232; QUADAS-2-related score: correlation coefficient = 0.143, p = 0.266).
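The paper does not publish its analysis code. One standard way to "account for" exposure time when correlating two variables is a partial correlation computed from regression residuals, sketched below; the authors' exact method may have differed, and the function name here is mine:

```python
import numpy as np

def partial_corr(x, y, covar):
    """Pearson correlation between x and y after removing the
    linear effect of covar from each (residual method)."""
    x, y, covar = (np.asarray(v, dtype=float) for v in (x, y, covar))
    # Design matrix: intercept plus the covariate.
    design = np.column_stack([np.ones_like(covar), covar])
    # Residualize x and y against the covariate.
    res_x = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    res_y = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return float(np.corrcoef(res_x, res_y)[0, 1])
```

In this study's terms, the call would look like `partial_corr(stard_scores, citation_counts, exposure_months)`, yielding a coefficient whose significance can then be tested.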

          Conclusion

          The degree of adherence to STARD 2015 was moderate for this journal, indicating that there is room for improvement. When adjusted for exposure time, the degree of adherence did not affect the citation rate.

          Related collections

Most cited references (76)


          Case-control and two-gate designs in diagnostic accuracy studies.

          In some diagnostic accuracy studies, the test results of a series of patients with an established diagnosis are compared with those of a control group. Such case-control designs are intuitively appealing, but they have also been criticized for leading to inflated estimates of accuracy. We discuss similarities and differences between diagnostic and etiologic case-control studies, as well as the mechanisms that can lead to variation in estimates of diagnostic accuracy in studies with separate sampling schemes ("gates") for diseased (cases) and nondiseased individuals (controls). Diagnostic accuracy studies are cross-sectional and descriptive in nature. Etiologic case-control studies aim to quantify the effect of potential causal exposures on disease occurrence, which inherently involves a time window between exposure and disease occurrence. Researchers and readers should be aware of spectrum effects in diagnostic case-control studies as a result of the restricted sampling of cases and/or controls, which can lead to changes in estimates of diagnostic accuracy. These spectrum effects may be advantageous in the early investigation of a new diagnostic test, but for an overall evaluation of the clinical performance of a test, case-control studies should closely mimic cross-sectional diagnostic studies. As the accuracy of a test is likely to vary across subgroups of patients, researchers and clinicians might carefully consider the potential for spectrum effects in all designs and analyses, particularly in diagnostic accuracy studies with differential sampling schemes for diseased (cases) and nondiseased individuals (controls).

            Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative.

            To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, to allow readers to assess the potential for bias in the study and to evaluate its generalisability. The Standards for Reporting of Diagnostic Accuracy (STARD) steering group searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines regarding diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, using evidence, whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution and the numbers of patients undergoing the test under evaluation, the reference standard or both. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of the clinicians, researchers, reviewers, journals, and the public. Copyright RSNA, 2003

              Assessment of the accuracy of diagnostic tests: the cross-sectional study.

              In diagnostic accuracy studies, the contrast of interest can be one of the following: one single test contrast; comparing two or more single tests; further testing in addition to previous diagnostics; and comparing alternative diagnostic strategies. The clinical diagnostic problem under study must be specified. Studies of "extreme contrasts" (as early phase evaluations) and studies in "clinical practice" settings (assessing clinical value) should be distinguished. Design options are (1) survey of the total study population, (2) case-referent approach, or (3) test-based enrollment. Data collection should generally be prospective, but ambispective and retrospective approaches are sometimes appropriate. In addition to determinants of primary interest [the test(s) under study] possible modifiers of test accuracy and confounding variables must be specified. The reference standard procedure should be independent from the test results. Applying a reference standard can be difficult in case of classification errors, lack of a clear pathophysiologic concept, incorporation bias, or invasive or complex investigations. Possible solutions are: an independent expert panel, and the delayed type cross-sectional study (clinical follow-up). Also, a prognostic criterion can be chosen. For studies to be relevant for practice, inclusion criteria must be based on "intention to diagnose" or "intention to screen." The recruitment procedure is preferably a consecutive series of presenting patients or a target population screening, respectively. Sample size estimation should be routine. Analysis has to be focused on the contrast of interest. Estimating test accuracy and prediction of outcome need different approaches. External (clinical) validation requires repeated studies in other, similar populations. Also, systematic reviews and meta-analysis have a role. 
To enable readers of diagnostic research reports to evaluate whether methodological key issues were addressed, authors are advised to follow the STARD guidelines.

                Author and article information

Journal
Korean Journal of Radiology (Korean J Radiol; KJR)
The Korean Society of Radiology
ISSN: 1229-6929 (print); 2005-8330 (electronic)
Sep-Oct 2016 (published online 23 August 2016)
Volume 17, Issue 5, Pages 706-714
                Affiliations
                Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea.
                Author notes
Corresponding author: Seong Ho Park, MD, PhD, Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Korea. Tel: (822) 3010-5984, Fax: (822) 476-4719, parksh.radiology@gmail.com
Article
DOI: 10.3348/kjr.2016.17.5.706
PMCID: PMC5007397
                Copyright © 2016 The Korean Society of Radiology

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

History: 28 May 2016; 29 May 2016
Categories: Experimental and Others; Original Article

Radiology & Imaging
Keywords: STARD, STARD 2015, citation, reporting quality, accuracy, adherence, impact, impact factor
