
Receiver Operating Characteristic (ROC) Curve: Practical Review for Radiologists



      The receiver operating characteristic (ROC) curve, defined as a plot of test sensitivity (y coordinate) against 1 − specificity, the false positive rate (FPR) (x coordinate), is an effective method of evaluating the performance of diagnostic tests. The purpose of this article is to provide a nonmathematical introduction to ROC analysis. Important concepts involved in the correct use and interpretation of this analysis are discussed, including smooth and empirical ROC curves, parametric and nonparametric methods, the area under the ROC curve and its 95% confidence interval, the sensitivity at a particular FPR, and the use of a partial area under the ROC curve. Various considerations concerning the collection of data in radiological ROC studies are briefly discussed, and an introduction to software frequently used for performing ROC analyses is also presented.
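The plotting rule described in the abstract (sensitivity against FPR as the cutpoint is varied) can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the article; the `scores` and `labels` data are hypothetical ratings.

```python
import numpy as np

def empirical_roc(scores, labels):
    """Empirical ROC curve: sweep the decision cutpoint over the observed
    scores and record (false positive rate, sensitivity) at each threshold."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)  # 1 = diseased, 0 = non-diseased
    # Descending thresholds; +inf gives the (0, 0) corner of the curve
    thresholds = np.r_[np.inf, np.sort(np.unique(scores))[::-1]]
    n_pos, n_neg = (labels == 1).sum(), (labels == 0).sum()
    tpr = np.array([((scores >= t) & (labels == 1)).sum() / n_pos for t in thresholds])
    fpr = np.array([((scores >= t) & (labels == 0)).sum() / n_neg for t in thresholds])
    return fpr, tpr

# Hypothetical ratings: higher score = more suspicious of disease
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1, 1, 0, 1, 0, 1, 0, 0]
fpr, tpr = empirical_roc(scores, labels)
# Trapezoidal area under the empirical curve
auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))
```

Each unique observed score serves as one cutpoint, so the empirical curve is a step function connecting (0, 0) to (1, 1).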

      Most cited references (16)


      The meaning and use of the area under a receiver operating characteristic (ROC) curve.

      A representation and interpretation of the area under a receiver operating characteristic (ROC) curve obtained by the "rating" method, or by mathematical predictions based on patient characteristics, is presented. It is shown that in such a setting the area represents the probability that a randomly chosen diseased subject is (correctly) rated or ranked with greater suspicion than a randomly chosen non-diseased subject. Moreover, this probability of a correct ranking is the same quantity that is estimated by the already well-studied nonparametric Wilcoxon statistic. These two relationships are exploited to (a) provide rapid closed-form expressions for the approximate magnitude of the sampling variability, i.e., standard error that one uses to accompany the area under a smoothed ROC curve, (b) guide in determining the size of the sample required to provide a sufficiently reliable estimate of this area, and (c) determine how large sample sizes should be to ensure that one can statistically detect differences in the accuracy of diagnostic techniques.
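The probabilistic interpretation above can be checked numerically: the Wilcoxon (Mann-Whitney) estimate — the fraction of diseased/non-diseased pairs ranked correctly, with ties counted as one half — is the area under the empirical ROC curve, and the paper's closed-form standard error follows from the area alone. A minimal sketch with hypothetical 5-point ratings (not data from the paper):

```python
import numpy as np

def auc_wilcoxon(diseased, nondiseased):
    """P(X > Y) + 0.5 * P(X == Y) over all diseased/non-diseased pairs:
    the Wilcoxon estimate of the area under the ROC curve."""
    d = np.asarray(diseased, dtype=float)[:, None]
    n = np.asarray(nondiseased, dtype=float)[None, :]
    return float((d > n).mean() + 0.5 * (d == n).mean())

def hanley_mcneil_se(auc, n_dis, n_non):
    """Hanley & McNeil (1982) closed-form approximate standard error
    of the area, expressed through the area itself."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc)
           + (n_dis - 1) * (q1 - auc**2)
           + (n_non - 1) * (q2 - auc**2)) / (n_dis * n_non)
    return float(np.sqrt(var))

diseased = [4, 5, 3, 5, 4]       # hypothetical confidence ratings
nondiseased = [2, 3, 1, 4, 2]
a = auc_wilcoxon(diseased, nondiseased)
se = hanley_mcneil_se(a, len(diseased), len(nondiseased))
```

The same `se` expression supports the paper's sample-size uses: solve it for the numbers of diseased and non-diseased subjects needed to reach a target standard error.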

        Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach.

        Methods of evaluating and comparing the performance of diagnostic tests are of increasing importance as new tests are developed and marketed. When a test is based on an observed variable that lies on a continuous or graded scale, an assessment of the overall value of the test can be made through the use of a receiver operating characteristic (ROC) curve. The curve is constructed by varying the cutpoint used to determine which values of the observed variable will be considered abnormal and then plotting the resulting sensitivities against the corresponding false positive rates. When two or more empirical curves are constructed based on tests performed on the same individuals, statistical analysis on differences between curves must take into account the correlated nature of the data. This paper presents a nonparametric approach to the analysis of areas under correlated ROC curves, by using the theory on generalized U-statistics to generate an estimated covariance matrix.
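One common way to realize this U-statistic construction in code is through per-subject "structural components" of the Mann-Whitney kernel, as in DeLong et al.; the sketch below uses them to estimate the variance of a single area (comparing two or more correlated curves extends the same components to a covariance matrix). This is a hedged illustration with hypothetical data, not the paper's own code.

```python
import numpy as np

def delong_auc_variance(diseased, nondiseased):
    """Area under the empirical ROC curve and its variance, estimated
    from per-subject structural components of the U-statistic."""
    x = np.asarray(diseased, dtype=float)
    y = np.asarray(nondiseased, dtype=float)
    # Kernel psi(x_i, y_j): 1 if correctly ordered, 0.5 on ties, 0 otherwise
    psi = (x[:, None] > y[None, :]) + 0.5 * (x[:, None] == y[None, :])
    auc = float(psi.mean())
    v10 = psi.mean(axis=1)  # component for each diseased subject
    v01 = psi.mean(axis=0)  # component for each non-diseased subject
    var = v10.var(ddof=1) / len(x) + v01.var(ddof=1) / len(y)
    return auc, float(var)

diseased = [4, 5, 3, 5, 4]       # hypothetical ratings, as above
nondiseased = [2, 3, 1, 4, 2]
auc, var = delong_auc_variance(diseased, nondiseased)
```

For two tests read on the same subjects, the covariance between the two areas is estimated from the sample covariances of the paired component vectors, which is what makes the correlated comparison valid.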

          A method of comparing the areas under receiver operating characteristic curves derived from the same cases.

          Receiver operating characteristic (ROC) curves are used to describe and compare the performance of diagnostic technology and diagnostic algorithms. This paper refines the statistical comparison of the areas under two ROC curves derived from the same set of patients by taking into account the correlation between the areas that is induced by the paired nature of the data. The correspondence between the area under an ROC curve and the Wilcoxon statistic is used and underlying Gaussian distributions (binormal) are assumed to provide a table that converts the observed correlations in paired ratings of images into a correlation between the two ROC areas. This between-area correlation can be used to reduce the standard error (uncertainty) about the observed difference in areas. This correction for pairing, analogous to that used in the paired t-test, can produce a considerable increase in the statistical sensitivity (power) of the comparison. For studies involving multiple readers, this method provides a measure of a component of the sampling variation that is otherwise difficult to obtain.
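The correction for pairing described above amounts to subtracting a covariance term in the usual z-statistic for the difference of two areas. A minimal sketch, where `r` stands for the between-area correlation (in the paper, read from a table given the observed correlations in paired ratings) and all numbers are hypothetical:

```python
import math

def paired_auc_z(a1, se1, a2, se2, r):
    """z-statistic for the difference of two correlated AUCs.
    The -2*r*se1*se2 term is the paired-design correction that
    shrinks the standard error of the difference."""
    se_diff = math.sqrt(se1**2 + se2**2 - 2 * r * se1 * se2)
    return (a1 - a2) / se_diff

# Hypothetical example: the same cases read with two modalities
z = paired_auc_z(a1=0.90, se1=0.03, a2=0.85, se2=0.04, r=0.5)
```

With `r = 0`, the statistic reduces to the unpaired comparison; a positive between-area correlation raises `z` for the same observed difference, which is the gain in power the paper describes.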

            Author and article information

            [1 ]Department of Radiology, Seoul National University College of Medicine and Institute of Radiation Medicine, SNUMRC, Korea.
            [2 ]Biostatistics Section, Department of Pediatrics, University of Arkansas for Medical Sciences, Little Rock, AR, U.S.A.
            Author notes
            Address reprint requests to: Jin Mo Goo, MD, Department of Radiology, Seoul National University Hospital, 28 Yongon-dong, Chongro-gu, Seoul 110-744, Korea. Tel. (822) 760-2584, Fax. (822) 743-6385, jmgoo@
            Korean J Radiol
            Korean Journal of Radiology
            The Korean Radiological Society
            Jan-Mar 2004
            31 March 2004
            5(1): 11-18
            Copyright © 2004 The Korean Radiological Society

            This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
