
      Meta-DiSc: a software for meta-analysis of test accuracy data


          Abstract

          Background

Systematic reviews and meta-analyses of test accuracy studies are increasingly being recognised as central in guiding clinical practice. However, no dedicated and comprehensive software currently exists for meta-analysis of diagnostic data. In this article, we present Meta-DiSc, a Windows-based, user-friendly program, freely available for academic use, that we have developed, piloted, and validated to perform diagnostic meta-analysis.

          Results

Meta-DiSc: a) allows exploration of heterogeneity, with a variety of statistics including chi-square, I-squared and Spearman correlation tests; b) implements meta-regression techniques to explore the relationships between study characteristics and accuracy estimates; c) performs statistical pooling of sensitivities, specificities, likelihood ratios and diagnostic odds ratios using fixed and random effects models, both overall and in subgroups; and d) produces high-quality figures, including forest plots and summary receiver operating characteristic curves, that can be exported for use in manuscripts for publication. All computational algorithms have been validated through comparison with different statistical tools and published meta-analyses. Meta-DiSc has a graphical user interface with roll-down menus, dialog boxes, and online help facilities.
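The kind of pooling and heterogeneity computation described above can be sketched in a few lines. The study counts below are invented, and this fixed-effect logit pooling with Cochran's Q and I-squared is only an illustration of the underlying statistics, not Meta-DiSc's actual implementation:

```python
import math

# Hypothetical per-study 2x2 margins for sensitivity: (true positives, false negatives)
studies = [(45, 5), (38, 12), (60, 10), (25, 15)]

# Logit-transform each study's sensitivity with a 0.5 continuity correction
logits, weights = [], []
for tp, fn in studies:
    p = (tp + 0.5) / (tp + fn + 1.0)
    logits.append(math.log(p / (1 - p)))
    # Inverse-variance weight for a logit proportion
    weights.append(1.0 / (1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)))

# Fixed-effect pooled logit-sensitivity, back-transformed to a proportion
pooled = sum(w * y for w, y in zip(weights, logits)) / sum(weights)
pooled_sens = 1.0 / (1.0 + math.exp(-pooled))

# Cochran's Q and the I-squared heterogeneity statistic
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, logits))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"pooled sensitivity = {pooled_sens:.3f}, Q = {q:.2f}, I^2 = {i2:.1f}%")
```

A random-effects version would widen the weights by an additive between-study variance; the same logit machinery applies to specificities and likelihood ratios.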

          Conclusion

          Meta-DiSc is a comprehensive and dedicated test accuracy meta-analysis software. It has already been used and cited in several meta-analyses published in high-ranking journals. The software is publicly available at http://www.hrc.es/investigacion/metadisc_en.htm.

Most cited references (18)

          Combining independent studies of a diagnostic test into a summary ROC curve: data-analytic approaches and some additional considerations.

We consider how to combine several independent studies of the same diagnostic test, where each study reports an estimated false positive rate (FPR) and an estimated true positive rate (TPR). We propose constructing a summary receiver operating characteristic (ROC) curve by the following steps. (i) Convert each FPR to its logistic transform U and each TPR to its logistic transform V after increasing each observed frequency by adding 1/2. (ii) For each study calculate D = V - U, which is the log odds ratio of TPR and FPR, and S = V + U, an implied function of test threshold; then plot each study's point (S_i, D_i). (iii) Fit a robust-resistant regression line to these points (or an equally weighted least-squares regression line), with V - U as the dependent variable. (iv) Back-transform the line to ROC space. To avoid model-dependent extrapolation from irrelevant regions of ROC space we propose defining a priori a value of FPR so large that the test simply would not be used at that FPR, and a value of TPR so low that the test would not be used at that TPR. Then (a) only data points lying in the thus defined north-west rectangle of the unit square are used in the data analysis, and (b) the estimated summary ROC is depicted only within that subregion of the unit square. We illustrate the methods using simulated and real data sets, and we point to ways of comparing different tests and of taking into account the effects of covariates.
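The four steps above translate directly into code. This is a minimal sketch with invented 2x2 counts, using the equally weighted least-squares variant of step (iii) rather than a robust-resistant fit:

```python
import math

# Hypothetical per-study 2x2 data: (TP, FN, FP, TN)
studies = [(45, 5, 10, 40), (38, 12, 8, 42), (60, 10, 20, 60)]

# Steps (i)-(ii): logit transforms with the 1/2 correction, then D and S
points = []
for tp, fn, fp, tn in studies:
    tpr = (tp + 0.5) / (tp + fn + 1.0)
    fpr = (fp + 0.5) / (fp + tn + 1.0)
    v = math.log(tpr / (1 - tpr))  # logit TPR
    u = math.log(fpr / (1 - fpr))  # logit FPR
    points.append((v + u, v - u))  # (S_i, D_i)

# Step (iii): equally weighted least-squares fit of D on S
n = len(points)
s_mean = sum(s for s, _ in points) / n
d_mean = sum(d for _, d in points) / n
b = (sum((s - s_mean) * (d - d_mean) for s, d in points)
     / sum((s - s_mean) ** 2 for s, _ in points))
a = d_mean - b * s_mean

# Step (iv): back-transform the line D = a + b*S to ROC space.
# Since D = V - U and S = V + U, solving for V gives
# logit(TPR) = (a + (1 + b) * logit(FPR)) / (1 - b).
def sroc_tpr(fpr):
    u = math.log(fpr / (1 - fpr))
    logit_tpr = (a + (1 + b) * u) / (1 - b)
    return 1.0 / (1.0 + math.exp(-logit_tpr))

print(f"summary TPR at FPR=0.1: {sroc_tpr(0.1):.3f}")
```

Per the abstract's caveat, the resulting curve should only be drawn over the pre-specified FPR/TPR subregion rather than the whole unit square.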

            Explaining heterogeneity in meta-analysis: a comparison of methods.

            Exploring the possible reasons for heterogeneity between studies is an important aspect of conducting a meta-analysis. This paper compares a number of methods which can be used to investigate whether a particular covariate, with a value defined for each study in the meta-analysis, explains any heterogeneity. The main example is from a meta-analysis of randomized trials of serum cholesterol reduction, in which the log-odds ratio for coronary events is related to the average extent of cholesterol reduction achieved in each trial. Different forms of weighted normal errors regression and random effects logistic regression are compared. These analyses quantify the extent to which heterogeneity is explained, as well as the effect of cholesterol reduction on the risk of coronary events. In a second example, the relationship between treatment effect estimates and their precision is examined, in order to assess the evidence for publication bias. We conclude that methods which allow for an additive component of residual heterogeneity should be used. In weighted regression, a restricted maximum likelihood estimator is appropriate, although a number of other estimators are also available. Methods which use the original form of the data explicitly, for example the binomial model for observed proportions rather than assuming normality of the log-odds ratios, are now computationally feasible. Although such methods are preferable in principle, they often give similar results in practice. Copyright 1999 John Wiley & Sons, Ltd.
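The additive component of residual heterogeneity mentioned above can be illustrated with the simpler DerSimonian-Laird moment estimator (rather than the restricted maximum likelihood estimator the abstract recommends). Effect sizes and variances below are invented:

```python
# Hypothetical per-study effect estimates (log-odds ratios) and their variances
effects = [0.8, 1.2, 0.5, 1.0, 0.7]
variances = [0.04, 0.09, 0.06, 0.05, 0.08]

# Fixed-effect inverse-variance weights and pooled estimate
w = [1.0 / v for v in variances]
fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)

# Cochran's Q, then the DerSimonian-Laird moment estimate of the
# additive between-study variance tau^2
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooling: each study's variance gains the additive tau^2 term
w_re = [1.0 / (v + tau2) for v in variances]
pooled_re = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
print(f"tau^2 = {tau2:.4f}, random-effects pooled log-OR = {pooled_re:.3f}")
```

Meta-regression extends this by regressing the effects on a study-level covariate while still allowing the additive tau^2 residual term.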

              Predicting difficult intubation in apparently normal patients: a meta-analysis of bedside screening test performance.

              The objective of this study was to systematically determine the diagnostic accuracy of bedside tests for predicting difficult intubation in patients with no airway pathology. Thirty-five studies (50,760 patients) were selected from electronic databases. The overall incidence of difficult intubation was 5.8% (95% confidence interval, 4.5-7.5%). Screening tests included the Mallampati oropharyngeal classification, thyromental distance, sternomental distance, mouth opening, and Wilson risk score. Each test yielded poor to moderate sensitivity (20-62%) and moderate to fair specificity (82-97%). The most useful bedside test for prediction was found to be a combination of the Mallampati classification and thyromental distance (positive likelihood ratio, 9.9; 95% confidence interval, 3.1-31.9). Currently available screening tests for difficult intubation have only poor to moderate discriminative power when used alone. Combinations of tests add some incremental diagnostic value in comparison to the value of each test alone. The clinical value of bedside screening tests for predicting difficult intubation remains limited.
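The figures in this abstract combine through standard likelihood-ratio arithmetic. The sketch below uses illustrative values from the reported ranges (no single study reported exactly these numbers) to update the 5.8% pre-test incidence to a post-test probability:

```python
# Illustrative screening-test figures within the ranges reported above
sensitivity = 0.40   # poor-to-moderate sensitivity (20-62%)
specificity = 0.90   # moderate-to-fair specificity (82-97%)

# Positive likelihood ratio: how much a positive result multiplies the odds
lr_pos = sensitivity / (1 - specificity)

# Convert the 5.8% pre-test probability (incidence of difficult intubation)
# to odds, apply the likelihood ratio, and convert back to a probability
pre = 0.058
pre_odds = pre / (1 - pre)
post_odds = pre_odds * lr_pos
post = post_odds / (1 + post_odds)
print(f"LR+ = {lr_pos:.1f}, post-test probability = {post:.3f}")
```

Even a combined test with LR+ near 10 lifts a 5.8% pre-test probability only to roughly 38%, which is why the abstract concludes that the clinical value of these bedside tests remains limited.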

                Author and article information

Journal
BMC Med Res Methodol
BMC Medical Research Methodology
BioMed Central (London)
ISSN: 1471-2288
2006
12 July 2006
Volume: 6
Article number: 31
Affiliations
[1] Clinical Biostatistics Unit, Ramón y Cajal Hospital, Ctra. Colmenar km 9.100, 28034 Madrid, Spain
[2] University of Birmingham and Birmingham Women's Hospital, Edgbaston, Birmingham, UK
Article
Article ID: 1471-2288-6-31
DOI: 10.1186/1471-2288-6-31
PMC: 1552081
PMID: 16836745
                Copyright © 2006 Zamora et al; licensee BioMed Central Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

History
Received: 31 March 2006
Accepted: 12 July 2006
                Categories
                Software

Medicine
