
      Outcomes in Cochrane Systematic Reviews Addressing Four Common Eye Conditions: An Evaluation of Completeness and Comparability

      Research article
      Ian J. Saldanha, Kay Dickersin, Xue Wang, Tianjing Li
      PLoS ONE
      Public Library of Science


          Abstract

          Introduction

          Choice of outcomes is critical for clinical trialists and systematic reviewers. It is currently unclear how systematic reviewers choose and pre-specify outcomes for systematic reviews. Our objective was to assess the completeness of pre-specification and comparability of outcomes in all Cochrane reviews addressing four common eye conditions.

          Methods

          We examined protocols for all Cochrane reviews as of June 2013 that addressed glaucoma, cataract, age-related macular degeneration (AMD), and diabetic retinopathy (DR). We assessed completeness and comparability for each outcome that was named in ≥25% of protocols on those topics. We defined a completely-specified outcome as including information about five elements: domain, specific measurement, specific metric, method of aggregation, and time-points. For each domain, we assessed comparability in how individual elements were specified across protocols.
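The five-element completeness assessment described above can be sketched as a small scoring check. This is a hypothetical illustration, not code from the study; the element names follow the paper's framework, but the example outcome and its values are invented:

```python
# The five elements a completely specified outcome must include,
# per the authors' framework: domain, specific measurement, specific
# metric, method of aggregation, and time-points.
ELEMENTS = ("domain", "measurement", "metric", "aggregation", "time_points")

def completeness(outcome: dict) -> int:
    """Count how many of the five elements are specified (non-empty)."""
    return sum(bool(outcome.get(e)) for e in ELEMENTS)

def is_complete(outcome: dict) -> bool:
    """An outcome counts as completely specified only with all five elements."""
    return completeness(outcome) == len(ELEMENTS)

# Illustrative outcome with three of the five elements pre-specified
outcome = {
    "domain": "visual acuity",
    "measurement": "ETDRS chart (logMAR)",
    "metric": "change from baseline",
    "aggregation": None,   # not pre-specified
    "time_points": None,   # not pre-specified
}
```

Scoring each outcome instance this way yields the per-outcome element counts (and medians) reported in the Results below.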

          Results

          We identified 57 protocols addressing glaucoma (22), cataract (16), AMD (15), and DR (4). We assessed completeness and comparability for five outcome domains: quality-of-life, visual acuity, intraocular pressure, disease progression, and contrast sensitivity. Overall, these five outcome domains appeared 145 times (instances). Only 15 of the 145 instances (10.3%) were completely specified with all five elements; the median was three elements per outcome. Primary outcomes were more completely specified than non-primary outcomes (median = four versus two elements). Quality-of-life was the least completely specified domain (median = one element). Because outcome pre-specification was largely incomplete, a conclusive assessment of comparability in outcome usage across the protocols for each condition was not possible.

          Discussion

          Outcome pre-specification was largely incomplete; we encourage systematic reviewers to consider all five elements. Doing so would signal the importance of complete specification to clinical trialists, on whose work systematic reviewers depend, and would indirectly encourage comparable outcome choices among reviewers addressing related research questions. Complete pre-specification could improve efficiency and reduce bias in data abstraction and analysis during a systematic review. Ultimately, more completely specified and comparable outcomes could make systematic reviews more useful to decision-makers.


          Most cited references (15)


          Reporting results of cancer treatment.

          On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.

            Comparison of registered and published primary outcomes in randomized controlled trials.

            As of 2005, the International Committee of Medical Journal Editors required investigators to register their trials prior to participant enrollment as a precondition for publishing the trial's findings in member journals. The objectives were to assess the proportion of registered trials with results recently published in journals with high impact factors; to compare the primary outcomes specified in trial registries with those reported in the published articles; and to determine whether primary outcome reporting bias favored significant outcomes. MEDLINE via PubMed was searched for reports of randomized controlled trials (RCTs) in 3 medical areas (cardiology, rheumatology, and gastroenterology) indexed in 2008 in the 10 general medical journals and specialty journals with the highest impact factors. For each included article, we obtained the trial registration information using a standardized data extraction form. Of the 323 included trials, 147 (45.5%) were adequately registered (i.e., registered before the end of the trial, with the primary outcome clearly specified). Trial registration was lacking for 89 published reports (27.6%), 45 trials (13.9%) were registered after the completion of the study, 39 (12%) were registered with no or an unclear description of the primary outcome, and 3 (0.9%) were registered after the completion of the study and had an unclear description of the primary outcome. Among articles with adequately registered trials, 31% (46 of 147) showed some evidence of discrepancies between the outcomes registered and the outcomes published. The influence of these discrepancies could be assessed in only half of them, and in these, statistically significant results were favored in 82.6% (19 of 23). Comparison of the primary outcomes of RCTs registered with their subsequent publication indicated that selective outcome reporting is prevalent.

              Standardising outcomes for clinical trials and systematic reviews

              Introduction

              Fifteen years ago, what was to become OMERACT met for the first time in The Netherlands to discuss ways in which the multitude of outcomes in assessments of the effects of treatments for rheumatoid arthritis might be standardised. In Trials, Tugwell et al have described the need for, and success of, this initiative [1] and Cooney and colleagues have set out their plans for a corresponding initiative for ulcerative colitis [2]. Why do we need such initiatives? What's the problem? And are these and other initiatives the solution?

              What's the problem?

              Every year, millions of journal articles are added to the tens of millions that already exist in the health literature, and tens of millions of web pages are added to the hundreds of millions currently available. Within these, there are many tens of thousands of research studies which might provide the evidence needed to make well-informed decisions about health care. The task of working through all this material is overwhelming enough, without then finding that the studies of relevance to the decision you wish to make all describe their findings in different ways, making it difficult if not impossible to draw out the relevant information. Of course, you might be able to find a systematic review, but even then there is no guarantee that the authors of that review will not have been faced with an insurmountable task of bringing together and making sense of a variety of studies that used a variety of outcomes and outcome measures. These difficulties are great enough, but the problem gets even worse when one considers the potential for bias. If researchers have measured a particular outcome in a variety of ways (for example, using different pain instruments filled in by different people at different times), they might not report all of their findings from all of these measures. Studies have highlighted this problem in clinical trials, showing that this selectivity in reporting is usually driven by a desire to present the most positive or statistically significant results [3]. This will mean that, where the original researcher had a choice, the reader of the clinical trial report might be presented with an overly optimistic estimate of the effect of an intervention and therefore be led towards the wrong decision.

              In the 1990s, the potential scale of the problem of multiple outcome measures was highlighted in mental health by a comprehensive descriptive account of randomised trials in the treatment of people with schizophrenia. Thornley and Adams identified a total of 2000 such trials, which had assessed more than 600 different interventions. However, these trials had included an even greater number of rating scales for mental health than the number of interventions: 640 [4]. The potential for biased reporting and the challenges of comparing the findings of different trials of different interventions using different ways of measuring illness make the identification of effective, ineffective and unproven treatments for this condition especially difficult [5]. This is true whether the readers of the report of a clinical trial are trying to use it to inform their decisions, or whether they are trying to combine similar trials within a systematic review. Thornley and Adams, who had done the descriptive study of the large number of rating scales in mental health trials, were faced with this very problem in a review of chlorpromazine. They concluded that review with the following implications for research: "if rating scales are to be employed, a concerted effort should be made to agree on which measures are the most useful. Studies within this review reported on so many scales that, even if results had not been poorly reported, they would have been difficult to synthesise in a clinically meaningful way." [6].

              What's the solution?

              If we want to choose the shortest of three routes between two towns, how would we cope if told that one is 10 kilometres and another is 8 miles? Doing that conversion between miles and kilometres might not be too much of a problem, but what if the third route was said to be 32 furlongs? Now, imagine that the measurements had all been taken in different ways. One came from walking the route with a measuring wheel, one from an estimate based on the time taken to ride a horse between the two towns, and one from using a ruler on a map. To make a well-informed choice we would want the distances to be available to us in the same units, measured in the same ways. Making decisions about health care should be no different. We want to compare and contrast research findings on the basis of the same outcomes, measured in the same ways. Achieving this is not straightforward, but it is not impossible. Key steps are to decide on the core outcome measures and, in some cases, the core baseline variables, and for these to then be included in the conduct and reporting of research studies.

              One of the earliest examples is an initiative by the World Health Organisation in the late 1970s, relating to cancer trials. Meetings on the Standardization of Reporting Results of Cancer Treatment took place in Turin (1977) and in Brussels two years later. More than 30 representatives from cooperative groups doing randomised trials in cancer came together, and their discussions led to a WHO Handbook of guidelines on the minimal requirements for data collection in cancer trials [7,8]. OMERACT has also grown by trying to reach a consensus among major stakeholders in the field of rheumatology [1], and the IMMPACT recommendations for chronic pain trials have arisen in a similar way [9]. Other approaches have included the use of literature surveys to identify the variety of outcome measures that have been used and reported, followed by group discussion. This is the case with low back pain [10], colon cancer [11] and an e-Delphi survey in maternity care [12].

              Having developed these lists of outcome measures, researchers need to use them and systematic reviewers need to build their reviews around them. These sets of standardised outcome measures are not meant to stifle the development and use of other outcomes. Rather, they provide a core set of outcome measures, which researchers should use routinely. Researchers wishing to add other outcome measures in the context of their own trial would continue to do so but, when reporting their trial, selective reporting should be avoided through the presentation of the findings for both the core set and all additional outcome measures they collected. Furthermore, the use of the outcome measures in these core sets should not be restricted to research studies. They are also relevant within routine practice. If they are collected within such practice, they would help the provider and the receiver of health care to assess their progress and facilitate their understanding of the relevance to them of the findings of research. Journals such as Trials can help by highlighting initiatives such as those discussed in rheumatology [1] and ulcerative colitis [2]. They should encourage researchers to report their findings for the outcome measures in the core sets, and provide them with the space to do so. This will allow readers and systematic reviewers to make best use of the reported trials.

              Conclusion

              When there are differences among the results of similar clinical trials, the fundamental issues of interest to people making decisions about health care are likely to concern the interventions that were tested, the types of patient in the study, or both; not the different outcome measures used. The latter is important, but if one remembers that the studies were probably not done to assess differences between the various ways of measuring outcomes, but, rather, differences between the interventions, the benefits of consistency become obvious. Achieving consistency is not something that can be left to serendipity. It will require consensus, guidelines and adherence. The papers in Trials and others mentioned in this commentary show how this might happen.

              Competing interests

              I am the author of one of the papers on a core set of outcomes for healthcare research, which is cited in this paper.

                Author and article information

                Contributors
                Role: Editor
                Journal: PLoS ONE
                Publisher: Public Library of Science (San Francisco, USA)
                ISSN: 1932-6203
                Published: 16 October 2014
                Volume 9, Issue 10: e109400
                Affiliations
                [1] Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, United States of America
                University of Newcastle, Australia
                Author notes

                Competing Interests: All authors are affiliated with the US Satellite of the Cochrane Eyes and Vision Group (the group responsible for producing the Cochrane Reviews evaluated as part of this work): Drs. Kay Dickersin and Tianjing Li are Faculty members; Dr. Xue Wang is a Methodologist; and Dr. Ian Saldanha is a Research Assistant. Drs. Kay Dickersin, Tianjing Li, and Xue Wang have authored several Cochrane systematic reviews that are assessed as part of this study. This does not alter the authors' adherence to PLOS ONE policies on sharing data and materials.

                Conceived and designed the experiments: IJS KD XW TL. Performed the experiments: IJS KD XW TL. Analyzed the data: IJS KD XW TL. Contributed reagents/materials/analysis tools: IJS KD XW TL. Wrote the paper: IJS KD XW TL.

                Article
                PONE-D-14-16168
                DOI: 10.1371/journal.pone.0109400
                PMCID: 4199623
                PMID: 25329377
                Copyright © 2014

                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
                Received: 10 April 2014
                Accepted: 26 August 2014
                Page count
                Pages: 10
                Funding
                This project was funded by the National Eye Institute, grant number 1U01EY020522 (http://www.nei.nih.gov/). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
                Categories
                Research Article
                Medicine and Health Sciences
                Epidemiology
                Clinical Epidemiology
                Ophthalmology
                Research and Analysis Methods
                Research Assessment
                Systematic Reviews
                Custom metadata
                The authors confirm that all data underlying the findings are fully available without restriction. All relevant data are available at http://dx.doi.org/10.6084/m9.figshare.1157860.
