
      Comparison of conference abstracts and presentations with full-text articles in the health technology assessments of rapidly evolving technologies.

      Health technology assessment (Winchester, England)
      Keywords: Adult; Antibodies, Monoclonal; Arthritis, Rheumatoid; Congresses as Topic; Drug Delivery Systems; Evaluation Studies as Topic; Great Britain; Humans; Immunoglobulin G; Information Dissemination; Publication Bias; Receptors, Tumor Necrosis Factor; Stents; Technology Assessment, Biomedical


          Abstract

          To assess the extent of use of data from conference abstracts and presentations in health technology assessments (HTAs) provided as part of the National Institute for Health and Clinical Excellence (NICE) appraisal process; also to assess the methodological quality of trials reported in conference abstracts and presentations, the consistency of reporting of major outcomes between these sources and subsequent full-length publications, the effect of including or excluding data from these sources on the meta-analysis pooled effect estimates, and the timeliness of availability of data from these sources and from full articles in relation to the development of technology assessment reviews (TARs).

          A survey of seven TAR groups; an audit of published TARs covering all NICE TARs published between January 2000 and October 2004; and case studies of selected TARs. Results of the survey and audit were presented as a descriptive summary and in tabular format. Sensitivity analyses compared the effect of including abstract and presentation data on the meta-analysis pooled effect estimates by pooling data from both abstracts/presentations and full papers, and data from full publications only, as included in the original TAR. These analyses were then compared with meta-analyses of data from trials that had subsequently been published in full.

          All seven TAR groups completed and returned the survey. Five of the seven groups reported a general policy of searching for and including studies available as conference abstracts/presentations. Five groups responded that, if they included data from these sources, they would assess the methodological quality of such studies using the same assessment tools as for full publications and manage the data in the same way as fully published reports. All groups reported that if relevant outcome data were reported in both an abstract/presentation and a full publication, they would consider only the data in the full publication. Conversely, if data were available only in a conference abstract/presentation, all but two groups reported that they would extract and use the data from the abstract/presentation.

          In total, 63 HTA reports for NICE were identified. Twenty of the 63 TARs (32%) made explicit statements regarding the inclusion and assessment of data from abstracts/presentations. Thirty-eight (60%) identified at least one randomised controlled trial (RCT) available as a conference abstract or presentation; of these, 26 (68%) included trials available as abstracts/presentations. About 80% (20/26) of the TARs that included RCTs in abstract/presentation form assessed the methodological quality of such trials; in 16 TARs full reports were used for quality assessment where both abstracts/presentations and subsequent full publications were available. Twenty-three of the 63 TARs (37%) carried out a quantitative analysis of results. Of these, ten (43%) included trials available as abstracts/presentations in the review, but only 60% (6/10) included data from abstracts/presentations in the analysis of results. Thirteen TARs evaluated rapidly evolving technologies, and only three of these identified and included trial data from conference abstracts/presentations and carried out a quantitative analysis in which abstract/presentation data were used. These three TARs were used as case studies.

          In all three case studies the overall quality of reporting in abstracts/presentations was generally poor. In all case studies, abstracts and presentations failed to describe the method of randomisation or allocation concealment. Overall, blinding was not mentioned in 66% (25/38) of the abstracts and 26% (7/27) of the presentations included in the case studies, and only one presentation (4%) explicitly stated use of intention-to-treat analysis. One case study demonstrated discrepancies in the data made available in abstracts or online conference presentations; discrepancies were evident not only between these sources but also between conference abstracts/presentations and the subsequently published full-length articles. Sensitivity analyses based on one case study indicated a change in the significance of effect for two outcome measures when only full papers published to date were included.

          There are variations in policy and practice across TAR groups regarding searching for and including studies available as conference abstracts/presentations, and in the level of detail reported in TARs about how abstracts/presentations were used. TAR teams should therefore be encouraged to state explicitly their search strategies for identifying conference abstracts and presentations, their methods for assessing these sources for inclusion and, where appropriate, how the data were used and their effect on the results. Comprehensive searching for trials available as conference abstracts/presentations is time consuming and may be of questionable value; however, there may be a case for searching for and including abstract/presentation data if, for example, other sources of data are limited. If conference abstracts/presentations are to be included, TAR teams need to allocate additional time for searching and managing data from these sources. Incomplete reporting in conference abstracts and presentations limits reviewers' ability to assess the methodological quality of trials with confidence; where such sources are considered for inclusion, TAR teams should increase their efforts to obtain further study details by contacting trialists. Where abstract/presentation data are included, reviewers should discuss the effect of including data from these sources, highlight any data discrepancies identified across sources and discuss their impact in the review, and carry out, for example, a sensitivity analysis with and without abstract/presentation data in the analysis. Research is needed into the development of search strategies specific to the identification of studies available as conference abstracts and presentations in TARs; such strategies may include guidance on identifying relevant electronic databases and conference sites appropriate to particular clinical areas. As only a limited number of case studies were included in this report, the analyses should be repeated as more TARs accrue, or extended to include the work of other international HTA groups.
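          The sensitivity analyses described above compare pooled effect estimates with and without abstract/presentation data. The sketch below illustrates one way such a comparison might be set up, assuming a fixed-effect inverse-variance meta-analysis of log odds ratios; the trial values are hypothetical and illustrative only, and are not drawn from the report.

          import math

          # Minimal sketch of a sensitivity analysis on pooled effect estimates,
          # assuming a fixed-effect inverse-variance meta-analysis.
          # All trial values below are hypothetical and purely illustrative.

          def pooled_estimate(trials):
              """Inverse-variance pooled log effect estimate with 95% CI."""
              weights = [1.0 / se**2 for _, se in trials]
              pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
              se_pooled = math.sqrt(1.0 / sum(weights))
              return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

          # (log odds ratio, standard error) pairs -- hypothetical trials
          full_publications = [(-0.45, 0.20), (-0.30, 0.25)]
          abstract_only     = [(-0.10, 0.35)]   # data reported only in a conference abstract

          with_abstracts = pooled_estimate(full_publications + abstract_only)
          full_only      = pooled_estimate(full_publications)

          print("Full publications + abstracts:", with_abstracts)
          print("Full publications only:       ", full_only)

          Comparing the two pooled estimates and their confidence intervals in this way shows whether including abstract-derived data changes the magnitude or statistical significance of the effect, which is the kind of shift the report observed for two outcome measures in one case study.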
