Objectives: To survey the frequency of use of indirect comparisons in systematic reviews and to evaluate the methods used in their analysis and interpretation; also to identify alternative statistical approaches for the analysis of indirect comparisons, to assess the properties of different statistical methods used for performing indirect comparisons, and to compare direct and indirect estimates of the same effects within reviews.

Data sources: Electronic databases.

Review methods: The Database of Abstracts of Reviews of Effects (DARE) was searched for systematic reviews involving meta-analysis of randomised controlled trials (RCTs) that reported both direct and indirect comparisons, or indirect comparisons alone. A systematic review of MEDLINE and other databases was carried out to identify published methods for analysing indirect comparisons. Study designs were created using data from the International Stroke Trial. Random samples of patients receiving aspirin, heparin or placebo in 16 centres were used to create meta-analyses, with half of the trials comparing aspirin with placebo and half comparing heparin with placebo. Methods for indirect comparisons were used to estimate the contrast between aspirin and heparin. The whole process was repeated 1000 times, and the results were compared with the corresponding direct comparisons and with theoretical results. Further detailed case studies comparing the results of direct and indirect comparisons of the same effects were undertaken.

Results: Of the reviews identified through DARE, 31/327 (9.5%) included indirect comparisons. A further five reviews including indirect comparisons were identified through electronic searching. Few reviews carried out a formal analysis, and some based their analysis on the naive addition of data from the treatment arms of interest. Few methodological papers were identified.
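The contrast estimated in the simulation study — aspirin versus heparin via their shared placebo control — can be sketched with the adjusted indirect comparison. The following is a minimal illustration only; the trial counts are invented for the example and are not figures from the International Stroke Trial.

```python
import math

def log_or(events_t, n_t, events_c, n_c):
    """Log odds ratio and its variance for one two-arm trial (Woolf method)."""
    a, b = events_t, n_t - events_t
    c, d = events_c, n_c - events_c
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def pool_fixed(estimates):
    """Inverse-variance fixed-effect pooling of (log OR, variance) pairs."""
    weights = [1 / v for _, v in estimates]
    pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    return pooled, 1 / sum(weights)

# Hypothetical (events, total) data -- illustrative numbers only.
aspirin_vs_placebo = [log_or(40, 200, 55, 200), log_or(33, 150, 41, 150)]
heparin_vs_placebo = [log_or(45, 200, 52, 200), log_or(36, 150, 40, 150)]

lor_ap, var_ap = pool_fixed(aspirin_vs_placebo)
lor_hp, var_hp = pool_fixed(heparin_vs_placebo)

# Adjusted indirect comparison: the aspirin-vs-heparin contrast is the
# difference of the two placebo-controlled estimates, and its variance is
# the SUM of their variances (hence the loss of precision noted below).
lor_ah = lor_ap - lor_hp
se_ah = math.sqrt(var_ap + var_hp)
print(f"indirect log OR (aspirin vs heparin) = {lor_ah:.3f}, SE = {se_ah:.3f}")
```

Because the variances add, the indirect estimate is always less precise than either of the placebo-controlled estimates it is built from.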
Some valid approaches for aggregate data that could be applied using standard software were found: the adjusted indirect comparison, meta-regression and, for binary data only, multiple logistic regression (fixed-effect models only). Simulation studies showed that the naive method is liable to bias and also produces over-precise answers. Several methods give correct answers provided that strong but unverifiable assumptions are fulfilled. Four times as many similarly sized trials are needed for the indirect approach to have the same power as directly randomised comparisons. Detailed case studies comparing direct and indirect comparisons of the same effect showed considerable statistical discrepancies, and the direction of such discrepancies is unpredictable.

Conclusions: Direct evidence from good-quality RCTs should be used wherever possible. Without such evidence, it may be necessary to look for indirect comparisons from RCTs; however, the results may be susceptible to bias. When making indirect comparisons within a systematic review, the adjusted indirect comparison method should ideally be used, employing a random-effects model. If both direct and indirect comparisons are possible within a review, it is recommended that these be analysed separately before considering whether to pool the data. There is a need to evaluate methods for the analysis of indirect comparisons of continuous data, and for empirical research into how different methods of indirect comparison perform when there is a large treatment effect. Further study is needed of when it is appropriate to use indirect comparisons and when to combine direct and indirect comparisons. Research into how evidence from indirect comparisons compares with that from non-randomised studies may also be warranted.
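The fourfold sample-size result can be motivated by simple variance arithmetic. The sketch below assumes, for illustration only, that every similarly sized trial contributes the same variance v to its treatment contrast under fixed-effect pooling; this is a simplification, not the report's exact derivation.

```python
# Variance argument behind the fourfold sample-size result (illustrative).
v = 1.0   # assumed per-trial variance of the effect estimate (arbitrary units)
k = 10    # number of trials in the direct meta-analysis

var_direct = v / k  # fixed-effect pooling of k directly randomised trials

def var_indirect(trials_per_side):
    # Adjusted indirect comparison: difference of two pooled placebo-controlled
    # estimates, so the two pooled variances ADD.
    return v / trials_per_side + v / trials_per_side

# With the same total number of trials (k/2 per side), precision is 4x worse:
assert var_indirect(k // 2) == 4 * var_direct
# To match the direct precision, each side needs 2k trials -- 4k in total:
assert var_indirect(2 * k) == var_direct
print("indirect needs", 4 * k, "trials to match a direct meta-analysis of", k)
```

In words: splitting the trials across two placebo-controlled comparisons doubles each pooled variance, and differencing the two estimates doubles it again, giving the factor of four.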
Investigations using individual patient data from a meta-analysis of several RCTs with different protocols, and an evaluation of the impact of choosing different binary effect measures for the inverse-variance method, would also be useful.
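Why the choice of binary effect measure matters for the inverse-variance method can be seen from a single 2x2 table: the same trial receives very different weights depending on the scale used. The table below is a hypothetical example constructed for illustration.

```python
import math

# Hypothetical 2x2 table (events, total) for one trial -- illustrative only.
e_t, n_t = 30, 100   # treatment arm
e_c, n_c = 45, 100   # control arm
p_t, p_c = e_t / n_t, e_c / n_c

# Log odds ratio and its (Woolf) variance.
lor = math.log((e_t * (n_c - e_c)) / ((n_t - e_t) * e_c))
var_lor = 1/e_t + 1/(n_t - e_t) + 1/e_c + 1/(n_c - e_c)

# Risk difference and its variance.
rd = p_t - p_c
var_rd = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c

# Inverse-variance weights on the two scales are not proportional across
# trials, so pooled (and hence indirect) results can depend on the measure.
print(f"log OR {lor:.3f} (weight {1/var_lor:.1f}), "
      f"RD {rd:.3f} (weight {1/var_rd:.1f})")
```

Because the weights scale differently with event rates and arm sizes, trials that dominate a meta-analysis on one scale need not dominate it on another, which is why the effect-measure choice merits the evaluation proposed above.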