
      Action to protect the independence and integrity of global health research

      editorial


          Abstract

Introduction

In a recent Viewpoint in the Lancet, some of us shared our experience of censorship in donor-funded evaluation research and warned about a potential trend in which donors and their implementing partners use ethical and methodological arguments to undermine research.1 Reactions to the Viewpoint—and lively debate at the 2018 Global Symposium on Health Systems Research—suggest that similar experiences are common in implementation and policy research commissioned by international donors to study and evaluate large-scale, donor-funded health interventions and programmes, which are primarily implemented in low-resource settings. ‘We all have the same stories’ was one of the first comments on the Viewpoint, followed by many private messages divulging instances of personal and institutional pressure, intimidation and censorship following attempts to disseminate unwanted findings. Such pressure comes from major donors and from international non-governmental organisations (NGOs) obliged to have an external assessment but who then maintain a high degree of confidentiality and control.

That such experiences are widespread reflects the deeply political nature of the field of ‘global health’ and the interconnections between priority setting, policy making and project implementation, which sit within a broader set of deeply entrenched power structures.2 3 Researchers in this field routinely find themselves working within—and studying—complex power relations and so experience challenges in negotiating their own position between the interests of commissioning agencies and funders, implementers and country governments, as well as those of their own research institutions and their partnerships with other researchers spanning high-income, middle-income and low-income countries.4–7 They often receive research funding from major donor agencies like the UK Department for International Development (DFID), the US Agency for International Development (USAID), the Agence Française de Développement (AFD), UNITAID and the Bill and Melinda Gates Foundation,8 who commission evaluations for their own funded projects, even though they have a stake in results that demonstrate the success of a multibillion-dollar investment.

Effects of interference in the research and evaluation process are compounded by more subtle acts of self-censorship and data embellishment that can arise as researchers become embroiled in what was recently called the global health ‘success cartel’.9 Their involvement in a collective drive to demonstrate success can unintentionally ‘instil a fear of failure, stifle risk-taking and innovation, and lead to the fabrication of achievement’.9 For example, research that threatens the position of powerful elites—such as research into high-level corruption—is lacking.10 Meanwhile, selective reporting of ‘unwelcome’ findings can be a way to avoid contractual terminations even though it undermines learning.1 11 12 Moreover, perverse incentives exist across the global health and development sectors to use simplistic indicators of success and bad or fudged data.13–15 Donor agencies exacerbate the problem by distorting research findings to exaggerate their own successes.16–19

Researchers are responsible for conducting research ethically and with integrity. Yet, without strong and reliable institutional support, they are often in a vulnerable position when faced with vested interests. What action is needed to avoid undermining independent and critical research findings?
What kind of institutional structures and practices might support researchers in dealing with the ethical and political dilemmas associated with the dissemination of (potentially) contested research findings and evaluation results? To start a discussion on ways forward, we invited input from an international network of global health, health systems and policy researchers from diverse disciplines. Below, we discuss suggestions, endorsed by more than 200 researchers based in 40 different countries (see the full list of signatories below), on how the organisations that commission, undertake and publish research and evaluations can safeguard independence and integrity.

Commissioning bodies

In the first instance, those commissioning external research must enable conditions for independence. Commissioning agencies should be transparent about the purpose and principles of external evaluation and research to their implementing partners and should commit to upholding the principles of good research: ethical, methodologically sound and responsive to population needs. They should specify in the grant contract to researchers that they can review and provide input but will not interfere in the design, data collection, analysis or dissemination of any findings and that they fully commit to making all findings publicly available, whatever their content, including through academic (peer-reviewed) publication. Contractual clauses that limit the dissemination of potentially critical findings—such as DFID’s new standard terms and conditions for service contracts (including evaluations), which prevent researchers from embarrassing DFID or bringing it into disrepute20—should be deleted, since these terms jeopardise the independence of evaluation and research.

For each study, an independent research oversight committee should be established. The committee should include a broad range of stakeholders to avoid institutional bias and linkages with key funders, as well as fairly selected representatives from the communities that are being studied or civil society organisations who can assess the potential benefits and risks generated by the research. A key mandate of oversight committees would be to identify potential conflicts of interest and develop guidelines on rules of engagement between the commissioners and researchers. Such committees should be in a position to intervene or arbitrate if conflict arises, such as if the commissioner or implementing partners pressure, harass or threaten researchers, or if implementing partners feel that the researchers have misrepresented, traduced or misunderstood their work.

To prevent undue influence, donor agencies who commission research and evaluations should develop strong accountability measures between their operational departments and their research and evaluation departments. For example, it is well known in clinical medicine that pharmaceutical industry-funded trials are more likely to produce positive, flattering results than are independently funded trials.21–25 It is time to debate this important issue in global health too and to ask whether donor agencies should issue tenders for, commission and oversee evaluation and research involving their own programmes or whether it would be better for an arm’s length body to do so.

To increase transparency and reduce selective reporting of findings, we recommend establishing a global health evaluation registry, similar to existing clinical trial registries.12 26
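Such a registry entry could, for illustration, capture who commissioned and funded an evaluation, who carried it out, the pre-specified questions, and whether the findings were eventually made public. The sketch below is a minimal, hypothetical record format; the class name, field names and identifier scheme are assumptions for illustration, not part of any existing registry.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class EvaluationRegistryEntry:
    """Hypothetical record for a global health evaluation registry.

    Mirrors the spirit of clinical trial registries: commitments are
    recorded up front so selective reporting can be detected later.
    """
    registry_id: str                    # e.g. "GHER-2019-0001" (made-up scheme)
    title: str
    commissioner: str                   # donor agency commissioning the evaluation
    implementing_partner: str           # organisation delivering the programme
    evaluation_team: List[str]          # institutions conducting the evaluation
    countries: List[str]
    prespecified_questions: List[str]   # questions registered before data collection
    registration_date: date
    planned_completion: date
    contract_allows_publication: bool   # does the contract guarantee public release?
    results_public_url: Optional[str] = None        # filled in once findings are released
    deviations_from_protocol: List[str] = field(default_factory=list)

    def is_overdue(self, today: date) -> bool:
        """Flag entries whose findings remain unpublished after the planned completion date."""
        return self.results_public_url is None and today > self.planned_completion
```

A registry built on records like this would let oversight committees, journals or the public compare what was promised at commissioning with what was eventually published.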
Researchers and research institutions

Today, universities and research organisations across the world depend heavily on external funding from government departments, private foundations and industry.2 Therefore, they have an important responsibility to prevent conflicts of interest in research contracts. While better core funding would strengthen research institutions’ power over their own research priorities, they must also seek new ways to protect themselves from interference from external funders. Senior leadership in academia has a responsibility to discuss and develop terms of research with both funders and implementers. They should scrutinise all grants carefully and refuse those that have unfavourable contractual provisions (eg, those that limit researchers from disseminating potentially critical findings). Senior leadership should also create a supportive, collegial environment for all research staff facing attempts at censorship, including providing legal support when necessary and, ideally, referral to a cross-institutional or national ombudsperson who can serve as a reference point for particular research areas or disciplines. They should extend support to individuals subcontracted to conduct research on behalf of institutions, who may be in especially vulnerable positions.

In addition, senior leadership should encourage methodological and disciplinary diversity to capture complexity and value the dissemination of both positive and negative research findings. Senior research staff being prepared to disseminate controversial and politically contentious analyses can pave the way for more junior researchers to do the same. Research ethics and integrity issues should be part of research training programmes. Research institutions can also provide researchers with access to mentors external to their research group, particularly for junior staff with soft funding. Unions can play an important role if institutional leadership fails.

Ethics and research governance committees

Ethics committees play a crucial role in ensuring the independence and integrity of research. Researchers seek approval from ethics committees, usually both at their research institutions and in the countries in which research is undertaken. Such committees have a remit to safeguard the ethical conduct of research and protect the rights and welfare of research subjects, and primarily draw on biomedical research paradigms to do so. Although research ethics committees do often consider the safety risks posed to individual researchers (injury and incarceration), they do not typically consider concerns about protecting researchers from interference and threats to their credibility. Therefore, they could play a fuller role in helping researchers navigate related unforeseen ethical dilemmas that arise in the course of research. For example, they can provide guidance on whether to extend protections intended for individual research subjects (eg, to ‘do no harm’) to organisations.
They could also advise researchers on how to balance their ethical obligations to research participants and their obligations to wider society in the face of pressure from vested interests.27 Ethics committees should have representation from different research fields, with members who are trained in the epistemological and methodological bases of different disciplines.28 In addition, procedures for ensuring the independence of ethics committees are vital to prevent a situation in which members’ close ties to senior management, funders, ruling parties, governments or commercial interests lead them to use regulatory frameworks to shut down ‘unfavourable’ or disruptive research.

University research governance offices, where they exist, can complement ethics committees by protecting the rights and welfare of researchers, especially where research challenges powerful agendas. They can offer advice and arbitration assistance to researchers on conflicts of interest arising in relation to external research funding. They can monitor for instances of unethical practice to enable research institutions to push back if powerful external actors manipulate research ethics regulations to constrain the research process, as some British universities have done in the past.29 Research institutions should develop clear value statements and commit to implementing them through their ethics and governance protocols.

Academic journals and editors

The current practice is that academic journals ask or expect authors to declare any conflicts of interest relating to a publication. Journals ought to challenge these statements rather than simply publish what authors declare in cases where it is obvious there is a gross conflict. Additional conflict of interest statements should be required from any co-authors who are part of the funding organisation. This can prevent funders from putting pressure on researchers to be included as co-authors of papers emanating from the research and from using this role to influence how the results are reported. Academic journal editors have considerable potential to contribute towards dismantling the ‘success cartel’ within global health, for example, by publishing negative findings and encouraging papers that explain the ‘hows and whys’ of both positive and negative findings.30 This includes process evaluations and in-depth political and social analyses of global health policy and practice, especially when these challenge the status quo.

Editors of academic journals that publish global health research and evaluations should create procedures to select diverse peer reviewers without a vested interest and support them to rigorously question manuscripts that present uncritical and unexplained success stories. Editors should ensure diversity among peer reviewers and moderate dialogue between authors and peer reviewers where, for example, junior authors can challenge unduly hostile or politically motivated reviews by senior academics. They should ideally invite commentaries and responses from donor agencies, NGOs, civil society members, policy makers and researchers from the countries in which research and evaluations have been commissioned.

Conclusion

The tensions between research ethics and the wider politics of the global health field are increasingly recognised. However, the repercussions of these tensions for individuals and research institutions need careful consideration.
While ‘rocking the boat’ is uncomfortable and may threaten individual career progression and research institutions’ external income, biased evidence can harm health programme beneficiaries and public trust in research. There are certainly no simple, fail-safe, technocratic quick fixes for resolving issues of power and politics, but the ideas proposed here should at least create better relationships between the institutions involved in commissioning, undertaking and publishing research, and feed into more sophisticated and thoughtful mechanisms of accountability, which do not simply reinforce existing frameworks that favour accountability towards donors. The ideas we propose should be considered within broader discussions on how to address north–south power imbalances within the research community, and will hopefully catalyse wider action on protecting the independence of public universities and other research institutions globally. We believe this is necessary to enable researchers to hold power to account and advance informed and healthy debate on issues of public interest.

Most cited references

          Pharmaceutical industry sponsorship and research outcome and quality: systematic review.

To investigate whether funding of drug studies by the pharmaceutical industry is associated with outcomes that are favourable to the funder and whether the methods of trials funded by pharmaceutical companies differ from the methods in trials with other sources of support. Medline (January 1966 to December 2002) and Embase (January 1980 to December 2002) searches were supplemented with material identified in the references and in the authors' personal files. Data were independently abstracted by three of the authors and disagreements were resolved by consensus. 30 studies were included. Research funded by drug companies was less likely to be published than research funded by other sources. Studies sponsored by pharmaceutical companies were more likely to have outcomes favouring the sponsor than were studies with other sponsors (odds ratio 4.05; 95% confidence interval 2.98 to 5.51; 18 comparisons). None of the 13 studies that analysed methods reported that studies funded by industry were of poorer quality. Systematic bias favours products which are made by the company funding the research. Explanations include the selection of an inappropriate comparator to the product being investigated and publication bias.
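As a side note on how such estimates fit together: a reported odds ratio with its 95% confidence interval is symmetric on the log scale, so the standard error (and an approximate z-statistic) can be recovered from the interval alone. A minimal sketch using the figures quoted above; this is plain arithmetic, not code from the cited review.

```python
import math

# Figures quoted in the abstract above: OR 4.05, 95% CI 2.98 to 5.51
odds_ratio, ci_low, ci_high = 4.05, 2.98, 5.51

# On the log scale the CI is approximately log(OR) +/- 1.96 * SE
log_or = math.log(odds_ratio)
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
z = log_or / se

print(f"log(OR) = {log_or:.3f}, SE ~= {se:.3f}, z ~= {z:.1f}")
# -> log(OR) = 1.399, SE ~= 0.157, z ~= 8.9  (strongly favouring sponsors' products)
```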

            Factors Associated with Findings of Published Trials of Drug–Drug Comparisons: Why Some Statins Appear More Efficacious than Others

            Introduction Bias is the combination of various study design, data analysis, and presentation factors that make the results differ systematically from the truth [1]. Various factors can lead to bias in randomized controlled trials (RCTs) of drug efficacy, including framing of the research question, design and conduct of the study, and analysis of the data [2,3]. Whether the results are reported in full, or whether there is selective reporting of outcomes can also contribute to biased results and conclusions [4–7]. Most studies that have attempted to identify factors that may be associated with bias have examined individual design features, including randomization [8,9], concealment of allocation [10], double blinding [9,10], sample size [11–14], choice of drug comparator [15–18], and choice of statistical analysis [19,20]. In this study, we examine the relative contributions of different factors to favorable results and conclusions. One factor that has been associated with possible bias is funding source for a study. Trials supported by pharmaceutical companies are more likely than those with non-industry sponsors to report results and conclusions that are favorable towards the sponsor's product compared to placebo [14,15,21–28]. However, few studies examining the association of funding source and outcome adjust for potential confounders, such as study design characteristics or type of intervention. For example, study characteristics such as randomization sequence generation, concealment of allocation, blinding, sample size, or choice of drug comparator might also contribute to statistically significant results [28,29]. Although the association of pharmaceutical industry sponsorship with results and conclusions that favor the sponsor's product over placebo is clear [22,23], the potential influence of funding source when the funder manufactures one of two competing drugs undergoing comparison has not been well described. Although Heres et al. found that results of head-to-head comparisons of second-generation antipsychotics have contradictory conclusions depending on which company sponsored the study, they did not analyze the potential effects of other study design characteristics [30]. Statins are an interesting class of drug for investigating the influence of funding source on outcomes of head-to-head drug comparisons because a number of statins are manufactured by competing companies. Statins are widely prescribed as effective first-line agents for lowering cholesterol and other lipids. At the time of this study, seven statin drugs were marketed in the United States by competing companies, although one of these drugs has since been withdrawn. Alternative classes of drugs were also available to treat the same condition. As strong evidence suggests that statins are more effective than placebo in reducing lipids [31], drug–drug comparison trials involving statins are the most relevant for making policy decisions about choosing a statin. Choice of a statin should depend on data from statin–statin comparisons of low-density lipoprotein reduction at comparable doses, ability to achieve cholesterol-reduction goals, and effects on a variety of other outcomes such as death, coronary events, or stroke [32]. For example, formulary committees use head-to-head drug comparisons to decide which of the statins will be placed on their formulary. Therefore, it is important to explore possible biases in statin–statin comparisons. 
This cross-sectional study examines associations between research funding source, study design characteristics aimed at reducing bias, and other factors for which results and conclusions have been published in RCTs of statin–drug comparisons. We hypothesized that the results and conclusions of trials are more likely to favor the statin made by the sponsor of the study and that other design features, such as concealment of allocation, blinding, and sample size are also associated with statistically significant results that favor the statin produced by the study's sponsor. Methods Search Strategy We electronically searched PubMed to identify reports of RCTs published between January 1999 and May 2005. The following MeSH terms or Substance Names of the seven available statins were used: “simvastatin” OR “cerivastatin” OR “pravastatin” OR “atorvastatin” OR “fluvastatin” OR “rosuvastatin” OR “lovastatin”. The search was limited to “randomized controlled trials” and “humans”. We restricted our search to these years because journals strengthened their policies requiring disclosure of funding sources and financial ties of authors during this period [33]. We also searched the reference lists of all potentially relevant articles identified through the PubMed search. Our search included articles published in any language. Inclusion and Exclusion Criteria We reviewed abstracts of all citations and retrieved articles based on the following inclusion criteria: (1) RCT; (2) statin drug compared to a different statin drug or another, non-statin, drug; (3) efficacy measured in humans; and (4) original research, defined as studies that appeared to present original data and did not specifically state that they were reviews. Studies with the primary objective of assessing the effect of a combination of a statin and another drug were included if there was a comparison of the statin alone with the other drug. If a placebo arm was also included in the trial, we included only the data from the statin–drug comparison. The following exclusion criteria were used to screen all abstracts: (1) pharmacokinetic or pharmacodynamic studies, since they do not involve testing of clinical efficacy outcomes; (2) studies including only rationale and design elements, editorials, letters to the editor, commentaries, abstracts, unpublished reports, reviews; (3) studies comparing different doses of one type of statin; (4) studies comparing statins to placebo only; (5) studies comparing statins to a non-drug intervention (e.g., diet, exercise); (6) studies in which the statin was present in all the comparison groups; (7) absence of statistical comparison or lack of sufficient data; and (8) in vitro analyses. Any discrepancies about inclusion were discussed by the authors of the present paper until consensus was achieved. No identical publications were identified. However, as we were interested in the published reports from trials, we did include multiple publications from the same study if the publications reported different outcomes. Data Extraction One investigator (F. Oostvogel), who was not blinded to author names and affiliations, funding sources, and financial disclosure, extracted all data from each article. A second coder (L. Bero), who was blinded to funding source and financial tie information, independently extracted data on concealment of allocation, selection bias, blinding, sample size, results for primary outcomes, and author conclusions. Inter-coder reliability was very good (weighted kappa 0.80 to 0.97). 
In cases of disagreement, the two coders discussed the papers and reached agreement. We extracted data on the following publication characteristics, which have been shown to be independently related to favorable results or conclusions of drug studies [8–10,13,22,27,34,35]. Journal Characteristics Peer-review status. Each article was classified as peer reviewed, non-peer reviewed, or unknown, based on information found on the website of the journal where the article was published. A publication was considered peer reviewed if the website mentioned that the journal had a peer-review process or if it was stated that the manuscripts were evaluated by at least one external expert in the field; otherwise, a publication was considered non-peer reviewed. Peer-review status was classified as unknown if we could find no information on the journal. Impact factor. Impact factor was obtained from the Institute for Scientific Information, 2004 data [36]. Author Characteristics Institutional affiliation. The institutional affiliation of the corresponding author was obtained from the article and classified into (1) academic/university, (2) government, (3) private nonprofit, (4) industry, (5) hospital, (6) other, or (7) unable to determine. Country of origin. The country of origin of the corresponding author was recorded and categorized into low income, lower-middle income, upper-middle income, and high income economies based on the World Bank Group classifications [37]. Study Design Characteristics Study design. The study design for each article was classified as parallel or cross-over trial. Specific drugs being compared were recorded. Comparison group. The comparisons for the primary outcome were classified as (1) statin versus statin or (2) statin versus other drug. In statin-versus-statin comparisons, the “test” drug was defined as the newest statin (most recent FDA approval date) and the older statin as the “comparator” drug. In statin-versus-other-drug comparisons, the “test” drug was defined as the statin and the other drug as the “comparator” drug. Type of primary outcome measure. The primary outcome measured was classified as (1) surrogate if the end point was a marker for a clinical event (e.g., lipid levels, artery diameter, endothelial function) or (2) clinical if a real clinical event (e.g., stroke, myocardial infarction, death) was measured. Sample size. We recorded the number of patients that were included in the analyses. Primary results. For each published paper, the result reported for each primary outcome was categorized as (1) favorable if the result was statistically significant (p < 0.05 or confidence interval [CI] excluding no difference) and in the direction of the test drug being more efficacious or less harmful (in the case of side effects); (2) inconclusive if the result did not reach statistical significance; or (3) unfavorable if the result was statistically significant in the direction of the comparator drug being more efficacious or less harmful. If a study explicitly stated that it was designed as a non-inferiority study and the two comparisons drugs were equivalent, the result was coded as favorable. The entire set of results for all primary outcomes in each paper was then classified as favorable if at least one primary outcome was favorable and none were unfavorable; otherwise, the entire set was classified as unfavorable. Conclusion. 
The conclusions reported in the published papers were categorized as (1) favorable if the test drug was preferred to comparator; (2) about equal if the test drug was about equal to comparator; or (3) not favorable if the comparator drug was preferred to the test drug. If a study explicitly stated that it was designed as a non-inferiority study and the two drugs being compared were equivalent, the conclusion was coded as favorable. If an article did not clearly state that one of the two drugs was better or if the two drugs had different advantages, the conclusion was coded as “test drug about equal to comparator”. For analysis, conclusions were categorized as favorable or not favorable (combining about equal and not favorable). Funding Information Funding source. The funding source of each published study was categorized as (1) industry, (2) private nonprofit, (3) government, (4) other, (5) multiple sources, (6) no funding, and (7) none disclosed. For analysis, the funding-source categories were collapsed into (1) industry, (2) none disclosed/no funding, and (3) government/private nonprofit. Financial ties. Data about the financial ties of each author were extracted and coded for (1) whether or not there were any financial ties disclosed with the sponsor of the study and (2) whether or not there were any financial ties disclosed with any other company (yes, no, or none disclosed). Role of the sponsor. Information about the role of the sponsor was coded as (1) role of sponsor not mentioned, (2) sponsor not involved in study design and analyses, (3) sponsor involved, or (4) no sponsor involved. Study Design Characteristics Aimed at Reducing Bias Studies that met the inclusion criteria were rated for study design features according to the components reported by Chalmers et al. [38]. Chalmers used three different categories: method of treatment assignment (randomization and concealment of allocation), control for whether all participants enrolled in the trial have been included in the analysis (intention-to-treat analysis and loss to follow-up), and blinding of participants and investigators. For each category, the score can range from 0 to 3, where higher scores indicate better methodological quality. For analysis, we dichotomized each category into “adequate” (score of 2 or 3) or “inadequate” (score of 0 or 1). Statistical Analysis We report the frequency of the different characteristics of each article. For characteristics where there was sufficient variability, we analyzed the characteristics by the direction of results and conclusions to determine whether certain characteristics were associated with favorable results or conclusions. Proportions of manuscripts with favorable results or conclusions were first analyzed using univariate logistic regression and estimating odds ratios (ORs) to identify associations between independent variables and favorable results and conclusions. Although impact factor and sample size were continuous variables, they were modeled categorically because their effects were clearly nonlinear. To control for multiple variables simultaneously, we carried out multivariate logistic regression analysis and calculated ORs. These models included funding source and all factors that had p < 0.05 in univariate models for either favorable results or conclusions. For our primary analysis, we conducted the regression analyses on our full sample (n = 192). 
For our a priori analysis of drug industry–sponsored studies, we conducted the regression analyses on the subsample of studies that were industry funded (n = 95) in order to examine the association between funding from the test drug company and results or conclusions that favor the test drug. Our target sample size was to have 40 trials that had results or conclusions favoring the test drug. We chose this sample size so there would be at least ten trials with favorable results or conclusions per predictor in a multivariate analysis with up to four simultaneous predictors. We achieved this target sample size for both the full sample and subsample of industry-funded trials. Data were analyzed with SAS software (version 9.1, SAS Institute, Cary, North Carolina, United States). Results Characteristics of Included Studies Our final sample consisted of 192 published RCTs (see Figure 1). The characteristics of the full sample are shown in Table 1, and the full list of references to included studies is presented in Table S1. Almost all (98%, 189/192) studies reported only surrogate outcome measures. There was also little variability in the peer-review status, study design, or country of origin or institutional affiliation of corresponding authors. Therefore, these variables were not included in our regression analyses. Impact factor and sample size were divided into quartiles with the upper three quartiles compared with the lowest quartile. Forty-nine percent of articles had conclusions that favored the test drug, 15% had a conclusion that favored the comparator drug, and 36% concluded that the two drugs were about equal. Of the 192 included trials, 95 (49%) disclosed funding from industry sponsors. Among the 95 articles declaring industry funding, the role of the sponsor was disclosed in 20 (21%) of these. One trial stated that the sponsor was not involved in the study design and analyses, and 19 trials stated that the sponsor was involved by providing the study drug, data analyses, or writing and preparation of the manuscript. Analysis of Full Sample Table 2 shows the results of univariate logistic regression analyses. Studies with adequate blinding were substantially less likely to report results favoring the test drug than studies that did not include adequate blinding. Trials with larger sample sizes were more likely to report conclusions that favored the test drug, while trials with no disclosed funding sources were less likely to have conclusions favoring the test drug compared to trials with industry funding. In multivariate analyses, trials with adequate blinding remained significantly less likely to report statistically significant results favoring the test drug, and sample size remained associated with favorable conclusions when controlling for other factors (Table 3). Pooling non-industry-funded studies and those with no funding disclosure produced ORs of 1.49 (95% CI 0.75–3.0, p = 0.26) for results and 0.73 (95% CI 0.36–1.46, p = 0.37) for conclusions versus industry-funded studies. Adding interaction terms for industry funding versus all others with sample size quartile did not produce a statistically significant improvement in the fit to the data for results (p = 0.21 by likelihood ratio test) and also did not show a consistent pattern (OR of all others 2.1, 1.15, 3.3, 0.42 in first to fourth sample size quartiles, respectively). 
For conclusions, the interaction terms also did not reach statistical significance overall (p = 0.11), but the pattern was in a consistent direction (OR of all others 2.2, 1.02, 0.52, 0.18 in first to fourth sample size quartiles, respectively). We also conducted a multivariate analysis for the subset of articles that were statin–statin comparisons (n = 112), and the results were essentially the same as for the comparisons between statin and any non-statin or statin drug. Trials with adequate blinding remained significantly less likely to report statistically significant results favoring the test drug (OR = 0.28 [95% CI 0.11–0.73], p = 0.0095), and sample size remained associated with favorable conclusions (OR = 8.49 [95% CI 1.93–37.36], p = 0.0047) when controlling for other factors. The data used in the analyses are presented in Tables S2 and S3. Analysis of Industry-Sponsored Trials In univariate logistic regression analyses of the industry-sponsored trials, higher impact factor, larger sample size, and funding from the test drug company were associated with favorable results, while trials with adequate blinding were less likely to report statistically significant results favoring the test drug (Table 4). Larger sample size and funding from the test drug company were also associated with favorable conclusions (Table 4). In multivariate logistic regression analysis, funding from the test drug company remained associated with statistically significant results favoring the test drug (OR = 20.16 [95% CI 4.37–92.98], p < 0.001) or conclusions favoring the test drug (OR = 34.55 [95% CI 7.09–168.4], p < 0.001) (Table 5) when controlling for other factors. Studies with adequate blinding remained less likely to report statistically significant results favoring the test drug (Table 5). Adding interaction terms for funding by test drug company with sample size quartile did not produce a statistically significant improvement in the fit to the data for results (p = 0.38), although the ORs associated with test drug company funding did show an increasing pattern (3.8, 8.1, infinite, 59.5 in the first to fourth sample size quartiles, respectively). For conclusions, the interaction p-value was p = 0.066, with ORs associated with test drug company funding of infinite, 2.1, infinite, and 143.2 in the first to fourth sample size quartiles, respectively. The infinite estimated ORs result from no favorable outcomes for some combinations of funding and sample size quartile, making these results difficult to interpret. We also conducted the multivariate logistic regression analysis for the subset of industry-funded studies that were statin–statin comparisons (n = 63), and the results were essentially the same as for the comparisons between statin and any non-statin or statin drug. Funding from the test drug company remained associated with statistically significant results favoring the test drug (OR = 16.06 [95% CI 2.22–116.3], p = 0.043) or conclusions favoring the test drug (OR = 77.09 [95% CI 7.92–749.9], p < 0.001) when controlling for other factors. Discussion We examined the association between study design characteristics and the results and conclusions of RCTs of head-to-head comparisons of statins with other drugs. 
We hypothesized that the results and conclusions of published trials would be more likely to favor the statin made by the sponsor of the study and that other design features, such as concealment of allocation, blinding, and sample size, would also be associated with results or conclusions that favor the statin. We found that the main factor associated with the results and conclusions of industry-sponsored research to compare statin drugs with statin or non-statin drugs is research sponsorship. Our study adds new information to the body of literature showing that pharmaceutical industry–sponsored studies comparing drug and placebo are more likely to favor the drug [14,21–23,25,27,28,39]. Our finding suggests that favorable results and outcomes are associated with the specific sponsor of a study, even when all the studies are industry funded. This finding may help explain why well-designed head-to-head comparisons of statins and other drugs sometimes have contradictory results.

There are several possible explanations for our finding of the strong association between funding source and outcomes that are favorable to the drug company sponsor. First, it is possible that pharmaceutical companies selectively fund trials on drugs that are likely to produce a statistically significant result. This can be accomplished by selecting nonequivalent doses of drugs for testing [15–17]. A recent review of 42 RCTs comparing the low-density lipoprotein–lowering ability of two or more statins found that almost all of the trials compared nonequivalent doses of statins [32]. Second, as we examined only published studies, publication bias, or the phenomenon of statistically significant results being published more frequently than statistically nonsignificant results, may explain the association of funding and outcome [40]. Selective reporting of outcomes can also contribute to biased results and conclusions [4–6]. In addition, industry sponsorship may be associated with multiple reporting of studies with favorable findings, emphasizing the imbalance towards statistically significant results in the published literature [18,41,42]. Finally, more than one third of the studies in our sample had no disclosed sponsorship. It is possible that industry funders or industry-supported authors could fail to disclose the sponsorship of a published study if the findings do not support the sponsor's product. This, however, would have to be very prevalent to explain by itself the results in Tables 4 and 5 concerning sponsorship. For example, nearly all the 29 studies with favorable conclusions and undisclosed funding would have to be actually funded by the comparator drug company, and nearly all the 45 studies without favorable conclusions would have to be funded by the test drug company in order to erase the difference shown in Table 4.

We identified a number of weaknesses common to published statin versus statin or non-statin drug comparisons that bring into question their clinical relevance. The most important weakness was the lack of patient-related clinical-outcome measures. Looking at the totality of available evidence, we found that almost all studies (98%, 189/192) used only surrogate outcome measures. Inadequate blinding, lack of concealment of allocation, poor follow-up, and lack of intention-to-treat analyses were common among these studies. These weaknesses suggest that these types of studies should be used with great caution by those making regulatory and purchasing decisions.
We found that adequate double blinding was an influential design feature in our sample. Adequately blinded studies were less likely to report results favoring the sponsor's product. Although RCTs of poorer quality are more likely to reach biased conclusions [10,43,44], the unreliability of quality scores has led to the recommendation that trial quality be assessed for individual features that are aimed to reduce bias, such as concealment of allocation or blinding [45]. Recent research further suggests that specific study design characteristics are not reliably associated with treatment effect sizes across different studies and medical areas [46]. Thus, the study design characteristics associated with statistically significant results might vary with the type of research being examined. We used three items from the Chalmers et al. [38] methodological quality assessment scale to assess study design features that might be associated with results and conclusions. These items focus on important aspects of trial design: (1) concealment of allocation, (2) control for whether all patients enrolled in the trial were included in the analysis (drop outs and intention-to-treat analyses), and (3) blinding of participants and investigators. Previous studies showed that these specific characteristics are associated with bias in clinical trials [10,47,48]. For example, Schulz and colleagues found that estimates of treatment effects were exaggerated by 41% for inadequately concealed trials and by 17% for trials with inadequate double blinding [10]. Finally, journal characteristics may influence the results and conclusions of articles as the quality of the reporting may vary with the journal. For example, articles published in peer-reviewed journals have superior quality compared to articles published in non-peer-reviewed journals [15,27]. In our sample, we had no variability in peer-review status, but we did observe a small possible association between journal impact factors and results and conclusions that favored the test drug. Our study has several limitations, including our ability to identify funding sources and financial ties. We categorized studies as industry funded or not based on each article's disclosure of a trial's funding source(s). Krimsky showed, however, that there is a lack of disclosure of industry research support and personal financial ties across a wide variety of journals [49,50]. Thus, we may be underestimating the number of industry-sponsored studies and personal financial ties of investigators. The market for statins is competitive. The National Cholesterol Education Program update of the Adult Treatment Panel III guidelines expanded both the scope and intensity of low-density lipoprotein–lowering therapy for prevention of cardiovascular disease [51]. To achieve the goals in the guideline, millions of Americans would need to be placed on cholesterol-lowering medication in higher doses and for a longer period, thereby increasing the number of prescriptions for statin drugs [51,52]. Eight of the nine members of the National Cholesterol Education Program panel had financial ties with pharmaceutical companies that manufactured statin drugs [51,52]. Our findings suggest that available data on choosing between statins based on head-to-head comparisons may also be influenced by financial conflicts of interest. Our findings may be generalizable to other classes of drugs with competitive markets. 
There is increasing concern that the funding source influences outcomes and conclusions of medical research [3]. At the same time, industry support of biomedical research has increased dramatically during the past few decades [53,54]. The growing proportion of industry-funded studies could shift the balance of published trials more towards studies that favor new drugs [55]. This trend and our finding that, for one class of drugs, the results and conclusions of trials tend to favor the drug that is made by the sponsor raises important considerations for selecting drugs within a class. Sponsorship bias, even when controlling for other confounding study characteristics, may be the main explanation for contradictory findings of drug–drug comparison trials. This bias in drug–drug comparison trials should be considered when making health-policy decisions regarding drug choice, such as drug formulary decisions. Reviewers of published reports that disclose funding by the makers of the product being tested should be more critical of the methods than if the reports are not industry sponsored [56].

Supporting Information: Table S1, Included Studies (62 KB PDF); Table S2, Statin Data: Adjudicated Full and Subsample (105 KB XLS); Table S3, Statin-Only Comparison Data (48 KB XLS).
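For readers unfamiliar with the analytic approach described in the study above (univariate followed by multivariate logistic regression, with effects reported as odds ratios and 95% CIs), the sketch below shows the general pattern on synthetic data. The variable names and the simulated dataset are illustrative assumptions; this is not the authors' code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 192  # same order of magnitude as the sample described above

# Synthetic trial-level predictors (all binary, purely illustrative)
df = pd.DataFrame({
    "adequate_blinding": rng.integers(0, 2, n),
    "large_sample": rng.integers(0, 2, n),
    "test_drug_sponsor": rng.integers(0, 2, n),
})
# Simulated outcome: 1 if the trial's results favour the test drug
logit = -0.5 - 1.0 * df["adequate_blinding"] + 1.2 * df["test_drug_sponsor"]
df["favorable_result"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Univariate models: one predictor at a time
for col in ["adequate_blinding", "large_sample", "test_drug_sponsor"]:
    m = sm.Logit(df["favorable_result"], sm.add_constant(df[[col]])).fit(disp=0)
    print(col, "OR =", round(float(np.exp(m.params[col])), 2))

# Multivariate model: all predictors simultaneously, ORs with 95% CIs
X = sm.add_constant(df[["adequate_blinding", "large_sample", "test_drug_sponsor"]])
full = sm.Logit(df["favorable_result"], X).fit(disp=0)
print(np.exp(full.params))       # adjusted odds ratios
print(np.exp(full.conf_int()))   # 95% confidence intervals on the OR scale
```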

              The 10 largest public and philanthropic funders of health research in the world: what they fund and how they distribute their funds

Background: Little is known about who the main public and philanthropic funders of health research are globally, what they fund and how they decide what gets funded. This study aims to identify the 10 largest public and philanthropic health research funding organizations in the world, to report on what they fund, and on how they distribute their funds.

Methods: The world’s key health research funding organizations were identified through a search strategy aimed at identifying different types of funding organizations. Organizations were ranked by their reported total annual health research expenditures. For the 10 largest funding organizations, data were collected on (1) funding amounts allocated towards 20 health areas, and (2) schemes employed for distributing funding (intramural/extramural, project/‘people’/organizational and targeted/untargeted funding). Data collection consisted of a review of reports and websites and interviews with representatives of funding organizations. Data collection was challenging; data were often not reported or reported using different classification systems.

Results: Overall, 55 key health research funding organizations were identified. The 10 largest funding organizations together funded research for $37.1 billion, constituting 40% of all public and philanthropic health research spending globally. The largest funder was the United States National Institutes of Health ($26.1 billion), followed by the European Commission ($3.7 billion), and the United Kingdom Medical Research Council ($1.3 billion). The largest philanthropic funder was the Wellcome Trust ($909.1 million), the largest funder of health research through official development assistance was USAID ($186.4 million), and the largest multilateral funder was the World Health Organization ($135.0 million). Funding distribution mechanisms and funding patterns varied substantially between the 10 largest funders.

Conclusions: There is a need for increased transparency about who the main funders of health research are globally, what they fund and how they decide on what gets funded, and for improving the evidence base for various funding models. Data on organizations’ funding patterns and funding distribution mechanisms are often not available, and when they are, they are reported using different classification systems. To start increasing transparency in health research funding, we have established www.healthresearchfunders.org that lists health research funding organizations worldwide and their health research expenditures.

Electronic supplementary material: The online version of this article (doi:10.1186/s12961-015-0074-z) contains supplementary material, which is available to authorized users.
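As a quick plausibility check on the figures quoted above (a back-of-the-envelope calculation, not data reported by the study itself): if the 10 largest funders spent $37.1 billion and that was 40% of global public and philanthropic spending, the implied global total and the NIH's share of it follow directly.

```python
top10_total = 37.1      # $ billion, 10 largest funders combined (figure quoted above)
top10_share = 0.40      # their stated share of global public/philanthropic spending
nih = 26.1              # $ billion, US National Institutes of Health

global_total = top10_total / top10_share
print(f"Implied global total: ~${global_total:.1f} billion")           # ~$92.8 billion
print(f"NIH share of top-10 spending: {nih / top10_total:.0%}")        # ~70%
print(f"NIH share of implied global total: {nih / global_total:.0%}")  # ~28%
```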

                Author and article information

Journal
BMJ Global Health (BMJ Glob Health)
BMJ Publishing Group (BMA House, Tavistock Square, London, WC1H 9JR)
ISSN: 2059-7908
18 June 2019
2019;4(3):e001746
Affiliations
[1] Centre for Development and the Environment, University of Oslo, Oslo, Norway
[2] Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, UK
[3] School of Public Health, University of Sydney, Sydney, New South Wales, Australia
[4] Department of Global Health and Development, London School of Hygiene & Tropical Medicine, London, UK
[5] Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
[6] Institute for Research on Sustainable Development (IRD), CEPED (IRD-Université de Paris), Université de Paris, ERL INSERM SAGESUD, Paris, France
[7] Department of Mental Health, Faculty of Medicine, Gulu University, Gulu, Uganda
Author notes
[Correspondence to] Katerini T Storeng; katerini.storeng@sum.uio.no
                Author information
                https://orcid.org/0000-0003-0032-7006
                http://orcid.org/0000-0001-9299-8266
Article
bmjgh-2019-001746
DOI: 10.1136/bmjgh-2019-001746
PMC: 6590965
PMID: 31297249
                © Author(s) (or their employer(s)) 2019. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.

                This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

                History
                : 30 May 2019
                : 30 May 2019
                Categories
                Editorial

environmental health
