
      Believe it or not: how much can we rely on published data on potential drug targets?

Read this article at: ScienceOpen | Publisher | PubMed

Most cited references (5)


          What errors do peer reviewers detect, and does training improve their ability to detect them?

Objective: To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed, and the impact of training on error detection.
Design: 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted.
Participants: BMJ peer reviewers.
Main outcome measures: The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training.
Results: The number of major errors detected varied over the three papers, and the interventions had small effects. At baseline (Paper 1), reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers (2.71 and 3.0, respectively). Biased randomization was the error detected most frequently in all three papers: over 60% of the reviewers who rejected the papers identified this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper.
Conclusions: Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.

            Factors affecting reproducibility between genome-scale siRNA-based screens.

RNA interference-based screening is a powerful new genomic technology that addresses gene function en masse. To evaluate factors influencing hit list composition and reproducibility, the authors performed 2 identically designed small interfering RNA (siRNA)-based, whole-genome screens for host factors supporting yellow fever virus infection. These screens represent 2 separate experiments completed 5 months apart and allow the direct assessment of the reproducibility of a given siRNA technology when performed in the same environment. Candidate hit lists generated by sum rank, median absolute deviation, z-score, and strictly standardized mean difference were compared within and between whole-genome screens. Application of these analysis methodologies within a single screening data set using a fixed threshold equivalent to a p-value ≤ 0.001 resulted in hit lists ranging from 82 to 1140 members and highlighted the tremendous impact analysis methodology has on hit list composition. Intra- and interscreen reproducibility was significantly influenced by the analysis methodology and ranged from 32% to 99%. This study also highlighted the power of testing at least 2 independent siRNAs for each gene product in primary screens. To facilitate validation, the authors conclude by suggesting methods to reduce false discovery at the primary screening stage. In this study, they present the first comprehensive comparison of multiple analysis strategies and demonstrate the impact of the analysis methodology on the composition of the "hit list." Therefore, they propose that the entire data set derived from functional genome-scale screens, especially if publicly funded, should be made available, as is done with data derived from gene expression and genome-wide association studies.
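The abstract's central point, that hit-list composition depends heavily on the chosen analysis methodology, can be illustrated with a minimal sketch. This is not the authors' pipeline: the function names, toy scores, and threshold are illustrative, and only two of the four methods they compare (classical z-score and median-absolute-deviation-based robust z-score) are shown.

```python
import statistics

def zscore_hits(values, threshold=3.0):
    """Flag indices whose classical z-score magnitude meets the threshold."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) / sd >= threshold]

def mad_hits(values, threshold=3.0):
    """Flag indices using a robust z-score built on the median absolute deviation."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    # 1.4826 rescales the MAD to match the standard deviation for normal data
    return [i for i, v in enumerate(values) if abs(v - med) / (1.4826 * mad) >= threshold]

# Toy screen scores with one strong candidate at index 4.
scores = [0.1, -0.2, 0.05, 0.0, 8.0, -0.1, 0.15, -0.05]
print(zscore_hits(scores))  # -> []  (the outlier inflates the sd, hiding itself)
print(mad_hits(scores))     # -> [4] (the robust statistic still flags it)
```

The toy data makes the abstract's observation concrete: the same data set and the same nominal threshold yield different hit lists depending solely on the statistic used.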

              Hedging against academic risk


                Author and article information

Journal: Nature Reviews Drug Discovery (Nat Rev Drug Discov)
Publisher: Springer Science and Business Media LLC
ISSN: 1474-1776 (print); 1474-1784 (electronic)
Issue date: September 2011 (published online August 31 2011)
Volume: 10
Issue: 9
Page: 712
DOI: 10.1038/nrd3439-c1
PubMed ID: 21892149
Copyright: © 2011
TDM terms: http://www.springer.com/tdm
