      Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR)


          Abstract

Systematic reviews (SR) are vital to health care but have become complicated and time-consuming due to the rapid expansion of the evidence to be synthesised. Fortunately, many systematic review tasks have the potential to be automated or assisted by automation. Recent advances in natural language processing, text mining and machine learning have produced new algorithms that can accurately mimic human endeavour in systematic review activity, faster and more cheaply. Automation tools need to be able to work together, exchanging data and results. We therefore initiated the International Collaboration for the Automation of Systematic Reviews (ICASR) to bring the many parts of systematic review automation together. The first meeting was held in Vienna in October 2015, where we established a set of principles to enable tools to be developed and integrated into toolkits.

This paper sets out the principles devised at that meeting, which cover the need for improved efficiency of SR tasks; automation across the spectrum of SR tasks; continuous improvement; adherence to high quality standards; flexibility of use and combining of components; the need for collaboration and varied skills; the desire for open-source, shared code and evaluation; and a requirement for replicability through rigorous and open evaluation.

Automation has great potential to improve the speed of systematic reviews, and considerable work is already being done on many of the steps involved in a review. The ‘Vienna Principles’ set out in this paper aim to guide a more coordinated effort that will allow the integration of work by separate teams and build on the experience, code and evaluations of the many teams working across the globe.


Most cited references (13)


          Systematic review automation technologies

Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, the availability of the requisite expertise and timeliness are often cited as major contributors to the delay. This detailed survey of the state of the art in information systems designed to support or automate individual tasks of the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends toward the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the systematic review process or its individual tasks. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated, while others are still largely manual. In this review, we describe each task and the effect its automation would have on the entire systematic review process, summarize the existing information-system support for each task, and highlight where further research is needed to realize automation of the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that this optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.

            Better duplicate detection for systematic reviewers: evaluation of Systematic Review Assistant-Deduplication Module

Background: A major problem arising from searching across bibliographic databases is the retrieval of duplicate citations. Removing such duplicates is an essential task to ensure systematic reviewers do not waste time screening the same citation multiple times. Although reference management software uses algorithms to remove duplicate records, this is only partially successful and necessitates removing the remaining duplicates manually, a time-consuming task that leads to wasted resources. We sought to evaluate the effectiveness of a newly developed deduplication program against EndNote.

Methods: A literature search of 1,988 citations was manually inspected, and duplicate citations were identified and coded to create a benchmark dataset. The Systematic Review Assistant-Deduplication Module (SRA-DM) was iteratively developed and tested using the benchmark dataset and compared with EndNote’s default one-step auto-deduplication process, matching on ‘author’, ‘year’ and ‘title’. The accuracy of deduplication was reported by calculating the sensitivity and specificity. Further validation tests, with three additional benchmarked literature searches comprising a total of 4,563 citations, were performed to determine the reliability of the SRA-DM algorithm.

Results: The sensitivity (84%) and specificity (100%) of the SRA-DM were superior to EndNote’s (sensitivity 51%, specificity 99.83%). Validation testing on three additional biomedical literature searches demonstrated that SRA-DM consistently achieved higher sensitivity than EndNote (90% vs 63%, 84% vs 73% and 84% vs 64%). Furthermore, the specificity of SRA-DM was 100%, whereas the specificity of EndNote was imperfect (average 99.75%), with some unique records wrongly assigned as duplicates. Overall, there was a 42.86% increase in the number of duplicate records detected with SRA-DM compared with EndNote auto-deduplication.

Conclusions: The Systematic Review Assistant-Deduplication Module offers users a reliable program to remove duplicate records with greater sensitivity and specificity than EndNote. This application will save researchers and information specialists time and avoid research waste. The deduplication program is freely available online.
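The sensitivity and specificity figures in the evaluation above follow the standard definitions, applied to a hand-coded benchmark of known duplicates. A minimal sketch of that calculation (illustrative only, not SRA-DM's actual implementation; the function name and toy data are invented for this example):

```python
# Illustrative sketch of evaluating a deduplication run against a manually
# coded benchmark, using the standard sensitivity/specificity definitions.
# Not SRA-DM's actual code; dedup_accuracy and the toy ids are hypothetical.

def dedup_accuracy(benchmark_dupes, flagged, all_ids):
    """benchmark_dupes: record ids manually coded as duplicates.
    flagged: record ids the deduplication tool marked as duplicates.
    all_ids: all record ids returned by the literature search."""
    tp = len(benchmark_dupes & flagged)   # true duplicates caught
    fn = len(benchmark_dupes - flagged)   # duplicates missed
    uniques = all_ids - benchmark_dupes
    fp = len(flagged - benchmark_dupes)   # unique records wrongly flagged
    tn = len(uniques - flagged)           # unique records correctly kept
    sensitivity = tp / (tp + fn) if tp + fn else 1.0
    specificity = tn / (tn + fp) if tn + fp else 1.0
    return sensitivity, specificity

# Toy example: 10 records, 4 true duplicates; the tool catches 3 of them
# and wrongly flags 1 unique record.
all_ids = set(range(10))
truth = {0, 1, 2, 3}
flagged = {0, 1, 2, 4}
sens, spec = dedup_accuracy(truth, flagged, all_ids)
# sensitivity = 3/4 = 0.75; specificity = 5/6 ≈ 0.833
```

A tool with imperfect specificity, as reported for EndNote above, is the more dangerous failure mode: a wrongly flagged unique record is silently lost from the review, whereas a missed duplicate only costs screening time.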

              RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials

Objective: To develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments.

Methods: We algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR.

Results: By retrieving the top 3 candidate sentences per document (top-3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% of ML text rated ‘highly relevant’ vs 56.5% of text from reviews; difference +3.9% [−3.2% to +10.9%]). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML vs 78.3% with CDSR).

Conclusion: Risk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses.
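The "top 3 candidate sentences per document" evaluation above boils down to ranking every sentence of a trial report by a relevance score and keeping the three highest. A minimal sketch of that selection step (the cue-word scorer here is a toy stand-in for RobotReviewer's trained model; the function names and example sentences are invented):

```python
# Illustrative top-k supporting-sentence selection, as in a 'top-3 recall'
# style evaluation. The scorer below is a toy stand-in for a trained ML
# model; top_k_supporting and CUES are hypothetical names for this sketch.
import heapq

def top_k_supporting(sentences, score_fn, k=3):
    """Return the k sentences with the highest relevance scores,
    in descending score order (ties keep document order)."""
    return heapq.nlargest(k, sentences, key=score_fn)

# Toy scorer: count risk-of-bias cue words in a sentence.
CUES = {"random", "randomised", "blinded", "allocation", "concealed"}

def toy_score(sentence):
    return sum(w.strip(".,").lower() in CUES for w in sentence.split())

doc = [
    "Patients were randomised using a computer-generated sequence.",
    "The study took place in three centres.",
    "Outcome assessors were blinded to allocation.",
    "Baseline characteristics were similar.",
]
top = top_k_supporting(doc, toy_score, k=3)
# top[0] is the allocation/blinding sentence (2 cue words),
# top[1] the randomisation sentence (1 cue word).
```

In the human evaluation described above, reviewers then rated these extracted candidates against the reviewer-selected supporting quotes from the CDSR.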

                Author and article information

                Contributors
                ebeller@bond.edu.au
                jclark@bond.edu.au
                guy.tsafnat@mq.edu.au
                clive.adams@nottingham.ac.uk
                htd@fritha.org
                Hans.Lund@hvl.no
                mouzzani@qf.org.qa
                thayer@niehs.nih.gov
                james.thomas@ucl.ac.uk
                tari.turner@monash.edu
                jun.xia@nottingham.ac.uk
                krobin@jhmi.edu
                pglaszio@bond.edu.au
                Journal
Syst Rev
Systematic Reviews
BioMed Central (London)
ISSN: 2046-4053
Published: 19 May 2018
Volume: 7
Article: 77
                Affiliations
[1] Centre for Research in Evidence-Based Practice, Bond University, Robina, Australia
[2] Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
[3] Faculty of Medicine and Health Sciences, University of Nottingham, Nottingham, UK
[4] Centre for Evidence-Based Practice, Bergen University College, Bergen, Norway
[5] Western Norway University of Applied Sciences, Bergen, Norway
[6] Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar
[7] National Institute of Environmental Health Sciences, Penn State University, Pennsylvania, USA
[8] University College London, London, UK
[9] Monash University, Clayton, Australia
[10] JHU Evidence-based Practice Center, Johns Hopkins University, Baltimore, USA
                Author information
ORCID: http://orcid.org/0000-0002-3241-2611
Article
DOI: 10.1186/s13643-018-0740-7
PMCID: PMC5960503
PMID: 29778096
                © The Author(s). 2018

                Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

History
Received: 5 September 2017
Accepted: 2 May 2018
                Categories
                Commentary

Subject: Public health
Keywords: systematic review, automation, collaboration
