
      Artificial intelligence in medical device software and high-risk medical devices – a review of definitions, expert recommendations and regulatory initiatives



Most cited references (82)


          Prediction models for diagnosis and prognosis of covid-19 infection: systematic review and critical appraisal

Abstract

Objective: To review and critically appraise published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at risk of being admitted to hospital for covid-19 pneumonia.

Design: Rapid systematic review and critical appraisal.

Data sources: PubMed and Embase through Ovid, Arxiv, medRxiv, and bioRxiv up to 24 March 2020.

Study selection: Studies that developed or validated a multivariable covid-19 related prediction model.

Data extraction: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool).

Results: 2696 titles were screened, and 27 studies describing 31 prediction models were included. Three models were identified for predicting hospital admission from pneumonia and other events (as proxy outcomes for covid-19 pneumonia) in the general population; 18 diagnostic models for detecting covid-19 infection (13 were machine learning based on computed tomography scans); and 10 prognostic models for predicting mortality risk, progression to severe disease, or length of hospital stay. Only one study used patient data from outside of China. The most reported predictors of presence of covid-19 in patients with suspected disease included age, body temperature, and signs and symptoms. The most reported predictors of severe prognosis in patients with covid-19 included age, sex, features derived from computed tomography scans, C reactive protein, lactic dehydrogenase, and lymphocyte count. C index estimates ranged from 0.73 to 0.81 in prediction models for the general population (reported for all three models), from 0.81 to more than 0.99 in diagnostic models (reported for 13 of the 18 models), and from 0.85 to 0.98 in prognostic models (reported for six of the 10 models). All studies were rated at high risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, and high risk of model overfitting. Reporting quality varied substantially between studies. Most reports did not include a description of the study population or intended use of the models, and calibration of predictions was rarely assessed.

Conclusion: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that proposed models are poorly reported, at high risk of bias, and their reported performance is probably optimistic. Immediate sharing of well documented individual participant data from covid-19 studies is needed for collaborative efforts to develop more rigorous prediction models and validate existing ones. The predictors identified in included studies could be considered as candidate predictors for new models. Methodological guidance should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, studies should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.

Systematic review registration: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.

            A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance

            The UK Medical Research Council’s widely used guidance for developing and evaluating complex interventions has been replaced by a new framework, commissioned jointly by the Medical Research Council and the National Institute for Health Research, which takes account of recent developments in theory and methods and the need to maximise the efficiency, use, and impact of research.

              Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

                Author and article information

Journal
Expert Review of Medical Devices
Publisher: Informa UK Limited
ISSN (print): 1743-4440
ISSN (electronic): 1745-2422
Published online: May 08 2023
Issue date: June 03 2023
Volume: 20
Issue: 6
Pages: 467-491
                Affiliations
[1] University Hospital of Wales, School of Medicine, Cardiff University, Heath Park, Cardiff, UK
[2] KU Leuven, Leuven, Belgium
[3] Centre for IT & IP Law (CiTiP), KU Leuven, Leuven, Belgium
[4] Engineering Sciences, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
[5] Department of Clinical and Experimental Information Processing (Digital Cardiology), Erasmus Medical Center, Thoraxcenter, Rotterdam, the Netherlands
[6] Department of Electronics, Information and Biomedical Engineering, Politecnico di Milano, Milan, Italy
[7] Philips, Brussels, Belgium
[8] Institute of Cardiovascular Science, University College London, London, UK
[9] Technische Universität Dresden, Else Kröner Fresenius Center for Digital Health, Dresden, Germany
[10] Elekta, Stockholm, Sweden
[11] Dedalus HealthCare GmbH, Bonn, Germany
[12] Health Products Regulatory Authority, Dublin, Ireland
[13] Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
Article
DOI: 10.1080/17434440.2023.2184685
PMID: 37157833
ScienceOpen ID: 14d423f6-49e5-4eb7-b560-4b0bd6211f06
Copyright © 2023
