
      An adversarial training framework for mitigating algorithmic biases in clinical machine learning

      research-article


          Abstract

          Machine learning is becoming increasingly prominent in healthcare. Although its benefits are clear, growing attention is being given to how these tools may exacerbate existing biases and disparities. In this study, we introduce an adversarial training framework that is capable of mitigating biases that may have been acquired through data collection. We demonstrate this proposed framework on the real-world task of rapidly predicting COVID-19, and focus on mitigating site-specific (hospital) and demographic (ethnicity) biases. Using the statistical definition of equalized odds, we show that adversarial training improves outcome fairness, while still achieving clinically effective screening performance (negative predictive values >0.98). We compare our method to previous benchmarks, and perform prospective and external validation across four independent hospital cohorts. Our method can be generalized to any outcomes, models, and definitions of fairness.
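          To make the training scheme concrete, the sketch below shows one common form of adversarial debiasing: a predictor learns the clinical outcome while an adversary tries to recover a protected attribute (e.g., hospital site or ethnicity) from the predictor's output, and the predictor is penalized whenever the adversary succeeds. This is a minimal sketch assuming PyTorch, with illustrative names (Predictor, Adversary, lam); it is not the authors' exact architecture.

```python
# Minimal adversarial-debiasing sketch (assumes PyTorch; illustrative, not the
# authors' exact architecture). Feeding the true label to the adversary targets
# the equalized-odds criterion: the score should carry no group information
# once the outcome is known.
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Predicts the clinical outcome (e.g., COVID-19 status) from features."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                 nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)  # outcome logit, shape (batch, 1)

class Adversary(nn.Module):
    """Tries to recover the protected attribute from (score, true label)."""
    def __init__(self, n_groups):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                                 nn.Linear(16, n_groups))

    def forward(self, y_logit, y_true):
        return self.net(torch.cat([y_logit, y_true], dim=1))

def train_step(pred, adv, opt_pred, opt_adv, x, y, group, lam=1.0):
    """One alternating update; lam trades accuracy against fairness.

    y: float tensor (batch, 1); group: long tensor (batch,) of group indices.
    """
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    # 1) Adversary step: learn to predict the group from the (frozen) score.
    opt_adv.zero_grad()
    ce(adv(pred(x).detach(), y), group).backward()
    opt_adv.step()
    # 2) Predictor step: fit the outcome while fooling the adversary.
    opt_pred.zero_grad()
    y_logit = pred(x)
    loss = bce(y_logit, y) - lam * ce(adv(y_logit, y), group)
    loss.backward()
    opt_pred.step()
    return loss.item()
```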


          Most cited references (30)


          The meaning and use of the area under a receiver operating characteristic (ROC) curve.

          A representation and interpretation of the area under a receiver operating characteristic (ROC) curve obtained by the "rating" method, or by mathematical predictions based on patient characteristics, is presented. It is shown that in such a setting the area represents the probability that a randomly chosen diseased subject is (correctly) rated or ranked with greater suspicion than a randomly chosen non-diseased subject. Moreover, this probability of a correct ranking is the same quantity that is estimated by the already well-studied nonparametric Wilcoxon statistic. These two relationships are exploited to (a) provide rapid closed-form expressions for the approximate magnitude of the sampling variability, i.e., standard error that one uses to accompany the area under a smoothed ROC curve, (b) guide in determining the size of the sample required to provide a sufficiently reliable estimate of this area, and (c) determine how large sample sizes should be to ensure that one can statistically detect differences in the accuracy of diagnostic techniques.
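          Both relationships are easy to use in practice. The sketch below (assumed Python/NumPy, with illustrative function names) computes the area directly as the probability of a correct ranking, i.e., the normalized Wilcoxon (Mann-Whitney) statistic, together with the closed-form standard-error expression from this paper.

```python
# Sketch of the ROC-area relationships described above (assumed NumPy; the
# function names are illustrative, not from the cited paper).
import numpy as np

def auc_wilcoxon(scores_diseased, scores_nondiseased):
    """AUC as P(diseased score > non-diseased score); ties count 1/2."""
    d = np.asarray(scores_diseased, float)[:, None]
    n = np.asarray(scores_nondiseased, float)[None, :]
    return (d > n).mean() + 0.5 * (d == n).mean()

def hanley_mcneil_se(auc, n_diseased, n_nondiseased):
    """Closed-form standard error of the AUC (Hanley & McNeil, 1982)."""
    q1 = auc / (2.0 - auc)             # P(two diseased outrank one normal)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)  # P(one diseased outranks two normals)
    var = (auc * (1.0 - auc)
           + (n_diseased - 1) * (q1 - auc ** 2)
           + (n_nondiseased - 1) * (q2 - auc ** 2)) / (n_diseased * n_nondiseased)
    return np.sqrt(var)
```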

            A Survey on Bias and Fairness in Machine Learning

            With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and ways they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
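            As a concrete instance of one widely used definition from such taxonomies, equalized odds (also the criterion in the article above) can be audited directly from model outputs; the sketch below assumes NumPy, a fixed decision threshold, and illustrative names.

```python
# Minimal equalized-odds audit (assumed NumPy; names and threshold are
# illustrative). Equalized odds asks that true- and false-positive rates
# match across protected groups.
import numpy as np

def equalized_odds_gaps(y_true, y_score, group, threshold=0.5):
    """Return (max TPR gap, max FPR gap) across groups at one threshold."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_score, float) >= threshold
    group = np.asarray(group)
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & y_true].mean())    # TPR within group g
        fprs.append(y_pred[m & ~y_true].mean())   # FPR within group g
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Example usage: gaps near zero indicate (approximate) equalized odds.
# tpr_gap, fpr_gap = equalized_odds_gaps(y, scores, hospital_site)
```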

              Genetic Misdiagnoses and the Potential for Health Disparities.

              For more than a decade, risk stratification for hypertrophic cardiomyopathy has been enhanced by targeted genetic testing. Using sequencing results, clinicians routinely assess the risk of hypertrophic cardiomyopathy in a patient's relatives and diagnose the condition in patients who have ambiguous clinical presentations. However, the benefits of genetic testing come with the risk that variants may be misclassified.

                Author and article information

                Contributors
                jenny.yang@eng.ox.ac.uk
                Journal
                NPJ Digital Medicine (NPJ Digit Med)
                Publisher: Nature Publishing Group UK (London)
                ISSN: 2398-6352
                Published: 29 March 2023
                Volume 6, article number 55
                Affiliations
                [1] Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, England (GRID grid.4991.5; ISNI 0000 0004 1936 8948)
                [2] John Radcliffe Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, England (GRID grid.410556.3; ISNI 0000 0001 0440 1440)
                [3] RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, England (GRID grid.4991.5; ISNI 0000 0004 1936 8948)
                [4] Big Data Institute, Nuffield Department of Population Health, University of Oxford, Oxford, England (GRID grid.4991.5; ISNI 0000 0004 1936 8948)
                [5] School of Public Health, Shanghai Jiao Tong University School of Medicine, Shanghai, China (GRID grid.16821.3c; ISNI 0000 0004 0368 8293)
                [6] Oxford-Suzhou Centre for Advanced Research (OSCAR), Suzhou, China
                Author information
                http://orcid.org/0000-0003-0352-8452
                http://orcid.org/0000-0003-2391-5361
                http://orcid.org/0000-0001-5095-6367
                Article
                DOI: 10.1038/s41746-023-00805-y
                PMCID: PMC10050816
                PMID: 36991077
                © The Author(s) 2023

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 1 November 2022
                Accepted: 13 March 2023
                Funding
                Funded by: This work was supported by the Wellcome Trust/University of Oxford Medical & Life Sciences Translational Fund (Award: 0009350) and the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC). JY is a Marie Skłodowska-Curie Fellow under the European Union's Horizon 2020 research and innovation programme (Grant agreement: 955681; MOIRA).
                Funded by: AAS is an NIHR Academic Clinical Fellow (Award: ACF-2020-13-015).
                Funded by: DWE is funded by a Robertson Foundation Fellowship.
                Categories
                Article

                Keywords: medical ethics, public health
