      Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence

Most cited references (82)

          Dissecting racial bias in an algorithm used to manage the health of populations

          Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
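The proxy-label mechanism described in this abstract can be illustrated with a small synthetic simulation (a sketch only, not code or data from the cited study): if one group receives less care per unit of illness, a risk score trained on observed cost will understate that group's need, so patients from that group who clear any given score threshold are sicker on average. All names and numbers below (group, illness, access, the 0.6 access factor, the 0.90 cutoff) are hypothetical.

```python
# Illustrative sketch only: synthetic data, not from the cited study.
# Shows how a cost-based risk score understates need for a group that
# receives less care per unit of illness.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)                 # 0 = group A, 1 = group B (hypothetical)
illness = rng.gamma(shape=2.0, scale=1.0, size=n)  # "true" health need, identical across groups
access = np.where(group == 1, 0.6, 1.0)            # assumption: group B receives ~60% of the care
cost = illness * access + rng.normal(0.0, 0.1, n)  # observed spending, the proxy label

# A cost-trained "risk score": here simply cost expressed as a percentile rank.
score = cost.argsort().argsort() / n

# At the same high-score cutoff, compare true illness across groups.
flagged = score > 0.90
for g, name in [(0, "group A"), (1, "group B")]:
    sel = flagged & (group == g)
    print(f"{name}: mean illness among flagged patients = {illness[sel].mean():.2f}")
# Group B patients who are flagged are sicker on average, because their cost
# had to be driven by greater illness to reach the same score.
```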

            The global landscape of AI ethics guidelines


              AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

              This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

                Author and article information

Journal: Philosophy & Technology (Philos. Technol.)
Publisher: Springer Science and Business Media LLC
ISSN: 2210-5433 (print); 2210-5441 (electronic)
Published: July 12, 2020
DOI: 10.1007/s13347-020-00405-8
Copyright: © 2020
License: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0)
