
      Resistance to Medical Artificial Intelligence

Chiara Longoni, Andrea Bonezzi, Carey K. Morewedge
      Journal of Consumer Research
      Oxford University Press (OUP)


          Abstract

          Artificial intelligence (AI) is revolutionizing healthcare, but little is known about consumer receptivity to AI in medicine. Consumers are reluctant to utilize healthcare provided by AI in real and hypothetical choices, and in both separate and joint evaluations. Consumers are less likely to utilize healthcare (study 1), exhibit lower reservation prices for healthcare (study 2), are less sensitive to differences in provider performance (studies 3A–3C), and derive negative utility if a provider is automated rather than human (study 4). Uniqueness neglect, a concern that AI providers are less able than human providers to account for consumers’ unique characteristics and circumstances, drives consumer resistance to medical AI. Indeed, resistance to medical AI is stronger for consumers who perceive themselves to be more unique (study 5). Uniqueness neglect mediates resistance to medical AI (study 6), and is eliminated when AI provides care (a) that is framed as personalized (study 7), (b) to consumers other than the self (study 8), or (c) that only supports, rather than replaces, a decision made by a human healthcare provider (study 9). These findings contribute to the psychology of automation and medical decision making, and suggest interventions to increase consumer acceptance of AI in medicine.


          Most cited references (47)


          • Spotlights, Floodlights, and the Magic Number Zero: Simple Effects Tests in Moderated Regression


          • The totalitarian ego: Fabrication and revision of personal history.


          • Algorithm aversion: People erroneously avoid algorithms after seeing them err.

          Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

                Author and article information

                Journal: Journal of Consumer Research
                Publisher: Oxford University Press (OUP)
                ISSN (print): 0093-5301
                ISSN (electronic): 1537-5277
                Published online: May 03 2019
                Issue date: December 2019
                Volume: 46
                Issue: 4
                Pages: 629-650
                DOI: 10.1093/jcr/ucz013
                © 2019

                https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model
