Ethics in human–AI teaming: principles and perspectives


Abstract

Ethical considerations are the fabric of society, and they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine ethical considerations involving the development and implementation of such systems. Integrating ethics into artificial intelligence-based programs is crucial for preventing negative outcomes, such as privacy breaches and biased decision making. Human–AI teaming (HAIT) presents additional challenges, as the ethical principles and moral theories that provide justification for them are not yet computable by machines. To that end, models of human judgments and decision making, such as the agent-deed-consequence (ADC) model, will be crucial to inform the ethical guidance functions in AI teammates and to clarify how and why humans (dis)trust machines. The current paper examines the ADC model as applied to the context of HAIT, and the challenges associated with applying human-centric ethical considerations in an AI context.
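
The ADC model decomposes a moral judgment into three evaluations: the Agent (character and intent), the Deed (the act itself, judged against moral norms), and the Consequence (the outcome). As a purely illustrative sketch of how such a judgment might be made computable for an ethical guidance function — the class, score ranges, additive weighting, and example values below are assumptions for exposition, not a formula given in the paper:

    from dataclasses import dataclass

    @dataclass
    class ADCJudgment:
        """Toy operationalization of the agent-deed-consequence (ADC) model.

        Each component is scored in [-1.0, 1.0]:
          agent       -- evaluation of the actor's character/intent (virtue)
          deed        -- evaluation of the act against moral norms (deontology)
          consequence -- evaluation of the outcome (consequentialism)
        """
        agent: float
        deed: float
        consequence: float

        def moral_acceptability(self, weights=(1.0, 1.0, 1.0)) -> float:
            # Hypothetical additive combination; the ADC model itself does
            # not prescribe a specific aggregation rule.
            wa, wd, wc = weights
            total = wa * self.agent + wd * self.deed + wc * self.consequence
            return total / (wa + wd + wc)  # normalize back to [-1, 1]

    # Example: a well-intentioned agent (A+) performs a norm-violating
    # deed (D-) that nevertheless produces a good outcome (C+).
    judgment = ADCJudgment(agent=0.8, deed=-0.5, consequence=0.6)
    print(f"moral acceptability: {judgment.moral_acceptability():+.2f}")

An AI teammate could, in principle, compare such a score against a context-dependent threshold before acting, which is one way a model of human moral judgment could inform the ethical guidance functions the abstract describes.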

Most cited references (73)

          Algorithm aversion: people erroneously avoid algorithms after seeing them err.

          Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
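
As a hypothetical illustration of the asymmetry the study reports — this toy model and its decay rates are invented for exposition; the paper measured confidence empirically rather than fitting any such model — suppose confidence in a forecaster decays multiplicatively with each observed error, with a steeper per-error penalty for the algorithm:

    def updated_confidence(initial: float, n_errors: int, decay: float) -> float:
        """Confidence after n_errors observed mistakes, shrinking by `decay` each time."""
        return initial * (1.0 - decay) ** n_errors

    # Same starting confidence, same number of observed errors; only the
    # per-error penalty differs (the algorithm is penalized more heavily).
    human_conf = updated_confidence(initial=0.70, n_errors=3, decay=0.05)
    algo_conf = updated_confidence(initial=0.70, n_errors=3, decay=0.15)

    print(f"human forecaster:       {human_conf:.2f}")  # ~0.60
    print(f"algorithmic forecaster: {algo_conf:.2f}")   # ~0.43

    # Even when the algorithm is the better forecaster overall, the steeper
    # confidence loss can leave it below the human, reproducing the choice
    # of an inferior human forecaster described above.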

The Moral Machine experiment

A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction

Author and article information

Journal: AI and Ethics (AI Ethics)
Publisher: Springer Science and Business Media LLC
ISSN: 2730-5953 (print); 2730-5961 (electronic)
Published: September 20, 2022
Type: Article
DOI: 10.1007/s43681-022-00214-z
© 2022
License: Creative Commons Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0
