
      Ethical governance is essential to building trust in robotics and artificial intelligence systems


          Abstract

          This paper explores the question of ethical governance for robotics and artificial intelligence (AI) systems. We outline a roadmap—which links a number of elements, including ethics, standards, regulation, responsible research and innovation, and public engagement—as a framework to guide ethical governance in robotics and AI. We argue that ethical governance is essential to building public trust in robotics and AI, and conclude by proposing five pillars of good ethical governance.

          This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.


Author and article information

Journal: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences (Philos Trans A Math Phys Eng Sci), The Royal Society Publishing
ISSN: 1364-503X (print); 1471-2962 (online)
Published online: 15 October 2018; issue date: 28 November 2018
Volume 376, Issue 2133: Theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’, compiled and edited by Corinne Cath, Sandra Wachter, Brent Mittelstadt, Luciano Floridi

Affiliations
[1] Bristol Robotics Laboratory, University of the West of England, Coldharbour Lane, Bristol BS16 1QY, UK
[2] Department of Computer Science, University of Oxford, Parks Road, Oxford OX1 3QD, UK

Article
Article ID: rsta20180085
DOI: 10.1098/rsta.2018.0085
PMC: 6191667
PMID: 30323000
Article type: Research Article

© 2018 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.

Funding
Funded by: EPSRC
Award ID: EP/L024861/1
