
      Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study

      research-article


          Abstract

          Background

          Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants’ willingness to engage with AI-led health chatbots.

          Methods

          The study incorporated semi-structured interviews (N = 29), which informed the development of an online survey (N = 216) advertised via social media. Interviews were recorded, transcribed verbatim and analysed thematically. A 24-item survey explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary logistic regressions with a single categorical predictor.
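          For illustration only, the following is a minimal sketch of the kind of binary logistic regression with a single categorical predictor described above. It is not the authors' analysis code; the data frame, variable names ("acceptability", "it_skills") and values are hypothetical.

```python
# A minimal, illustrative sketch (not the authors' code) of a binary
# logistic regression with a single categorical predictor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: a binary outcome (1 = finds health chatbots
# acceptable, 0 = does not) and one categorical predictor.
survey = pd.DataFrame({
    "acceptability": [1, 0, 1, 1, 0, 1, 0, 1],
    "it_skills": ["good", "good", "good", "good",
                  "poor", "poor", "poor", "poor"],
})

# C() tells the formula interface to treat the predictor as categorical.
model = smf.logit("acceptability ~ C(it_skills)", data=survey).fit()

# Exponentiated coefficients are odds ratios; the same transformation
# applies to the 95% confidence intervals.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```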

          Results

          Three broad themes were identified: ‘Understanding of chatbots’, ‘AI hesitancy’ and ‘Motivations for health chatbots’, outlining concerns about accuracy, cyber-security and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%). Acceptability was negatively associated with poorer perceived IT skills (OR = 0.32, 95% CI 0.13–0.78) and a dislike of talking to computers (OR = 0.77, 95% CI 0.60–0.99), and positively associated with perceived utility (OR = 5.10, 95% CI 3.08–8.43), positive attitudes towards health chatbots (OR = 2.71, 95% CI 1.77–4.16) and perceived trustworthiness (OR = 1.92, 95% CI 1.13–3.25).
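          As a general reminder (not specific to this study's data), the odds ratios and 95% confidence intervals above are the exponentiated coefficients of the logistic regressions described in the Methods; an interval that excludes 1 indicates an association unlikely to be due to chance at the 5% level. A small sketch with hypothetical numbers:

```python
# Relationship between a logistic-regression coefficient and the reported
# odds ratio with its 95% Wald confidence interval (general formula).
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Return exp(beta) and the exponentiated Wald 95% CI."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical values: beta = -1.14 with SE = 0.45 gives an OR of about
# 0.32 and a CI of roughly 0.13-0.78, similar in form to the 'poorer IT
# skills' result reported above.
print(odds_ratio_ci(-1.14, 0.45))
```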

          Conclusion

          Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients’ concerns and optimising user experience in order to achieve the best uptake and utilisation. Patients’ perspectives, motivation and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots.

          Related collections

          Most cited references (17)


          A fully automated conversational agent for promoting mental well-being: A pilot RCT using mixed methods

          Fully automated self-help interventions can serve as highly cost-effective mental health promotion tools for large numbers of people. However, these interventions are often characterised by poor adherence. One way to address this problem is to mimic therapy support with a conversational agent. The objectives of this study were to assess the effectiveness and adherence of a smartphone app delivering strategies used in positive psychology and CBT interventions via an automated chatbot (Shim) for a non-clinical population, and to explore participants' views and experiences of interacting with this chatbot. A total of 28 participants were randomized either to the chatbot intervention (n = 14) or to a wait-list control group (n = 14). Findings revealed that participants who adhered to the intervention (n = 13) showed significant interaction effects of group and time on psychological well-being (FS) and perceived stress (PSS-10) compared to the wait-list control group, with small to large between-group effect sizes (Cohen's d range 0.14–1.06). Participants also showed high engagement during the 2-week intervention, opening the app an average of 17.71 times over the period. This is higher than in other studies of fully automated interventions described as highly engaging, such as Woebot and the Panoply app. The qualitative data revealed sub-themes which, to our knowledge, have not been found previously, such as the moderating format of the chatbot. The results of this study, in particular the good adherence rate, support replicating it in the future with a larger sample size and an active control group. This is important, as the search for fully automated, yet highly engaging and effective, digital self-help interventions for promoting mental health is crucial for public health.
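          For context, the Cohen's d cited in the abstract above is a standardised mean difference. A minimal sketch with purely illustrative numbers (the groups and scores below are hypothetical, not data from that trial):

```python
# Cohen's d: difference in group means divided by the pooled standard
# deviation (illustrative values only).
import statistics

def cohens_d(group_a, group_b):
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical well-being scores for an intervention vs. a wait-list group.
print(cohens_d([42, 45, 47, 50], [40, 41, 43, 44]))
```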

            Mixing methods in a qualitatively driven way


              Communities of Practice


                Author and article information

                Journal
                Digital Health (Digit Health; DHJ)
                SAGE Publications (Sage UK: London, England)
                ISSN: 2055-2076
                Published: 21 August 2019 (Jan–Dec 2019)
                Volume: 5
                eLocator: 2055207619871808
                Affiliations
                [1] The University of Westminster, London, UK
                [2] University College London, London, UK
                [3] The University of Southampton, Southampton, UK
                Author notes
                [*] Tom Nadarzynski, The University of Westminster, 115 New Cavendish Street, London, W1W 6UW. Email: T.Nadarzynski@westminster.ac.uk Twitter: @TNadarzynski
                Author information
                https://orcid.org/0000-0001-7010-5308
                Article
                DOI: 10.1177/2055207619871808
                PMCID: PMC6704417
                PMID: 31467682
                © The Author(s) 2019

                Creative Commons Non Commercial CC BY-NC: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License ( http://www.creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages ( https://us.sagepub.com/en-us/nam/open-access-at-sage).

                History
                Received: 12 March 2019
                Accepted: 3 August 2019
                Categories
                Original Research
                Custom metadata
                January-December 2019

                Keywords: acceptability, AI, artificial intelligence, bot, chatbot
