
      The Future of Artificial Intelligence in Mental Health Nursing Practice: An Integrative Review



          ABSTRACT

          Artificial intelligence (AI) has been increasingly used in delivering mental healthcare worldwide. Within this context, the traditional role of mental health nurses has been changed and challenged by AI‐powered cutting‐edge technologies emerging in clinical practice. The aim of this integrative review is to identify and synthesise the evidence of AI‐based applications with relevance for, and potential to enhance, mental health nursing practice. Five electronic databases (CINAHL, PubMed, PsycINFO, Web of Science and Scopus) were systematically searched. Seventy‐eight studies were identified, critically appraised and synthesised following a comprehensive integrative approach. We found that AI applications with potential use in mental health nursing vary widely, from machine learning algorithms to natural language processing, digital phenotyping, computer vision and conversational agents for assessing, diagnosing and treating mental health challenges. Five overarching themes were identified: assessment, identification, prediction, optimisation and perception, reflecting the multiple levels of embedding AI‐driven technologies in mental health nursing practice and how patients and staff perceive the use of AI in clinical settings. We concluded that AI‐driven technologies hold great potential for enhancing mental health nursing practice. However, humanistic approaches to mental healthcare may pose some challenges to effectively incorporating AI into mental health nursing. Meaningful conversations between mental health nurses, service users and AI developers should take place to shape the co‐creation of AI technologies to enhance care in a way that promotes person‐centredness, empowerment and active participation.

          Related collections

          Most cited references: 155


          The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

          The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

            The potential for artificial intelligence in healthcare

            The complexity and rise of data in healthcare means that artificial intelligence (AI) will increasingly be applied within the field. Several types of AI are already being employed by payers and providers of care, and life sciences companies. The key categories of applications involve diagnosis and treatment recommendations, patient engagement and adherence, and administrative activities. Although there are many instances in which AI can perform healthcare tasks as well or better than humans, implementation factors will prevent large-scale automation of healthcare professional jobs for a considerable period. Ethical issues in the application of AI to healthcare are also discussed.

              Key challenges for delivering clinical impact with artificial intelligence

              Background: Artificial intelligence (AI) research in healthcare is accelerating rapidly, with potential applications being demonstrated across various domains of medicine. However, there are currently limited examples of such techniques being successfully deployed into clinical practice. This article explores the main challenges and limitations of AI in healthcare, and considers the steps required to translate these potentially transformative technologies from research to clinical practice.

              Main body: Key challenges for the translation of AI systems in healthcare include those intrinsic to the science of machine learning, logistical difficulties in implementation, and consideration of the barriers to adoption as well as of the necessary sociocultural or pathway changes. Robust peer-reviewed clinical evaluation as part of randomised controlled trials should be viewed as the gold standard for evidence generation, but conducting these in practice may not always be appropriate or feasible. Performance metrics should aim to capture real clinical applicability and be understandable to intended users. Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are not exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes.

              Conclusion: The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging. Robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy to include quality of care and patient outcomes, is essential. Further work is required (1) to identify themes of algorithmic bias and unfairness while developing mitigations to address these, (2) to reduce brittleness and improve generalisability, and (3) to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.

                Author and article information

                Contributors
                lucian.milasan@ntu.ac.uk
                Journal
                Int J Ment Health Nurs
                Int J Ment Health Nurs
                10.1111/(ISSN)1447-0349
                INM
                International Journal of Mental Health Nursing
                John Wiley and Sons Inc. (Hoboken)
                1445-8330
                1447-0349
                23 January 2025
                February 2025
                34(1): e70003 (doiID: 10.1111/inm.v34.1)
                Affiliations
                [ 1 ] Institute of Health and Allied Professions Nottingham Trent University Nottingham UK
                Author notes
                [*] Correspondence:

                Lucian H. Milasan ( lucian.milasan@ntu.ac.uk )

                Author information
                https://orcid.org/0000-0003-1351-6463
                https://orcid.org/0000-0001-5888-0678
                Article
                INM70003 IJMHN-2024-1011.R1
                10.1111/inm.70003
                11755225
                39844734
                e8831586-1085-4997-be9b-e931d44fe98f
                © 2025 The Author(s). International Journal of Mental Health Nursing published by John Wiley & Sons Australia, Ltd.

                This is an open access article under the terms of the Creative Commons Attribution (CC BY 4.0) License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

                History
                : 10 December 2024
                : 09 September 2024
                : 05 January 2025
                Page count
                Figures: 5, Tables: 1, Pages: 19, Words: 15300
                Categories
                Review Article
                Custom metadata
                2.0
                February 2025

                Nursing
                artificial intelligence (AI), mental health nursing, psychiatric nursing, psychiatry
