
      Aesthetic Surgery Advice and Counseling from Artificial Intelligence: A Rhinoplasty Consultation with ChatGPT

      research-article


          Abstract

          Background

          ChatGPT is an artificial intelligence large language model that uses deep learning to produce human-like text dialogue. This observational study evaluated the ability of ChatGPT to provide informative and accurate responses to a set of hypothetical questions designed to simulate an initial consultation about rhinoplasty.

          Methods

          Nine questions on rhinoplasty were posed to ChatGPT. The questions were sourced from a checklist published by the American Society of Plastic Surgeons, and the responses were assessed for accessibility, informativeness, and accuracy by Specialist Plastic Surgeons with extensive experience in rhinoplasty.

          Results

          ChatGPT was able to provide coherent and easily comprehensible answers to the questions posed, demonstrating its understanding of natural language in a health-specific context. The responses emphasized the importance of an individualized approach, particularly in aesthetic plastic surgery. However, the study also highlighted ChatGPT’s limitations in providing more detailed or personalized advice.

          Conclusion

          Overall, the results suggest that ChatGPT has the potential to provide valuable information to patients in a medical context, particularly in situations where patients may be hesitant to seek advice from medical professionals or where access to medical advice is limited. However, further research is needed to determine the scope and limitations of AI language models in this domain and to assess the potential benefits and risks associated with their use.

          Level of Evidence V

          Observational study under respected authorities. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.


                Author and article information

                Contributors
                ishithseth1@gmail.com
                Journal
                Aesthetic Plast Surg
                Aesthetic Plastic Surgery
                Springer US (New York)
                0364-216X
                1432-5241
                Published: 24 April 2023
                2023; 47(5): 1985–1993
                Affiliations
                [1] Department of Plastic Surgery, Peninsula Health (https://ror.org/02n5e6456), Melbourne, Victoria 3199, Australia
                [2] Faculty of Medicine, Monash University (https://ror.org/02bfwt286), Melbourne, Victoria 3004, Australia
                Author information
                http://orcid.org/0000-0001-5444-8925
                Article
                3338
                DOI: 10.1007/s00266-023-03338-7
                PMCID: PMC10581928
                PMID: 37095384
                © Crown 2023, corrected publication 2023

                Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 11 March 2023
                Accepted: 23 March 2023
                Funding
                Funded by: Monash University
                Categories
                Original Article
                Custom metadata
                © Springer Science+Business Media, LLC, part of Springer Nature and International Society of Aesthetic Plastic Surgery 2023

                Surgery
                chatgpt, artificial intelligence, chatbot, rhinoplasty
