
      Use of Generative Artificial Intelligence, Including Large Language Models Such as ChatGPT, in Scientific Publications: Policies of KJR and Prominent Authorities

      Editorial


          Most cited references (12)

          Tools such as ChatGPT threaten transparent science; here are our ground rules for their use (2023)

            How AI Responds to Common Lung Cancer Questions: ChatGPT vs Google Bard.

            Background: The recent release of large language models (LLMs) for public use, such as ChatGPT and Google Bard, has opened up a multitude of potential benefits as well as challenges.

            Purpose: To evaluate and compare the accuracy and consistency of responses generated by the publicly available ChatGPT-3.5 and Google Bard to non-expert questions related to lung cancer prevention, screening, and terminology commonly used in radiology reports, based on the recommendations of the Lung Imaging Reporting and Data System (Lung-RADS) v2022 from the American College of Radiology and the Fleischner Society.

            Materials and Methods: Forty identical questions were created and presented to ChatGPT-3.5, the Google Bard experimental version, and the Bing and Google search engines by three different authors of this paper. Each answer was reviewed by two radiologists for accuracy. Responses were scored as correct, partially correct, incorrect, or unanswered. Consistency was also evaluated among the answers; here, consistency was defined as agreement between the three answers provided by each of ChatGPT-3.5, the Google Bard experimental version, Bing, and the Google search engine, regardless of whether the concept conveyed was correct or incorrect. Accuracy among the different tools was evaluated using Stata.

            Results: ChatGPT-3.5 answered 120 questions, with 85 (70.8%) correct, 14 (11.7%) partially correct, and 21 (17.5%) incorrect. Google Bard did not answer 23 (19.1%) questions; among the 97 questions it answered, 62 (51.7%) were correct, 11 (9.2%) partially correct, and 24 (20%) incorrect. Bing answered 120 questions, with 74 (61.7%) correct, 13 (10.8%) partially correct, and 33 (27.5%) incorrect. The Google search engine answered 120 questions, with 66 (55%) correct, 27 (22.5%) partially correct, and 27 (22.5%) incorrect. ChatGPT-3.5 was approximately 1.5 times more likely than Google Bard to provide a correct or partially correct answer (OR = 1.55, P = 0.004). ChatGPT-3.5 and the Google search engine were more likely to be consistent than Google Bard, by approximately 7-fold and 29-fold, respectively (OR = 6.65, P = 0.002 for ChatGPT-3.5; OR = 28.83, P = 0.002 for the Google search engine).

            Conclusion: Although ChatGPT-3.5 was more accurate than the other tools, none of ChatGPT-3.5, Google Bard, Bing, or the Google search engine answered all questions correctly and with 100% consistency.
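            As a quick arithmetic check on the abstract's headline comparison: the reported OR = 1.55 for ChatGPT-3.5 versus Google Bard follows from the stated counts once Bard's 23 unanswered questions are excluded. The minimal Python sketch below is illustrative only, not the authors' Stata analysis; it reproduces that sample odds ratio and runs a Fisher's exact test as a stand-in significance check, so its p-value need not match the reported P = 0.004.

from scipy.stats import fisher_exact

# 2x2 table: rows = tool; columns = (correct or partially correct, incorrect).
# Google Bard's 23 unanswered questions are excluded here; counting them as
# failures would give an odds ratio near 3.0 rather than the reported 1.55.
table = [
    [85 + 14, 21],  # ChatGPT-3.5: 99 acceptable vs. 21 incorrect (of 120 answers)
    [62 + 11, 24],  # Google Bard: 73 acceptable vs. 24 incorrect (of 97 answered)
]

# Sample odds ratio: (a/b) / (c/d) = ad/bc.
(a, b), (c, d) = table
print(f"sample odds ratio = {(a * d) / (b * c):.2f}")  # -> 1.55

# Stand-in significance test; the paper's P = 0.004 came from its own
# (Stata-based) analysis and need not agree with this exact test.
odds, p = fisher_exact(table)
print(f"Fisher's exact: OR = {odds:.2f}, p = {p:.3f}")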

              Appropriateness of Breast Cancer Prevention and Screening Recommendations Provided by ChatGPT

                Author and article information

                Contributors
                Role: Editor-in-Chief
                Journal
                Korean Journal of Radiology (Korean J Radiol; KJR)
                The Korean Society of Radiology
                ISSN: 1229-6929 (print); 2005-8330 (electronic)
                Published: August 2023 (online: 17 July 2023)
                Volume: 24; Issue: 8; Pages: 715-718
                Affiliations
                Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
                Author notes
                Corresponding author: Seong Ho Park, MD, PhD, Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea. parksh.radiology@gmail.com
                Author information
                https://orcid.org/0000-0002-1257-8315
                Article
                DOI: 10.3348/kjr.2023.0643
                PMCID: PMC10400373
                PMID: 37500572
                Copyright © 2023 The Korean Society of Radiology

                This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 10 July 2023
                Accepted: 10 July 2023
                Categories
                Editorial

                Radiology & Imaging
                generative, artificial intelligence, large language model, ChatGPT, publication, writing, editing, peer review, policy
