
      Should We Acknowledge ChatGPT as an Author?

Editorial
      Journal of Epidemiology
      Japan Epidemiological Association


          Abstract

In 2020, large language models using artificial intelligence (AI), represented by GPT-3 (Generative Pre-trained Transformer 3), appeared, 1 marking a solid step towards Artificial General Intelligence (AGI), an intelligence that can learn or comprehend any intellectual task that a human can accomplish. OpenAI, an American AI research laboratory, built on the technology of GPT-3 and launched ChatGPT in November 2022. ChatGPT is an AI chatbot that generates coherent sentences by reproducing the statistical patterns found in a large database of text extracted from the Web. 2 This has had a major impact on the publishing industry, education, and science. There is no doubt that virtually every area of human writing will eventually involve AI technology, either explicitly or implicitly.

A letter to the editor on this topic, published online in the Journal of Epidemiology, offers a timely and well-considered insight into ChatGPT’s authorship. 3 The letter concluded, in accordance with the authorship guidelines recommended by the ICMJE (the International Committee of Medical Journal Editors), 4 that ChatGPT does not qualify as an author because it can neither approve the final manuscript nor take responsibility for its content. Having received this letter, we conducted a simple survey among our Editorial Board members, asking them about the potential role of ChatGPT and the authors’ responsibilities. None of the respondents thought that it could be an author. The majority (74%) thought that ChatGPT could be used as a tool, of whom 63% thought that its use should be disclosed during submission. Although not officially stated in our Guide for Authors, 5 we, as editors of the Journal of Epidemiology, agree that ChatGPT should not be acknowledged as an author of scientific papers. As mentioned in the letter cited above, 3 the Science Family of Journals banned the use of AI technologies without explicit permission from the editors, 6 while some other publishing groups subsequently announced policies allowing the use of AI as a tool (not as an author) under the condition of appropriate disclosure. 7 , 8

Additional issues need to be considered beyond deciding whether to treat ChatGPT as an author. Of greatest concern, ChatGPT can make serious errors. Scientific papers should draw inferences from facts; however, the text written by ChatGPT is not necessarily factual, and it often produces incorrect statements. We provide several examples in which ChatGPT offered an incorrect response in the field of epidemiology. For example, when we asked, “Please tell me relevant citations to evaluate the association of coffee intake with liver cancer risk in Japan.”, it offered the following as a relevant citation on February 20, 2023: “Inoue et al. Coffee and green tea consumption and the risk of liver cancer in Japan: the Japan Public Health Center-based Prospective Study. Cancer Causes Control. 2009;20(5):5–15. doi:10.1007/s10552-008-9235-5…”. 2 Unfortunately, no paper with that title exists, and the DOI points to an unrelated paper written by different authors. Furthermore, when we asked, “What are the top causes of death in Japan in 2020?”, it answered as follows on March 20, 2023: “According to the Ministry of Health, Labour and Welfare in Japan, the top three causes of death in 2019 were: 1. Cancer: 29.5% of deaths, 2. Heart disease: 15.1% of deaths, 3. Pneumonia: 8.4% of deaths”. 2 However, the actual top causes of death in 2019 were malignant neoplasms (27.3%), heart diseases (15.0%), and senility (8.8%); 9 ChatGPT incorrectly reported pneumonia as the third cause of death. A quick Internet search can show that these statements are false, but ChatGPT itself does not appear to make that judgment. Many of today’s ChatGPT sources are available on the Internet but have not undergone rigorous critical scrutiny, so there is a risk that uncertain information or data will be treated as fact. It is therefore the responsibility of the authors to confirm that text written by ChatGPT is correct, and the scientific community is responsible for monitoring this.

Another AI chatbot, Perplexity AI, 10 has recently drawn considerable attention because it sometimes provides more accurate answers and information sources than ChatGPT. Even so, it is unlikely that ChatGPT or Perplexity AI will satisfy the ICMJE requirements for authorship. Technologies such as ChatGPT are expected to advance significantly in the future. Indeed, OpenAI released GPT-4, a newer model behind ChatGPT, on March 14, 2023, which appears to be more reliable and able to handle more complex instructions than the earlier version. 11 We hope that by recognizing the chatbot’s shortcomings and using it effectively, scientific evidence will be published efficiently, and science will progress.
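The fabricated-citation example above suggests a simple safeguard that authors can apply: resolve any chatbot-suggested reference against the bibliographic record before citing it. The sketch below is not part of the editorial; it is a minimal illustration, assuming the `requests` package and the public Crossref REST API (https://api.crossref.org), of checking whether a DOI exists and whether its registered title matches the title the chatbot supplied.

```python
import requests

def verify_citation(doi: str, claimed_title: str) -> bool:
    """Check a chatbot-suggested DOI against the Crossref record.

    Returns True only if the DOI resolves and its registered title
    contains the claimed title (case-insensitive substring check).
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        print(f"DOI {doi} not found in Crossref")
        return False
    titles = resp.json()["message"].get("title", [])
    registered = titles[0] if titles else ""
    if claimed_title.lower() not in registered.lower():
        print(f"Title mismatch: DOI resolves to '{registered}'")
        return False
    return True

# The DOI ChatGPT offered for the nonexistent coffee/liver-cancer paper:
verify_citation(
    "10.1007/s10552-008-9235-5",
    "Coffee and green tea consumption and the risk of liver cancer in Japan",
)
```

A check like this catches both failure modes seen in the editorial: a DOI that does not resolve at all, and a DOI that resolves to an unrelated paper.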

          Related collections

Most cited references (2)


          Language Models are Few-Shot Learners

Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
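To make the abstract’s point about tasks being “specified purely via text interaction” concrete, here is a minimal sketch, not taken from the paper, of how a few-shot prompt is typically assembled: the demonstrations are simply concatenated into the text the model completes, with no gradient updates or fine-tuning. The translation task and prompt formatting are illustrative assumptions.

```python
def build_few_shot_prompt(demonstrations, query):
    """Concatenate K task demonstrations and a query into one text
    prompt; the model infers the task from this context alone."""
    lines = []
    for source, target in demonstrations:
        lines.append(f"English: {source}")
        lines.append(f"French: {target}")
    lines.append(f"English: {query}")
    lines.append("French:")  # the model completes from here
    return "\n".join(lines)

demos = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
print(build_few_shot_prompt(demos, "peppermint"))
```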

            Can ChatGPT Be Considered an Author of a Medical Article?

The technology of generative artificial intelligence (AI) is developing rapidly, and recently there has been great interest in the chatbot ChatGPT, released by OpenAI (San Francisco, CA, USA) in November 2022. 1 Its high performance is evidenced by the fact that it scored at or near the passing standard on the United States Medical Licensing Exam (USMLE), 2 and its potential implementation in healthcare is now under discussion in the United States. 3 It has also been reported to be difficult to distinguish between abstracts generated by ChatGPT and those written by humans, with scientists mistaking 32% of ChatGPT abstracts as human-produced. 4 These developments raise the issue of whether ChatGPT is capable of true authorship, especially as ChatGPT has already been named as a co-author of at least four scientific papers, including some in the fields of medicine and nursing. 5

To clarify this issue, we assessed whether ChatGPT actually meets the criteria for authorship of a medical article based on the guidelines of the International Committee of Medical Journal Editors (ICMJE). The ICMJE author criteria are as follows 6 :
1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
2. Drafting the work or revising it critically for important intellectual content; AND
3. Final approval of the version to be published; AND
4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

In order to provide a timely assessment of these criteria in the rapidly changing context of AI development, the criteria were reviewed and assessed by the two specialists involved in the present research project (KI and TN), then further discussed with the other contributor (PH). The results of this analysis are shown in Table 1, outlining the extent to which ChatGPT fulfills the criteria. The table reveals that, depending on the user’s prompt, ChatGPT can fulfill criteria 1 and 2, but that it cannot fulfill criteria 3 and 4.

Table 1. Does ChatGPT meet the authorship criteria of the International Committee of Medical Journal Editors?
1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work: Yes
2. Drafting the work or revising it critically for important intellectual content: Yes
3. Final approval of the version to be published: No
4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved: No

Of course, it is possible that the ICMJE will change its authorship criteria in response to developments in AI. For example, as ChatGPT appears to be helpful for increasing the productivity of authors, the organization may allow the inclusion of ChatGPT as a co-author so that readers can easily find articles that have used it. However, such changes would still not alter the fact that ChatGPT at present does not appear to be capable of thinking sufficiently independently to fulfill criteria 3 and 4, giving final approval to and being accountable for the work. Based on these considerations, we conclude that it is inappropriate for ChatGPT to be named as an author, at least in journals that have adopted the ICMJE criteria.

Nevertheless, we emphasize that it is essential for the transparency of any study using ChatGPT to clearly mention its use in the study’s acknowledgments. The ICMJE may consider adding such principles to its recommendations in its “Non-Author Contributors” section. 6 This suggestion is in line with the view of the World Association of Medical Editors that chatbots cannot be authors and that, if they are used, the paper’s authors should be transparent about this usage and take responsibility for the content produced by the chatbots. 7 In response to our prompt “Can you be a co-author of a medical article?”, ChatGPT itself gave the following answer, which is consistent with our view: “As a language model, I am not able to be a co-author of a medical article because I am not a human and do not have the ability to conduct research or contribute original ideas. However, I can assist with the writing and editing of an article. It is important to note that any information generated by me should be fact-checked and independently verified by a qualified professional before being used in any formal research or publication.”

The Science Family of Journals has already gone much further than this in restricting the use of AI in the articles it will accept for publication. It recently updated its editorial policies as follows 8 , 9 : “Artificial intelligence (AI) policy: Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors. In addition, an AI program cannot be an author of a Science journal paper. A violation of this policy constitutes scientific misconduct.” While we think that such a strict policy will certainly help to maintain authorial transparency, we are also concerned that it may be overly strict, prematurely preventing researchers from benefiting from the enhanced productivity that AI promises. It is important for humanity to consider from an early stage how to adopt AI technologies both practically and ethically in order to creatively advance scientific research, including research in epidemiology. Such discussions would help us become more creative in co-creation with AI.

              Author and article information

              Journal
              J Epidemiol
              J Epidemiol
              JE
              Journal of Epidemiology
              Japan Epidemiological Association
              0917-5040
              1349-9092
              5 July 2023
              8 April 2023
              2023
Volume: 33
Issue: 7
Pages: 333-334
              Affiliations
              [1 ]Department of Public Health, School of Medicine, Yokohama City University, Yokohama, Japan
              [2 ]Department of Health Data Science, Graduate School of Data Science, Yokohama City University, Yokohama, Japan
              [3 ]Division of Surveillance and Policy Evaluation, National Cancer Center Institute for Cancer Control, Chuo-ku, Tokyo, Japan
              Author notes
Address for correspondence. Atsushi Goto, MD, PhD, MPH, Department of Public Health, School of Medicine, Yokohama City University, 3-9 Fukuura, Kanazawa-Ku, Yokohama 236-0004, Japan (e-mail: agoto@yokohama-cu.ac.jp).
              Author information
              http://orcid.org/0000-0003-0669-654X
              http://orcid.org/0000-0001-8687-1269
              Article
              JE20230078
              10.2188/jea.JE20230078
PMCID: PMC10257990
PMID: 37032108
              © 2023 Atsushi Goto et al.

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

              History
Received: 23 March 2023
Accepted: 23 March 2023
              Funding
              Funded by: JST
              Award ID: JPMJPF1234
              Funded by: Yokohama City University
              Award ID: the 2021–2022 Strategic Research Promotion [Gran
              Categories
              Editorial
              Others
