Forum: Incorporating the Use of Generative Artificial Intelligence and Large Language Models into Publication Standards: A Call for Editorial Policy Based on Social Work Values


DOI: 10.55521/10-020-210

Dawn Apgar, PhD, JSWVE Policy Board

Apgar, D. (2023). Forum: Incorporating the Use of Generative Artificial Intelligence and Large Language Models into Publication Standards: A Call for Editorial Policy Based on Social Work Values. International Journal of Social Work Values and Ethics, 20(2), 200-205.

This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of IFSW.


The use of generative Artificial Intelligence (AI) and Large Language Models (LLMs) in research and scholarly writing has challenged editorial policy, with varying tolerance for the extent to which this technology should be allowed. Journals need to re-examine their standards to ensure transparency and accountability with regard to the use of AI and LLMs in the preparation of manuscripts and the scientific investigation upon which these articles are based. Social workers are ideally suited to assume leadership roles in the formulation of editorial policies given their core commitment to integrity and to making certain that technology use is consistent with their comprehensive ethical code.

Keywords: Artificial intelligence, research, values, scholarship, editorial policy

Over the last year, since the launch of ChatGPT, the use of generative Artificial Intelligence (AI) and Large Language Models (LLMs) in research and scholarship has caused concern. Additionally, alarmed by students' increasing access to and use of generative AI, university faculty have scrambled to change the ways that they teach to ensure generative AI is not used in assignments and tests (Huang, 2023). Generative AI has also seized the attention of publishers as they wonder whether its use will negatively impact readership of their online and open-access articles (Robertson, 2023) and/or the integrity of published work (Liverpool, 2023). Despite these concerns, there is a consensus that generative AI and LLMs have solidified a prominent place in higher education, with many journals inviting scholarship on their use in teaching and learning (Bilal et al., 2023; Eysenbach, 2023).

Given this new landscape, there is a need for sound, comprehensive editorial policies with regard to the use of generative AI and LLMs to ensure both transparency and accountability. While Science has banned text generated by ChatGPT (or other AI tools) in journal submissions (Science, 2023), such a prohibition is not universal and considered naïve and even harmful by some (Claybourn, 2023; Heaven, 2023). Many journals and editorial societies, such as the World Association of Medical Editors, have developed recommendations for using chatbots and generative AI ethically in research and scholarly publication (Zielinski et al., 2023).

A recently published journal article that relied heavily on ChatGPT in its preparation states that LLMs have the ability to “revolutionize social work research,” calling for researchers and editors to mitigate any disruptive consequences through the use of quality controls and adherence to ethical standards (Victor et al., 2023). In response to this appeal, the Policy Board of the International Journal of Social Work Values and Ethics recently adopted and published standards to help authors understand the expectations when using generative AI in their submitted work. The need for editorial standards is dire given that the number of manuscripts that use generative AI or LLMs in writing or research is likely to increase as scholars discover ways to use this technology more effectively and to be more prolific with less effort. Such policy makes transparent the agreed-upon practices used to evaluate manuscripts and strengthens accountability to readers who rely on researchers, reviewers, and editors to ensure the quality of published scholarship.

Standards about the ethical use of generative AI or LLMs in research will likely change and evolve, necessitating constant monitoring and modification. For example, while most current editorial policies in scientific journals now prohibit ChatGPT from being listed as an author, views about this practice dramatically shifted in a short time period and are inconsistent within the publishing industry (O’Connor & ChatGPT, 2023; Zarefsky, 2023). Indeed, a search of the Amazon bookstore reveals that more than 200 books have ChatGPT as an author or coauthor (Nolan, 2023). Thus, what may be considered acceptable differs within and across disciplines and publication types, with changes likely as the uses of generative AI and LLMs in scientific inquiry and writing are better understood.

While there are tools to assist reviewers and editors with detecting the use of generative AI and LLMs in scholarly writing (Salleh, 2023), there is a need to ensure that volunteers and staff involved in editorial decisions and publications are well versed in their use. Reviewers may not be familiar with generative AI and LLMs, so they will need education about their application in the research process and guidance about what to do if manuscripts do not adhere to established editorial standards. Editors and editorial staff will also likely need to make themselves readily available to answer questions and provide support to authors and reviewers during this transformative period.

Recruiting manuscript reviewers can generally be challenging, as academics are often not rewarded for this labor-intensive professional service. Publishers may not be able to keep pace with reviews if scholarly productivity increases due to the use of generative AI and LLMs. Ironically, this potential problem can be alleviated by the use of generative AI and LLMs to assist with manuscript reviews. Generative AI software, such as ChatGPT, can identify language-related errors; provide suggestions for improving clarity, conciseness, and overall readability; ensure adherence to journal-specific style guidelines; detect plagiarism; verify and cross-reference information; and generate prompts or questions that could be considered when conducting human peer reviews. If used for these purposes, policies about generative AI and LLMs in review processes will also be needed to ensure transparency and accountability to both authors and readers.

There are currently unprecedented challenges related to how to appropriately use generative AI and LLMs in social work research, scholarship, and publication. Fortunately, social workers are well poised to be leaders in formulating necessary policy as they have a comprehensive ethical code and are driven by the professional values of integrity and competence. Generative AI and LLMs are innovative technologies that have significant benefits and risks. The need for leadership in editorial policy development has never been more important. Social workers are skilled at navigating complex situations and balancing competing interests, making them professionally prepared to ensure that generative AI and LLMs are used in a fair, just, and inclusive manner.


Bilal, D., He, J., & Liu, J. (2023). Call for papers: Artificial intelligence (AI) in education: Transforming teaching and learning. Information and Learning Sciences.

Claybourn, C. (2023, May 9). Why some college professors are embracing ChatGPT. U.S. News and World Report.

Eysenbach, G. (2023). The role of ChatGPT, generative language models, and artificial intelligence in medical education: A conversation with ChatGPT and a call for papers. JMIR Medical Education.

Heaven, W.D. (2023, April 6). ChatGPT is going to change education, not destroy it. MIT Technology Review.

Huang, K. (2023, January 16). Alarmed by A.I. chatbots, universities start revamping the way they teach. The New York Times.

Liverpool, L. (2023, May 31). AI intensifies fight against ‘paper mills’ that churn out fake research. Nature.

Nolan, B. (2023, February 23). More than 200 books in Amazon’s bookstore have ChatGPT listed as an author or coauthor. Insider.

O’Connor, S., & ChatGPT. (2023). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education in Practice, 66.

Robertson, K. (2023, March 30). Publishers worry A.I. chatbots will cut readership. The New York Times.

Salleh, A. (2023, June 7). ChatGPT-generated scientific papers could be picked up by new AI-detection tool, say researchers. ABC Science.

Science. (2023, June 8). Science journals: Editorial policies.

Victor, B., Sokol, R., Goldkind, L., & Perron, B. (2023). Recommendations for social work researchers and journal editors on the use of generative AI and large language models. Journal of the Society for Social Work and Research.

Zarefsky, M. (2023, April 21). ChatGPT shouldn’t be listed as JAMA author—and ChatGPT agrees. American Medical Association.

Zielinski, C., Winker, M., Aggarwal, R., Ferris, L., Heinemann, M., Lapeña, J., Pai, S., Ing, E., Citrome, L., Alam, M., Voight, M., & Habibzadeh, F. (2023, May 31). WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. World Association of Medical Editors (WAME).