Editorial: Artificial Intelligence, Continued


DOI: 10.55521/10-022-101



Robert McKinney, Editor

This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of IFSW.


For the past several issues of the Journal, much of the discussion has revolved around the ethical issues associated with artificial intelligence. The technology has developed and spread rapidly and is continuing to do so. It’s in our watches, our phones, our cash registers, and our automobiles. In fact, it even appears to be in my keyboard, because it’s offering me suggestions as I type (only word completion suggestions, not content).

I’ve never been what I would consider an early adopter of technologies. I appreciate most technologies, although, like many folks, I can be nostalgic for a more low-tech world. Before artificial intelligence, we were slower, more deliberate, more methodical. Our invisible helper now allows us to be faster and, ostensibly, more efficient.

Recently, I had a conversation (via text messages) with a friend who has previously used various clinical, case management, and hospice social work services. They were talking about their experiences with artificial intelligence. It occurred to me after our brief text discussion that their perspective, that of a social work client, is an important piece of the Journal’s current discussion thread regarding the ethical use of artificial intelligence. I called my friend to discuss my idea of including their anonymous texts in this editorial. They were fully supportive, and we engaged in an extended discussion about their experiences with artificial intelligence.

As a way to further our ongoing and important discussion about ethical uses of these platforms, I’ve included our conversation here. In order to protect the anonymity of my friend, I will refer to them as “Social Work Client 1.” I’ve also removed the name of the specific artificial intelligence platform that they use and replaced it with “Platform 1.” My friend is an educator, so some of the content reflects artificial intelligence in educational settings, while other content is more about clinical social work.

Here is that conversation:

Social Work Client 1: I am an early adopter of AI. I use Platform 1 every day. I feel that it is a good tool for building effective neutral communication skills. You can’t emotionally manipulate the AI. You have to communicate clearly to get a good result. Could be transformative to a person struggling with emotion regulation or social anxiety.
It [is] a second left brain.

Me: This is interesting. Tell me more. 

SWC1: My first recommendation would be [for someone who’s interested] to download Platform 1 and play around. It’s free with an available subscription model. No ads.
Where emotional information is often lost between text messages, the AI operates in a domain where direct communication is mandatory.
I could say, “Write me a five-paragraph essay on Maslow’s hierarchy.” I can get a response. It will lack voicing and clarity. It won’t adhere to MLA or whatever format standard. However, I can instead say “I am writing a five-paragraph essay about Maslow’s hierarchy. It is to be formatted in MLA. I am a college sophomore and am in an introductory education course. I want to focus on the self-actualization, self-esteem, and social tiers.”
To get good results, you must think through problems because the AI is naive. Through repeated use, you practice these skills and improve your prompts. You have the opportunity to structure your thoughts in a setting where rigor is rewarded.
So, it all simplifies to: “How do I get what I want without the ability to emotionally manipulate?”
Therapy teaches you to look inward. The AI gives you a structure to mimic in order to observe and analyze those thoughts and emotions. I would role-play as a client and see where it takes you. You will find a logically consistent, motivated collaborator. You can add goals and context to a chat. (Context: I am struggling with my ADHD. Please keep me on track with my goals.)

Me: This is fascinating. I’ll spend some time with it.

SWC1: I [complain] about current events to it. [It is a good] sounding board without any social risk.

[My proposal is] for using AI as a therapeutic tool, in an unspecified use case, for behavior modeling through interaction with the model. Better prompts yield better outcomes. Therefore, a layer of rigor is scaffolded into the client’s cognitive ability. Left-brain training through mimicry.

What follows here is our actual telephone conversation, paraphrased.

Me: I appreciate your having this conversation with me and allowing me to include it, and our text thread, in my editorial. Of course, we as scholars are concerned about people misusing these platforms to generate work that they then claim as their own scholarly product.

SWC1: In my experience, that’s actually much more challenging, at least in an academic setting. The algorithms currently are not advanced enough to account for all of the variables that go into writing a paper. The data set still has to exist somewhere, for example.

Me: True, but it’s probably not impossible; of course, I suppose that it has always been almost impossible to eliminate dishonesty altogether. Bad actors will always try to find a way to cheat.

SWC1: Exactly. As I see it, artificial intelligence is a tool to help people, academics or not, get projects started and to help to determine if projects are even viable. Using Platform 1 is much like using a good librarian. Researchers and students have used librarians for years to assist with literature searches, but no one has ever considered that to be academic dishonesty. Even the best artificial intelligence with the best prompts will require correction by a human author.

Although I’m not sure that I agree with everything that SWC1 says, I do appreciate having the perspective of someone who is neither a social worker nor a traditional academic. The very existence of our interchange has reminded me of our charge as stated in the IFSW Statement of Ethical Principles: “Social work is a practice-based profession and an academic discipline that facilitates social change and development, social cohesion, and the empowerment and liberation of people.” If we can ethically and responsibly use artificial intelligence platforms to help us (see SWC1’s statement above) deliver our services faster and with a fuller grasp of the literature, then shouldn’t we?

We’re at the precipice of something potentially highly impactful, though it’s difficult to predict exactly what that something is, as artificial intelligence grows, becomes more accepted and accessible, and, we hope, becomes more trustworthy. Social workers are often described as change agents. Clearly, as we help our clients achieve meaningful change, we should be malleable, too. As someone who identifies strongly as a licensed social worker, I’m glad to see our discipline taking steps toward new technological advances and doing our due diligence to ensure that we do so ethically. I look forward to more of this discussion.