Editorial: Embracing AI in Social Work: Why Ethical Concerns Should Drive Integration, Not Avoidance


DOI: 10.55521/10-022-102

Barbara Hiltz, Special Contributor

This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of IFSW.


Acknowledgement: The author utilized Claude 3.7 Sonnet to improve the clarity and flow of this piece. All final revisions and content decisions were made by the author.

As a social work professor, I’ve noticed a recurring pattern in my conversations with students about artificial intelligence (AI). Many are quick to dismiss AI technologies outright—citing ethical concerns about bias, privacy, environmental impact, and the potential dehumanization of social work practice. While I appreciate and share many of these concerns, I worry that students are “throwing the baby out with the bathwater,” missing both the ethical imperatives for thoughtful AI integration and the ways we can mitigate the very concerns they raise.

My perspective enters an ongoing and important dialogue in this journal about artificial intelligence in social work. In Volume 21, Number 1, editor Dr. Stephen Marson (2024) shared his frustrating experiences with AI hallucinations when seeking scholarly references. Victor, Goldkind, and Perron (2024) responded with valuable insights about the limitations of large language models (LLMs) and emerging correctives. While acknowledging these legitimate concerns about AI accuracy, I argue that social work education faces an ethical imperative to engage with, rather than avoid, these technologies.

Addressing Legitimate Concerns

Much like the internet revolution of the 1990s and 2000s, AI is quickly and fundamentally changing our professional landscape. It is also here to stay. Rather than avoiding AI, social workers must engage with it critically and ethically. Here's how social workers can address some common concerns in their daily practice.

Bias and Fairness

Yes, AI systems can perpetuate biases. Social workers are uniquely positioned to identify and challenge algorithmic bias in the systems they encounter. When a housing assistance algorithm consistently disadvantages certain demographic groups, social workers should advocate for more equitable models. When automated eligibility screening systems create barriers for clients with language differences or limited digital literacy, there is work to do. These systems may use terminology or require knowledge that confuses clients, leading to benefit denials for those most in need. Social workers who understand these technological barriers can intervene by helping clients navigate systems, documenting systemic problems, and advocating for more accessible interfaces. By understanding the technology, even at a basic level, social workers become essential ethical guardrails, ensuring equitable access.

Privacy and Confidentiality

Privacy concerns are valid, but avoidance isn't the answer. In AI-assisted practice, these concerns extend beyond basic HIPAA compliance. Social workers must become fluent in how information flows through digital systems and be able to explain these complexities to clients. This includes creating clear consent processes that explain how AI tools may be involved in a client's care. Importantly, social workers have a professional obligation to understand the vendors and platforms they use, asking careful and critical questions about how data is owned and stored, who has access to it, and how it is used. Social workers should be leading conversations about ethical data use, ensuring that client confidentiality remains paramount even as delivery systems evolve.

Environmental Impact

The environmental cost of AI—from energy-intensive data centers to electronic waste—is a serious concern that aligns with social work’s commitment to environmental justice. However, avoiding AI doesn’t eliminate these impacts; it merely removes our voice from conversations about sustainable implementation. Practitioners can advocate for their agencies to conduct environmental impact assessments of AI systems, weighing the carbon footprint against potential benefits. This might mean choosing more efficient algorithms or limiting unnecessary data processing. 

It is also true, however, that organizations make many other decisions with tremendous environmental impact. If one is going to scrutinize the environmental impacts of AI, it is also worth asking questions about transportation and travel emissions, paper waste, extending the lifespan of existing hardware, and adopting systems that optimize resource allocation to reduce waste.

Human Connection

Perhaps the most persistent concern among social workers is that AI will erode the human relationship at the heart of our practice. In reality, thoughtful integration can deepen these connections. Deployed well, AI can handle routine paperwork, summarize case notes, and manage scheduling, freeing social workers to be fully present during client interactions rather than splitting attention between relationship and documentation. Social workers should approach AI as an enhancer of their human skills, not as a replacement. This means establishing clear boundaries about which aspects of practice remain exclusively human domains (ethical reasoning, empathic connection, complex clinical judgment) while identifying tasks that technology can support (transcription, information organization, pattern recognition across large datasets). By offloading cognitive burden in appropriate areas, social workers can bring more of their authentic, attentive presence to client relationships. The key lies in maintaining professional discernment about when and how to incorporate these tools, recognizing that technology serves practice, not the reverse.

The Ethical Cost of Avoidance

There is a lot of talk about the ethical costs of AI, but equal attention should be given to the ethical cost of avoidance. In today's rapidly evolving technological landscape, avoiding AI in social work education creates its own ethical dilemmas. As AI becomes integrated into social services, healthcare, and the other systems social workers will encounter, practitioners without AI literacy may inadvertently perpetuate, or fail to recognize, algorithmic biases affecting their clients. When social workers lack the skills to critically evaluate these systems, they risk reinforcing the very inequities our profession aims to address. Furthermore, as institutions increasingly rely on algorithms for decision-making, social workers must be prepared to advocate for their clients within these systems, understanding both their capabilities and limitations.

AI tools can significantly extend services to underserved populations who have historically faced barriers to accessing support. From chatbots that conduct initial screening to automated translation services for multilingual communities, these technologies can bridge gaps in service provision. By rejecting these tools wholesale, we risk limiting accessibility for the most vulnerable clients, particularly in rural areas or communities with provider shortages.

In a field where burnout is rampant and resources are scarce, the strategic implementation of AI can transform practice efficiency in ways that directly support ethical care. When social workers spend hours on documentation, data entry, and routine administrative tasks, they have less time for the meaningful human connection that defines our profession. AI can handle many of these routine functions, allowing social workers to use their distinct skillset to focus on complex clinical reasoning, relationship building, and advocacy—the aspects of social work that cannot be automated. By resisting these tools, we may inadvertently contribute to workforce strain, burnout, and diminished quality of care, all ethical concerns in their own right.

Conclusion

The question should not be whether to use AI in social work, but how to use it ethically, effectively, and sustainably. When social workers avoid or dismiss these tools, they forfeit the opportunity to shape how technology affects the clients they serve. This aligns with Victor, Goldkind, and Perron's (2024) imperative that "all social workers are trained in the ethical and effective use of LLMs" through developing digital literacy across education and practice. The core values of our profession (service, social justice, dignity and worth of the person, importance of human relationships, integrity, and competence) must guide how we approach these technologies. Rather than seeing AI as a threat to these values, I challenge social workers to see our ethical responsibility to ensure that these tools, and our use of them, embody those values.

Social workers have always adapted to changing social contexts. From settlement houses to organizing during the Civil Rights Movement to trauma-informed approaches in contemporary practice, our field evolves. The AI revolution presents the same challenge and opportunity: to evolve thoughtfully while protecting those most vulnerable. The ethical path forward isn't avoidance; it's engagement, education, and advocacy for AI systems that reflect social work values. Marson's (2024) frustrating experiences with AI hallucinations don't contradict this path; they underscore why critical engagement, rather than wholesale avoidance, is essential. What is required is a nuanced engagement that acknowledges both peril and promise.

References

Marson, S. (2024). Editorial: A follow-up on artificial intelligence. International Journal of Social Work Values and Ethics, 21(1), 9-13. https://doi.org/10.55521/10-021-102

Victor, B. G., Goldkind, L., & Perron, B. E. (2024). Forum: The limitations of large language models and emerging correctives to support social work scholarship: Selecting the right tool for the task. International Journal of Social Work Values and Ethics, 21(1), 200-207. https://doi.org/10.55521/10-021-112