Artificial Intelligence in Social Work: Emerging Ethical Issues


DOI: 10.55521/10-020-205


Frederic G. Reamer, Ph.D.
Rhode Island College
freamer@ric.edu

Full disclosure: Frederic G. Reamer is a member of the IJSWVE editorial board. IJSWVE uses an anonymous review process in which authors do not review their own work and reviewers do not know authors’ identities.

Reamer, F. (2023). Artificial Intelligence in Social Work: Emerging Ethical Issues. International Journal of Social Work Values and Ethics, 20(2), 52-71. https://doi.org/10.55521/10-020-205

This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of IFSW.

Abstract

Artificial intelligence (AI) is becoming increasingly prevalent in social work. AI is being used to conduct risk assessments, assist people in crisis, strengthen prevention efforts, identify systemic biases in the delivery of social services, provide social work education, and predict social worker burnout and service outcomes, among other uses. There is now considerable literature on the ways in which social workers and other human service professionals can use AI to assist vulnerable people. Yet social work’s literature does not include in-depth examination of the ethical implications of practitioners’ use of AI. The purpose of this article is to examine ethical issues related to social workers’ use of AI; apply relevant ethical standards; and outline elements of a strategy for social workers’ ethical use of AI. Key ethical issues addressed include informed consent and client autonomy; privacy and confidentiality; transparency; client misdiagnosis; client abandonment; client surveillance; plagiarism, dishonesty, fraud, and misrepresentation; algorithmic bias and unfairness; and use of evidence-based AI tools.

Keywords: Artificial intelligence, ethics, machine learning, technology, values

Artificial intelligence (AI) has come of age. AI has the potential to transform social work and enhance the profession’s ability to serve clients, organizations, and communities. AI can be used in clinical, administrative, advocacy, and policy contexts in social work. It can enable social workers to analyze data quickly in ways that lead to meaningful services and interventions, risk assessment, outcomes forecasting, and efforts to address systemic bias in the delivery of social services.

At the same time, AI comes with noteworthy ethical challenges, especially related to issues of informed consent and client autonomy; privacy and confidentiality; transparency; client misdiagnosis; client abandonment; client surveillance; plagiarism, dishonesty, fraud, and misrepresentation; algorithmic bias and unfairness; and use of evidence-based AI tools (Frackiewicz, 2023; Plante, 2023; Rubeis, 2022; Tambe & Rice, 2018; Terra, et al., 2023).

Based on a comprehensive review of the literature, social work does not yet offer practitioners and educators an in-depth exploration of key ethical challenges related to the use of AI. The purpose of this article is to examine ethical issues related to social workers’ use of AI; apply relevant ethical standards; and outline a strategy for social workers’ ethical use of AI.

The History and Nature of Artificial Intelligence

The term artificial intelligence was coined in 1955 by John McCarthy, who went on to become a longtime Stanford University professor. AI combines computer science and datasets to simulate human intelligence and enable problem-solving in diverse contexts. AI includes what is known as machine learning, which uses historical data to predict and shape new output. The term “generative AI” refers to the creation of images, videos, audio, text, and 3D models by learning patterns from existing data and using them to generate new outputs. The European Commission (2019) has defined AI as follows:

Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications). (p. 1)

AI can take the form of expert systems, natural language processing, speech recognition, and machine vision. AI depends on algorithms to enhance machine learning, reasoning, self-correction, and creativity. In health care generally, AI has been used to diagnose disease, facilitate patient treatment, automate repetitive tasks, manage medical records, provide customer service using chatbots, reduce dosage errors, provide robot-assisted services, analyze patient scans, and detect fraud (Rong, et al., 2020).

More specifically related to social work, the field of affective computing, also commonly referred to as emotion AI, is a subfield of computer science that originated in the 1990s (Luxton, 2016; Royer, 2021). These applications rely primarily on machine learning, computer vision, and natural language processing. Machine learning software is designed to enhance accuracy in diagnosing mental health conditions and predicting client outcomes. Computer vision analyzes images and nonverbal cues generated by clients, such as facial expressions, gestures, eye gaze, and posture, to interpret clients’ communications. Natural language processing entails speech recognition and text analysis to simulate human conversations via chatbot computer programs and to create and understand clinical documentation.
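To ground these terms, the following minimal sketch (in Python, using the scikit-learn library) illustrates the kind of supervised text-classification pipeline that underlies many natural language processing tools. The clinical notes, risk labels, and model choices are invented for illustration and do not represent any actual product discussed in this article.

```python
# A minimal, hypothetical sketch of supervised text classification,
# the technique underlying many NLP-based screening tools.
# The example notes and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical, labeled examples (in practice: thousands of de-identified records).
notes = [
    "client reports feeling hopeful and connected to family",
    "client describes persistent hopelessness and withdrawal",
    "client sleeping well, engaged in community activities",
    "client reports insomnia, isolation, and intrusive thoughts",
]
labels = ["lower_risk", "elevated_risk", "lower_risk", "elevated_risk"]

# Convert text to numeric features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

# The fitted model can then score a new, unseen note. The output is a
# probability, not a diagnosis: a practitioner must interpret it.
new_note = ["client mentions feeling isolated and unable to sleep"]
print(model.predict(new_note), model.predict_proba(new_note))
```

As the final comment suggests, tools built this way generate statistical estimates from past data; they cannot substitute for a clinician’s independent judgment, a point taken up in the ethics discussion below.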

Uses of Artificial Intelligence in Social Work

To develop ethics guidelines specifically for social workers, it is important to appreciate the diverse ways in which AI is being used in the profession and in allied behavioral health professions. This context will inform the development of ethics-based protocols. In clinical social work, the context in which AI has been especially prominent, this technology is being used to conduct risk assessments, assist people in crisis, strengthen prevention efforts, identify systemic biases in the delivery of social and behavioral health services, provide social work education, and predict social worker burnout and service outcomes (Asakura, et al., 2020; Frackiewicz, 2023; Gillingham, 2019; Grządzielewska, 2021; Jacobi & Christensen, 2023; Keddell, 2019; Lanier, et al., 2020; Liedgren, et al., 2016; Molala & Mbaya, 2023; Rice, et al., 2018; Schneider & Seelmeyer, 2019; Søbjerg, 2022; Tambe & Rice, 2018). For example, the Trevor Project has partnered with Google.org to launch the Crisis Contact Simulator, a counselor training tool powered by AI. The model simulates digital conversations with LGBTQ youths in crisis and enables counselors to experience realistic practice conversations before taking live ones.

Woebot, a therapeutic chatbot that functions as an automated therapist, is another AI behavioral health tool. Woebot uses natural language processing and learned responses to simulate therapeutic conversation, remember the content of past sessions, and deliver advice about mood and other struggles.

Wysa is an AI service that responds to the emotions individuals express, drawing on evidence-based cognitive behavioral techniques, dialectical behavior therapy (DBT), meditation, breathing exercises, yoga, and motivational interviewing to assist people. Pyx Health offers an app designed to communicate with people who feel chronically lonely. The Heyy app gives users access to self-assessments and research-backed resources on emotional well-being, as well as links to behavioral health professionals for intensive therapy. ChatGPT offers people nearly instant suggestions for addressing their distress, such as ways to increase relaxation, improve sleep, reduce caffeine and alcohol consumption, challenge negative thoughts, reduce high-risk behaviors, and seek the support of friends and family.

Social workers who serve military personnel and veterans have access to AI tools designed specifically for these unique client populations. For example, the PTSD Coach app is a mobile application designed to help veterans and service members manage symptoms of PTSD. The app provides a variety of tools and resources to assist users in coping with PTSD symptoms associated with military service, including anxiety, depression, and sleep issues. It offers an interactive self-assessment tool that helps users track their symptoms over time, and it provides strategies to manage symptoms when they occur. The app also includes audio-guided relaxation exercises and a virtual coach that can help users develop coping skills and set personal goals. In addition, it provides access to resources such as crisis hotlines, support groups, and other helpful websites.

The AIMs (Anger and Irritability Management Skills) app is a mobile application designed to help veterans and military service members manage feelings of anger and irritability. The app provides a variety of tools and resources to help users identify triggers for anger and develop strategies for managing their emotions. It offers a self-assessment tool to track anger symptoms over time, as well as interactive exercises to teach relaxation techniques and problem-solving skills. Users can also create a personalized anger management plan, setting goals and strategies to manage their anger in specific situations. The app also provides a journaling feature to track progress and identify areas for improvement. Users can learn skills to improve communication, strengthen relationships, and reduce stress levels.

Mindfulness Coach is an app that provides a variety of guided mindfulness exercises that can be tailored to individual needs and preferences. It offers a range of exercises, including body scans, breathing exercises, and guided meditations, which can help users reduce stress, manage anxiety, and improve overall well-being. Users can set reminders to practice mindfulness throughout the day, track progress over time, and customize exercises to suit their needs. The app can customize mindfulness exercises to specific needs, such as managing symptoms of PTSD, depression, or anxiety. The app also provides resources and guidance to help users develop a mindfulness practice that can be incorporated into their daily routine.

The Department of Veterans Affairs’ (VA) Annie mobile app is a Short Message Service (SMS) text messaging tool that promotes self-care for veterans. Clients using Annie receive automated prompts to track and monitor their own health and motivational/educational messages. The Annie App for Clinicians allows social workers and other behavioral health professionals to use and create care protocols that allow clients to submit their health readings back to Annie. Messages and patient data are stored in the Annie system where clinicians can view the texts and readings as needed.

At the programmatic level, predictive analytics, or the use of data for forecasting service successes and challenges, can help agencies understand how best to allocate scarce resources. For example, DataKind, a data science volunteer corps, has worked with food pantries to use historical data and machine learning algorithms to predict a client’s level of dependency on the pantry. By mining data, the organization can prioritize resources to avert a crisis of food insecurity before it escalates (Goldkind, 2021). As another example, the Chicago Data Collaborative mines data to understand the inner workings of the criminal justice system by pooling data from participating institutions and organizations in the Chicago metropolitan area. Members and partners seek out data from public agencies to create a public portrait of the criminal justice system, which the collaborative makes available on its Web site (Goldkind, 2021).
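To make the mechanics of predictive analytics concrete, the sketch below trains a toy model on invented historical service records to flag clients who may need proactive outreach. The feature names, data, and model are hypothetical; this is not DataKind’s or any agency’s actual method.

```python
# Hypothetical sketch of programmatic predictive analytics: training on
# historical service records to flag clients who may need proactive outreach.
# Feature names and data are invented; this is not any agency's actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row: [visits_last_90_days, household_size, months_since_first_visit]
X = np.array([
    [2, 3, 6], [9, 5, 24], [1, 1, 2], [12, 4, 30],
    [3, 2, 8], [10, 6, 18], [2, 4, 5], [11, 3, 36],
])
# Label: 1 = became heavily dependent on the pantry, 0 = did not.
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Scores can help prioritize scarce outreach resources; they should inform,
# not dictate, human decisions about whom to serve.
print(model.predict_proba(X_test)[:, 1])
```

A design point worth noting: models like this are only as sound as the historical records they learn from, which is why the sections on algorithmic bias and evidence-based tools below apply directly to programmatic uses of AI.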

Core Ethical Challenges and Resources

Several core ethics-informed resources addressing social workers’ use of technology are available to guide practitioners’ use of AI in various workplace contexts. These guidelines focus on three distinct, albeit related, domains: (1) practice standards, (2) regulatory and licensing standards, and (3) code of ethics standards. In 2013 the Association of Social Work Boards (ASWB) board of directors appointed an international task force to develop model regulatory standards for technology and social work practice. ASWB embarked on development of new ethics-related standards in response to demand from regulatory bodies around the globe for guidance concerning social workers’ evolving use of technology. The task force included representatives from prominent social work practice, regulation, and education organizations throughout the world. It developed model standards for social workers who use digital and other electronic technology to provide information to the public, deliver services to clients, communicate with and about clients, manage confidential information and case records, and store and access information about clients. The standards include extensive ethics guidelines addressing seven key concepts: practitioner competence; informed consent; privacy and confidentiality; boundaries, dual relationships, and conflicts of interest; records and documentation; collegial relationships; and social work practice across jurisdictional boundaries. These model standards, formally adopted in 2015, are now influencing the development of licensing and regulatory laws around the world (López Peláez & Kirwan, 2023), along with various efforts in other nations (British Association of Social Workers, 2018; Diez, 2023; Pascoe, 2023).

In 2017, following unprecedented collaboration among key social work organizations in the U.S.—the National Association of Social Workers (NASW), Council on Social Work Education (CSWE), Association of Social Work Boards (ASWB), and Clinical Social Work Association (CSWA)—the profession formally adopted new, comprehensive practice standards, including extensive ethics guidelines that focused on social workers’ and social work educators’ use of technology (NASW, CSWE, ASWB, & CSWA, 2017). Approved by these respective organizations’ boards of directors, these transformational, comprehensive standards address a wide range of compelling ethical issues related to social workers’ use of technology to provide information to the public; design and deliver services; gather, manage, and store information; and educate social workers. Further, in 2017, NASW adopted a revised code that includes extensive technology-related additions pertaining to informed consent; competent practice; conflicts of interest; privacy and confidentiality; sexual relationships; interruption of services; unethical conduct of colleagues; supervision and consultation; education and training; client records; and evaluation and research.

Social workers who use or are contemplating using AI should draw on these prominent guidelines and address a number of key ethical considerations related to informed consent and client autonomy; privacy and confidentiality; transparency; client misdiagnosis; client abandonment; client surveillance; plagiarism, dishonesty, fraud, and misrepresentation; algorithmic bias and unfairness; and use of evidence-based AI tools. These key ethics concepts should be reflected in ethics-informed protocols guiding social workers’ use of AI.

Informed consent and client autonomy

Social workers have always understood their duty to explain the potential benefits and risks of services as part of the informed consent process (Barsky, 2019; Reamer, 2018a, 2018b, 2023a). When using AI, practitioners should inform clients of relevant benefits and risks and respect clients’ judgment about whether to accept or decline the use of AI. According to the NASW Code of Ethics (2021), “social workers who use technology to provide social work services should obtain informed consent from the individuals using these services during the initial screening or interview and prior to initiating services. Social workers should assess clients’ capacity to provide informed consent and, when using technology to communicate, verify the identity and location of clients” (standard 1.03[f]).

Privacy and confidentiality

Data gathered from clients by social workers using AI must be protected. Social workers have a duty to ensure that the AI software they are using is properly encrypted and protected from data breaches to the greatest extent possible. Social workers must take steps to prevent inappropriate access to AI-generated data by third-parties, for example, vendors who sponsor the AI software social workers use. According to the NASW Code of Ethics (2021), “social workers should take reasonable steps to protect the confidentiality of electronic communications, including information provided to clients or third parties. Social workers should use applicable safeguards (such as encryption, firewalls, and passwords) when using electronic communications such as e-mail, online posts, online chat sessions, mobile communication, and text messages” (standard 1.07[f]).
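As one concrete illustration of the safeguards the standard names, the following minimal sketch encrypts a client note at rest using symmetric encryption from Python’s widely used cryptography package. A real deployment would also require key management, access controls, and transport security, all of which this example omits.

```python
# Minimal illustration of encrypting client data at rest using symmetric
# encryption (Fernet, from the `cryptography` package). A real system would
# add key management, access controls, and encrypted transport as well.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, stored in a secure key vault
cipher = Fernet(key)

note = b"Client session note: discussed housing instability."
token = cipher.encrypt(note)  # ciphertext is safe to store

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(token) == note
```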

Transparency

Consistent with the time-honored concept of informed consent in social work, social workers who use AI should inform clients of any unauthorized disclosure of clients’ protected health information, for example, as a result of computer hacking or failed online or digital security. According to the technology standards adopted by NASW, ASWB, CSWE, and CSWA (2017), “regardless of the precautions that social workers take to ensure that client records are gathered, managed, and stored in a secure manner, confidential electronic records may be breached. Social workers should comply with ethical standards and relevant federal and state laws regarding any duty to inform clients about possible breaches of confidentiality. Social workers should also develop policies and procedures detailing how they would inform clients regarding breaches of confidentiality” (standard 3.05).

Client misdiagnosis

Clinical social workers who rely on AI to assess clients’ behavioral health challenges must take steps to minimize the likelihood that their digital protocols will generate misdiagnoses. This may occur when social workers do not supplement their AI-generated assessments with their own independent assessments and judgment. Misdiagnosis may lead to inappropriate or unwarranted interventions which, in turn, may cause significant harm to clients and expose social workers to the risk of malpractice lawsuits and licensing board complaints (Reamer, 2023a). According to Yan, Ruan, and Jiang (2023), “current AI is still far from effectively recognizing mental disorders and cannot replace clinicians’ diagnoses in the near future” (p. 2).

Client abandonment

Social workers who rely on AI to connect with clients must take steps to respond to their messages and postings in a timely fashion, when warranted. To use the legal term, social workers must take steps to avoid “abandoning” clients who use AI to communicate significant distress. In malpractice litigation, abandonment occurs when practitioners do not respond to clients in a timely fashion or terminate services in a manner inconsistent with standards in the profession. For example, a client who communicates suicidal ideation via AI, does not receive a timely response from their social worker, and survives a suicide attempt may have grounds for a malpractice claim. According to the NASW Code of Ethics (2021), “social workers should take reasonable steps to avoid abandoning clients who are still in need of services. Social workers should withdraw services precipitously only under unusual circumstances, giving careful consideration to all factors in the situation and taking care to minimize possible adverse effects. Social workers should assist in making appropriate arrangements for continuation of services when necessary” (standard 1.17[b]).

Client surveillance

One of the inherent risks of AI is the possibility that third parties will use available data inappropriately and without authorization for surveillance purposes. For example, social workers who provide reproductive health services to clients in states where abortion is illegal must be cognizant of the possibility that prosecutors will subpoena electronically stored information (ESI) generated by AI to prosecute pregnant people who seek abortion services and the practitioners who assist them in their decision making. Under the Federal Rules of Civil Procedure, ESI includes any documents or information stored in electronic form (Yeazell, et al., 2022). Although ESI in social workers’ possession has always been discoverable during legal proceedings, a newer challenge arises when ESI includes information generated by AI (for example, information about reproductive health generated by chatbots used by clients and social workers). A client or social worker who uses AI to search online for abortion-related information and services creates a digital trail (Reamer, 2023b).

Plagiarism, dishonesty, fraud, and misrepresentation

One advantage of AI is that it enables social workers to generate useful job-related information quickly. For example, social workers may use ChatGPT to produce content that may be useful in grant applications, program evaluations, advocacy efforts, and fundraising appeals. Social workers who take advantage of this powerful AI tool must be sure to cite their sources and comply with “fair use” doctrine to avoid allegations of plagiarism, dishonesty, fraud, and misrepresentation (Keegan, 2023). Although using content from ChatGPT is not necessarily plagiarism, it is possible that ChatGPT incorporates content from other authors whose work should be cited (Pocock, 2023). Social workers who use AI should comply with prevailing ethical standards (NASW, 2021):

Social workers should not participate in, condone, or be associated with dishonesty, fraud, or deception. (standard 4.04)

Social workers should take responsibility and credit, including authorship credit, only for work they have actually performed and to which they have contributed. (standard 4.08[a])

Social workers should honestly acknowledge the work of and the contributions made by others. (standard 4.08[b])

Algorithmic bias and unfairness

AI’s dependence on machine learning, which draws from large volumes of available data that may not be entirely representative of social workers’ clients, comes with a risk that algorithms used to assess clients and develop interventions and treatment plans will incorporate significant bias related to race, ethnicity, gender, sexual orientation, gender expression, and other vulnerable or protected categories. Algorithmic bias can manifest itself when AI is used as part of social service agency recruitment, online marketing, and facial recognition, among other tools. According to Lee, Resnick, and Barton (2019), “because machines can treat similarly-situated people and objects differently, research is starting to reveal some troubling examples in which the reality of algorithmic decision-making falls short of our expectations. Given this, some algorithms run the risk of replicating and even amplifying human biases, particularly those affecting protected groups.”
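One common way auditors probe for such bias, sketched below with invented data, is to compare a model’s favorable-outcome rates across demographic groups (sometimes called demographic parity). Real audits use multiple fairness metrics and far larger samples; this is an illustration of the arithmetic, not a complete methodology.

```python
# Minimal sketch of one common bias audit: comparing a model's favorable-
# outcome rate across demographic groups (demographic parity). Data are
# invented; real audits use multiple metrics and much larger samples.
import pandas as pd

# Hypothetical model outputs: 1 = approved for a service, 0 = not approved.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   1,   0,   0,   0],
})

rates = results.groupby("group")["prediction"].mean()
print(rates)  # favorable-outcome rate per group

# Disparate impact ratio: values far below 1.0 suggest one group is
# receiving favorable outcomes much less often and warrant investigation.
print("disparate impact ratio:", rates.min() / rates.max())
```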

Ethical use of evidence-based AI tools

AI technology is developing rapidly, and best practices are evolving along with it. Social workers who incorporate AI into their practice have an ethical duty to keep pace with research-informed developments and adjust their protocols accordingly. A key resource is the Association for the Advancement of Artificial Intelligence, whose code of ethics is consistent with traditional social work values. According to the technology standards adopted by NASW, ASWB, CSWE, and CSWA (2017), “technology is constantly evolving, as is its use in various forms of social work practice. Social workers should keep apprised of the types of technology that are available and research best practices, risks, ethical challenges, and ways of managing them” (standard 2.23).

A Protocol for Ethical Use of Artificial Intelligence

In recent years, AI experts have developed protocols to design and implement ethics-based use of AI. These include a series of concrete steps social workers can take to increase the likelihood of compliance with prevailing ethical standards (Gattadahalli, 2020):

Establish ethics-based governing principles

Artificial intelligence initiatives should adhere to prominent ethics-informed principles to ensure these efforts are designed and implemented responsibly. Key principles include:

  • The AI technology does no harm. AI developers should take steps to protect clients and other members of the public they serve.
  • The AI technology is designed and developed using transparent protocols and auditable methodologies.
  • AI tools collect and treat client data in ways that reduce biases against population groups based on race, ethnicity, culture, gender, sexual orientation, gender expression, religion, and other potential sources of bias.
  • Clients are informed of known risks and benefits of AI technologies so they can make informed decisions about their use.

Establish a digital ethics steering committee

Ideally, organizations that employ social workers and use AI would create a digital ethics steering committee composed of key staff who are familiar with digital technology in general, AI technology in particular, and prevailing ethical standards and best practices. This committee would have oversight responsibilities related to the design and implementation of AI.

Convene diverse focus groups

Focus groups that include individuals from the diverse populations from whom datasets used in AI may be collected can help reduce and prevent algorithmic bias. Focus groups may include clients, client advocates, practitioners, researchers, educators, community leaders, agency administrators, and policymakers. They can review current AI protocols and suggest modifications to address risks related to algorithmic bias. 

Subject algorithms to peer review

Rigorous peer review processes can help identify and address blind spots and weaknesses in AI protocols. Peer reviewers may include social workers familiar with AI, researchers, educators, and diverse groups of data scientists.

Conduct AI model simulations

It can be useful to develop simulation models that test scenarios in which AI tools are susceptible to bias in an effort to reduce risk and address possible algorithmic bias. Feedback generated by simulations can identify potential ethics-related problems associated with AI.
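One simple form such a simulation can take is a counterfactual test: present the model with cases that are identical except for a protected attribute and flag any case in which the output changes. The sketch below uses a hypothetical stand-in scoring function to illustrate the idea; it is not a prescribed testing procedure.

```python
# Sketch of a counterfactual simulation: hold every feature constant except a
# protected attribute and check whether the model's output changes. The model
# here is a stand-in; any scoring function could be substituted.
def hypothetical_model(case: dict) -> float:
    """Stand-in risk scorer. A biased model might (wrongly) use 'race'."""
    return 0.1 * case["prior_referrals"] + 0.05 * case["household_size"]

base_case = {"prior_referrals": 3, "household_size": 4, "race": "A"}

for race in ["A", "B", "C"]:
    variant = {**base_case, "race": race}
    print(race, hypothetical_model(variant))

# If scores differ across variants that are identical apart from 'race',
# the model is using the protected attribute and should be re-examined.
```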

Develop social worker-focused guidance for accessing and using AI results

Social workers must be trained to give appropriate weight to AI tools to supplement—and not replace—their professional judgment. It is essential that social workers understand distinctions between appropriate and inappropriate use of AI.

Develop external change communication and training strategies

As applications of AI in health and human services evolve, a carefully designed messaging strategy is important to ensure that clients understand the key benefits and risks of AI used in social work and that practitioners can communicate them clearly and coherently. An effective communication and training protocol directed to all relevant parties, both within and outside of social workers’ employing organizations, is essential.

Maintain a log of tests

Ideally, organizations that employ social workers and use AI will maintain a comprehensive database summarizing the results of periodic tests to identify the strengths and limitations of their AI protocols. Over time, this database can be monitored to enhance the effectiveness of AI and compliance with relevant ethical standards.
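As a minimal illustration of what such a log might look like, the sketch below uses Python’s built-in sqlite3 module to record the results of periodic audits in a queryable table. The schema and entries are invented examples, not a prescribed format.

```python
# Minimal sketch of a structured log of periodic AI tests, using Python's
# built-in sqlite3 module. The schema and entries are invented examples.
import sqlite3
from datetime import date

conn = sqlite3.connect("ai_audit_log.db")  # a real log needs access controls
conn.execute("""
    CREATE TABLE IF NOT EXISTS test_log (
        test_date TEXT,
        tool_name TEXT,
        test_type TEXT,
        result    TEXT,
        follow_up TEXT
    )
""")
conn.execute(
    "INSERT INTO test_log VALUES (?, ?, ?, ?, ?)",
    (date.today().isoformat(), "intake_risk_model",
     "demographic parity audit", "disparate impact ratio 0.82",
     "review training data for group B"),
)
conn.commit()

# Over time, the accumulated log can be queried to monitor trends.
for row in conn.execute("SELECT * FROM test_log ORDER BY test_date"):
    print(row)
```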

Conclusion

Practitioners in a variety of professions, including social work, are rapidly increasing their use of AI. Along with a wide range of potential benefits of AI in social work come a number of compelling ethics-related challenges and risks, especially related to informed consent and client autonomy; privacy and confidentiality; transparency; client misdiagnosis; client abandonment; client surveillance; plagiarism, dishonesty, fraud, and misrepresentation; algorithmic bias and unfairness; and use of evidence-based AI tools.

To protect clients and themselves, social workers should take a number of proactive steps. First, they should familiarize themselves with AI protocols and the various ways in which they can be used in their social work practice settings. Second, social workers should review relevant ethical standards and practice standards pertaining to their use of AI and ensure that their practice settings are in compliance. Third, social workers in positions of authority should design and implement training for staffers regarding the appropriate and ethical use of AI in their work setting. Fourth, social work education programs should incorporate content on AI in their curricula to ensure that the next generation of social workers uses AI responsibly. Social work organizations that sponsor continuing education should also offer workshops and webinars for practitioners who may not have been introduced to this content during their formal social work education. Finally, social work educators, researchers, and practitioners should take diligent steps to evaluate the impact of AI in the profession, focusing especially on evidence of effectiveness, limitations, algorithmic bias, and compliance with pertinent ethical standards.

The profession’s earliest practitioners could not have imagined that today’s social workers would use technology to serve clients remotely, enter client information in electronic records stored in the “cloud,” and address clients’ needs using AI, among other digital and electronic tools. The emergence and proliferation of AI is yet another reminder that social work ethics challenges and related standards evolve.

References

Asakura, K., Occhiuto, K., Todd, S., Leithead, C., & Clapperton, R. (2020). A call to action on artificial intelligence and social work education: Lessons learned from a simulation project using natural language processing. Journal of Teaching in Social Work, 40, 501-518.

Barsky, A. (2019). Ethics and values in social work: An integrated approach for a comprehensive curriculum (2nd ed.). Oxford University Press.

British Association of Social Workers. (2018). BASW policy: Social media. Author. https://www.basw.co.uk/system/files/resources/Social%20Media%20Policy.pdf 

Diez, E. (2023). Artificial intelligence and social work: Contributions to an ethical artificial intelligence at the service of the people. In A. López Peláez & G. Kirwan (Eds.), The Routledge international handbook of digital social work (pp. 368-381). Routledge.

European Commission. (2019). Independent high-level expert group on artificial intelligence: A definition of AI—Main capabilities and disciplines. Author. https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines

Frackiewicz, M. (2023, May 5). The ethics of artificial intelligence in autonomous social work and counseling. TS2. https://ts2.space/en/the-ethics-of-artificial-intelligence-in-autonomous-social-work-and-counseling/

Gattadahalli, S. (2020, November 3). Ten steps to ethics-based governance of AI in health care. STAT. https://www.statnews.com/2020/11/03/artificial-intelligence-health-care-ten-steps-to-ethics-based-governance/

Gillingham, P. (2019). Can predictive algorithms assist decision‐making in social work with children and families? Child Abuse Review, 28, 114–126. https://doi.org/10.1002/car.2547

Goldkind, L. (2021). Social work and artificial intelligence: Into the matrix. Social Work, 66, 372-374. https://doi.org/10.1093/sw/swab028

Grządzielewska, M. (2021). Using machine learning in burnout prediction: A survey. Child and Adolescent Social Work Journal, 38, 175–180. https://doi.org/10.1007/s10560-020-00733-w  

Jacobi, C., & Christensen, M. (2023). Functions, utilities, and limitations: A scoping study of decision support algorithms in social work. Journal of Evidence-Based Social Work, 20, 323-341. https://doi.org/10.1080/26408066.2022.2159777

Keddell, E. (2019). Algorithmic justice in child protection: Statistical fairness, social justice and the implications for practice. Social Sciences, 8, 281. https://doi.org/10.3390/socsci8100281

Keegan, J. (2023, May 23). ChatGPT is a plagiarism machine: So why do administrators have their heads in the sand? Chronicle of Higher Education. https://www.chronicle.com/article/chatgpt-is-a-plagiarism-machine?cid=gen_sign_in

Lanier, P., Rodriguez, M., Verbiest, S., Bryant, K., Guan, T., & Zolotor, A. (2020). Preventing infant maltreatment with predictive analytics: Applying ethical principles to evidence-based child welfare policy. Journal of Family Violence, 35, 1–13. https://doi.org/10.1007/s10896-019-00074-y

Lee, N., Resnick, P., & Barton, G. (2019). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/

Liedgren, P., Elvhage, G., Ehrenberg, A., & Kullberg, C. (2016). The use of decision support systems in social work: A scoping study literature review. Journal of Evidence-Informed Social Work, 13, 1–20. https://doi.org/10.1080/15433714.2014.914992

López Peláez, A. & Kirwan, G. (Eds.). (2023). The Routledge international handbook of digital social work. Routledge.

Luxton, D. (Ed.). (2016). Artificial intelligence in behavioral and mental health care. Academic Press.

Molala, T., & Mbaya, T. (2023). Social work and artificial intelligence: Towards the electronic social work field of specialisation. International Journal of Social Science Research and Review, 6, 613-621. https://doi.org/10.47814/ijssrr.v6i4.1206

National Association of Social Workers. (2017). Code of ethics of the National Association of Social Workers. Author.

National Association of Social Workers. (2021). Code of ethics of the National Association of Social Workers. Author.

National Association of Social Workers, Association of Social Work Boards, Council on Social Work Education, & Clinical Social Work Association. (2017). NASW, ASWB, CSWE, & CSWA standards for technology in social work practice. https://www.socialworkers.org/LinkClick.aspx?fileticket=lcTcdsHUcng%3d&portalid=0

Pascoe, K. (2023). Considerations for integrating technology into social work practice: A content analysis of nine professional social work associations’ codes of ethics. International Social Work, 66, 298-312. https://doi.org/10.1177/002087282098

Plante, T. (2023, February 6). The ethics of AI applications for mental health care. Markkula Center for Applied Ethics. https://www.scu.edu/ethics-spotlight/generative-ai-ethics/the-ethics-of-ai-applications-for-mental-health-care/

Pocock, K. (2023, June 9). Is ChatGPT plagiarism free? PC Guide. https://www.pcguide.com/apps/is-chat-gpt-plagiarism-free/

Reamer, F. (2018a). Ethical standards in social work: A review of the NASW Code of Ethics. NASW Press.

Reamer, F. (2018b). Social work values and ethics (5th ed.). Columbia University Press.

Reamer, F. (2023a). Risk management in the behavioral health professions: A practical guide to preventing malpractice and licensing-board complaints. Columbia University Press.

Reamer, F. (2023b). Ethical practice in a post-Roe world: A guide for social workers. Social Work, 68, 150-158.

Rice, E., Yoshioka-Maxwell, A., Petering, R., Onasch-Vera, L., Craddock, J., Tambe, M., Yadav, A., Wilder, B., Woo, D., Winetrobe, H., & Wilson, N. (2018). Piloting the use of artificial intelligence to enhance HIV prevention interventions for youth experiencing homelessness. Journal of the Society for Social Work and Research, 9, 551–573. https://doi.org/10.1086/701439

Rong, G., Mendez, A., Bou Assi, E., Zhao, B., & Sawan, M. (2020). Artificial intelligence in healthcare: Review and prediction case studies. Engineering, 6, 291–301. https://doi.org/10.1016/j.eng.2019.08.015

Royer, A. (2021, October 14). The wellness industry’s risky embrace of AI-driven mental health care. Brookings. https://www.brookings.edu/articles/the-wellness-industrys-risky-embrace-of-ai-driven-mental-health-care/

Rubeis, G. (2022). iHealth: The ethics of artificial intelligence and big data in mental healthcare. Internet Interventions, 28. https://doi.org/10.1016/j.invent.2022.100518

Schneider, D., & Seelmeyer, U. (2019). Challenges in using big data to develop decision support systems for social work in Germany. Journal of Technology in Human Services, 37, 113–128. https://doi.org/10.1080/15228835.2019.1614513

Søbjerg, L. (2022). Developing a statistical model for assessment of referrals of children at risk. Child Abuse Review, 31, e2749. https://doi.org/10.1002/car.2749

Tambe, M., & Rice, E. (Eds.). (2018). Artificial intelligence and social work. Cambridge University Press.

Terra, M., Baklola, M., Ali, S., & El-Bastawisy, K. (2023). Opportunities, applications, challenges and ethical implications of artificial intelligence in psychiatry: A narrative review. Egyptian Journal of Neurology, Psychiatry, and Neurosurgery, 59. https://doi.org/10.1186/s41983-023-00681-z

Yan, W., Ruan, U., & Jiang, K. (2023). Challenges of artificial intelligence in recognizing mental disorders. Diagnostics, 13, 2. https://doi.org/10.3390/diagnostics13010002

Yeazell, S., Schwartz, J., & Carroll, M. (2022). Federal rules of civil procedure with selected statutes, cases, and other materials. Aspen.