DOI: 10.55521/10-022-106
Allan Barsky, JD, MSW, PhD
Florida Atlantic University
This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of IFSW.
Abstract
Social workers can use artificial intelligence tools to streamline and enhance many steps in the research process. AI can quickly and efficiently curate research questions, identify relevant studies, synthesize prior research and identify gaps, gather and analyze data, and compose research findings and conclusions. Despite AI’s impressive capabilities, its responses are not necessarily accurate, complete, or free from bias. This article explores the ethics of responsible use of AI in social work research through the lenses of accuracy, honesty, and anchoring bias. Anchoring bias refers to a person’s tendency to over-rely on initial pieces of information that they receive, potentially discounting or ignoring other information that could confirm or disconfirm the veracity of the initial information. To mitigate this bias, social workers should think critically about AI-generated outputs, rather than over-relying on the first information they receive. This article provides social workers with strategies to mitigate the anchoring effect when they use AI in various stages of social work research: understanding AI’s limitations, nurturing awareness of anchoring bias, asking critical thinking questions to evaluate the accuracy of AI outputs, and employing other strategies to verify the accuracy and reliability of AI outputs. While AI can be a valuable research tool, social workers should remain ethically responsible for the rigor of their research methods and the veracity of the findings they report.
Keywords:
research, anchoring effect, responsible conduct, AI, ethical accountability
Introduction
Social work researchers (SWRs) may employ artificial intelligence (AI) in various facets of the research process to enhance its effectiveness, reliability, and efficiency (Butson & Spronken-Smith, 2024; Chubb et al., 2022). AI has the capacity to identify, gather, and analyze large and complex quantities of data in real time. Its advanced algorithms can manage complex multivariate datasets and uncover patterns or relationships that traditional research methods might overlook. However, AI has limitations: it may omit relevant information, fabricate or misconstrue data, or rely on biased, unreliable, or invalid research findings (Chubb et al., 2022). This article explores the risks of “anchoring bias” when SWRs use AI to facilitate research processes. Anchoring bias is a psychological bias that arises when individuals rely disproportionately on the first piece of information introduced to them (Tversky & Kahneman, 1974).
This article begins with a description of the nature of the anchoring bias, including research on factors contributing to this bias. The second section, Research Integrity and AI, explores how anchoring effects specific to AI use may lead to ethical issues when designing and implementing research processes. The third section delves into various stages of the research process, describing how AI may lead to anchoring bias in each stage and providing practical strategies for combating such bias. The final section discusses the implications of AI and anchoring bias for research integrity, offering guidelines for SWRs seeking to responsibly incorporate AI in their work.
The Nature of the Anchoring Effect
The term “anchoring effect,” coined by Tversky and Kahneman (1974), signifies the tendency of initial pieces of information received by individuals to disproportionately influence their beliefs or judgments. This cognitive bias arises when individuals overvalue the initial information, leading them to dismiss or undervalue new information that might conflict with the initial anchor (Furnham & Boo, 2011). By encouraging reliance on initial information without further analysis, anchoring acts as a mental shortcut, allowing people to avoid complex reasoning and make simplified judgments. Although this shortcut saves time and energy in decision making, it means that individuals overlook additional factors and fail to question the validity of the anchored information (Furnham & Boo, 2011).
Research has identified several factors affecting vulnerability to the anchoring effect, including mood, subject matter expertise, personality traits, cognitive ability, and the perceived credibility of the anchoring information. Specifically, individuals are more prone to anchoring under the following conditions.
- Non-experts are more likely to rely on anchors than those with significant subject-matter knowledge (Englich & Soder, 2009).
- Individuals experiencing sadness are more susceptible to anchoring compared to those in happy or neutral moods, particularly among individuals who are not subject-matter experts (Englich & Soder, 2009).
- Individuals high in conscientiousness or agreeableness, or low in extraversion, tend to be more prone to anchoring (Furnham & Boo, 2011).
- Individuals with lower cognitive abilities are more likely to be influenced by anchoring effects (Bergman et al., 2010).
- When anchoring information has been widely cited in other sources, it is perceived as more credible, leading to greater anchoring, even if the information is inaccurate (Bornmann et al., 2023).
Research is mixed on certain factors affecting anchoring bias, including the effectiveness of warning individuals about the potential inaccuracy of the initial information they receive (Furnham & Boo, 2011). Moreover, there is little research on anchoring effects specific to AI-generated information (Lee et al., 2022). For instance, how do individuals differentiate between “right-looking answers” and “actually right answers” when deciding whether to rely on AI’s responses? Further research on anchoring effects with AI is needed.
Responsible Conduct of Research and AI Use in Various Stages of the Research Process
Responsible conduct of research (RCR) refers to practicing research in a manner consistent with professional and scholarly ethics, including the principles of honesty, transparency, respect, accuracy, and accountability. RCR fosters a culture of integrity and scientific rigor in research, enhancing public confidence and support for scholarly research (National Institutes of Health, 2024). Various governmental organizations, universities, and research institutes promote RCR through training that not only provides researchers with information about what these principles mean, but also empowers them with critical thinking, attitudes, and moral courage to put RCR into practice (Cicero, 2021; Hoven et al., 2023). For SWRs, RCR aligns with the core values of the profession, including professional integrity, ethical use of technology, the inherent dignity of all people, and confidentiality (International Federation of Social Workers, 2018; National Association of Social Workers, 2021).
When SWRs use AI to assist with research, it is incumbent on them to consider how their ethical duties apply in each stage of the research process. For instance, when formulating research questions, SWRs can use AI to brainstorm research ideas, explore gaps in existing literature, or probe potential biases in their research questions. When conducting literature reviews, SWRs can use AI to identify relevant research, assess its quality, and create visual maps to identify connections between various articles. When selecting research methods, SWRs can ask AI to critique proposed methods, suggest alternatives, or explore ways to improve methods in relation to inclusivity, validity and reliability of measurement tools, sampling bias, or other specific research factors. When conducting qualitative data analysis, SWRs can use AI to transcribe interviews, code data, identify themes, create network graphs or other visualizations to demonstrate correlations between themes, and write drafts of the findings (Anis & French, 2023; Nashwan & Abukhadijah, 2023). When conducting quantitative data analysis, AI may be used to clean data, automate descriptive and inferential calculations, and execute complex statistical models to test particular hypotheses (Butson & Spronken-Smith, 2024).
While AI can support various research functions, the answers produced by AI are not necessarily accurate or true (Butson & Spronken-Smith, 2024). The notion of accountability in RCR suggests that researchers, not AI, are accountable for the accuracy and truth of the research findings. Accordingly, when SWRs are determining whether and how to use answers provided by AI, they need to avoid anchoring bias. In other words, they should not assume the veracity of AI’s initial answers. Rather, SWRs should consider what steps may be necessary to confirm or reject AI-generated outputs.
Mitigating Anchoring Bias
To mitigate anchoring bias and ensure the integrity of their research, SWRs should take deliberate steps to evaluate the veracity of results generated by AI. Broadly speaking, these steps may include actively questioning and reassessing the initial information, checking the original sources that AI used to develop its responses, and cross-checking the outputs with other sources.
To guard against anchoring bias, SWRs should be aware of the ways that mood affects anchoring. Research indicates that individuals in sad moods tend to experience stronger anchoring effects, relying on first-introduced information despite receiving disconfirming information afterward, whereas individuals in happy moods show weaker anchoring effects, tending to give greater credence to disconfirming evidence (Englich & Soder, 2009). Mood effects are lessened when individuals view themselves as experts in the subject area. In other words, regardless of one’s mood, SWRs can mitigate anchoring bias by viewing themselves as professionals or experts who do not simply rely on the first information they receive from AI. When SWRs feel rushed, pressured, lazy, or tired, they may be more prone to unethical behavior (Ahmad et al., 2023; Cicero, 2021; Spoelma, 2022), including the possibility of relying on the first information they receive from AI rather than ensuring its accuracy. Accordingly, it is important for SWRs to be aware of their moods and levels of relevant expertise, and to take proactive steps to mitigate anchoring bias.
The following sections provide more detailed strategies for mitigating anchoring bias at particular stages of the research process.
Research Questions
One of the first steps in the research process is formulating a research question, a query that clarifies the focus of the research and guides decisions related to research design, methodology, and analysis (Rubin & Babbie, 2025). SWRs may use AI to brainstorm research questions, identify gaps in existing research, or critique drafts of proposed questions. For instance, SWRs might use AI prompts such as:
- Generate 10 options for research question ideas related to the effectiveness of narrative therapy for individuals with gambling addictions.
- Analyze existing literature on psychosocial assessments for individuals affected by childhood trauma. Identify gaps in the literature and suggest specific, actionable research questions to address these gaps.
- Please improve the following research question to ensure clarity, neutrality, and feasibility: “How do social workers help clients with cognitive disabilities address social stigma in employment settings?”
When determining whether and how to use AI-generated responses, SWRs can mitigate anchoring effects by viewing the responses with a healthy degree of skepticism (Lee et al., 2022). For instance, it is essential to consider AI’s sources of information, the scope of prior research it considered, and the research it might have overlooked. For the question about gambling addiction, did AI source information about effective interventions from a single country or across multiple locations? Regarding the question about childhood trauma, did AI analyze genuine scholarly research or did it fabricate articles or misinterpret data? For the question about clients with cognitive disabilities, did AI adequately consider culture, religion, socioeconomic status, and other aspects of human diversity?
One strategy for mitigating anchoring effects is brainstorming research questions without using AI, and then asking AI for critique or recommendations. By brainstorming first, SWRs may open their minds to a broader range of research ideas informed by their own knowledge, experience, and creativity. AI’s suggestions can then be used to expand the options for research questions, as well as to refine ones that seem particularly relevant or promising. Another mitigation strategy is to engage AI in a series of follow-up inquiries. Assume that AI has critiqued a research question based on culture and ethnocentrism. You could then invite AI to critique the research question in relation to sociocultural status and bias. A third mitigation strategy is to engage AI as if it were a research assistant, particularly one that is eager to answer questions but somewhat prone to errors and in need of guidance and supervision. If AI provides a critique of a research question, for instance, ask AI to provide its rationale. Rather than simply relying on AI’s initial response, jointly explore whether the proposed question meets criteria that you deem important: To what extent does the research question build on prior research? To what extent is it important to the community or population being served? And to what extent does it lend itself to a feasible research project, taking time, costs, and ethical issues such as privacy and informed consent into account? By adopting mitigation strategies, SWRs can use AI to develop effective research questions while maintaining oversight and avoiding undue influence from AI’s initial responses.
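For SWRs who access AI programmatically, the follow-up-inquiry strategy can be scripted so that no single response is accepted as the anchor. The sketch below is a minimal illustration assuming the OpenAI Python client and a configured API key; the model name, helper function, and prompts are illustrative assumptions, not a prescribed tool or workflow.

```python
from openai import OpenAI  # assumes the openai package and an API key are configured

client = OpenAI()

def ask(history, prompt, model="gpt-4o"):  # model name is an illustrative assumption
    """Send a prompt with prior conversation context and return the reply text."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model=model, messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

history = []
# First pass: ask for a critique rather than treating the first answer as settled.
ask(history, "Critique this research question for culture and ethnocentrism: "
             "How effective is narrative therapy for adults with gambling addictions?")
# Follow-up passes probe other dimensions instead of stopping at the initial response.
ask(history, "Now critique the same question in relation to sociocultural status and bias.")
ask(history, "Explain the rationale for each critique and name the criteria you applied.")
```

Because each follow-up builds on the prior exchange, the researcher, not AI’s first answer, steers which criteria are examined.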
Literature Reviews
The purposes of literature reviews include laying the foundation of knowledge to inform the research, identifying gaps and limitations in existing studies, providing theoretical support and rationale for the research, and determining what lines of research can best build on prior studies (Rubin & Babbie, 2025). SWRs may use AI to search for relevant theories and research articles, summarize and synthesize information from specific articles, and develop visual representations of theories and research findings (Butson & Spronken-Smith, 2024; Scite.AI, n.d.; Tauchert et al., 2020). SWRs should be aware of the constraints of the specific AI tools they are using, including the source of each tool’s data. AI tools such as Scite.AI (n.d.) have been developed specifically for scholarly research purposes, ensuring that articles are sourced from peer-reviewed, reliable databases. ChatGPT (n.d.) and other general-purpose AI programs may draw from non-scholarly sources, making them more prone to errors, omissions, biases, and fabrications. Many AI tools allow users to request that information be drawn from particular types of sources. Regardless of which AI tools are used to facilitate literature reviews, prudent SWRs can exercise a reasonable standard of care by asking AI to cite its sources and reading the original articles to ensure the veracity of AI’s responses.
By asking AI to identify and summarize particular theories and research articles, SWRs can efficiently identify relevant materials (Butson & Spronken-Smith, 2024). Although AI’s responses may appear accurate and complete, SWRs should remind themselves of AI’s limitations. To guard against missed articles, SWRs can supplement AI searches with queries in traditional scholarly databases and use AI to summarize identified studies. They can also read original versions of the most relevant articles to check the accuracy of AI’s summaries or use AI tools with different databases to cross-check their findings. To guard against bias, SWRs can reflect on the language used by AI, including whether it is inclusive of individuals and groups from diverse backgrounds. For instance, SWRs might ask AI to clarify the methods, sampling, and underlying theories used in particular studies. To ensure that AI has considered multiple viewpoints, SWRs could ask AI to provide a matrix of research highlighting different perspectives and then pose follow-up questions to identify additional perspectives.
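To make such cross-checking concrete, the following minimal Python sketch compares the articles returned by an AI literature tool against those returned by a traditional database search. The DOI lists are hypothetical placeholders, not actual search results.

```python
# Hypothetical DOI sets standing in for two independent literature searches.
ai_tool_dois = {"10.1000/a1", "10.1000/a2", "10.1000/a3"}
database_dois = {"10.1000/a2", "10.1000/a3", "10.1000/b7"}

overlap = ai_tool_dois & database_dois   # corroborated by both searches
only_ai = ai_tool_dois - database_dois   # AI-only results: verify against the originals
only_db = database_dois - ai_tool_dois   # articles the AI search may have missed

print(f"Corroborated: {len(overlap)}")
print(f"Verify AI-only results: {sorted(only_ai)}")
print(f"Possibly missed by AI: {sorted(only_db)}")
```

Articles appearing in only one search are the ones most worth reading in their original form before inclusion in the review.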
When deciding whether to rely on AI-generated information, individuals often gravitate to information that aligns with their preexisting beliefs. Thus, it is important for SWRs to critically evaluate AI-generated content, regardless of whether it confirms or disconfirms their prior beliefs. Assume that a social worker initially believes that poverty is a primary cause of child neglect, but an AI-generated literature review suggests this correlation is not true. When checking the accuracy of AI’s information, the social worker should not allow their original beliefs to interfere with their critical analysis of AI’s findings (Lee et al., 2022). By maintaining awareness of their assumptions or biases, SWRs may prioritize evidence-based assessments and rational thinking processes to check the accuracy of AI-generated information. To mitigate anchoring bias, SWRs should regularly ask themselves, “What if my original beliefs are true?” and “What if my original beliefs are untrue?” This mindset encourages open, balanced evaluation of personal beliefs, AI-generated information, and other sources of information.
Research Methods
When selecting research methods, SWRs should ensure that their research design effectively addresses the research questions and objectives (Rubin & Babbie, 2025). Key decisions include sampling size and procedures, methods of gathering information, and valid tools for measuring independent and dependent variables. Experimental design, for instance, incorporates methodologies such as random assignment, control groups, and pre/posttests, enabling researchers to assess whether independent variables have significant effects on dependent variables. But what if AI has a bias toward suggesting experimental design when other approaches might be more appropriate?
Consider a social worker evaluating the effectiveness of a novel intervention. The researcher asks AI to suggest a research design, including whether the study should employ qualitative or quantitative methods, what evaluation measures to use, and how to ensure an unbiased sample. While AI can certainly answer these questions, its responses will not necessarily reflect scientifically sound or contextually appropriate methods. AI may misinterpret the social worker’s prompt or fail to take factors specific to the social worker’s research topic into account (e.g., the cultural and socioeconomic backgrounds of the research population). Further, AI’s suggestions may not be ethical or feasible (e.g., not taking risks to research participants into account or suggesting extremely costly designs).
To mitigate over-reliance on AI, SWRs can develop research proposals without AI and then request AI to provide constructive feedback, suggestions, and refinements for their methods. When reviewing AI’s responses, SWRs could critically analyze them with questions such as:
- Has AI correctly interpreted the research questions?
- Is AI favoring specific methods based on their prevalence in the sources it draws upon, potentially overlooking novel or uncommon methods?
- Has AI taken practical constraints into account, such as financial costs, risk, informed consent, time limitations, and participant availability?
- Are AI’s suggestions consistent with scientifically accepted research methods?
- What additional questions could be posed to AI to improve the research design and address possible limitations?
When assessing the accuracy and appropriateness of AI’s suggestions for research methods, SWRs should recognize that they have subject-matter expertise. Rather than passively accepting AI-suggested methods, they should consider potential errors, omissions, or misalignments with their research goals and ethical standards. When uncertain about particular method suggestions offered by AI, SWRs can ask for clarifications, such as AI’s rationale and the sources of information used to justify the suggested methods. Suppose that AI suggests multilevel modeling (MLM), a statistical technique unfamiliar to the social worker. The social worker could ask AI to explain MLM, including how it differs from traditional models like linear regression, under what circumstances it is appropriate for human subjects research, and what limitations it entails. By treating AI as a supplementary tool rather than an authoritative source, SWRs can verify its responses by consulting established research or statistics texts.
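As one way to verify such an explanation, an SWR comfortable with basic scripting could fit both models side by side. The following sketch uses Python’s statsmodels library with simulated data; all variable names and values are illustrative assumptions, not drawn from any actual study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate client outcomes nested within agencies (hypothetical data).
rng = np.random.default_rng(42)
n_agencies, n_clients = 10, 30
agency = np.repeat(np.arange(n_agencies), n_clients)
agency_effect = rng.normal(0, 2, n_agencies)[agency]  # group-level variation
sessions = rng.integers(1, 12, n_agencies * n_clients)
score = 50 + 1.5 * sessions + agency_effect + rng.normal(0, 3, n_agencies * n_clients)
df = pd.DataFrame({"score": score, "sessions": sessions, "agency": agency})

# Ordinary least squares ignores the nesting of clients within agencies.
ols_fit = smf.ols("score ~ sessions", data=df).fit()

# Multilevel model: a random intercept for each agency captures the nesting.
mlm_fit = smf.mixedlm("score ~ sessions", data=df, groups=df["agency"]).fit()

print("OLS estimate:", round(ols_fit.params["sessions"], 3),
      "SE:", round(ols_fit.bse["sessions"], 3))
print("MLM estimate:", round(mlm_fit.params["sessions"], 3),
      "SE:", round(mlm_fit.bse["sessions"], 3))
```

Comparing the two standard errors shows concretely why MLM matters when observations are clustered, which is the kind of understanding SWRs should reach before accepting an AI-suggested method.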
Quantitative Data Analysis
Quantitative data analysis involves examining and interpreting numerical data through statistical techniques and mathematical calculations to uncover patterns and relationships between variables in a dataset (Rubin & Babbie, 2025). While researchers have long relied on computers to perform statistical analysis, AI affords distinct advantages over traditional computer-based methods. Potential advantages include AI’s ability to handle large and complex datasets, and to automate tedious tasks such as error detection, data standardization, and handling missing data (Butson & Spronken-Smith, 2024). SWRs can also use AI to recommend specific types of data analysis based on factors such as sample size and data type (e.g., ordinal, nominal, interval), enhancing research efficiency and accuracy.
Although the outputs of quantitative research are typically objective, the choice of statistical analyses involves subjective elements (Berger & Berry, 1988). When assessing AI-generated recommendations for statistical analysis, SWRs should ensure that they align with the research question, accurately reflect the story they intend to tell through the data, and adhere to the assumptions underlying the suggested statistical tests.
It is vital for SWRs to ensure that they understand the nature of the statistical methods proposed by AI, including their strengths, limitations, and alternatives. For instance, AI tools based on algorithms designed for pattern recognition would not be appropriate for determining causality (Butson & Spronken-Smith, 2024). Ideally, SWRs should personally understand the machine learning algorithms that AI employs in statistical analysis; otherwise, they could consult trusted AI experts to advise on whether AI’s algorithms are accurately performing the intended forms of analyses. To mitigate anchoring effects, SWRs and their AI consultants may reflect on the following questions.
- Does the AI-proposed analysis answer my research question?
- Is AI correctly interpreting how I am using my variables?
- What are the strengths and limitations of the proposed analysis?
- What other statistical methods should I ask AI to consider?
Additionally, AI can be used in other steps of the quantitative analysis process, including directly analyzing the data. While AI can reduce the likelihood of human errors in quantitative analysis (e.g., data entry and calculation mistakes), SWRs should not merely accept AI-generated findings at face value. Instead, they may cross-check AI’s findings by manually recalculating a sample of its results or by using traditional statistical software (e.g., SPSS or R). They may also remind themselves of the value of human intuition and judgment (Butson & Spronken-Smith, 2024). Do AI’s findings align with my prior experience, knowledge, and intuitions? If not, what could explain the discrepancies?
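For example, a simple recalculation of an AI-reported statistic might look like the following Python sketch; the scores and the AI-reported values are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores for eight participants, standing in for study data.
pre = np.array([12, 15, 11, 14, 13, 16, 10, 15])
post = np.array([15, 18, 14, 16, 15, 19, 13, 17])

# Suppose an AI assistant reported a paired t-test with t = -7.0 and p < .001.
# Recompute the test independently before trusting that claim.
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Hand-check the descriptive statistics as well.
print(f"mean change = {np.mean(post - pre):.2f}")
```

If the recomputed values diverge from what AI reported, that discrepancy, not the AI output, is what warrants further investigation.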
Qualitative Data Analysis
Qualitative data analysis refers to the examination and interpretation of non-numerical data to explore themes, patterns, and meanings. Approaches to qualitative analysis include phenomenology, ethnography, grounded theory, and case study analysis (Renjith et al., 2021). To ensure the dependability of the results, SWRs can reflect on their beliefs and worldviews to raise awareness of how their analyses may be affected by their beliefs and biases. SWRs may use this awareness to foster more accurate and objective analysis (Olmos-Vega et al., 2023).
AI can assist with many forms of qualitative analysis. As with quantitative analysis, the advantages of AI for qualitative analysis include its ability to analyze large and complex datasets efficiently (Anis & French, 2023; Nashwan & Abukhadijah, 2023). SWRs can use specific prompts to guard against biases and to enhance the depth of analysis (Butson & Spronken-Smith, 2024). Despite the potential advantages of AI, it is vital that humans maintain interpretative control, checking for accuracy and potential biases in AI-generated findings. SWRs should be aware of how their particular AI tool has been trained to evaluate qualitative data. AI’s analysis may contain biases due to the way that it has been trained and the data used for such training (Christou, 2023). SWRs may need to manually analyze a sample of the data to ensure AI is accurately interpreting it (Anis & French, 2023). Rather than simply relying on AI’s initial outputs, SWRs may compare its results with their own analyses and interpretations. During this process, SWRs may ask themselves:
- Is AI applying different perspectives and biases than I am when analyzing themes and interpreting the data?
- Is AI being mindful of relevant cultural contexts and perspectives (including the research participants’ values, beliefs, language, and worldviews)?
- What types of algorithms is AI using to interpret the data (e.g., frequency of word use or theme analysis)?
Asking AI to help code qualitative data may allow SWRs to identify key themes they may not have identified on their own. By comparing the SWR’s themes with those of AI, SWRs can reflect on which themes most accurately reflect the data and perspectives of the research participants.
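One lightweight way to structure this comparison is to treat the two codebooks as sets and examine their overlap, as in the Python sketch below. The theme labels are hypothetical, and set overlap is only a starting point; the substantive judgment about which themes best reflect the data remains the researcher’s.

```python
# Hypothetical codebooks: themes assigned by the researcher and by an AI assistant.
human_themes = {"housing insecurity", "stigma", "family support", "service gaps"}
ai_themes = {"stigma", "family support", "transportation barriers", "service gaps"}

agreed = human_themes & ai_themes
human_only = human_themes - ai_themes  # themes AI may have under-weighted
ai_only = ai_themes - human_themes     # candidate themes to re-examine in the transcripts

jaccard = len(agreed) / len(human_themes | ai_themes)  # simple agreement index
print(f"Agreement (Jaccard) = {jaccard:.2f}")
print(f"Re-examine against transcripts: {sorted(human_only | ai_only)}")
```

Low agreement does not mean either codebook is wrong; it flags where a return to the transcripts is warranted.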
Conclusion
Advances in AI and natural language processing offer social workers new ways to implement technology in many aspects of practice (Goldkind et al., 2024), including social work research. To uphold the accuracy and integrity of research findings, SWRs should be aware not only of the ways that AI may augment research but also of potential pitfalls. AI itself is not a moral agent, but rather a tool driven by algorithms and the data it has been given to process (Butson & Spronken-Smith, 2024). While AI can excel at analyzing data quickly and effectively, it lacks human qualities such as common sense, the ability to learn from experience, and the capacity to understand social and cultural nuances (Anis & French, 2023). Accordingly, SWRs should exercise ethical judgment and take responsibility for mitigating biases, including anchoring effects.
Historically, SWRs have incorporated many digital tools into their work, including spellcheck (introduced in the 1960s), data analysis software such as SPSS (launched in 1968), and online search engines (popularized in the 1990s) (De Amorim, 2013; Duka et al., 2023; IBM, 2018). Although AI may seem unique or even scary to some, it is essentially another digital tool that SWRs can incorporate across various stages of research. As with other digital tools, SWRs must remain responsible for ensuring the validity, accuracy, and reliability of the information that AI produces. This includes critically evaluating the sources of AI-generated output, properly crediting the original sources and the AI tools used, and rephrasing outputs to reflect their unique voice. These practices uphold ethical standards, avoid plagiarism, and preserve the human component in researcher-AI collaboration.
As AI evolves and SWRs find new ways to incorporate AI into their research processes, SWRs must remain accountable for their decision making and ensure ethical and scientific rigor. While this article has focused on ways to mitigate anchoring effects of information that AI has generated, SWRs should also adopt strategies to improve the likelihood of obtaining valid and reliable outputs from AI in the first instance. Choosing an appropriate form of AI is crucial. Although some AI tools are not explicitly developed for scientific research, others have been purposefully designed for research tasks such as literature reviews, data gathering, and quantitative and qualitative analysis.
Responsible conduct of research not only requires knowledge of research ethics, but also the motivation, critical consciousness, moral courage, and practice skills to implement research ethically (Axt & To, 2024; Cicero, 2021; Hoven et al., 2023). Because individuals are particularly vulnerable to anchoring bias when they lack expertise, it is important for SWRs to use their expertise when evaluating the extent to which they should rely on AI-generated information. When addressing topics beyond their expertise, prudent practice suggests collaborating with qualified research partners to vet the accuracy of AI’s outputs.
To guard against potential biases, SWRs should use deliberate strategies to raise their awareness and facilitate critical thinking (Axt & To, 2024). One strategy is to reflect on questions that challenge potential biases. For instance, what if the AI-generated information is not true? What if the opposite were true? And what perspectives may be missing in AI’s analysis and response?
SWRs should recognize the importance of how they phrase their questions, prompts, or instructions for AI. Even slight differences in wording can lead to vastly different outcomes. Prompt engineering, the practice of designing and refining AI instructions, plays a vital role in enhancing the accuracy, relevance, and depth of AI outputs (Wang et al., 2024). AI prompts should provide clear guidance, including the particular analysis or outputs requested, the context of the inquiry, and the relevant data sources or analytical processes to be used. Just as SWRs are trained to communicate in a manner that accommodates their clients’ language, culture, and cognitive processes, they should also tailor their AI instructions to align with AI’s “language,” logic, and artificial cognition and neural network processes.
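For SWRs who script their AI interactions, a reusable prompt template can make this guidance explicit and consistent across queries. The Python sketch below is illustrative only; the fields, wording, and function name are assumptions rather than a validated template.

```python
def build_prompt(task, context, data_sources, output_format):
    """Assemble a structured prompt: task, context, allowed sources, and output format."""
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Use only these sources: {data_sources}\n"
        f"Output format: {output_format}\n"
        "If a claim cannot be supported by the listed sources, say so explicitly."
    )

prompt = build_prompt(
    task="Summarize research on narrative therapy for gambling addictions since 2015.",
    context="Literature review for a social work study; the audience is practitioners.",
    data_sources="peer-reviewed journals indexed in scholarly databases",
    output_format="a bulleted summary with a full citation for every claim",
)
print(prompt)
```

Spelling out the task, context, sources, and expected format reduces the room for AI to substitute its own assumptions for the researcher’s intent.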
Moving forward, it will be important for researchers to study the effectiveness of various AI tools to facilitate specific research functions. Researchers should also study how SWRs and other researchers can use AI in an ethically responsible manner, mitigating anchoring effects and ensuring the integrity of their research. After all, the purpose of scientific research is not simply to generate new knowledge, but to generate valid, accurate, and reliable knowledge.
References
Ahmad, S., Han, H., Alam, M., Rehmat, M., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1), 311.
Anis, S., & French, J. A. (2023). Efficient, explicatory, and equitable: Why qualitative researchers should embrace AI, but cautiously. Business & Society, 62(6), 1139-1144.
Axt, J., & To, J. (2024). How can debiasing research aid efforts to reduce discrimination? Personality and Social Psychology Review, 0(0).
Berger, J. O., & Berry, D. A. (1988). Statistical analysis and the illusion of objectivity. American Scientist, 76(2), 159–165.
Bergman, O., Ellingsen, T., Johannesson, M., & Svensson, C. (2010). Anchoring and cognitive ability. Economics Letters, 107(1), 66–68.
Bornmann, L., Ganser, C., & Tekles, A. (2023). Anchoring effects in the assessment of papers: An empirical survey of citing authors. PLOS ONE, 18(3), e0283893.
Butson, R., & Spronken-Smith, R. (2024). AI and its implications for research in higher education: A critical dialogue. Higher Education Research & Development, 43(3), 563–577.
ChatGPT. (n.d.). https://chatgpt.com
Christou, P. (2023). How to use artificial intelligence (AI) as a resource, methodological and analysis tool in qualitative research. The Qualitative Report, 28(7), 1968–1980.
Chubb, J., Cowling, P., & Reed, D. (2022). Speeding up to keep up: Exploring the use of AI in the research process. AI & Society, 37, 1439–1457.
Cicero, F. (2021). Behavioral ethics: Ethical practice is more than memorizing compliance codes. Behavior Analysis in Practice, 14(4), 1169–1178.
De Amorim, R. C. (2013). Effective spell checking methods using clustering algorithms. Conference Proceedings: Recent Advances in Natural Language Processing, Hissar, Bulgaria.
Denzin, N., Lincoln, Y., Giardina, M., & Canella, G. (Eds.). (2023). The SAGE handbook of qualitative research. SAGE.
Duka, M., Sikora, M., & Strzelecki, A. (2023). From web catalogs to Google: A retrospective study of web search engines sustainable development. Sustainability, 15, 6768.
Englich, B., & Soder, K. (2009). Moody experts—How mood and expertise influence judgmental anchoring. Judgment and Decision Making, 4(1), 41-50.
Furnham, A., & Boo, H. C. (2011). A literature review of the anchoring effect. The Journal of Socio-Economics, 40(1), 35–42.
Goldkind, L., Wolf, L., Glennon, S., Rios, J., & Nissen, L. (2024). The end of the world as we know it? ChatGPT and social work. Social Work, 69(1), 103–105.
Heift, T. (2017). History and key developments in intelligent computer-assisted language learning (ICALL). In: S. Thorne & S. May (Eds.). Language, education and technology. Encyclopedia of Language and Education. Springer.
Hoven, M., Mol, H., & Verhoeff, R. (2023). Evaluating empowerment towards responsible conduct of research in a small private online course. International Journal for Educational Integrity, 19, 16–40.
International Federation of Social Workers. (2018). Global social work statement of ethical principles. https://www.ifsw.org/global-social-work-statement-of-ethical-principles
IBM. (2018). SPSS statistics. https://community.ibm.com/community/user/ai-datascience/blogs/douglas-stauber/2018/04/05/spss-50-years-of-innovation
Lee, K., Woo, H., Cho, W., & Jong, S. D. (2022). When can AI reduce individuals’ anchoring bias and enhance decision accuracy? Evidence from multiple longitudinal experiments. Proceedings of the Annual Hawaii International Conference on System Sciences.
Nashwan, A., & Abukhadijah, H. (2023). Harnessing artificial intelligence for qualitative and mixed methods in nursing research. Cureus, 15(11), e48570.
National Association of Social Workers. (2021). Code of ethics. https://www.socialworkers.org/About/Ethics/Code-of-Ethics/Code-of-Ethics-English
National Institutes of Health. (2024). Responsible conduct of research training. https://oir.nih.gov/sourcebook/ethical-conduct/responsible-conduct-research-training
Olmos-Vega, F., Stalmeijer, R., Varpio, L., & Kahlke, R. (2023). A practical guide to reflexivity in qualitative research: AMEE Guide No. 149. Medical Teacher, 45(3), 241–251.
Renjith, V., Yesodharan, R., Noronha, J., Ladd, E., & George, A. (2021). Qualitative methods in health care research. International Journal of Preventive Medicine, 12, 1–20.
Rubin, A., & Babbie, E. R. (2025). Research methods for social work (10th ed.). Cengage.
Scite.AI. (n.d.). https://scite.ai
Spoelma, T. M. (2022). Counteracting the effects of performance pressure on cheating: A self-affirmation approach. Journal of Applied Psychology, 107(10), 1804–1823.
Tauchert, C., Bender, M., Mesbah, N., & Buxmann, P. (2020). Towards an integrative approach for automated literature reviews using machine learning. Proceedings of the 53rd Hawaii International Conference on System Sciences (pp. 762–771).
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
Wang, L., Chen, X., Deng, X., et al. (2024). Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs. npj Digital Medicine, 7, 41.
Acknowledgements
We acknowledge the use of AI tools in developing this article. Specifically, we used Scite.AI to identify relevant research and ChatGPT to assist with refining grammar, spelling, and overall flow. For articles identified by Scite.AI, we carefully reviewed the original articles rather than relying solely on Scite.AI’s summaries. Additionally, we conducted literature searches using traditional academic databases such as PsycINFO and Academic Search Complete to ensure a comprehensive review of relevant literature. We also carefully reviewed and evaluated ChatGPT’s suggested grammatical and structural edits to confirm that they maintained or enhanced the accuracy and clarity of the article. Throughout the writing process, we remained mindful of the potential for anchoring bias. We employed strategies outlined in this article to mitigate such risks.