Hot Topic Series; Issue #2
Byte-Sized Therapy: Ethical Considerations of AI in Psychological Practice

Alexandra Fisher, MA; Kalin Burkhardt Clark, PsyD, ABPP; & the OPA Ethics Committee

 

In this article, we continue our series of “hot topic” issues, in which a polarizing topic is explored through an ethical framework. This second article in the series addresses the ethical considerations of artificial intelligence (AI) in psychological practice, a topic that continues to gain prominence in academic, public, and clinical domains.

A growing body of AI research is broadening the technology's influence on the field of psychology and therapeutic intervention. The incorporation of AI into healthcare holds the potential for revolutionary advancements in overall patient care, decision-making processes, and clinical diagnoses (Jeyaraman et al., 2023). However, the escalating presence of AI in healthcare introduces profound ethical dilemmas that require careful examination. Key ethical considerations include privacy, responsibility, transparency, trust and confidentiality, data quality, and bias (Jeyaraman et al., 2023; McLennan et al., 2022). The term AI covers a diverse and complex set of computer science and statistical principles, all aimed at developing computational systems that can perform cognitive tasks similar to human abilities (D’Alfonso, 2020). These tasks encompass learning, pattern recognition, reasoning, problem-solving, generalization, and the capacity for predictive inference (D’Alfonso, 2020). While there has been a notable increase in research and development, only a limited number of initiatives have progressed beyond showcasing algorithmic performance and technical feasibility (Fulmer, 2019).

The advancement of personalization algorithms has significantly improved the convenience of communication and interactions (Sundar, 2020). For example, a mental health app might use personalization algorithms (i.e., computational processes designed to tailor communication and interactions to individual user characteristics) to analyze user input, such as self-reported moods, stress levels, and preferences (e.g., meditation vs. breathing exercises). However, these advancements have sparked noteworthy concerns regarding the transparency of these technologies, human control over their functioning, and overall privacy (Sundar, 2020). It is pivotal to address the tensions between human agency and machine agency in this new era of AI, especially as machines become increasingly agentic (Sundar, 2020).
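
To make the idea of a personalization algorithm concrete, the following is a minimal, hypothetical sketch of how an app might weight exercise recommendations by self-reported mood, stress, stated preference, and past usage. The app, feature names, and thresholds are illustrative assumptions, not a description of any specific product or of the research cited above.

```python
from dataclasses import dataclass, field

@dataclass
class UserState:
    """Self-reported check-in data a mental health app might collect (hypothetical)."""
    mood: int            # 1 (very low) to 10 (very good)
    stress: int          # 1 (calm) to 10 (very stressed)
    preference: str      # "meditation" or "breathing"
    history: dict = field(default_factory=dict)  # past completions per exercise

def recommend_exercise(state: UserState) -> str:
    """Tailor a recommendation from user characteristics (illustrative rules only)."""
    # Start from the user's stated preference.
    scores = {"meditation": 0.0, "breathing": 0.0}
    scores[state.preference] += 1.0

    # High stress: favor a brief breathing exercise.
    if state.stress >= 7:
        scores["breathing"] += 1.5
    # Low mood with moderate stress: favor a longer guided meditation.
    if state.mood <= 4 and state.stress < 7:
        scores["meditation"] += 1.0

    # Personalize further using what the user has actually completed before.
    for exercise, completions in state.history.items():
        scores[exercise] += 0.2 * completions

    return max(scores, key=scores.get)

# Example check-in
user = UserState(mood=3, stress=8, preference="meditation",
                 history={"breathing": 5, "meditation": 1})
print(recommend_exercise(user))  # -> "breathing"
```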

AI utilizes various machine learning (ML) techniques to approximate human-level reasoning and processing, including deep learning (interconnected multi-layered neural networks) and natural language processing, which together underpin large language models (LLMs) used for advanced text analysis, language translation, and contextual understanding in various applications (Farhud & Zokaei, 2021). OpenAI's ChatGPT, an advanced LLM developed by an AI research organization focused on advancing AI technologies and promoting ethical principles, holds great potential in the fields of medical and mental health practice, research, and education (Jeyaraman et al., 2023).
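
As a minimal illustration of the kind of text analysis such models enable, the sketch below uses the open-source Hugging Face transformers library and its default general-purpose sentiment classifier. This is an assumption for illustration only; it is not ChatGPT, and it is not a clinically validated instrument.

```python
# Minimal sketch of NLP-based text analysis, assuming the Hugging Face
# `transformers` library is installed; the default model is a general-purpose
# sentiment classifier, not a clinical tool.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
entries = [
    "I finally slept well and felt calm today.",
    "I can't stop worrying about everything.",
]
for text, result in zip(entries, classifier(entries)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {text}")
```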

These considerations and concerns align with several of the ethical principles and standards in the American Psychological Association (APA) Ethical Principles of Psychologists and Code of Conduct (2017), hereafter referred to as the Ethics Code. Relevant standards include bases for scientific and professional judgments, avoiding harm, obtaining informed consent, and maintaining confidentiality.

2.04 Bases for Scientific and Professional Judgments: Psychologists' work is based upon established scientific and professional knowledge of the discipline.

The field of mental healthcare, akin to various other disciplines, has experienced the transformative influence of the digital technology and AI revolution, with digital phenotyping being a primary example (D’Alfonso, 2020). Digital phenotyping is personal sensing that utilizes data from sensors and usage patterns on personal digital devices, such as smartphones, to infer behavioral and contextual information about an individual. This information can serve as input for machine learning methods to predict psychometric and psychological outcomes, as well as mental health conditions (D’Alfonso, 2020). Because depression and anxiety are two of the most prevalent mental health concerns, a significant portion of digital phenotyping research has focused on these areas. Regarding screen input on smartphones, recent research suggests that analyzing keystroke dynamics (e.g., scrolling, swiping, clicking, tapping) can offer insights into an individual's mental health without the need to linguistically analyze the content of the entered characters (D’Alfonso, 2020). Additionally, researchers have proposed ML-based methods for depression detection (as measured by the PHQ-9) that analyze touchscreen typing patterns.
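
To illustrate the general shape of such an approach, the following is a hedged sketch of training a classifier on typing features to screen for depression risk. The features (key-hold time, inter-key latency, speed variability), the synthetic data, and the simple model are all assumptions for demonstration; this is not the method from the research cited above and not a validated screener.

```python
# Illustrative sketch only: synthetic data, hypothetical features, and a simple
# classifier; not the model described by D'Alfonso (2020).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400

# Hypothetical typing features per user: mean key-hold time (ms),
# mean inter-key latency (ms), and typing-speed variability.
X = np.column_stack([
    rng.normal(95, 15, n),     # hold time
    rng.normal(260, 50, n),    # inter-key latency
    rng.normal(0.30, 0.08, n), # speed variability
])

# Synthetic PHQ-9 scores loosely tied to slower, more variable typing.
phq9 = 0.03 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 3, n)
y = (phq9 >= 10).astype(int)  # PHQ-9 >= 10 is a commonly used screening cutoff

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Screening AUC on held-out synthetic data: {auc:.2f}")
```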

According to the APA Ethics Code (2017), the practice of psychology is based on scientific evidence and professional knowledge. As AI continues to be integrated into the field of psychology, the science and ethics behind the detection of symptoms of mental health conditions, such as depression and anxiety identified via digital phenotyping, may require further advancement (D’Alfonso, 2020). As AI plays a more prominent role, it becomes crucial to strengthen both the scientific foundation and the ethical frameworks that maintain the integrity and accuracy of mental health assessments. When chatbots are used in mental healthcare, they should be understood as additions or enhancements to, rather than substitutes for, professionally trained human therapists (D’Alfonso, 2020). Their capacity to respond to emergencies, such as imminent harm or suicidal ideation, is limited and at times may be inappropriate.

3.04 Avoiding Harm: (a) Psychologists take reasonable steps to avoid harming their clients/patients, students, supervisees, research participants, organizational clients, and others with whom they work, and to minimize harm where it is foreseeable and unavoidable.

Avoiding harm includes both awareness of the biases one holds and engagement in responsible practices. Addressing bias in mental healthcare data is pivotal to preventing AI-driven inequities, as biases in algorithm development and data collection can sustain healthcare disparities (Jeyaraman et al., 2023). AI systems are only as unbiased as the data on which they are trained. Promoting a public-health approach is crucial to addressing inequalities and encouraging diversity in AI research. This involves ensuring that the data used in AI applications are of good quality, developing a thorough understanding of patients' perspectives, and considering the broader impact on society. For instance, instead of developing AI tools designed only for individual mental healthcare needs, researchers might focus on solutions that address mental health challenges at a population level, taking into account diverse needs, disparities, and the overall well-being of communities. By adopting a multidimensional approach involving healthcare providers, patients, policymakers, and developers, and by addressing these challenges, the field can leverage the full potential of AI in healthcare while ensuring equitable and ethical outcomes (Jeyaraman et al., 2023).

Furthermore, the quality of the data used to train and test AI algorithms, such as those used for symptom detection or mental health diagnosis, directly affects the accuracy of the final results (Alkaissi & McFarlane, 2023). AI models can produce incorrect outputs, including erroneous predictions and, in extreme cases, fabricated information known as hallucinations (Alkaissi & McFarlane, 2023). To further avoid harm, psychologists must be cognizant that such output may be incorrect. The crux lies in how these LLMs produce results and how to verify their output (Alkaissi & McFarlane, 2023). Psychologists cannot fully trust results when there is a lack of understanding of how they were generated.
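
One hedged illustration of what such verification might look like in practice is to route any AI-generated suggestion to clinician review whenever the model's confidence is low or it cites information absent from the patient record. The thresholds, field names, and checks below are hypothetical assumptions, not an established protocol from the cited literature.

```python
# Hypothetical illustration of human-in-the-loop verification of AI output;
# thresholds and checks are assumptions, not an established clinical standard.
from dataclasses import dataclass

@dataclass
class AIResult:
    suggested_diagnosis: str
    confidence: float          # model-reported probability, 0.0-1.0
    cited_symptoms: list       # symptoms the model claims support its suggestion

def review_flags(result: AIResult, chart_symptoms: set,
                 min_confidence: float = 0.85) -> list:
    """Return reasons the output must be verified before it informs care."""
    reasons = []
    if result.confidence < min_confidence:
        reasons.append(f"low model confidence ({result.confidence:.2f})")
    unsupported = [s for s in result.cited_symptoms if s not in chart_symptoms]
    if unsupported:
        # Claims with no basis in the record may be hallucinated.
        reasons.append(f"cited symptoms not documented in chart: {unsupported}")
    return reasons

result = AIResult("major depressive disorder", 0.62,
                  ["anhedonia", "insomnia", "psychomotor agitation"])
chart = {"anhedonia", "insomnia"}
for reason in review_flags(result, chart):
    print("Flag for clinician review:", reason)
```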

3.10 Informed Consent: (a) When psychologists conduct research or provide assessment, therapy, counseling, or consulting services in person or via electronic transmission or other forms of communication, they obtain the informed consent of the individual or individuals using language that is reasonably understandable to that person or persons except when conducting such activities without consent is mandated by law or governmental regulation or as otherwise provided in this Ethics Code.

It is important that psychologists ensure that patients understand the role of AI within their treatment. Obtaining informed consent that is understandable to the patient and appropriately explains the nature of the services to be received is paramount to ethical practice, especially when AI is involved. Jeyaraman et al. (2023) raised significant questions about AI's moral agency and human accountability in determining responsibility for AI-driven outcomes. Specifically, providers are tasked with obtaining informed consent and communicating with patients in easily understood language. Psychologists must actively inform individuals during the consent process about AI's role in psychological assessments or treatments, explaining potential outcomes, addressing uncertainties, and ensuring transparent collaboration with AI. Balancing AI advancements for improved psychological practices with ethical standards is crucial, ensuring that individuals are well-informed participants in their mental health care. This may be difficult when discussing the bounds of confidentiality, the role of AI in psychological practice or assessment, and overall privacy rights, given the complex nature of AI in general. McLennan and colleagues (2022) addressed the difficulty providers will encounter in articulating to patients the rationale behind AI-recommended treatments and/or diagnoses. Difficulty also arises from the shared clinical responsibility between the provider and a non-human ‘colleague,’ which requires navigating and defining the roles and accountability of each entity (McLennan et al., 2022). Integrating non-human entities such as AI systems into clinical decision-making processes raises complexities in determining who bears responsibility for outcomes, decisions, and potential errors, as well as how to disclose that information to the patient during the informed consent process.

4.01 Maintaining Confidentiality: Psychologists have a primary obligation and take reasonable precautions to protect confidential information obtained through or stored in any medium, recognizing that the extent and limits of confidentiality may be regulated by law or established by institutional rules or professional or scientific relationship.

To ensure responsible clinical integration and maintain patient confidentiality, it is imperative to recognize and address the ethical and social implications associated with the expanding use of embodied AI in mental healthcare. Confidentiality and privacy may be compromised when incorporating AI and unconventional mediums like virtual psychotherapists or social robots. This suggests that confidentiality and privacy challenges may arise not only from the general use of AI but also from additional features, functions, or applications of AI systems.

AI-driven virtual and robotic agents are assuming advanced therapeutic roles traditionally provided exclusively by highly trained and licensed health professionals (Fiske et al., 2019). Safeguarding patient privacy in data-driven healthcare is paramount, with potential repercussions for psychological well-being and data-sharing practices. Concerns regarding privacy may intensify with the increasing volume of collected data (Fiske et al., 2019). For instance, applications incorporating video data would be expected to require dedicated privacy safeguards to handle the communication of sensitive information or details pertaining to individuals other than the consenting patient, such as a sibling or friend of the patient (Fiske et al., 2019). It is important that methods be put into place to increase patient trust and ensure confidentiality and privacy in these new innovations. Cryptographic techniques, such as secure multiparty computation (SMPC) and homomorphic encryption (HE), play a vital role in enabling collaborative data analysis and computation while maintaining the privacy and confidentiality of sensitive information (Jeyaraman et al., 2023).
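
As a toy illustration of the intuition behind SMPC, the sketch below uses additive secret sharing, one simple building block, to let three hypothetical clinics jointly compute a total score without any clinic revealing its own value. This is for intuition only: it is not a production-grade protocol (no authentication or defenses against dishonest parties), and homomorphic encryption would require a dedicated library.

```python
# Toy additive secret sharing: each party splits its private value into random
# shares; only the sum of all shares is ever reconstructed. Intuition only.
import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def make_shares(value: int, n_parties: int) -> list:
    """Split `value` into n random shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each clinic's private aggregate score (never sent in the clear).
private_scores = {"clinic_A": 412, "clinic_B": 389, "clinic_C": 501}
n = len(private_scores)

# Each clinic splits its score and sends one share to every clinic.
all_shares = {name: make_shares(score, n) for name, score in private_scores.items()}

# Each clinic sums the shares it received (one from each clinic) and publishes
# only that partial sum, which by itself reveals nothing about any single score.
partial_sums = [sum(shares[i] for shares in all_shares.values()) % PRIME
                for i in range(n)]

total = sum(partial_sums) % PRIME
print("Joint total:", total)                   # 1302
print("Check:", sum(private_scores.values()))  # 1302
```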

In summary, the integration of AI into various domains, including mental health and psychological practice, is rapidly expanding. AI's potential to revolutionize patient care, predictive medicine, and decision-making processes is evident. However, the broad scope of AI, encompassing techniques that mimic human cognitive processes, poses challenges in explaining AI-generated outputs to patients and in defining shared clinical responsibility with non-human colleagues. The evolving landscape of healthcare necessitates careful examination of the impact of AI on traditional healthcare practices, including challenges such as maintaining confidentiality, addressing biases, and ensuring patient understanding of AI's role in treatment. It also necessitates a thoughtful and focused approach to the design and evaluation of AI tools within specific healthcare contexts, ensuring that ethical standards are maintained (Fulmer, 2019). To leverage AI's potential in healthcare while guaranteeing ethical outcomes, a multidimensional approach involving healthcare providers, patients, policymakers, and developers is essential. Advancements in AI-driven technologies hold promise for mental health diagnosis, treatment recommendations, and personalized interventions. However, ethical considerations must guide their responsible integration into clinical practice.

 

References

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179

American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. http://www.apa.org/ethics/code/

D’Alfonso, S. (2020). AI in mental health. Current Opinion in Psychology, 36, 112–117. https://doi.org/10.1016/j.copsyc.2020.04.005

Farhud, D. D., & Zokaei, S. (2021). Ethical issues of artificial intelligence in medicine and healthcare. Iranian Journal of Public Health, 50(11), i–v. https://doi.org/10.18502/ijph.v50i11.7600

Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5), e13216. https://doi.org/10.2196/13216

Fulmer, R. (2019). Artificial intelligence and counseling: Four levels of implementation. Theory & Psychology, 29(6), 807–819. https://doi.org/10.1177/0959354319853045

Jeyaraman, M., Balaji, S., Jeyaraman, N., & Yadav, S. (2023). Unraveling the ethical enigma: Artificial intelligence in healthcare. Cureus, 15(8), e43262. https://doi.org/10.7759/cureus.43262

McLennan, S., Fiske, A., Tigard, D., Müller, R., Haddadin, S., & Buyx, A. (2022). Embedded ethics: A proposal for integrating ethics into the development of medical AI. BMC Medical Ethics, 23(1), Article 6. https://doi.org/10.1186/s12910-022-00746-3

Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of Human–AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74–88. https://doi.org/10.1093/jcmc/zmz026