Public Significance Statement

Recent advancements in artificial intelligence (AI) technologies, such as large language models like ChatGPT and conversational AI systems (commonly referred to as AI chatbots), have garnered significant attention in the public sphere, driving innovation across various sectors including clinical and mental health care. While the responsible integration of AI tools holds promise for improving access and affordability of care, important considerations exist for their application in therapeutic settings. It is essential that AI administrative technologies, such as automated scheduling and data management systems, and AI complementary tools, like decision-support algorithms and diagnostic aids, be introduced into clinical practice gradually. A staged and thoughtful integration of AI—guided by ethical frameworks, informed consent, and ongoing evaluation—is crucial to ensuring that AI contributes meaningfully to addressing the global mental health crisis (Fiske et al., 2019). Psychotherapists and counsellors, like all mental health professionals, must be trained to address the ethical implications of AI use, ensuring that client autonomy, confidentiality, dignity, and informed consent are consistently upheld in alignment with the core principles of therapeutic practice. This approach will ensure that AI applications provide meaningful support, complementing and enhancing care quality while respecting the foundational principles of ethical clinical practice.

Introduction

Over 10 million Australians experience mental health challenges each year, with prevalence rising by approximately 3% annually (Australian Bureau of Statistics, 2020–2022). This growing demand places considerable strain on an already overburdened mental health system, which struggles to meet the need for care. Despite the availability of evidence-based treatments, including psychological therapies and medications, approximately one-third of individuals requiring treatment do not receive care within a 12-month period (Jorm, 2018). This gap represents a critical public health concern since untreated mental health conditions can lead to worsened symptoms, chronicity, and significant impacts on quality of life (Mohamed Ibrahim et al., 2020).

A meta-analysis of 34 studies found that 75% of individuals with mental health challenges preferred psychological treatment over medication (McHugh et al., 2013), highlighting the need for expanded access to psychotherapy and counselling. However, psychological treatment is often unavailable when individuals need it most. In Australia, psychiatric medication is the most common treatment for mental health challenges, with approximately one in six Australians receiving prescriptions; reliance on medication is particularly pronounced in rural and remote areas where access to mental health professionals is limited (Australian Institute of Health and Welfare (AIHW), 2023; Bloch-Atefi et al., 2021). This issue is compounded by a global shortage of mental health professionals, with an estimated shortfall of 1.8 million providers worldwide (World Health Organization, 2022). Prior to the COVID-19 pandemic, anxiety and depression were estimated to affect 29% of the global population over their lifetime (Steel et al., 2014). The pandemic exacerbated the mental health crisis, significantly increasing the demand for support (Loosen et al., 2021; Ornell et al., 2021; Santomauro et al., 2021; Thome et al., 2021).

As the prevalence of mental health conditions rises globally, this shortage has deepened, exacerbating disparities in care, particularly in low- and middle-income countries (Lange, 2021; Lora et al., 2020). In Australia, these disparities are also evident within culturally and linguistically diverse (CALD) communities, the LGBTIQ+ community, and Aboriginal and Torres Strait Islander populations. These groups often face additional barriers to accessing mental health care, including discrimination, a lack of culturally appropriate services, and challenges in navigating mainstream health systems (Dune et al., 2022; Morgan et al., 2021).

The global burden of mental health challenges is immense. An estimated one in eight individuals is affected at some point in their lives, resulting in significant personal, social, and economic consequences (World Health Organization (WHO), 2022). Untreated mental health conditions contribute substantially to productivity losses and increased healthcare costs, underscoring the urgent need for scalable solutions (Lange, 2021; Lora et al., 2020). These conditions currently cost the global economy approximately USD 1 trillion annually, and projections indicate that by 2030 this figure will rise to USD 6 trillion, further highlighting the critical need for effective interventions (WHO, 2022).

In Australia, the shortage of qualified mental health professionals is equally concerning; the current 32% shortfall has been projected to increase to 42% by 2030 (Department of Health and Aged Care, 2022). These figures illustrate the widening gap between demand for mental health services and the available workforce, raising questions about the sustainability of traditional, in-person care models to meet the full scope of demand.

One potential solution to address this disparity is the integration of AI technologies into mental healthcare. Examples include large language models (LLMs) such as ChatGPT, which can provide therapeutic support through text-based interactions; machine learning algorithms for diagnostic support; natural language processing (NLP) tools for clinical documentation; and AI-powered chatbots that can deliver therapeutic interventions and provide psychoeducational support between therapy sessions. These technologies have demonstrated promise in supporting mental health care by assisting with administrative tasks (Topol, 2019), diagnostic support (Bzdok & Meyer-Lindenberg, 2018), and therapeutic interventions (Fitzpatrick et al., 2017). The market for mental health apps is rapidly expanding, with over 10,000 options currently available (King et al., 2023). While many of these apps offer basic functionalities such as mood tracking, progress monitoring, medication reminders, journaling, and access to prerecorded guided meditations and breathing exercises (Fiske et al., 2019; Wasil et al., 2022), chatbots more closely resemble conventional talk therapies, offering interactive and personalised support that simulates therapeutic conversations (Gaffney et al., 2019; Miner et al., 2019).

AI technologies have the potential to bridge the gap in mental health service delivery, especially in underserved regions, by providing timely, scalable support and alleviating the pressure on overburdened professionals (Thakkar et al., 2024). However, the integration of AI into mental health care must be approached with caution. While AI holds promise, its use in therapeutic settings requires careful ethical consideration, oversight by experienced clinicians, and alignment with established therapeutic principles (Elyoseph et al., 2024). AI tools can complement but not replace the human interaction essential to effective psychotherapy and counselling (Grodniewicz & Hohol, 2024; Montemayor et al., 2022). Therefore, a staged and thoughtful integration of AI—guided by ethical frameworks, informed consent, and ongoing evaluation—is crucial to ensuring that AI contributes meaningfully to addressing the global mental health crisis (Fiske et al., 2019).

The Role of AI in Mental Health Care

While mental health professionals have traditionally relied on human interaction to provide care, there is increasing recognition of the role AI can play in alleviating administrative burdens, improving accessibility, and reducing costs (Elyoseph et al., 2024; Lee et al., 2021). Strong clinician–client relationships are central to mental health care but can be hindered by time constraints and limited interaction. AI technologies can automate routine tasks that do not require a human touch, thereby allowing clinicians to focus more on delivering empathetic, person-centred care. By freeing up clinicians to prioritise these aspects of care, AI can enhance the therapeutic relationship, humanising mental health practice and ensuring clients receive more meaningful and personalised support (Topol, 2019). AI tools, including client booking systems, session planning applications, and AI-assisted note-taking technologies, offer significant potential to optimise clinical workflow, allowing therapists to focus more on direct client care (Elyoseph et al., 2024; Lee et al., 2021; Topol, 2019).

In recent years, AI has been increasingly integrated into various areas of mental health care (D’Alfonso, 2020; Fiske et al., 2019). For example, AI-driven solutions are used to enhance the diagnosis of conditions such as depression (Mastoras et al., 2019; Ware et al., 2020) and schizophrenia (Kalmady et al., 2019), as well as to predict treatment outcomes (Thieme et al., 2020). Robots are employed to support children with autism spectrum disorders (Islam et al., 2023), while virtual reality avatars help clients manage their auditory hallucinations (O’Neill et al., 2024). AI applications—such as Woebot, a cognitive-behavioural therapy (CBT) chatbot—are already being used in clinical settings to provide low-cost, accessible mental health support (Grodniewicz & Hohol, 2024). Similarly, Replika, an avatar-based therapy app, engages users in therapeutic conversations, reconstructing a personality profile from users’ digital footprint or text exchanges. It provides a judgement-free space for users to engage in vulnerable conversations and gain insights into their own personality (Pham et al., 2022). These examples offer only a snapshot of the growing role AI plays in mental health care.

The integration of AI into therapeutic settings has shown promising results, particularly in addressing conditions like anxiety and depression. Studies have indicated that AI-based interventions, such as internet-delivered CBT (iCBT) and computerised CBT (cCBT), offer significant symptom reduction, often comparable to traditional therapeutic approaches (Botella et al., 2010; Fitzpatrick et al., 2017; Merry et al., 2012). For example, iCBT has proven effective in alleviating anxiety and depression, demonstrating sustained improvements at follow-up (Botella et al., 2010). However, challenges related to engagement and adherence persist, particularly among younger or less engaged populations, suggesting that more interactive or supportive features could improve retention (Fleming et al., 2012; Kretzschmar et al., 2019). Additionally, AI interventions have been found effective for young adults, students, and individuals experiencing mild to moderate symptoms, although these studies typically involved small, homogeneous groups, thus limiting their generalisability (Bowler et al., 2012; Clarke et al., 2009). While AI-based interventions have demonstrated positive effects over time (Lenhard et al., 2017), concerns about the applicability of these findings to broader populations remain. Many studies focused on narrow demographic groups, and there has been limited representation of older or more diverse populations (Danieli et al., 2022). Despite these limitations, the evidence highlights the potential for AI interventions to offer effective, scalable, and accessible solutions for mental health care. As AI continues to evolve, it is essential to balance its potential with careful consideration of its limitations and the ethical implications surrounding its use in mental health settings. The growing integration of AI into therapeutic practice holds transformative potential for mental health care, yet it requires ongoing refinement to address challenges such as user engagement and to ensure its ethical implementation (Cross et al., 2024).

Challenges, Ethical Considerations, and the Complementary Role of AI in Therapy

The integration of AI into mental health care represents a significant development in the field, offering a range of potential benefits in terms of efficiency, accessibility, and cost-effectiveness. However, because the application of AI in therapeutic settings is still in the early stages, it is imperative to approach its adoption with caution and foresight. While the capabilities of AI systems—such as LLMs and decision-support algorithms—are advancing at an unprecedented pace, it must be recognised that these technologies, at present, fall short in replicating the core components of therapeutic practice that are fundamentally human: emotional intelligence, empathy, and the nuanced understanding required to form a therapeutic alliance.

AI tools have the potential to improve the efficiency of mental health professionals by automating routine tasks such as appointment scheduling, client progress tracking, and data entry. This would enable therapists to allocate more time to direct client care, focusing on the core aspects of therapy that require emotional intelligence, clinical expertise, and human connection. Nevertheless, there is a risk that therapists may become overly reliant on AI technologies, thereby potentially undermining the therapeutic relationship. Clinicians must remain vigilant, recognising the limitations of AI and the importance of human judgement in clinical decision-making. AI can certainly assist with diagnosis and treatment planning, but its use should be framed as a tool to augment, not replace, the therapist’s professional expertise (Alfano et al., 2024; Farmer et al., 2024).

The integration of AI into psychotherapy and counselling offers substantial opportunities to enhance the quality and accessibility of care, yet it also presents several ethical and practical challenges that must be addressed (Elyoseph et al., 2024; Luxton, 2020; Manriquez Roa et al., 2021). AI can assist with administrative tasks, improve diagnostic accuracy, and streamline certain aspects of care, but it is critical that it does not replace the human therapist (Stade et al., 2024). While AI systems, such as LLMs like ChatGPT, can assist with generating responses to client inquiries and providing diagnostic support, these responses are based on algorithms and data rather than the emotional intelligence or human experience that a therapist can offer. Unlike human therapists, AI lacks the emotional depth, empathy, and nuanced understanding vital to building a therapeutic relationship. Empathy, trust, and emotional connection are essential for creating a safe and healing environment, which AI cannot replicate (Montemayor et al., 2022).

Thus, this article proposes that AI should be viewed as a complementary tool, not a replacement for a human therapist. The unique human-to-human interactions that elicit neurological responses tied to empathy and emotional connection are irreplaceable in therapy. This is further evidenced in how human interaction and AI systems—such as LLMs like ChatGPT or AI chatbots—engage the brain in fundamentally different ways, particularly in terms of empathy, emotional regulation, and social bonding (Eslinger et al., 2021; Tang et al., 2023). In real-life therapeutic conversations, empathy involves understanding complex, layered emotions and offering responses that align with a client’s deeper emotional needs (Harris, 2024). Humans can recognise and respond to subtle emotional cues, such as changes in tone or slight expressions of distress, which may not be explicitly communicated. Brain regions like the medial prefrontal cortex and anterior insula are activated during human interactions, facilitating emotional processing and empathy (Eslinger et al., 2021). While AI can simulate conversational cues, such as altering tone or using emotive language, it does not engage the mirror neuron system in the same way. AI-generated conversations, even those mediated by sophisticated LLMs, do not elicit the same automatic emotional resonance from the listener owing to the absence of genuine emotional expression and the contextual understanding that human interactions provide (Montemayor et al., 2022). Consequently, although humans may cognitively process an AI’s responses, the emotional mirroring central to social connection is not replicated in these interactions. This highlights the gap between human and AI interaction, since AI cannot trigger the neurological responses associated with human empathy, such as the release of oxytocin, which plays a key role in bonding and trust. Instead, AI interactions tend to evoke more mechanical or cognitive brain activation, primarily linked to language processing and task completion (Tang et al., 2023). Therefore, while AI can complement human therapeutic practices, it cannot replace the authentic emotional connection and neurobiological engagement that human therapists provide. Rather than replacing human involvement, AI should serve to complement and enhance the therapeutic process (Grodniewicz & Hohol, 2024; Montemayor et al., 2022).

One significant ethical concern with AI is the potential for bias in AI systems, which can arise from the datasets upon which they are trained. If AI tools are used in psychological assessments, such as diagnostic tests or evaluations of mental health conditions, or in therapeutic interventions, like automated therapy sessions or symptom tracking, there is a risk that these systems could reinforce existing societal biases. This risk is particularly concerning for clients from underrepresented or vulnerable groups, since AI-driven tools may not be attuned to the unique cultural, social or individual needs of such clients (Farmer et al., 2024). Therefore, it is crucial that AI tools are rigorously evaluated for bias, and practitioners must be aware of the potential for discrimination or inequity. Ensuring that AI systems are culturally safe and appropriate for diverse client populations is an essential responsibility for mental health professionals (Cross et al., 2024).

Moreover, as AI technologies become more integrated into clinical practice, concerns about client data protection and confidentiality grow. AI systems rely on vast datasets, often containing sensitive personal information, which increases the risk of data breaches, unauthorised access, or misuse; because mental health data is particularly sensitive and private, protecting client data is a primary ethical consideration (Sedlakova & Trachsel, 2022). To safeguard client confidentiality and ensure trust in the therapeutic relationship, it is critical to implement stringent data protection protocols. Clients must be fully informed about how their data will be used, and clear consent protocols should allow them to opt out if they so choose (Cross et al., 2024; Lustgarten et al., 2020). The autonomy of clients to make informed decisions about the role AI plays in their care is paramount, and clients should not feel pressured to accept AI involvement against their will.

As outlined in the Psychotherapy and Counselling Federation of Australia’s (PACFA’s) (2024) practice guidelines regarding AI use, within the Australian context, practitioners must ensure that their use of AI complies with regulatory standards set by agencies such as the Office of the Australian Information Commissioner, the Digital Transformation Agency, and the Australian Human Rights Commission. Practitioners are also encouraged to engage in continuous professional development and contribute to research on AI’s effectiveness in practice. Mental health professionals must be vigilant in ensuring that AI-driven tools comply with privacy regulations, such as the Australian Privacy Principles (Office of the Australian Information Commissioner, 2022), which govern the collection, storage, and use of personal data (Mennella et al., 2024). Stricter regulatory frameworks are essential to safeguard client information and ensure that AI technologies are used responsibly within the mental health sector.

Training and Professional Development for Mental Health Practitioners

As AI continues to advance, its integration into psychotherapy and mental health care is becoming increasingly relevant. However, for mental health practitioners to incorporate AI into their practice effectively, it is crucial that professional training programs evolve to include AI literacy and address the ethical implications associated with AI technology (Sedlakova & Trachsel, 2022). Future therapists must be equipped not only with the technical skills required to use AI tools effectively but also with the critical thinking abilities necessary to evaluate these tools, recognise their limitations, and apply them responsibly in clinical settings. Consequently, alongside technical training, professional training frameworks must also prioritise the development of ethical guidelines that regulate the responsible use of AI in psychotherapy and counselling, ensuring that such tools enhance rather than undermine the core human elements of therapeutic practice (Montemayor et al., 2022).

Discussion

This paper examines the integration of AI into counselling and psychotherapy, highlighting both the significant opportunities it offers and the ethical challenges it presents. While human interaction has long been the cornerstone of therapeutic care, the increasing demand for mental health services, alongside rising clinician burnout, has spurred interest in AI’s potential to enhance service delivery. AI has the capacity to alleviate administrative burdens, improve diagnostic accuracy, and expand access to care, particularly in remote or underserved regions (Elyoseph et al., 2024; Lee et al., 2021). Although human connection remains essential to therapy, AI technologies offer considerable promise in streamlining clinical workflows, freeing clinicians to focus on providing personalised, person-centred care.

AI tools, such as automated client scheduling systems, session tracking applications, and virtual therapeutic assistants, hold substantial potential to reduce the time clinicians spend on administrative tasks, which have become an increasing burden in contemporary mental health practice (Lee et al., 2021). In the context of heightened demand for services, AI could enable clinicians to allocate more time for direct client care, enhancing overall efficiency. For example, AI-driven platforms offer cost-effective, accessible therapeutic support for individuals who might otherwise face barriers to traditional face-to-face therapy (Grodniewicz & Hohol, 2024). Similarly, conversational AI systems can provide users with a private, non-judgemental space for therapeutic conversations, thereby addressing the stigma surrounding mental health and extending support to individuals who might otherwise be hesitant to seek help (Pham et al., 2022).

However, despite these opportunities, the integration of AI into mental health care presents a number of significant ethical challenges. One of the most pressing concerns is the potential for algorithmic bias. AI systems are only as reliable as the data upon which they are trained. If AI tools are used in assessment or intervention, there is a risk that these systems could perpetuate existing societal biases, which may be particularly detrimental to clients from underrepresented or vulnerable groups (Farmer et al., 2024). Therefore, it is crucial that AI tools are rigorously evaluated for bias, and practitioners must be aware of their potential for discrimination or inequity. Ensuring that AI systems are culturally safe and equitable, while prioritising the needs of diverse populations, is an essential responsibility for mental health professionals (Cross et al., 2024).

Another significant ethical concern involves the protection of client data. AI systems process vast amounts of data, including sensitive personal information, to function effectively. Given the confidential nature of mental health data, legitimate concerns exist about the potential for data breaches or misuse. Safeguarding client privacy is crucial for maintaining trust in the therapeutic relationship and for ensuring that individuals feel comfortable engaging with AI tools. Informed consent processes must be transparent, clearly outlining how client data will be used and ensuring clients retain the option to opt out of AI involvement in their care if they so wish (Sedlakova & Trachsel, 2022). Additionally, AI systems must comply with stringent data protection standards, such as the Australian Privacy Principles (Office of the Australian Information Commissioner, 2022), to ensure that data is collected, stored, and used securely.

A further ethical challenge is the preservation of empathy in the therapeutic process. While AI can support clinicians in tasks such as symptom tracking and diagnostic assistance, it inherently lacks the emotional intelligence and nuanced understanding required to foster the deep therapeutic connections that are central to effective therapy (Montemayor et al., 2022). Empathy remains a cornerstone of therapy and is essential for building trust, emotional safety, and a strong therapeutic alliance. The therapeutic relationship, grounded in empathy and understanding, is unique in its ability to attune to the emotional and psychological needs of the client (Harris, 2024). Therefore, while AI can support and augment certain aspects of therapeutic practice, it should be regarded as a complementary tool that enhances, rather than replaces, the human connection essential to an effective therapeutic process (Fiske et al., 2019).

Despite the promise that AI holds, it is important to acknowledge its limitations. AI technologies are still evolving, and intrinsic challenges remain in their ability to comprehend context, understand cultural nuances, and recognise complex emotional states. While AI can assist in routine tasks and offer support for specific conditions, it cannot replicate the clinical judgement, intuition, and empathy that human therapists bring to the therapeutic process.

AI interventions, such as iCBT and cCBT (Botella et al., 2010; Fitzpatrick et al., 2017; Merry et al., 2012), have shown positive results and are being increasingly used to enhance the diagnosis and treatment of various mental health conditions. However, while these interventions can be beneficial for many individuals, they may not be suitable for those with more complex or severe conditions, particularly when tailored, personalised care is required. In contrast, other AI-driven solutions, such as robots to support children with autism spectrum disorders (Islam et al., 2023) and virtual reality avatars to help clients manage auditory hallucinations, have been successfully employed in more specialised contexts (O’Neill et al., 2024). These applications demonstrate that AI can offer innovative and valuable support for diverse mental health challenges, although careful consideration of each individual’s needs is essential. Moreover, many studies on AI-based interventions have involved homogeneous groups, which limits the generalisability of their findings across diverse demographic populations (Danieli et al., 2022).

In light of these challenges, it is crucial that AI literacy becomes a central component of professional training for mental health practitioners. As the use of AI in psychotherapy continues to evolve, training programs must be updated to address both AI literacy and the ethical implications of AI technology (Sedlakova & Trachsel, 2022). Future therapists must be equipped with the knowledge and skills required to evaluate AI tools critically, understand their limitations, and apply them responsibly in clinical settings. Incorporating AI into psychotherapy and counselling training programs will ensure that practitioners are able to navigate the complex ethical issues that arise when integrating these technologies into their work.

Training programs should focus not only on how to use AI tools effectively but also on how to recognise situations in which AI is inappropriate or in which human judgement and empathy are paramount. For example, while AI can assist in diagnosing certain conditions, it cannot replace the human capacity to form meaningful therapeutic relationships or exercise clinical judgement in nuanced cases (Montemayor et al., 2022). Additionally, ethical guidelines must be developed to govern the use of AI in clinical practice, ensuring that these tools prioritise client welfare, uphold confidentiality, and safeguard the integrity of the therapeutic process.

Ongoing professional development and regular evaluation of AI tools will ensure these technologies continue to align with best practices and uphold the highest ethical standards. Mental health professionals must be supported to keep up to date with the latest developments in AI tools and their ethical implications; this ongoing education will allow practitioners to refine their skills, understand emerging risks, and continue to apply AI responsibly in their practice. Regular evaluation of AI tools is equally necessary to assess their impact on client outcomes and to ensure the tools remain aligned with best practices in mental health care.

Finally, the use of AI in mental health care demands the collaborative engagement of professional bodies, academic institutions, and AI developers in the formulation of comprehensive, robust, and flexible ethical frameworks. Such frameworks should guide the responsible use of AI in psychotherapy and counselling and ensure that the technologies safeguard the integrity of the therapeutic process. They should address not only data protection and transparency but also the impact of AI on the therapeutic relationship and the long-term effects on client outcomes, and they should promote inclusivity, ensuring that AI-based services are accessible to all, including individuals with limited access to technology or digital literacy. Collaboration between AI developers, mental health professionals, and regulatory bodies will be crucial to ensure that AI remains a force for good in mental health care. By establishing clear ethical guidelines and promoting AI literacy among mental health practitioners, the profession can harness the potential of AI while mitigating its risks. This approach will ensure that AI functions as a tool that enhances, rather than disrupts, the human-centric elements of therapeutic practice.

Conclusion

In summary, AI holds significant potential to improve the efficiency, accessibility, and affordability of mental health care, particularly for underserved and remote populations. By automating routine administrative tasks, such as appointment scheduling, client progress tracking, and data entry, AI can alleviate some of the increasing administrative burdens that clinicians face and thus help address the growing demand for mental health services. Furthermore, AI technologies offer the potential for low-cost, accessible therapeutic options, such as AI-powered chatbots and virtual assistants, which can support individuals who might otherwise face barriers to traditional face-to-face therapy; this would be especially beneficial for those in need of ongoing, accessible support between sessions. However, the integration of AI into psychotherapy and counselling practice requires careful consideration of ethical issues related to bias, data privacy, and the preservation of essential human qualities such as empathy, connection, and attunement. As AI continues to evolve, it will be crucial for clinicians to remain informed and discerning, ensuring that these technologies enhance, rather than replace, the human elements of therapy. By equipping therapists with AI literacy and promoting ethical guidelines, AI can be leveraged as a complementary tool that supports the therapeutic process while safeguarding the integrity of the therapeutic relationship.