The emphasis on measuring the outcomes of mental health services has grown significantly over recent years in policy documents and best practice guidelines in Australia. Both professional peak bodies—the Psychotherapy and Counselling Federation of Australia (PACFA) and the Australian Counselling Association (ACA)—highlight the essential role of integrating outcome measures into evidence-informed practice. This priority is clearly outlined in their respective scope of practice documents (ACA, 2024; PACFA, 2018). The focus on outcome measures has gained renewed interest with the development of the national standards for counsellors and psychotherapists (Department of Health and Aged Care, 2024). The primary objective of these outcome measures is twofold: to enhance clients’ experiences of counselling and psychotherapy services, and to ensure accountability among clinicians in the delivery of their clinical work.
In compliance with the Australian National Mental Health Policy (Australian Health Ministers, 1992), clinicians providing counselling in public mental health services across all states are required to monitor client progress routinely. This monitoring has been associated with improved client outcomes (Bickman et al., 2011; Lambert et al., 2003; Reese et al., 2009). Similarly, psychologists and social workers delivering counselling under Medicare’s Better Access initiative, previously known as the Better Outcomes in Mental Health Care scheme, must use specific assessment tools, such as the Health of the Nation Outcome Scales (Wing et al., 1998), the Kessler-10 (K10; Andrews & Slade, 2001), and the Life Skills Profile-16 (LSP-16; Rosen et al., 1989; see also Kilbourne et al., 2018).
In contrast, counselling and psychotherapy organisations in the non-governmental and private sectors show significant variability in their program evaluation methodologies, including the extent to which they employ outcome measures and the types of assessments they use (Posavac, 2015). The level of adoption among clinicians in the private sector and private practice therefore remains unclear, despite strong encouragement from regulatory bodies such as PACFA (2019), the Australian Psychological Society (2007), and the Australian Association of Social Workers (2013). Clinicians have tended to rely on their own judgement for monitoring clients’ responses to therapy (Garland et al., 2003), which may indicate underlying obstacles to the use of formal assessment tools. Given that clinical judgement is not a highly reliable method for identifying clients who are decompensating or failing to progress, it is important to uncover the barriers that hinder practitioners from employing more effective strategies for tracking client progress (Ionita et al., 2020; Jensen-Doss et al., 2018).
To our knowledge, surveys assessing ongoing use of progress monitoring have mostly sampled psychologists and doctoral students (Ionita & Fitzpatrick, 2014; Overington et al., 2015). Although much has been written about the benefits and drawbacks of outcome measure usage, studies eliciting the views and experiences of practising counsellors and psychotherapists are rare, especially in Australia. Moreover, the counselling and psychotherapy profession in Australia continues to face barriers to participation in publicly funded health care. To help position the profession as an equal player in the Australian mental health workforce, we therefore aimed to ascertain the prevalence and patterns of outcome measure usage by PACFA-registered counsellors and psychotherapists, and to identify mechanisms and barriers affecting the profession-wide use of these tools in counselling and psychotherapy.
Method
The study employed a mixed-methods design to gather quantitative and qualitative data via an anonymous online Qualtrics survey distributed to members of PACFA. The University of Adelaide’s Human Research Ethics Committee granted approval for this research (H-2020-170), and the PACFA Research Committee served as the research reference group. Participants were recruited via email, posts on the PACFA website and Facebook page, and the PACFA membership newsletter. This purposive sampling method ensured targeted access to a broad sample of psychotherapists and counsellors in Australia. Eligible participants were qualified counsellors and psychotherapists currently working in a paid or unpaid capacity in Australia. Participants were required to provide informed consent before responding to the survey questions. A total of 1,177 respondents participated in the online survey, representing 34% of clinicians on the PACFA registry at that time.
The survey utilised a combination of multiple-choice and open-ended questions to gather comprehensive data from participants. These questions explored demographic characteristics, qualifications, cultural background, and professional details such as length of registration with PACFA, type of employment, positions held, working hours, remuneration, main client groups, primary client presentations, and predominant practice modalities. Additionally, participants were asked about their perceptions and use of outcome measures. To ensure methodological rigour, the survey instrument was developed according to insights from previous PACFA workforce studies (Bloch-Atefi et al., 2021; Schofield, 2008; Schofield & Roedel, 2012) and refined with feedback from the PACFA Research Committee. A pilot test involving a small sample of experienced practitioners was conducted to validate the questionnaire, enhance question clarity, and address potential ambiguities.
The collected data were analysed using a mixed-methods approach. Quantitative data were examined through the Statistical Package for the Social Sciences (SPSS) to produce descriptive statistics; qualitative data were subjected to thematic analysis for deeper insights. This study builds on previous PACFA workforce research—the earliest study in this series was conducted in 2004 (Schofield, 2008)—and continues the exploration of trends and practices within the profession.
Results
Data Analysis
Quantitative Data
The quantitative data collected through the online questionnaire provided insights into the demographics of the sample, which predominantly comprised Australian counsellors and psychotherapists, and their use of formal, informal, or no outcome measures. Descriptive statistics, presented as frequencies and percentages, summarised participant characteristics, including employment type, roles, working hours, remuneration, primary client groups, key client presentations, and dominant practice modalities. All quantitative analyses were conducted using SPSS version 23. Tables in the Appendix detail the frequency distributions and percentages for the various survey questions.
Qualitative Data
Reflexive thematic analysis, as outlined by Braun and Clarke (2023), was employed to analyse the qualitative data collected from the open-ended survey responses. This approach extends traditional topic summarisation by adopting a deeper, interpretive stance that focuses on the construction of meaning-based narratives. The method is underpinned by the principles of reflexivity, subjectivity, and researcher interpretation, ensuring that the analysis is not only systematic but also contextually rich and theoretically informed.
Our analysis aimed to explore and understand participants’ experiences and perceptions of the use of outcome measures in their practice. The process began with line-by-line coding, whereby each segment of text was carefully examined to identify initial patterns and categories. These categories were then grouped and refined through an iterative process to develop overarching themes.
A descriptive qualitative theoretical approach was applied to stay close to the data and to provide clear, coherent interpretations of participants’ responses. An inductive analytic approach guided the identification of recurring patterns, which were subsequently organised into coherent themes. This offered a structured yet flexible framework for interpreting the qualitative data.
The first author (ABA) conducted the initial round of coding, documenting memos on initial impressions of the data. Following the initial coding, the open codes were refined, clustered, and reorganised during the second round of coding. The identified themes were discussed in detail with the second author (ED), who cross-checked the results for accuracy, consistency, and alignment with the data. The authors worked collaboratively to ensure that the themes were representative of the participants’ views and accurately reflected the content of the responses. This process involved resolving any discrepancies through discussion and consensus, ensuring that the final thematic categories were well grounded in the data. After the themes were collaboratively refined, a final manual review was conducted to ensure that the analysis was both rigorous and robust. This process enabled identification of phenomenological themes that co-emerged from the data, offering deeper insights into the participants’ perspectives on outcome measures. Although inter-coder reliability is not essential in reflexive thematic analysis, we employed it to enhance the credibility of the analysis and to ensure consistency across coding processes. Additionally, the qualitative themes were cross-validated using an Artificial Intelligence (AI) platform (ChatGPT Update 4; Bijker et al., 2024; Cheng et al., 2023) to further support the robustness of the findings through a layered approach to meaning making.
Survey Questions and Responses
Table 1 lists the open-ended survey questions posed to participants, along with the corresponding number of responses received. These questions aimed to gather detailed insights into participants’ experiences with outcome measures in therapeutic practice.
Demographic Characteristics: Age, Gender and Cultural Identity
The participants identified predominantly as women (81.4%, n = 959); a smaller proportion identified as men (16.7%, n = 196). Twelve (1%) identified as non-binary or gender diverse, and a very small proportion of respondents identified as “Woman, Non-binary” or “Man, Non-binary”. A further 0.5% (n = 6) indicated that their gender identity was not listed in the options provided (see Table A3 in the Appendix).
Participants’ ages ranged from 22 to 83 years (see Table A5 in the Appendix). The mean age was 54.33 years (SD = 12.19) and the median age was 55.00 years (IQR = 17), indicating an approximately symmetric age distribution with moderate variability in the sample. The data suggest a diverse age range among the surveyed counsellors and psychotherapists. Participants also reported a broad spectrum of cultural identities. The most common cultural identity was Australian (49%, n = 581), followed by English Australian (9%, n = 109), European Australian (6%, n = 66), New Zealand (3%, n = 33), and Irish (2%, n = 27). Fourteen (1.2%) respondents were Aboriginal and/or Torres Strait Islander. Other identities included Italian (2%, n = 22), South African (2%, n = 22), and Chinese (2%, n = 21). Further demographic information is detailed in Table A4 in the Appendix.
Years of Experience, Registration Status, Professional Role, Place of Work, Annual Income
The professional experience of participants varied widely, and the majority had more than five years of professional practice experience. Specifically, 26.6% of participants reported having 10 to 20 years of experience, followed by 20.9% who had more than 20 years of experience. A smaller proportion, 15.2%, had between one and three years of experience. A breakdown of years of experience is presented in Table A6 in the Appendix. Participants in this study were predominantly registered clinical counsellors (33.5%), followed by registered clinical psychotherapists (17.7%). Additionally, a notable portion of respondents held provisional registration status: provisionally registered counsellors comprised 14.8% and provisionally registered psychotherapists comprised 7.2%. A smaller number of respondents reported other registration statuses or did not indicate their registration status (see Table A7 in the Appendix).
Most participants (87.6%) indicated that their current role was as a practising counsellor, psychotherapist, or Indigenous healing practitioner. A smaller proportion worked in academic roles (6.5%) or in managerial or administrative roles (5.6%). A negligible number of respondents (0.34%) did not specify their current position (see Table A2 in the Appendix).
The majority of participants in this study worked in private practice: 52.8% reported this as their primary workplace, while 25.3% indicated they held multiple types of employment and roles, which reflects a diverse range of professional engagements. Smaller proportions of participants worked in other settings, including agencies/organisations (11.5%), health care settings (3.8%), and the third/charity/voluntary sector (2.2%). Other less common workplace settings included high schools (2%), employee assistance programs (EAPs) or workplaces (0.9%), non-private settings (0.6%), and universities (0.6%). A small number of respondents preferred not to specify their workplace (0.3%; see Table A10 in the Appendix).
The annual income of participants varied significantly. The largest proportion earned between $50,001 and $75,000 (21.6%) or between $75,001 and $100,000 (18.2%). A notable number of respondents reported earning between $40,001 and $50,000 (10%), while 10.4% of participants preferred not to disclose their income. Smaller percentages of respondents reported earning under $10,000 (6.8%) or between $10,000 and $20,000 (5.3%), and only a few participants earned more than $120,000 (5.7%; see Table A13 in the Appendix).
Outcome Measures
Most participants (76.6%) reported using outcome measures in some capacity: 43.7% of the full sample used informal outcome measures only, 22.2% employed both formal and informal measures, and a smaller proportion (10.7%) used formal measures only. Conversely, 17.3% of participants did not use any outcome measures. Additionally, 3.1% specified alternative measures, and 3% left the question unanswered (see Table A1 in the Appendix).
Measures Used
The majority (76.6%) of respondents reported using outcome measures. Table 2 summarises the primary outcome measures used by participants, presenting the respective number of responses and providing a brief description of each measure.
This ranking highlights a mix of formal and informal measures used by practitioners. The most formalised and widely accepted measures, such as the Kessler Psychological Distress Scale (K10), Depression Anxiety Stress Scales (DASS), Outcome Rating Scale (ORS), and Session Rating Scale (SRS), were the most frequently used across practices.
Reasons for Usage
While workplace and external requirements were a primary reason for usage, the majority of respondents regarded outcome measures as validated instruments beneficial for tracking client progress, enhancing client awareness and involvement, supporting communication with other health care professionals, and assisting reporting and accountability. Table 3 summarises key themes regarding the rationale for using outcome measures, providing insights into their importance and practical applications within therapeutic contexts.
Overall, while the use of outcome measures was driven chiefly by professional obligations, they were trusted as evidence-informed support for clinical work that benefits clients, clinicians, and the profession.
Reasons for Non-Usage
Although only a small portion of respondents reported non-usage of outcome measures, their reasons for this were multifaceted. Reasons ranged from philosophical objections to practical and contextual challenges, highlighting the complexity of therapeutic work and professional concern about the appropriateness of standard outcome measures within diverse therapeutic contexts. Table 4 outlines key themes related to the rationale for not utilising outcome measures, providing insights into the perceived limitations or challenges associated with their use in therapeutic contexts.
Non-users expressed a broad sentiment that formal outcome measures, while potentially useful in certain settings, often fail to capture the complexity and depth of therapeutic work. These respondents preferred more individualised, qualitative approaches to evaluating progress, prioritising the therapeutic relationship, client autonomy, and the natural unfolding of the therapeutic process.
Stage of Usage
The findings indicate a broad range of practices regarding the timing of outcome measure usage in therapy, reflecting both institutional requirements and individual therapist preferences (respondents could select more than one option; see Table A14 in the Appendix). A significant proportion of respondents (40.7%) reported using outcome measures at the end of the therapeutic process, which suggests that many practitioners use these tools primarily to evaluate the overall effectiveness of therapy. A slightly smaller group (36.9%) indicated that they employed outcome measures at the beginning of therapy, typically as baseline assessments that inform treatment planning. Additionally, 25.7% of respondents used outcome measures at the end of each session, demonstrating a commitment to ongoing evaluation and progress monitoring throughout therapy. A smaller group (10.7%) reported using outcome measures at the beginning of each session, which potentially indicates a more dynamic, session-specific approach to assessment.
Interestingly, 31.2% of participants selected “Other”, which suggests that some practitioners adopt flexible or context-specific approaches to the timing of outcome measure usage, including intermittent use or application at significant therapeutic milestones. Moreover, 13.3% of respondents did not select any stage, which may reflect uncertainty or a lack of engagement with outcome measures in practice.
The timing of outcome measure usage was influenced by both institutional frameworks and the therapeutic context. For some respondents, institutional requirements—such as organisational policies, mental health care plans, or specific reporting guidelines—dictated when outcome measures were used. For example, some practitioners indicated using specific measures, such as the ORS and SRS, at predefined points, like the beginning and end of each session, or at fixed intervals (e.g., first and sixth sessions for the DASS-21). Others used predefined stages, including assessments at the beginning, middle, and end of therapy, or periodically throughout the course of treatment, often based on client progress or specific therapeutic milestones.
In addition to these structured approaches, some practitioners employed a more flexible, client-driven model. These therapists tailored the use of outcome measures to the individual client’s needs; for example, assessments were conducted at key moments, such as when a client reported increased symptoms of anxiety or depression, or when progress was noted. Such approaches were often guided by client feedback, therapist intuition, and the perceived therapeutic benefit of the measures.
These findings underscore the adaptive nature of outcome measure usage in therapy, demonstrated by practitioners employing diverse strategies based on therapeutic goals, client needs, and professional judgement. Periodic or milestone-based assessments were prevalent, and therapists regularly evaluated client progress at intervals ranging from every few sessions to more structured points, such as midway through therapy or at the end of a defined treatment phase. In doing so, therapists ensured the relevance and efficacy of outcome measures, fostering an approach that is both client-centred and contextually informed.
How Measures Are Completed
Findings concerning the completion of outcome measures in agency settings reveal distinct preferences for particular methods of administration (see Table A16 in the Appendix). The majority of respondents working within agencies (63.3%) indicated a preference for completing outcome measures electronically, suggesting that digital formats are favoured for their ease of use, efficient data storage, and streamlined analysis. In contrast, 25.2% of respondents reported using handwritten methods, implying that some agencies or practitioners continue to rely on traditional paper-based approaches. Verbal completion of outcome measures was noted by 5% of respondents, potentially reflecting the use of this method in specific therapeutic contexts or with particular client populations. An additional 6.5% of respondents selected “Other”, indicating the use of various alternative methods, although no specific details were provided in the dataset. These results suggest that while electronic completion is the predominant approach, multiple methods are still in use, potentially due to varying agency requirements, client preferences, and practitioner discretion.
A considerable number of respondents indicated that they employed a combination of methods to complete outcome measures, often incorporating electronic, handwritten, and verbal approaches. Some practitioners reported flexibility in adapting their methods based on client needs or session context, for example, “All, depending on client’s preferences and ability”, and “All of the above depending on the method of delivery—online, by phone, or in person”. Verbal methods were particularly common in telephone sessions or private practice settings, as expressed in the following: “Currently verbal for telephone and entered into computer system”, and “Verbal in private practice”. One respondent noted that feedback measures are routinely used in educational contexts, such as the supervision of counselling students: “Master of Counselling students evaluate my 25 hours of supervision with them”.
The data indicate a diverse range of methods employed by Australian counsellors in completing outcome measures across different settings. Counsellors reported using a combination of handwritten and verbal methods, with some integrating electronic systems within agency environments. Verbal methods were frequently used during telephone sessions, with the data later entered into electronic records. Client preferences and clients’ ability to engage with various formats were important factors influencing the choice of method; many practitioners demonstrated a flexible approach to service delivery, whether online, by phone, or in person. In summary, counsellors commonly employed a mix of handwritten, electronic, and verbal methods, adapting their approach to the needs of the client and the mode of service delivery.
Outcome Measure Data Usage in Agency Settings
The handling and use of data from outcome measures in agency settings varied across respondents, reflecting diverse practices. The majority (66.6%) of participants reported entering outcome measure data into an agency database, which indicates a prevalent reliance on electronic systems for data storage and management. This method facilitates efficient tracking, analysis, and access to data, thereby supporting systematic record-keeping and reporting within agency contexts.
A smaller proportion of respondents (12.2%) indicated that the data from outcome measures were used to generate written reports submitted to management. This highlights the role of outcome measures in formalising communication with management or other stakeholders, providing a structured means of documenting and justifying therapeutic interventions and outcomes. Additionally, 10.1% of respondents incorporated the data into personal note taking, which suggests that some practitioners prefer to maintain the data within their own clinical records for reference, assessment, and ongoing treatment planning. This practice may serve as part of a reflective process or guide further clinical decision-making. Furthermore, 10.4% of participants selected the “Other” category, indicating the use of alternative methods for managing data. While specific details were not provided, this category likely reflects diverse practices tailored to the specific needs of the organisation or clients, including methods not captured by the more common responses. A small percentage of respondents (0.7%) did not specify how the data were managed, which may indicate uncertainty or a lack of clarity regarding data-handling procedures in their organisations.
The findings suggest a general lack of awareness or concern regarding the precise handling of outcome measure data. Despite this, several common practices emerged. Most respondents emphasised that data were anonymised or de-identified before being shared with external parties, ensuring client confidentiality while enabling reporting to organisations such as primary health networks, funding bodies, or EAP providers; a few responses also suggested that de-identified data were used for research purposes or included in evaluations. Some respondents expressed uncertainty about the final destination of the data; for example, one noted, “Passed to manager. Not sure where it goes from there”, indicating a lack of transparency in the procedural flow of data within certain agencies.

In some cases, data were reviewed in collaboration with clients during sessions, allowing for client feedback and discussion of progress: “Individual scores are reviewed in session with clients.” This highlights the therapeutic value some practitioners place on outcome measure data as a tool for fostering client engagement and reflection, and reflects the use of outcome data in client-centred therapy, ensuring that clients are actively involved in monitoring their own progress.
Nevertheless, several respondents expressed uncertainty regarding the exact use or destination of the data after collection. This may indicate gaps in communication within agencies about data management procedures or a lack of resources and time dedicated to using the data fully in a therapeutic context.
In conclusion, while the majority of data from outcome measures were used for internal assessments, reporting, and client engagement, a strong emphasis was placed on maintaining client confidentiality through de-identification or anonymisation. The data were also used for various reporting purposes, including submission to funding agencies and internal evaluations. However, some uncertainty remained concerning the ultimate destination or use of the data, suggesting a need for clearer communication and transparency regarding data management practices within agency settings.
How Feedback Informs Practice
Feedback informed practice in several key ways. It supported client progress monitoring and the ongoing development of the therapeutic alliance, enabling clinicians to tailor their practice to better meet client needs. Respondents reflected on feedback given during supervision and used it for personal and professional development. They also used feedback to identify and address gaps in therapy or service delivery. Agencies leveraged feedback to identify trends, improve service delivery, and meet external reporting requirements.
Many respondents used feedback measures to track the progress and wellbeing of their clients. The feedback obtained helped them adjust the therapy to ensure it was meeting the client’s needs. This occurred through “tracking symptom severity, improvement over time” and “determining whether goals are being met”, and through cross-referencing with the client: “We use it as an indication of symptom severity and cross-reference it with their qualitative feedback to determine insight”. In this way, many respondents adjusted their approaches according to client feedback, ensuring that their interventions remained responsive to client needs. This involved collaborating with clients to refine goals and techniques, for example, “making practice more interactive and client-centred”; “making adjustments to therapy when client satisfaction is low”; and “using feedback to shape future sessions”. Respondents linked usage to professional collaboration, which they engaged in “to remain accountable to clients, to give feedback to GPs and other referral services”. Feedback was often integrated into supervision sessions for personal and professional development: “We work through any constructive feedback within supervision”; “[the data are] used in clinical supervision to identify strategies to help my clients”; “taking on board the feedback and actioning anything that may be within my power”.
Respondents also mentioned using feedback for administrative and external reporting purposes. This included generating reports for funding bodies, tracking key performance indicators, and satisfying reporting obligations, as indicated in the following comments: “The data is collected anonymously, the agency then compiles a report”; “reporting to Responsible Gambling fund (GambleAware) and management”. Some respondents mentioned how their agencies employed feedback measures to evaluate and improve service delivery. Feedback was used to track performance, identify trends, and inform organisational strategies: “We look at trends and what areas of service delivery have declined, improved, or stayed neutral”; “The feedback measure would be a way of justifying to the funding body of its effectiveness”. Overall, feedback was used “to improve quality of service and to maintain best practice”.
Suggestions for Making Usage Easier
Almost half of the 555 responses to the question about how to simplify usage of outcome measures indicated no need to make usage easier, either because there were no barriers to usage or, to a lesser extent, because the respondent had no interest in using outcome measures, irrespective of ease of usage. These latter respondents expressed concern about the clinical and rigid nature of outcome measures, experiencing usage as detached from the therapeutic process and viewing it as a disruption of the client–therapist relationship: “if they were better attuned to client needs”; “clients found it intruded on the counselling process”. Among those who did have suggestions for making usage easier, themes were simplification, digitisation, suitability, training, organisational support, time, and administrative burden.
Respondents expressed the need for outcome measures that are easy to use for both practitioners and clients. This included having access to shorter, simpler, and more user-friendly measures that do not take much time during therapy sessions—ideally, “a short easily accessible form”. Respondents preferred digital tools and automated systems such as apps, online forms, or software that streamline the process and offer automatic scoring and tracking, as well as the use of iPads or other devices for easy data entry by clients. They expressed wanting “an online system that automatically sends and stores client results”; a respondent who had access to this noted, “It is pretty easy now as we get an electronic report generated with the information”.
Several respondents emphasised the importance of measures that are relevant to their client base. They expressed concern that some existing measures are not always suitable for certain populations, such as children, those experiencing trauma, or older clients, for instance, “Use ones that are relevant to seniors as they do not understand most of the questions on the K10”. Some respondents called for measures that feel relevant to the therapeutic process and do not interrupt the flow of therapy, calling for them to be “human and non-clinical in design” and “something that was not derived from the psychology field like K10 and DASS”.
A desire for more training, resources, and professional development opportunities related to the use of outcome measures was clearly articulated. Respondents wanted to know about the available measures and how to use and interpret them effectively; they required “some training and also info on how it helps improve outcomes” and “training and access to all measures required in the allied health space”. Many of these respondents wanted support from their professional body, PACFA. Some suggested that PACFA provide training webinars and support for the use of outcome measures, while others wanted standardised measures recommended by PACFA and freely available from the PACFA website. One respondent noted, “I might use them if PACFA asked me to as part of a research program”, and another suggested “have them as a PACFA standard that we all use”. There was also a call for “a PACFA-endorsed best-practice guide” and for the Psychotherapy and Counselling Journal of Australia (PACJA) to publish journal articles on the use of feedback measures.
Several respondents wanted better support from their employment organisations too, including clearer guidance on how to implement and benefit from outcome measures. Some expressed a desire for their agency to take on more responsibility for handling data collection and analysis. Respondents frequently mentioned time and administrative barriers, in both public and private practice, that make it difficult to incorporate outcome measures into their practice. They requested solutions that are time-efficient and avoid increasing their workload: for instance, they wanted more “time in the session; client willingness to fill out forms”, more “time to set up an electronic survey”, and “an assistant to do the measures”.
Discussion
The current research makes a timely contribution to the understanding of outcome measure usage patterns and attitudes of Australian counsellors and psychotherapists. The findings of this research are relevant in the current climate, in which the role of counselling and psychotherapy is becoming increasingly vital in supporting the mental health and wellbeing of service consumers across Australia. This growing importance is highlighted by the recent move of the Australian Government to develop national standards for the counselling and psychotherapy professions. In May 2023, the Australian Government committed to a $300,000 investment over two years to develop these national standards. The standards aim to ensure professionals are equipped to meet the evolving needs of the community by supporting the integration of counsellors and psychotherapists into broader mental health frameworks, including primary health networks and Medicare, ensuring consistent and high-quality service delivery across the sector (PACFA, 2023).
As mental health challenges continue to rise, the counselling and psychotherapy profession will be better equipped to contribute to the public health space if, while maintaining the centrality and integrity of the therapeutic relationship, it adapts to provide effective, evidence-informed care.
Patterns of Usage and Non-Usage of Outcome Measures
The data reveal a divide in the use of formal outcome measures. While 76.6% of respondents employed outcome measures in their practice, a notable proportion of private practitioners were opting not to use them, highlighting a tension between institutional mandates for outcome measure use and the desire for a more flexible, client-centred approach. Many non-users expressed concerns that standardised tools undermine the therapeutic process, prioritising client autonomy and the organic flow of therapy. These findings underscore the importance of accounting for the diverse approaches employed by therapists in the profession-wide implementation of outcome measures. It is essential that evidence-based practices are integrated in a manner that supports, rather than disrupts, the therapeutic relationship, ensuring they enhance client care without becoming a barrier to effective practice.
Reasons for non-use of outcome measures among some practitioners stemmed primarily from a misalignment with their therapeutic orientation. Practitioners who prioritised relational and client-centred approaches often found formalised measures incompatible with the dynamic nature of long-term therapeutic work. These practitioners argued that tools like outcome measures fail to capture the complexities of progress, particularly when dealing with trauma, grief, or other sensitive issues. Instead, they favoured ongoing, qualitative dialogue as a more appropriate means of assessing client development. This concern aligns with existing critiques in the literature, which suggest that standardised outcome measures may oversimplify or misrepresent therapeutic progress, especially for vulnerable populations such as children or trauma survivors (Sales et al., 2018; Solstad et al., 2019).
Institutional Requirements and Professional Obligations
In institutional settings, the widespread use of outcome measures such as the K10, DASS, ORS, and SRS reflects the influence of agency policies and external reporting requirements. These tools are often mandated by agencies for consistency and accountability, which are significant factors, particularly in light of the current self-regulation within the counselling and psychotherapy sector. The findings demonstrate that these measures are largely driven by institutional obligations rather than clinical preferences. This divergence between private practice and institutional settings highlights a fragmented approach to outcome measurement, which could complicate efforts to create a more unified and consistent framework for practice.
The current reliance on external reporting requirements and institutional obligations suggests that outcome measures are often viewed as a necessary but secondary tool, rather than an integral part of the therapeutic process. While these measures do provide evidence that can support clinical decisions, the findings underscore the importance of implementing outcome measures that can bridge the gap between institutional requirements and clinical flexibility. A one-size-fits-all approach does not work for all settings. Instead, the findings call for diverse options in how and when outcome measures are used, recognising the differing needs of private practitioners and agency-based clinicians.
Data Handling and Transparency
The handling of data collected through outcome measures emerged as another key concern. Some respondents expressed uncertainty about where this data goes once submitted, highlighting a gap in communication between practitioners and the organisations for which they work. This lack of clarity affects trust in the process; in addition, it limits the potential benefits of data obtained from outcome measures.
Ensuring that data are not only collected but also appropriately analysed and integrated into clinical practice can significantly enhance their utility, benefiting both clients and clinicians (Lutz et al., 2022; Solstad et al., 2019). By improving communication about how outcome data are tied to funding, agencies can increase transparency and promote a better understanding among clinicians of the broader implications of their data collection efforts. Outcome measures often influence funding decisions and resource allocation (Lambert & Shimokawa, 2011; Sharples et al., 2017; Tasca et al., 2019); therefore, ensuring this connection is clearer can encourage greater ownership and participation in the process.
When counsellors and psychotherapists understand that outcome data are used for compliance and to inform and improve client care, they can take a more collaborative approach to data collection (Lambert & Shimokawa, 2011; Solstad et al., 2019; Unsworth et al., 2012). Furthermore, aligning funding with outcome measurement practices can help secure counsellor positions, support professional development, and ultimately lead to more effective, evidence-based outcomes for clients (Lambert & Shimokawa, 2011; Sharples et al., 2017; Tasca et al., 2019). This integration creates a positive feedback loop, in which funding drives better use of outcome measures, which in turn enhances clinical practices and delivers improved client outcomes (Bickman, 2008; Jensen-Doss et al., 2018; Reese et al., 2009).
Feedback and Informing Practice
The research revealed that feedback from outcome measures plays a central role in shaping clinical practice. Many respondents noted that feedback helps monitor client progress, enhance the therapeutic alliance, and guide professional development. Additionally, agencies leverage feedback to identify trends and improve service delivery. However, some respondents expressed concerns about the clinical and rigid nature of outcome measures, feeling that these tools can disrupt the relational flow of therapy. This disconnect between the perceived value of feedback and the rigidity of some outcome measures indicates an area for research and policy attention.
The preference for outcome measures that are more aligned with the therapeutic process, such as tools that are easy to use, digitally based, and human-centred, reflects a broader desire for tools that complement rather than disrupt therapy. Such desire for more intuitive, flexible measures is important to address. Measures need to be not only evidence-informed but also attuned to the therapeutic process, respecting the centrality and nuanced quality of the client–therapist relationship. The development of tools that are both user-friendly and effective in capturing the nuances of therapy will be a critical consideration in the development of future outcome measure frameworks, and in profession-wide uptake.
Barriers to Usage and Suggestions for Improvement
Three main barriers to broader adoption of outcome measures emerged clearly from the findings: time constraints, administrative burden, and lack of appropriate training. Participants expressed a clear preference for digital tools that could simplify the process and reduce the time and administrative load associated with using outcome measures. The call for more training and professional development opportunities was also prominent, and many practitioners indicated that additional support from professional bodies such as PACFA and ACA would help them more effectively integrate outcome measures into their practice.
These findings are particularly relevant as the Australian Government works to establish national standards for the sector (Department of Health and Aged Care, 2024). Professional development and training in the use of outcome measures will need to be considered within these standards. Training and support will be essential in ensuring that practitioners are equipped to use outcome measures in a way that enhances clinical practice while avoiding unnecessary burdens on time and resources. Additionally, the development of user-friendly, digital tools could help address some of the concerns about the administrative load that currently inhibits broader usage.
Implications
Training
Our findings indicate that a broad range of measures exists but is not widely known, which presents an opportunity to educate clinicians and trainees in the use and benefits of outcome measures. During training it may be beneficial to expose clinicians to a variety of measures and their applications, including individualised, process, and qualitative measures. Ideally, training would cover both the application of these measures and the underlying rationale for their clinical use, with a particular focus on the relationship between the therapeutic alliance and treatment outcomes.
Tailored Outcome Measures
Generic outcome measures affect not only therapists but the clients whose responses are being sought for their benefit. Clients across the sociocultural spectrum interpret mental health concepts differently (Carpenter-Song et al., 2010; Kato, 2018; Rivera & Bennetto, 2023). This can lead to misunderstandings of the questions asked of them in outcome measures, or reluctance to disclose feelings about the therapeutic process. Non-native English speakers may find it difficult to understand the wording, which affects their ability to respond accurately. Similarly, clients with limited reading or writing skills may struggle to complete the measures, leading to inaccurate responses (Aldalur et al., 2022; Unsworth et al., 2012). Clients managing chronic health conditions might have overlapping symptoms that complicate their responses to mental health questionnaires. Those with a history of trauma can be triggered into distress, causing dysregulation that makes it difficult for them to engage with the questions; indeed, asking them to do so may itself be unethical. Those who have experienced stigma due to mental ill health may feel uncomfortable or unwilling to disclose their feelings, leading to underreporting. Clients who have had negative experiences with health care systems may be distrustful of mental health assessments, potentially skewing their responses. For these reasons, the effective use of outcome measures requires that they be tailored to client needs and presentations and administered in a culturally safe way (Center for Substance Abuse Treatment (US), 2014; de Jong et al., 2012; Errázuriz & Zilcha-Mano, 2018).
Limitations
While this study provides insights into the patterns and barriers surrounding the use of outcome measures among Australian counsellors and psychotherapists, several limitations should be considered in interpreting the findings. First, the study relied on a purposive sampling method by distributing a survey to members of PACFA. This sampling strategy may limit the generalisability of the findings to the broader Australian counselling and psychotherapy workforce. Specifically, the sample may not fully represent practitioners who are not registered with PACFA or those in private practice who may be less likely to engage with professional associations. Although the study surveyed a substantial proportion (34%) of PACFA’s registered clinicians, the experiences of those outside this membership—such as those operating independently or under different associations—may not have been adequately captured. Additionally, practitioners who chose not to participate in the survey may differ from those who did, further constraining the representativeness of the sample.
The study’s reliance on descriptive statistics may not have captured the depth of practitioners’ experiences, particularly in complex clinical contexts. While open-ended questions were included in the survey, its design—focusing on broad categories—may not have allowed participants to articulate fully the subtleties of their therapeutic practices or the perceived value of outcome measures. Future research could benefit from in-depth qualitative methods, such as individual interviews, to gain more nuanced insights.
Furthermore, the qualitative analysis in this study, conducted using reflexive thematic analysis, is inherently shaped by the researcher’s interpretive lens. While this approach is widely recognised as a valid and rigorous method within qualitative research, subjectivity remains an integral component of the interpretive process. As such, it is crucial to acknowledge the potential for inconsistencies or biases, particularly during the coding and theme development stages, where researcher reflexivity plays a pivotal role in ensuring analytical rigour. To mitigate this potential, both researchers engaged in the coding process, ensuring that the themes were grounded in the data and reflected the participants’ perspectives as far as possible. Additionally, an inter-coder reliability process was followed, whereby discrepancies between the researchers were discussed, resolved, and then third-checked using AI to enhance consistency and reliability in the analysis.
Although this study identified several barriers to the use of outcome measures, such as time constraints, complexity, and misalignment with client-centred approaches, the reasons behind these barriers were not explored in depth. For example, while some practitioners cited time constraints as a barrier, this could be influenced by factors such as caseload, institutional expectations, or lack of training, which were not systematically examined in this study. The broad nature of the barrier questions meant that specific details on the nature of these challenges were not captured. Future research could provide more in-depth exploration of these barriers, potentially through qualitative interviews or longitudinal studies that track changes in practitioners’ views and experiences over time.
The study did not extensively examine how outcome measures are integrated within various therapeutic modalities. While it highlighted differences between institutional and private practice settings, it did not explore how these tools are perceived and used across different therapeutic orientations (e.g., trauma-informed therapy, cognitive behavioural therapy, psychodynamic therapy). Given that different therapeutic approaches may view the role of outcome measures differently, future research should explore how outcome measures are adapted and used across various therapeutic modalities to ensure they are relevant and aligned with the goals of the therapy.
Conclusion
We found that the majority of counsellors and psychotherapists participating in our study use outcome measures for a variety of reasons, most commonly to meet institutional requirements and because of a belief in their utility for tracking client progress. A significant proportion of respondents also trust the measures they use, particularly those that are standardised and validated, such as the K10 and DASS. However, barriers to the broader use of these tools persist, particularly time constraints, lack of training, the complexity of some measures, and concerns about their relevance or utility in clinical practice. These factors complicate the profession-wide adoption of outcome measures; many practitioners reported that the time required for administration and interpretation detracts from valuable therapeutic work.
The predominant reason for non-use of outcome measures among practitioners is that many existing tools—especially those developed for clinical psychologists—are not always transferable to therapeutic models that centre on the therapeutic alliance. Relational and client-centred approaches, often prioritised in counselling and psychotherapy, may consider these tools incompatible with their focus on qualitative, ongoing dialogue and the dynamic nature of therapeutic work. This misalignment between the standardised, structured approach of outcome measures and the more fluid, human-centred model of therapy was frequently highlighted by non-users in the study.
In the current context of mental health care under-resourcing, the increasing standardisation of service provision, and the Australian Government’s push to develop national standards for the profession, our findings propel us in two key directions.
First, there is a clear need for the development of more tailored, flexible outcome measures that are better suited to the diverse approaches within counselling and psychotherapy. These tools should respect the centrality of the therapeutic relationship, capturing client progress in a way that complements rather than disrupts therapy.
Second, our findings call for a more robust provision of training by training institutions and professional associations in the rationale, benefits, limitations, and application of a range of existing outcome measures. Equipping practitioners with the skills to integrate these tools effectively into their practice, while understanding their limitations, will be essential to ensuring that they are used in a manner that enhances rather than hinders client care.
The integration of outcome measures into counselling and psychotherapy practice must be a balanced process. While evidence-informed practices are important for improving client outcomes, the profession must remain attuned to the diverse therapeutic approaches that characterise it. Tailoring outcome measures to fit the context of therapy and providing adequate training for their use will ensure that they serve as a valuable tool for enhancing client care while preserving the integrity of the therapeutic alliance.