Bloch-Atefi, A., & Day, E. (2025). The use of therapeutic outcome measures by Australian psychotherapists and counsellors. Psychotherapy and Counselling Journal of Australia. https://doi.org/10.59158/001c.137570

Abstract

Background

Outcome measures are increasingly emphasised as effective tools in Australian mental health policy. However, limited understanding exists about usage patterns and barriers among counsellors and psychotherapists practising in Australia. This study addresses this gap by examining the prevalence, usage, and perceptions of outcome measures within the Australian counselling and psychotherapy workforce.

Objectives

This study explored whether and why members of the Psychotherapy and Counselling Federation of Australia (PACFA) use outcome measures and identified what factors influence their choices.

Method

A mixed-methods design was used, combining quantitative and qualitative data from an online survey distributed to PACFA members. A total of 1,177 respondents participated, representing 34% of PACFA’s registered clinicians. Quantitative data were analysed using descriptive statistics via the Statistical Package for the Social Sciences, and qualitative responses were analysed thematically.

Results

The majority of respondents (76.6%) used outcome measures regularly, primarily because of institutional requirements and their utility in tracking client progress. Barriers included time constraints, complexity of use and evaluation of the measures, and perceived misalignment with client-centred approaches. Non-users cited concerns about the incompatibility of standardised tools with therapeutic models focused on relational dynamics.

Conclusions/Implications

The study finds a need for outcome measures that align with diverse therapeutic approaches and for training practitioners to use them effectively. These findings are timely as the Australian government moves towards establishing national standards for the sector and have implications for policy, practice, and professional development. Future research should focus on developing flexible, user-friendly tools and addressing barriers to their adoption in both public and private practice settings.

The emphasis on measuring the outcomes of mental health services has grown significantly over recent years in policy documents and best practice guidelines in Australia. Both professional peak bodies—the Psychotherapy and Counselling Federation of Australia (PACFA) and the Australian Counselling Association (ACA)—highlight the essential role of integrating outcome measures into evidence-informed practice. This priority is clearly outlined in their respective scope of practice documents (ACA, 2024; PACFA, 2018). The focus on outcome measures has gained renewed interest with the development of the national standards for counsellors and psychotherapists (Department of Health and Aged Care, 2024). The primary objective of these outcome measures is twofold: to enhance clients’ experiences of counselling and psychotherapy services, and to ensure accountability among clinicians in the delivery of their clinical work.

In compliance with the Australian National Mental Health Policy (Australian Health Ministers, 1992), clinicians providing counselling in public mental health services across all states are required to monitor client progress routinely. This monitoring has been associated with improved client outcomes (Bickman et al., 2011; Lambert et al., 2003; Reese et al., 2009). Similarly, psychologists and social workers delivering counselling under Medicare’s Better Access initiative, previously known as the Better Outcomes in Mental Health Care scheme, must use specific assessment tools, such as the Health of the Nation Outcome Scales (Wing et al., 1998), the Kessler-10 (K10; Andrews & Slade, 2001), and the Life Skills Profile-16 (LSP-16; Rosen et al., 1989; see also Kilbourne et al., 2018).

In contrast, counselling and psychotherapy organisations in the non-governmental and private sectors show significant variability in their program evaluation methodologies. This includes differences in the extent to which they employ these measures and the types of assessments they use (Posavac, 2015). Within the private sector and private practice, the level of adoption of these measures among clinicians therefore remains unclear, despite strong encouragement from regulatory bodies such as PACFA (2019), the Australian Psychological Society (2007), and the Australian Association of Social Workers (2013). Clinicians have tended to rely on their own judgement for monitoring clients’ responses to therapy (Garland et al., 2003), which may indicate underlying obstacles to the use of formal assessment tools. Given that clinical judgement is not a highly reliable method for identifying clients who are decompensating or failing to progress, it is important to uncover the barriers that hinder practitioners from employing more effective strategies for tracking client progress (Ionita et al., 2020; Jensen-Doss et al., 2018).

To our knowledge, surveys assessing ongoing use of progress monitoring have mostly sampled psychologists and doctoral students (Ionita & Fitzpatrick, 2014; Overington et al., 2015). Although much has been written about the benefits and drawbacks of outcome measure usage, studies eliciting the views and experiences of practising counsellors and psychotherapists are rare, especially in Australia. Moreover, the counselling and psychotherapy profession in Australia continues to face barriers to participating in publicly funded health care. To help position the counselling and psychotherapy profession as an equal player in the Australian mental health workforce, we aimed to ascertain the prevalence and patterns of outcome measure usage by PACFA-registered counsellors and psychotherapists, and to identify mechanisms and barriers affecting the profession-wide use of these tools in counselling and psychotherapy.

Method

The study employed a mixed-methods design to gather quantitative and qualitative data via an online anonymous Qualtrics survey distributed to members of PACFA. The University of Adelaide’s Human Research Ethics Committee granted approval for this research (H-2020-170), and the PACFA Research Committee served as the research reference group. Participants were recruited via email, posts on the PACFA website and Facebook page, and the PACFA membership newsletter. This purposive sampling method ensured targeted access to a broad sample of psychotherapists and counsellors in Australia. Eligible participants were qualified counsellors and psychotherapists currently working in a paid or unpaid capacity in Australia. Participants were required to provide informed consent prior to responding to the survey questions. A total of 1,177 respondents participated in the online survey, representing 34% of clinicians on the PACFA registry at that time.

The survey utilised a combination of multiple-choice and open-ended questions to gather comprehensive data from participants. These questions explored demographic characteristics, qualifications, cultural background, and professional details such as length of registration with PACFA, type of employment, positions held, working hours, remuneration, main client groups, primary client presentations, and predominant practice modalities. Additionally, participants were asked about their perceptions and use of outcome measures. To ensure methodological rigour, the survey instrument was developed according to insights from previous PACFA workforce studies (Bloch-Atefi et al., 2021; Schofield, 2008; Schofield & Roedel, 2012) and refined with feedback from the PACFA Research Committee. A pilot test involving a small sample of experienced practitioners was conducted to validate the questionnaire, enhance question clarity, and address potential ambiguities.

The collected data were analysed using a mixed-methods approach. Quantitative data were examined through the Statistical Package for the Social Sciences (SPSS) to produce descriptive statistics; qualitative data were subjected to thematic analysis for deeper insights. This study builds on previous PACFA workforce research—the earliest study in this series was conducted in 2004 (Schofield, 2008)—and continues the exploration of trends and practices within the profession.

Results

Data Analysis

Quantitative Data

The quantitative data collected through the online questionnaire provided insights into the demographics of the sample, which predominantly comprised Australian counsellors and psychotherapists, and their use of formal, informal, or no outcome measures. Descriptive statistics, presented as frequencies and percentages, summarised participant characteristics, including employment type, roles, working hours, remuneration, primary client groups, key client presentations, and dominant practice modalities. All quantitative analyses were conducted using SPSS version 23. Tables in the Appendix detail the frequency distributions and percentages for the various survey questions.

Qualitative Data

Reflexive thematic analysis, as outlined by Braun and Clarke (2023), was employed to analyse the qualitative data collected from the open-ended survey responses. This approach extends traditional topic summarisation by adopting a deeper, interpretive stance that focuses on the construction of meaning-based narratives. The method is underpinned by the principles of reflexivity, subjectivity, and researcher interpretation, ensuring that the analysis is not only systematic but also contextually rich and theoretically informed.

Our analysis aimed to explore and understand participants’ experiences and perceptions of the use of outcome measures in their practice. The process began with line-by-line coding, whereby each segment of text was carefully examined to identify initial patterns and categories. These categories were then grouped and refined through an iterative process to develop overarching themes.

A descriptive qualitative theoretical approach was applied to stay close to the data and to provide clear, coherent interpretations of participants’ responses. An inductive analytic approach guided the identification of recurring patterns, which were subsequently organised into coherent themes. This offered a structured yet flexible framework for interpreting the qualitative data.

The first author (ABA) conducted the initial round of coding, documenting memos on initial impressions of the data. Following the initial coding, the open codes were refined, clustered, and reorganised during the second round of coding. The identified themes were discussed in detail with the second author (ED), who cross-checked the results for accuracy, consistency, and alignment with the data. The authors worked collaboratively to ensure that the themes were representative of the participants’ views and accurately reflected the content of the responses. This process involved resolving any discrepancies through discussion and consensus, ensuring that the final thematic categories were well grounded in the data. After the themes were collaboratively refined, a final manual review was conducted to ensure that the analysis was both rigorous and robust. This process enabled identification of phenomenological themes that co-emerged from the data, offering deeper insights into the participants’ perspectives on outcome measures. Although inter-coder reliability is not essential in reflexive thematic analysis, we used it to enhance the credibility of the analysis and to ensure a level of consistency across coding processes, bringing coherence to the findings. Additionally, the qualitative themes were cross-validated using an Artificial Intelligence (AI) platform (ChatGPT Update 4; Bijker et al., 2024; Cheng et al., 2023) to further support the robustness of the findings through a layered approach to meaning making.

Survey Questions and Responses

Table 1 lists the open-ended survey questions posed to participants, along with the corresponding number of responses received. These questions aimed to gather detailed insights into participants’ experiences with outcome measures in therapeutic practice.

Table 1. Open-Ended Survey Questions Regarding Use of Outcome Measures
Question No. of responses
1. If no, please briefly explain the reason(s) you don’t use outcome measures 141
2. If yes, please specify the measures you are using 810
3. Please briefly explain the reason(s) you use outcome measures in general and why this/these one/s in particular 810
4. At what stage do you use outcome measures in your practice?—Other. Please specify 345
5. How are these outcome measures completed?—Other. Please specify 17
6. What happens to the data from these outcome measures?—Other. Please specify 24
7. How do you and/or your agency use the information from feedback measures to inform your practice? 616
8. What would make it easier for you to include outcome measures and/or psychotherapy/counselling quality measures in your practice? 555

Demographic Characteristics: Age, Gender and Cultural Identity

The participants identified predominantly as women (81.4%, n = 959), while a smaller proportion identified as men (16.7%, n = 196). Twelve respondents (1%) identified as non-binary or gender diverse, and a very small number identified as “Woman, Non-binary” or “Man, Non-binary”. A further 0.5% (n = 6) indicated that their gender identity was not listed in the options provided (see Table A3 in the Appendix).

Participants’ ages ranged from 22 to 83 years (see Table A5 in the Appendix). The mean age was 54.33 years (SD = 12.19), and the median age was 55.00 years (IQR = 17). This indicates a relatively normal age distribution with moderate variability in the sample. The data suggest a diverse age range among the surveyed counsellors and psychotherapists, and a broad spectrum of cultural identities. The most common cultural identity was Australian (49%, n = 581), followed by English Australian (9%, n = 109), European Australian (6%, n = 66), New Zealand (3%, n = 33), and Irish (2%, n = 27). Fourteen (1.2%) respondents were Aboriginal and/or Torres Strait Islander. Other identities included Italian (2%, n = 22), South African (2%, n = 22), and Chinese (2%, n = 21). Further demographic information is detailed in Table A4 in the Appendix.

Years of Experience, Registration Status, Professional Role, Place of Work, Annual Income

The professional experience of participants varied widely, and the majority had more than five years of professional practice experience. Specifically, 26.6% of participants reported having 10 to 20 years of experience, followed by 20.9% who had more than 20 years of experience. A smaller proportion, 15.2%, had between one and three years of experience. A breakdown of years of experience is presented in Table A6 in the Appendix. Participants in this study were predominantly registered clinical counsellors (33.5%), followed by registered clinical psychotherapists (17.7%). Additionally, a notable portion of respondents held provisional registration status: provisionally registered counsellors comprised 14.8% and provisionally registered psychotherapists comprised 7.2%. A smaller number of respondents reported other registration statuses or did not indicate their registration status (see Table A7 in the Appendix).

Most participants (87.6%) indicated that their current role was as a qualified psychotherapist, Indigenous healing practitioner, or counsellor in practice. A smaller proportion worked in academic roles (6.5%) or in managerial or administrative roles (5.6%). A negligible number of respondents (0.34%) did not specify their current position (see Table A2 in the Appendix).

The majority of participants in this study worked in private practice: 52.8% reported this as their primary workplace, while 25.3% indicated they held multiple types of employment and roles, which reflects a diverse range of professional engagements. Smaller proportions of participants worked in other settings, including agencies/organisations (11.5%), health care settings (3.8%), and the third/charity/voluntary sector (2.2%). Other less common workplace settings included high schools (2%), employee assistance programs (EAPs) or workplaces (0.9%), non-private settings (0.6%), and universities (0.6%). A small number of respondents preferred not to specify their workplace (0.3%; see Table A10 in the Appendix).

The annual income of participants varied significantly. The largest proportions earned between $50,001 and $75,000 (21.6%) or between $75,001 and $100,000 (18.2%). A notable number of respondents reported earning between $40,001 and $50,000 (10%), while 10.4% of participants preferred not to disclose their income. Smaller percentages of respondents reported earning under $10,000 (6.8%) or between $10,000 and $20,000 (5.3%), and 5.7% earned more than $120,000 (see Table A13 in the Appendix).

Outcome Measures

Most participants (76.6%) reported using outcome measures in some capacity. Of these, 43.7% used informal outcome measures, while 22.2% employed both formal and informal measures. A smaller proportion (10.7%) indicated using formal measures only. Conversely, 17.3% of participants did not use any outcome measures. Additionally, 3.1% specified alternative measures, and 3% left the question unanswered (see Table A1 in the Appendix).

Measures Used

The majority (76.6%) of respondents reported using outcome measures. Table 2 summarises the primary outcome measures used by participants, presenting the respective number of responses and providing a brief description of each measure.

Table 2. Summary of Outcome Measures Used by Participants
Name of scale No. of responses Description Reference
Informal/customised measures 690 Informal, often self-created measures based on conversational or narrative approaches, tailored to specific clients and therapeutic methods. N/A
DASS (Depression Anxiety Stress Scales) 195 A set of three self-report scales measuring depression, anxiety, and stress, used to assess emotional states. Lovibond and Lovibond (1995)
K10 (Kessler Psychological Distress Scale) 92 A checklist to assess psychological distress based on the presence of anxiety and depression symptoms over the past four weeks. Kessler et al. (2002)
ORS (Outcome Rating Scale) 71 A brief measure of client functioning to track progress in therapy. Miller and Duncan (2000)
SRS (Session Rating Scale) 63 A measure of client satisfaction and the therapeutic alliance. Miller and Duncan (2004)
PHQ-9 (Patient Health Questionnaire) 11 A depression screening tool reflecting diagnostic criteria for depression based on DSM-5. Spitzer et al. (1999)
Trauma-specific measures 11 Measures used specifically for trauma survivors, particularly those with complex PTSD or trauma-related disorders. N/A
SUDS (Subjective Units of Distress Scale) 12 A self-report tool measuring the intensity of emotional distress, with a numerical value (0–10 or 0–100) representing current discomfort. Wolpe (1973)
Child-centred measures 6 Child-appropriate measures, though specific names were not always provided. Respondents highlighted the need for such measures in child therapy. N/A

Note. DSM-5 = Diagnostic and Statistical Manual of Mental Disorders (5th ed.).

This ranking highlights a mix of formal and informal measures used by practitioners. The most formalised and widely accepted measures, such as the Kessler Psychological Distress Scale (K10), Depression Anxiety Stress Scales (DASS), Outcome Rating Scale (ORS), and Session Rating Scale (SRS), were the most frequently used across practices.

Reasons for Usage

While workplace and external requirements were a primary reason for usage, the majority of respondents in the survey regarded measures as validated instruments that are beneficial for tracking client progress, enhancing client awareness and involvement, and supporting communication with other health care professionals, as well as reporting and accountability. Table 3 summarises key themes regarding the rationale behind the utilisation of outcome measures, providing insights into their importance and practical applications within therapeutic contexts.

Table 3. Summary of Key Themes Regarding the Utilisation of Outcome Measures
Theme Details Example responses
Tracking client progress Outcome measures help indicate symptom severity, aid diagnosis, and measure treatment effectiveness. They provide feedback on client progress, even when clients may not perceive it. “Baseline measures and subsequent measures help indicate symptom severity, aid diagnosis, and indicate treatment effectiveness.”
“[To] provide feedback when clients say they haven’t progressed, even when they have (or vice versa).”
“I want to make sure the client is being looked after and gets the outcomes they are seeking.”
Evidence-informed practice Outcome measures align clinical practice with research, providing measurable data to guide interventions and support evidence-based approaches. “They help with research informing treatment efficacy and provide potential for changes in treatment.”
“Whilst studying, I researched the ORS and was impressed with the evidence that supported the tool.”
Enhancing client awareness and involvement Outcome measures help clients recognise their progress or areas needing attention, fostering greater engagement in the therapeutic process. “They inform the client and provide feedback regarding change.”
Communication, reporting, and accountability Psychometrically validated measures support professional communication with other health care providers, enhance confidence in therapy, and justify therapeutic outcomes to stakeholders. “These ones in particular are psychometrically sound so that they are accepted and understood by the wider mental health community.”
“I use tools that are accessible to counsellors without psychology qualifications.”
“They provide a clearer picture when doing a report on how clients are responding to therapy.”

Note. ORS = Outcome Rating Scale.

Overall, while the use of outcome measures was driven chiefly by professional obligations, they were trusted as evidence-informed support for clinical work that benefits clients, clinicians, and the profession.

Reasons for Non-Usage

Although only a small portion of respondents reported non-usage of outcome measures, their reasons for this were multifaceted. Reasons ranged from philosophical objections to practical and contextual challenges, highlighting the complexity of therapeutic work and professional concern about the appropriateness of standard outcome measures within diverse therapeutic contexts. Table 4 outlines key themes related to the rationale for not utilising outcome measures, providing insights into the perceived limitations or challenges associated with their use in therapeutic contexts.

Table 4. Summary of Key Themes Related to Not Utilising Outcome Measures
Theme Details Example responses
Non-alignment with therapeutic orientation Some respondents reported that outcome measures did not align with their therapeutic approach, particularly in relational or client-empowered practices. “I work relationally … we talk about their satisfaction with the work we are doing.”
“It is entirely up to the client to engage or disengage from my practice and their right and empowerment to do so.”
Unsuitability for process or specific client populations Outcome measures were considered disruptive to the therapeutic process or unsuitable for certain client groups, such as children, trauma survivors, or those with complex psychological conditions like CPTSD. “There is no way to measure if they are ‘better’ or not. The work with this population is life long.”
“My work is mainly in grief and loss. I find that asking people to rate their experience can be disruptive.”
“I find them to interfere with the relationship … outcomes do not capture a person’s changing sense of self.”
Lack of training, exposure, or support Respondents cited insufficient training or exposure to outcome measures, or lack of organisational support for their use. “I have never heard of this … no idea what they are.”
“Too busy and don’t have the support for that.”
“Most of my work is EAP brief interventions … employer has their own measures in place.”
Preference for collaboration Some respondents preferred using informal assessments or ongoing dialogue with clients rather than formal outcome measures. “I check in with them around the direction we are headed and what they are finding helpful.”
“We note changes as they happen.”
Scepticism about validity of outcome measures Some respondents doubted the validity of outcome measures in capturing clients’ progress, particularly in long-term therapeutic work. “My patients have always viewed them as an evaluation of me rather than the therapeutic process.”

Note. CPTSD = complex post-traumatic stress disorder; EAP = employee assistance program.

Non-users expressed a broad sentiment that formal outcome measures, while potentially useful in certain settings, often fail to capture the complexity and depth of therapeutic work. These respondents preferred more individualised, qualitative approaches to evaluating progress, prioritising the therapeutic relationship, client autonomy, and the natural unfolding of the therapeutic process.

Stage of Usage

The findings indicate a broad range of practices regarding the timing of outcome measure usage in therapy, reflecting both institutional requirements and individual therapist preferences (see Table A14 in the Appendix). A significant proportion of respondents (40.7%) reported using outcome measures at the end of the therapeutic process, which suggests that many practitioners use these tools primarily to evaluate the overall effectiveness of therapy. A slightly smaller group (36.9%) indicated that they employed outcome measures at the beginning of therapy, typically as baseline assessments that inform treatment planning. Additionally, 25.7% of respondents used outcome measures at the end of each session, demonstrating a commitment to ongoing evaluation and progress monitoring throughout therapy. A smaller group (10.7%) reported using outcome measures at the beginning of each session, which potentially indicates a more dynamic, session-specific approach to assessment.

Interestingly, 31.2% of participants selected “Other”, which suggests that some practitioners adopt flexible or context-specific approaches to the timing of outcome measure usage, including intermittent use or application at significant therapeutic milestones. Moreover, 13.3% of respondents did not select any stage, which may reflect uncertainty or a lack of engagement with outcome measures in practice.

The timing of outcome measure usage was influenced by both institutional frameworks and the therapeutic context. For some respondents, institutional requirements—such as organisational policies, mental health care plans, or specific reporting guidelines—dictated when outcome measures were used. For example, some practitioners indicated using specific measures, such as the ORS and SRS, at predefined points, like the beginning and end of each session, or at fixed intervals (e.g., first and sixth sessions for the DASS-21). Others used predefined stages, including assessments at the beginning, middle, and end of therapy, or periodically throughout the course of treatment, often based on client progress or specific therapeutic milestones.

In addition to these structured approaches, some practitioners employed a more flexible, client-driven model. These therapists tailored the use of outcome measures to the individual client’s needs; for example, assessments were conducted at key moments, such as when a client reported increased symptoms of anxiety or depression, or when progress was noted. Such approaches were often guided by client feedback, therapist intuition, and the perceived therapeutic benefit of the measures.

These findings underscore the adaptive nature of outcome measure usage in therapy, demonstrated by practitioners employing diverse strategies based on therapeutic goals, client needs, and professional judgement. Periodic or milestone-based assessments were prevalent, and therapists regularly evaluated client progress at intervals ranging from every few sessions to more structured points, such as midway through therapy or at the end of a defined treatment phase. In doing so, therapists ensured the relevance and efficacy of outcome measures, fostering an approach that is both client-centred and contextually informed.

How Measures Are Completed

Findings concerning the completion of outcome measures in agency settings reveal distinct preferences for particular methods of administration (see Table A16 in the Appendix). The majority of respondents working within agencies (63.3%) indicated a preference for completing outcome measures electronically, suggesting that digital formats are favoured for their ease of use, efficient data storage, and streamlined analysis. In contrast, 25.2% of respondents reported using handwritten methods, implying that some agencies or practitioners continue to rely on traditional paper-based approaches. Verbal completion of outcome measures was noted by 5% of respondents, potentially reflecting the use of this method in specific therapeutic contexts or with particular client populations. An additional 6.5% of respondents selected “Other”, indicating the use of various alternative methods, although no specific details were provided in the dataset. These results suggest that while electronic completion is the predominant approach, multiple methods are still in use, potentially due to varying agency requirements, client preferences, and practitioner discretion.

A considerable number of respondents indicated that they employed a combination of methods to complete outcome measures, often incorporating electronic, handwritten, and verbal approaches. Some practitioners reported flexibility in adapting their methods based on client needs or session context, for example, “All, depending on client’s preferences and ability”, and “All of the above depending on the method of delivery—online, by phone, or in person”. Verbal methods were particularly common in telephone sessions or private practice settings, as expressed in the following: “Currently verbal for telephone and entered into computer system”, and “Verbal in private practice”. One respondent noted that feedback measures are routinely used in educational contexts, such as the supervision of counselling students: “Master of Counselling students evaluate my 25 hours of supervision with them”.

The data indicate a diverse range of methods employed by Australian counsellors in completing outcome measures across different settings. Counsellors reported using a combination of handwritten and verbal methods, with some integrating electronic systems within agency environments. Verbal methods were frequently used during telephone sessions, with the data later entered into electronic records. Client preferences and their ability to engage with various formats were important factors influencing the choice of method; many practitioners demonstrated a flexible approach to service delivery, whether online, by phone, or in person. In summary, counsellors commonly employed a mix of handwritten, electronic, and verbal methods, adapting their approach to the needs of the client and the mode of service delivery.

Outcome Measure Data Usage in Agency Settings

The handling and use of data from outcome measures in agency settings varied across respondents, reflecting diverse practices. The majority (66.6%) of participants reported entering outcome measure data into an agency database, which indicates a prevalent reliance on electronic systems for data storage and management. This method facilitates efficient tracking, analysis, and access to data, thereby supporting systematic record-keeping and reporting within agency contexts.

A smaller proportion of respondents (12.2%) indicated that the data from outcome measures were used to generate written reports submitted to management. This highlights the role of outcome measures in formalising communication with management or other stakeholders, providing a structured means of documenting and justifying therapeutic interventions and outcomes. Additionally, 10.1% of respondents incorporated the data into personal note taking, which suggests that some practitioners prefer to maintain the data within their own clinical records for reference, assessment, and ongoing treatment planning. This practice may serve as part of a reflective process or guide further clinical decision-making. Furthermore, 10.4% of participants selected the “Other” category, indicating the use of alternative methods for managing data. While specific details were not provided, this category likely reflects diverse practices tailored to the specific needs of the organisation or clients, including methods not captured by the more common responses. A small percentage of respondents (0.7%) did not specify how the data were managed, which may indicate uncertainty or a lack of clarity regarding data-handling procedures in their organisations.
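The reported categories are mutually exclusive and exhaustive, which can be checked from the figures above. The following minimal sketch tabulates the distribution as stated in the text (percentages only; underlying counts were not reported, and the category labels are abbreviated for illustration):

```python
# Reported distribution of how outcome measure data were handled in agency
# settings. Percentages are taken from the text; raw counts are not given.
handling = {
    "Entered into agency database": 66.6,
    "Written reports to management": 12.2,
    "Other": 10.4,
    "Personal note-taking": 10.1,
    "Not specified": 0.7,
}

# The categories account for all respondents: percentages sum to 100.
total = sum(handling.values())
print(round(total, 1))  # → 100.0
```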

The findings suggest a general lack of awareness or concern regarding the precise handling of outcome measure data. Nonetheless, several common practices emerged. Most respondents emphasised that data were anonymised or de-identified before being shared externally, ensuring client confidentiality while enabling reporting to bodies such as primary health networks, funding agencies, or EAP providers; anonymised data were also commonly entered into agency databases for internal record-keeping, and a few responses suggested that de-identified data were used for research purposes or included in evaluations. Some respondents expressed uncertainty about the final destination of the data; for example, one noted, “Passed to manager. Not sure where it goes from there”, indicating a lack of transparency in the procedural flow of data within certain agencies. Other respondents highlighted the importance of collaboration with clients in the use of outcome measure data, particularly in reviewing progress or discussing results during sessions: “Individual scores are reviewed in session with clients.” This practice reflects the therapeutic value some practitioners place on outcome measure data as a tool for fostering client engagement and reflection, ensuring that clients are actively involved in monitoring their own progress.
Nevertheless, several respondents expressed uncertainty regarding the exact use or destination of the data after collection. This may indicate gaps in communication within agencies about data management procedures or a lack of resources and time dedicated to using the data fully in a therapeutic context.

In conclusion, while the majority of data from outcome measures were used for internal assessments, reporting, and client engagement, a strong emphasis was placed on maintaining client confidentiality through de-identification or anonymisation. The data were also used for various reporting purposes, including submission to funding agencies and internal evaluations. However, some uncertainty remained concerning the ultimate destination or use of the data, suggesting a need for clearer communication and transparency regarding data management practices within agency settings.

How Feedback Informs Practice

Feedback informed practice in several key ways. It supported client progress monitoring and the ongoing development of the therapeutic alliance, enabling clinicians to tailor their practice to better meet client needs. Respondents reflected on feedback given during supervision and used it for personal and professional development. They also used feedback to identify and address gaps in therapy or service delivery. Agencies leveraged feedback to identify trends, improve service delivery, and meet external reporting requirements.

Many respondents used feedback measures to track the progress and wellbeing of their clients. The feedback obtained helped them adjust the therapy to ensure it was meeting the client’s needs. This occurred through “tracking symptom severity, improvement over time” and “determining whether goals are being met”, and through cross-referencing with the client: “We use it as an indication of symptom severity and cross-reference it with their qualitative feedback to determine insight”. In this way, many respondents adjusted their approaches according to client feedback, ensuring that their interventions remained responsive to client needs. This involved collaborating with clients to refine goals and techniques, for example, “making practice more interactive and client-centred”; “making adjustments to therapy when client satisfaction is low”; and “using feedback to shape future sessions”. Respondents linked usage to professional collaboration, which they engaged in “to remain accountable to clients, to give feedback to GPs and other referral services”. Feedback was often integrated into supervision sessions for personal and professional development: “We work through any constructive feedback within supervision”; “[the data are] used in clinical supervision to identify strategies to help my clients”; “taking on board the feedback and actioning anything that may be within my power”.

Respondents also mentioned using feedback for administrative and external reporting purposes. This included generating reports for funding bodies, tracking key performance indicators, and satisfying reporting obligations, as indicated in the following comments: “The data is collected anonymously, the agency then compiles a report”; “reporting to Responsible Gambling fund (GambleAware) and management”. Some respondents mentioned how their agencies employed feedback measures to evaluate and improve service delivery. Feedback was used to track performance, identify trends, and inform organisational strategies: “We look at trends and what areas of service delivery have declined, improved, or stayed neutral”; “The feedback measure would be a way of justifying to the funding body of its effectiveness”. Overall, feedback was used “to improve quality of service and to maintain best practice”.

Suggestions for Making Usage Easier

Almost half of the 555 responses to the question about how to simplify usage of outcome measures indicated no need to make usage easier, either because there were no barriers to usage or, to a lesser extent, because the respondent had no interest in using outcome measures, irrespective of ease of usage. These latter respondents expressed concern about the clinical and rigid nature of outcome measures, experiencing usage as detached from the therapeutic process and viewing it as a disruption of the client–therapist relationship: “if they were better attuned to client needs”; “clients found it intruded on the counselling process”. Among those who did have suggestions for making usage easier, themes were simplification, digitisation, suitability, training, organisational support, time, and administrative burden.

Respondents expressed the need for outcome measures that are easy to use for both practitioners and clients. This included having access to shorter, simpler, and more user-friendly measures that do not take much time during therapy sessions—ideally, “a short easily accessible form”. Respondents preferred digital tools and automated systems such as apps, online forms, or software that streamline the process and offer automatic scoring and tracking, as well as the use of iPads or other devices for easy data entry by clients. They expressed wanting “an online system that automatically sends and stores client results”; a respondent who had access to this noted, “It is pretty easy now as we get an electronic report generated with the information”.

Several respondents emphasised the importance of measures that are relevant to their client base. They expressed concern that some existing measures are not always suitable for certain populations, such as children, those experiencing trauma, or older clients, for instance, “Use ones that are relevant to seniors as they do not understand most of the questions on the K10”. Some respondents called for measures that feel relevant to the therapeutic process and do not interrupt the flow of therapy, calling for them to be “human and non-clinical in design” and “something that was not derived from the psychology field like K10 and DASS”.

A desire for more training, resources, and professional development opportunities related to the use of outcome measures was clearly articulated. Respondents wanted to know about the available measures and how to use and interpret them effectively; they required “some training and also info on how it helps improve outcomes” and “training and access to all measures required in the allied health space”. Many of these respondents wanted support from their professional body, PACFA. Some suggested that PACFA provide training webinars and support for the use of outcome measures, while others wanted standardised measures recommended by PACFA and freely available from the PACFA website. One respondent noted, “I might use them if PACFA asked me to as part of a research program”, and another suggested, “have them as a PACFA standard that we all use”. There was also a call for “a PACFA-endorsed best-practice guide” and for the Psychotherapy and Counselling Journal of Australia (PACJA) to publish journal articles on the use of feedback measures.

Several respondents wanted better support from their employment organisations too, including clearer guidance on how to implement and benefit from outcome measures. Some expressed a desire for their agency to take on more responsibility for handling data collection and analysis. Respondents frequently mentioned time and administrative barriers, in both public and private practice, that make it difficult to incorporate outcome measures into their practice. They requested solutions that are time-efficient and avoid increasing their workload: for instance, they wanted more “time in the session; client willingness to fill out forms”, more “time to set up an electronic survey”, and “an assistant to do the measures”.

Discussion

The current research makes a timely contribution to the understanding of outcome measure usage patterns and attitudes of Australian counsellors and psychotherapists. The findings of this research are relevant in the current climate, in which the role of counselling and psychotherapy is becoming increasingly vital in supporting the mental health and wellbeing of service consumers across Australia. This growing importance is highlighted by the recent move of the Australian Government to develop national standards for the counselling and psychotherapy professions. In May 2023, the Australian Government committed to a $300,000 investment over two years to develop these national standards. The standards aim to ensure professionals are equipped to meet the evolving needs of the community by supporting the integration of counsellors and psychotherapists into broader mental health frameworks, including primary health networks and Medicare, ensuring consistent and high-quality service delivery across the sector (PACFA, 2023).

As mental health challenges continue to rise, the counselling and psychotherapy profession will be better equipped to contribute to the public health space if, while maintaining the centrality and integrity of the therapeutic relationship, it adapts to provide effective, evidence-informed care.

Patterns of Usage and Non-Usage of Outcome Measures

The data reveal a divide in the use of formal outcome measures. While 76.6% of respondents employed outcome measures in their practice, a notable proportion of private practitioners were opting not to use them, highlighting a tension between institutional mandates for outcome measure use and the desire for a more flexible, client-centred approach. Many non-users expressed concerns that standardised tools undermine the therapeutic process, prioritising client autonomy and the organic flow of therapy. These findings underscore the importance of accounting for the diverse approaches employed by therapists in the profession-wide implementation of outcome measures. It is essential that evidence-based practices are integrated in a manner that supports, rather than disrupts, the therapeutic relationship, ensuring they enhance client care without becoming a barrier to effective practice.

Reasons for non-use of outcome measures among some practitioners stemmed primarily from a misalignment with their therapeutic orientation. Practitioners who prioritised relational and client-centred approaches often found formalised measures incompatible with the dynamic nature of long-term therapeutic work. These practitioners argued that tools like outcome measures fail to capture the complexities of progress, particularly when dealing with trauma, grief, or other sensitive issues. Instead, they favoured ongoing, qualitative dialogue as a more appropriate means of assessing client development. This concern aligns with existing critiques in the literature, which suggest that standardised outcome measures may oversimplify or misrepresent therapeutic progress, especially for vulnerable populations such as children or trauma survivors (Sales et al., 2018; Solstad et al., 2019).

Institutional Requirements and Professional Obligations

In institutional settings, the widespread use of outcome measures such as the K10, DASS, ORS, and SRS reflects the influence of agency policies and external reporting requirements. These tools are often mandated by agencies for consistency and accountability, which are significant factors, particularly in light of the current self-regulation within the counselling and psychotherapy sector. The findings demonstrate that these measures are largely driven by institutional obligations rather than clinical preferences. This divergence between private practice and institutional settings highlights a fragmented approach to outcome measurement, which could complicate efforts to create a more unified and consistent framework for practice.

The current reliance on external reporting requirements and institutional obligations suggests that outcome measures are often viewed as a necessary but secondary tool, rather than an integral part of the therapeutic process. While these measures do provide evidence that can support clinical decisions, the findings underscore the importance of implementing outcome measures that can bridge the gap between institutional requirements and clinical flexibility. A one-size-fits-all approach does not work for all settings. Instead, the findings call for diverse options in how and when outcome measures are used, recognising the differing needs of private practitioners and agency-based clinicians.

Data Handling and Transparency

The handling of data collected through outcome measures emerged as another key concern. Some respondents expressed uncertainty about where these data go once submitted, highlighting a gap in communication between practitioners and the organisations for which they work. This lack of clarity affects trust in the process and limits the potential benefits of data obtained from outcome measures.

Ensuring that data are not only collected but also appropriately analysed and integrated into clinical practice can significantly enhance their utility, benefiting both clients and clinicians (Lutz et al., 2022; Solstad et al., 2019). By improving communication about how outcome data are tied to funding, agencies can increase transparency and promote a better understanding among clinicians of the broader implications of their data collection efforts. Outcome measures often influence funding decisions and resource allocation (Lambert & Shimokawa, 2011; Sharples et al., 2017; Tasca et al., 2019); therefore, ensuring this connection is clearer can encourage greater ownership and participation in the process.

When counsellors and psychotherapists understand that outcome data are used for compliance and to inform and improve client care, they can take a more collaborative approach to data collection (Lambert & Shimokawa, 2011; Solstad et al., 2019; Unsworth et al., 2012). Furthermore, aligning funding with outcome measurement practices can help secure counsellor positions, support professional development, and ultimately lead to more effective, evidence-based outcomes for clients (Lambert & Shimokawa, 2011; Sharples et al., 2017; Tasca et al., 2019). This integration creates a positive feedback loop, in which funding drives better use of outcome measures, which in turn enhances clinical practices and delivers improved client outcomes (Bickman, 2008; Jensen-Doss et al., 2018; Reese et al., 2009).

Feedback and Informing Practice

The research revealed that feedback from outcome measures plays a central role in shaping clinical practice. Many respondents noted that feedback helps monitor client progress, enhance the therapeutic alliance, and guide professional development. Additionally, agencies leverage feedback to identify trends and improve service delivery. However, some respondents expressed concerns about the clinical and rigid nature of outcome measures, feeling that these tools can disrupt the relational flow of therapy. This disconnect between the perceived value of feedback and the rigidity of some outcome measures indicates an area for research and policy attention.

The preference for outcome measures that are more aligned with the therapeutic process, such as tools that are easy to use, digitally based, and human-centred, reflects a broader desire for tools that complement rather than disrupt therapy. This desire for more intuitive, flexible measures is important to address. Measures need to be not only evidence-informed but also attuned to the therapeutic process, respecting the centrality and nuanced quality of the client–therapist relationship. The development of tools that are both user-friendly and effective in capturing the nuances of therapy will be a critical consideration in future outcome measure frameworks, and in securing profession-wide uptake.

Barriers to Usage and Suggestions for Improvement

Three main barriers to broader adoption of outcome measures were clearly reported in the findings. These were time constraints, administrative burden, and lack of appropriate training. Participants expressed a clear preference for digital tools that could simplify the process and reduce the time and administrative load associated with using outcome measures. The call for more training and professional development opportunities was also prominent, and many practitioners indicated that additional support from professional bodies such as PACFA and ACA would help them more effectively integrate outcome measures into their practice.

These findings are particularly relevant as the Australian Government works to establish national standards for the sector (Department of Health and Aged Care, 2024). Professional development and training in the use of outcome measures will need to be considered within these standards. Training and support will be essential in ensuring that practitioners are equipped to use outcome measures in a way that enhances clinical practice while avoiding unnecessary burdens on time and resources. Additionally, the development of user-friendly, digital tools could help address some of the concerns about the administrative load that currently inhibits broader usage.

Implications

Training

Our findings indicate that a broad range of outcome measures exists but is not widely known among practitioners, presenting an opportunity to educate clinicians and trainees in the use and benefits of these tools. During training it may be beneficial to expose clinicians to a variety of measures and their applications, including individualised, process, and qualitative measures. Ideally, training would focus on the application of these measures in addition to the underlying rationale for their clinical use, with a particular focus on the relationship between the therapeutic alliance and treatment outcomes.

Tailored Outcome Measures

Generic outcome measures affect not only therapists but also the clients whose responses are being sought for their benefit. Clients across the sociocultural spectrum interpret mental health concepts differently (Carpenter-Song et al., 2010; Kato, 2018; Rivera & Bennetto, 2023). This can lead to misunderstandings of the questions asked of them in outcome measures, or reluctance to disclose feelings about the therapeutic process. Non-native English speakers may find it difficult to understand the wording, which affects their ability to respond accurately. Similarly, clients with limited reading or writing skills may struggle to complete the measures, leading to inaccurate responses (Aldalur et al., 2022; Unsworth et al., 2012). Clients managing chronic health conditions might have overlapping symptoms that complicate their responses to mental health questionnaires. Those with a history of trauma can be triggered into distress, causing dysregulation that makes it hard for them to engage with the questions and raising ethical concerns about asking them to do so. Those who have experienced stigma due to mental ill health may feel uncomfortable or unwilling to disclose their feelings, leading to underreporting. Clients who have had negative experiences with health care systems may be distrustful of mental health assessments, potentially skewing their responses. For these reasons, the effective use of outcome measures requires that they be tailored to client needs and presentations and administered in a culturally safe way (Center for Substance Abuse Treatment (US), 2014; de Jong et al., 2012; Errázuriz & Zilcha-Mano, 2018).

Limitations

While this study provides insights into the patterns and barriers surrounding the use of outcome measures among Australian counsellors and psychotherapists, several limitations should be considered in interpreting the findings. First, the study relied on a purposive sampling method by distributing a survey to members of PACFA. This sampling strategy may limit the generalisability of the findings to the broader Australian counselling and psychotherapy workforce. Specifically, the sample may not fully represent practitioners who are not registered with PACFA or those in private practice who may be less likely to engage with professional associations. Although the study surveyed a substantial proportion (34%) of PACFA’s registered clinicians, the experiences of those outside this membership—such as those operating independently or under different associations—may not have been adequately captured. Additionally, practitioners who chose not to participate in the survey may differ from those who did, further constraining the representativeness of the sample.

The study’s reliance on descriptive statistics may not have captured the depth of practitioners’ experiences, particularly in complex clinical contexts. While open-ended questions were included in the survey, its design—focusing on broad categories—may not have allowed participants to articulate fully the subtleties of their therapeutic practices or the perceived value of outcome measures. Future research could benefit from in-depth qualitative methods, such as individual interviews, to gain more nuanced insights.

Furthermore, the qualitative analysis in this study, conducted using reflexive thematic analysis, is inherently shaped by the researcher’s interpretive lens. While this approach is widely recognised as a valid and rigorous method within qualitative research, subjectivity remains an integral component of the interpretive process. As such, it is crucial to acknowledge the potential for inconsistencies or biases, particularly during the coding and theme development stages, where researcher reflexivity plays a pivotal role in ensuring analytical rigour. To mitigate this risk, both researchers engaged in the coding process, ensuring that the themes were grounded in the data and reflected the participants’ perspectives as far as possible. Additionally, an inter-coder reliability process was applied, whereby discrepancies between the researchers were discussed, resolved, and then third-checked using AI to enhance consistency and reliability in the analysis.
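Inter-coder agreement of the kind described above can also be quantified; a common chance-corrected statistic is Cohen's kappa. The sketch below is illustrative only: the theme labels and codings are invented for demonstration, and the present study resolved discrepancies through discussion rather than reporting an agreement coefficient.

```python
# Illustrative only: Cohen's kappa for two coders' categorical theme
# assignments. Labels and data below are hypothetical, not study data.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' labels."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of responses on which the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if the two coders labelled independently
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical barrier themes assigned to six responses by two coders
a = ["time", "training", "time", "fit", "training", "time"]
b = ["time", "training", "fit", "fit", "training", "time"]
print(round(cohens_kappa(a, b), 3))  # → 0.75
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.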

Although this study identified several barriers to the use of outcome measures, such as time constraints, complexity, and misalignment with client-centred approaches, the reasons behind these barriers were not fully explored in depth. For example, while some practitioners cited time constraints as a barrier, this could be influenced by factors such as the caseload, institutional expectations, or lack of training, which were not systematically explored in this study. The broad nature of the barrier questions meant that specific details on the nature of these challenges were not captured. Future research could provide more in-depth exploration of these barriers, potentially through qualitative interviews or longitudinal studies that track changes in practitioners’ views and experiences over time.

The study did not extensively examine how outcome measures are integrated within various therapeutic modalities. While it highlighted differences between institutional and private practice settings, it did not explore how these tools are perceived and used across different therapeutic orientations (e.g., trauma-informed therapy, cognitive behavioural therapy, psychodynamic therapy). Given that different therapeutic approaches may view the role of outcome measures differently, future research should explore how outcome measures are adapted and used across various therapeutic modalities to ensure they are relevant and aligned with the goals of the therapy.

Conclusion

We found that the majority of counsellors and psychotherapists participating in our study use outcome measures for a variety of reasons, most commonly to meet institutional requirements and because of a belief in their utility for tracking client progress. A significant proportion of respondents also trust the measures they use, particularly those that are standardised and validated, such as the K10 and DASS. However, barriers to the broader use of these tools persist, particularly time constraints, lack of training, the complexity of some measures, and concerns about their relevance or utility in clinical practice. These factors complicate the profession-wide adoption of outcome measures; many practitioners reported that the time required for administration and interpretation detracts from valuable therapeutic work.

The predominant reason for non-use of outcome measures among practitioners is that many existing tools—especially those developed for clinical psychologists—are not always transferable to therapeutic models that centre on the therapeutic alliance. Relational and client-centred approaches, often prioritised in counselling and psychotherapy, may consider these tools incompatible with their focus on qualitative, ongoing dialogue and the dynamic nature of therapeutic work. This misalignment between the standardised, structured approach of outcome measures and the more fluid, human-centred model of therapy was frequently highlighted by non-users in the study.

In the current context of mental health care under-resourcing, the increasing standardisation of service provision, and the Australian Government’s push to develop national standards for the profession, our findings propel us in two key directions.

First, there is a clear need for the development of more tailored, flexible outcome measures that are better suited to the diverse approaches within counselling and psychotherapy. These tools should respect the centrality of the therapeutic relationship, capturing client progress in a way that complements rather than disrupts therapy.

Second, our findings call for a more robust provision of training by training institutions and professional associations in the rationale, benefits, limitations, and application of a range of existing outcome measures. Equipping practitioners with the skills to integrate these tools effectively into their practice, while understanding their limitations, will be essential to ensuring that they are used in a manner that enhances rather than hinders client care.

The integration of outcome measures into counselling and psychotherapy practice must be a balanced process. While evidence-informed practices are important for improving client outcomes, the profession must remain attuned to the diverse therapeutic approaches that characterise it. Tailoring outcome measures to fit the context of therapy and providing adequate training for their use will ensure that they serve as a valuable tool for enhancing client care while preserving the integrity of the therapeutic alliance.

References

Aldalur, A., Bridgett, T., & Pick, L. H. (2022). Psychological assessment reports for linguistically minoritized clients: Considerations for ethical and professional practice. Professional Psychology, Research and Practice, 53(6), 606–614. https:/​/​doi.org/​10.1037/​pro0000462
Google Scholar
Andrews, G., & Slade, T. (2001). Interpreting scores on the Kessler Psychological Distress Scale (K10). Australian and New Zealand Journal of Public Health, 25(6), 494–497. https:/​/​doi.org/​10.1111/​j.1467-842x.2001.tb00310.x
Google Scholar
Australian Association of Social Workers. (2013). Practice standards. https:/​/​www.aasw.asn.au/​about-aasw/​ethics-standards/​practice-standards/​
Australian Health Ministers. (1992). National mental health policy. Australian Government Publishing Services.
Google Scholar
Bickman, L. (2008). A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry, 47(10), 1114–1119. https:/​/​doi.org/​10.1097/​CHI.0b013e3181825af8
Google Scholar
Bickman, L., Kelley, S. D., Breda, C., de Andrade, A. R., & Riemer, M. (2011). Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services, 62(12), 1423–1429. https:/​/​doi.org/​10.1176/​appi.ps.002052011
Google Scholar
Bijker, R., Merkouris, S. S., Dowling, N. A., & Rodda, S. N. (2024). ChatGPT for automated qualitative research: Content analysis. Journal of Medical Internet Research, 26, Article e59050. https:/​/​doi.org/​10.2196/​59050
Google Scholar
Bloch-Atefi, A., Day, E., Snell, T., & O’Neill, G. (2021). A snapshot of the counselling and psychotherapy workforce in Australia in 2020: Underutilised and poorly remunerated, yet highly qualified and desperately needed. Psychotherapy and Counselling Journal of Australia, 9(2). https:/​/​doi.org/​10.59158/​001c.71216
Google Scholar
Braun, V., & Clarke, V. (2023). Toward good practice in thematic analysis: Avoiding common problems and be(com)ing a knowing researcher. International Journal of Transgender Health, 24(1), 1–6. https:/​/​doi.org/​10.1080/​26895269.2022.2129597
Google Scholar
Carpenter-Song, E., Chu, E., Drake, R. E., Ritsema, M., Smith, B., & Alverson, H. (2010). Ethno-cultural variations in the experience and meaning of mental illness and treatment: Implications for access and utilization. Transcultural Psychiatry, 47(2), 224–251. https:/​/​doi.org/​10.1177/​1363461510368906
Center for Substance Abuse Treatment (US). (2014). Improving cultural competence. Substance Abuse and Mental Health Services Administration (US). https:/​/​www.ncbi.nlm.nih.gov/​books/​NBK248428/​
Cheng, L., Li, X., & Bing, L. (2023). Is GPT-4 a good data analyst? In H. Bouamor, J. Pino, & K. Bali (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 9496–9514). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-emnlp.637
de Jong, K., van Sluis, P., Nugter, M. A., Heiser, W. J., & Spinhoven, P. (2012). Understanding the differential impact of outcome monitoring: Therapist variables that moderate feedback effects in a randomized clinical trial. Psychotherapy Research, 22(4), 464–474. https:/​/​doi.org/​10.1080/​10503307.2012.673023
Errázuriz, P., & Zilcha-Mano, S. (2018). In psychotherapy with severe patients discouraging news may be worse than no news: The impact of providing feedback to therapists on psychotherapy outcome, session attendance, and the alliance. Journal of Consulting and Clinical Psychology, 86(2), 125–139. https:/​/​doi.org/​10.1037/​ccp0000277
Garland, A. F., Kruse, M., & Aarons, G. A. (2003). Clinicians and outcome measurement: What’s the use? The Journal of Behavioral Health Services & Research, 30(4), 393–405. https:/​/​doi.org/​10.1007/​BF02287427
Ionita, G., Ciquier, G., & Fitzpatrick, M. (2020). Barriers and facilitators to the use of progress-monitoring measures in psychotherapy. Canadian Psychology/Psychologie Canadienne, 61(3), 245–256. https:/​/​doi.org/​10.1037/​cap0000205
Ionita, G., & Fitzpatrick, M. (2014). Bringing science to clinical practice: A Canadian survey of psychological practice and usage of progress monitoring measures. Canadian Psychology/Psychologie Canadienne, 55(3), 187–196. https:/​/​doi.org/​10.1037/​a0037355
Jensen-Doss, A., Haimes, E. M. B., Smith, A. M., Lyon, A. R., Lewis, C. C., Stanick, C. F., & Hawley, K. M. (2018). Monitoring treatment progress and providing feedback is viewed favorably but rarely used in practice. Administration and Policy in Mental Health, 45(1), 48–61. https:/​/​doi.org/​10.1007/​s10488-016-0763-0
Kato, K. (2018). Cultural understandings of mental health: The role of language and ethnic identity. Journal of Ethnic and Cultural Studies, 5(1), 58–73. https:/​/​doi.org/​10.29333/​ejecs/​102
Kessler, R. C., Andrews, G., Colpe, L. J., Hiripi, E., Mroczek, D. K., Normand, S. L., Walters, E. E., & Zaslavsky, A. M. (2002). Short screening scales to monitor population prevalences and trends in non-specific psychological distress. Psychological Medicine, 32(6), 959–976. https:/​/​doi.org/​10.1017/​s0033291702006074
Kilbourne, A. M., Beck, K., Spaeth-Rublee, B., Ramanuj, P., O’Brien, R. W., Tomoyasu, N., & Pincus, H. A. (2018). Measuring and improving the quality of mental health care: A global perspective. World Psychiatry, 17(1), 30–38. https:/​/​doi.org/​10.1002/​wps.20482
Lambert, M. J., & Shimokawa, K. (2011). Collecting client feedback. Psychotherapy, 48(1), 72–79. https:/​/​doi.org/​10.1037/​a0022238
Lambert, M. J., Whipple, J. L., Hawkins, E. J., Vermeersch, D. A., Nielsen, S. L., & Smart, D. W. (2003). Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice, 10(3), 288–301. https:/​/​doi.org/​10.1093/​clipsy.bpg025
Lovibond, P. F., & Lovibond, S. H. (1995). The structure of negative emotional states: Comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories. Behaviour Research and Therapy, 33(3), 335–343. https:/​/​doi.org/​10.1016/​0005-7967(94)00075-U
Lutz, W., Schwartz, B., & Delgadillo, J. (2022). Measurement-based and data-informed psychological therapy. Annual Review of Clinical Psychology, 18, 71–98. https:/​/​doi.org/​10.1146/​annurev-clinpsy-071720-014821
Miller, S. D., & Duncan, B. L. (2000). The Outcome Rating Scale. ICCE Press.
Miller, S. D., & Duncan, B. L. (2004). The Outcome and Session Rating Scales: Administration and scoring manual. ISTC.
Overington, L., Fitzpatrick, M., Hunsley, J., & Drapeau, M. (2015). Trainees’ experiences using progress monitoring measures. Training and Education in Professional Psychology, 9(3), 202–209. https:/​/​doi.org/​10.1037/​tep0000088
Posavac, E. J. (2015). Program evaluation: Methods and case studies. Routledge. https:/​/​doi.org/​10.4324/​9781315664972
Psychotherapy and Counselling Federation of Australia. (2019). Evidence-informed practice statement. https:/​/​pacfa.org.au/​portal/​Portal/​Publications-and-Research/​EI-Prac-Stmnt.aspx
Psychotherapy and Counselling Federation of Australia. (2023, May 17). National standards for counselling and psychotherapy to be established. https:/​/​www.pacfa.org.au/​Portal/​News-and-Advocacy/​News/​2023/​National-Standards-for-Counselling-and-Psychotherapy-to-be-established.aspx
Reese, R. J., Norsworthy, L. A., & Rowlands, S. R. (2009). Does a continuous feedback system improve psychotherapy outcome? Psychotherapy, 46(4), 418–431. https:/​/​doi.org/​10.1037/​a0017901
Rivera, R. A., & Bennetto, L. (2023). Applications of identity-based theories to understand the impact of stigma and camouflaging on mental health outcomes for autistic people. Frontiers in Psychiatry, 14, Article 1243657. https:/​/​doi.org/​10.3389/​fpsyt.2023.1243657
Rosen, A., Hadzi-Pavlovic, D., & Parker, G. (1989). The Life Skills Profile: A measure assessing function and disability in schizophrenia. Schizophrenia Bulletin, 15(2), 325–337. https:/​/​doi.org/​10.1093/​schbul/​15.2.325
Sales, C. M. D., Neves, I. T. D., Alves, P. G., & Ashworth, M. (2018). Capturing and missing the patient’s story through outcome measures: A thematic comparison of patient-generated items in PSYCHLOPS with CORE-OM and PHQ-9. Health Expectations, 21(3), 615–619. https:/​/​doi.org/​10.1111/​hex.12652
Schofield, M. J. (2008). Australian counsellors and psychotherapists: A profile of the profession. Counselling and Psychotherapy Research, 8(1), 4–11. https:/​/​doi.org/​10.1080/​14733140801936369
Schofield, M. J., & Roedel, G. (2012). Australian psychotherapists and counsellors: A study of therapists, therapeutic work and professional development. La Trobe University.
Sharples, E., Qin, C., Goveas, V., Gondek, D., Deighton, J., Wolpert, M., & Edbrooke-Childs, J. (2017). A qualitative exploration of attitudes towards the use of outcome measures in child and adolescent mental health services. Clinical Child Psychology and Psychiatry, 22(2), 219–228. https:/​/​doi.org/​10.1177/​1359104516652929
Solstad, S. M., Castonguay, L. G., & Moltu, C. (2019). Patients’ experiences with routine outcome monitoring and clinical feedback systems: A systematic review and synthesis of qualitative empirical literature. Psychotherapy Research, 29(2), 157–170. https:/​/​doi.org/​10.1080/​10503307.2017.1326645
Spitzer, R. L., Kroenke, K., Williams, J. B., & The Patient Health Questionnaire Primary Care Study Group. (1999). Validation and utility of a self-report version of PRIME-MD: The PHQ primary care study. JAMA, 282(18), 1737–1744. https:/​/​doi.org/​10.1001/​jama.282.18.1737
Tasca, G. A., Angus, L., Bonli, R., Drapeau, M., Fitzpatrick, M., Hunsley, J., & Knoll, M. (2019). Outcome and progress monitoring in psychotherapy: Report of a Canadian Psychological Association task force. Canadian Psychology/Psychologie Canadienne, 60(3), 165–177. https:/​/​doi.org/​10.1037/​cap0000181
Unsworth, G., Cowie, H., & Green, A. (2012). Therapists’ and clients’ perceptions of routine outcome measurement in the NHS: A qualitative study. Counselling & Psychotherapy Research, 12(1), 71–80. https:/​/​doi.org/​10.1080/​14733145.2011.565125
Wing, J., Beevor, A., Curtis, R., Park, S., Hadden, J., & Burns, A. (1998). Health of the Nation Outcome Scales (HoNOS). The British Journal of Psychiatry, 172(1), 11–18. https:/​/​doi.org/​10.1192/​bjp.172.1.11
Wolpe, J. (1973). The practice of behavior therapy (2nd ed.). Pergamon Press.

Appendix: Descriptive Statistics of Workforce Characteristics

Table A1. Use of Outcome Measures
Frequency Percent
No, I don’t use any outcome measures 204 17.3
Yes, both 261 22.2
Yes, formal 126 10.7
Yes, informal 514 43.7
Other. Please specify 37 3.1
Left blank 35 3.0
Total 1,177 100.0
Table A2. Current Position/Work Role
Frequency Percent
A qualified psychotherapist, Indigenous healing practitioner or counsellor in practice 1,031 87.6
A qualified psychotherapist, Indigenous healing practitioner or counsellor in an academic role 76 6.5
A qualified psychotherapist, Indigenous healing practitioner or counsellor in a managerial or administrative role 66 5.6
No box ticked 4 0.3
Total 1,177 100.0
Table A3. What Is Your Gender?
Frequency Percent
Woman 959 81.4
Woman, non-binary 2 0.2
Man 196 16.7
Man, non-binary 2 0.2
Non-binary/gender diverse 12 1.0
My gender identity is not listed 6 0.5
Total 1,177 100.0
Table A4. What Is Your Cultural Identity?
Frequency Percent
Australian 581 49.3
English Australian 109 9.3
European Australian 66 5.6
New Zealand Australian 33 2.8
Irish Australian 27 2.3
Italian Australian 22 1.9
South African Australian 22 1.9
Chinese Australian 21 1.8
Greek Australian 16 1.4
Indian Australian 16 1.4
Scottish Australian 15 1.3
American Australian 14 1.2
German Australian 11 0.9
Latin American Australian 10 0.9
Singaporean Australian 8 0.7
Japanese Australian 7 0.6
Jewish Australian 6 0.5
Malaysian Australian 6 0.5
Lebanese Australian 6 0.5
Sri Lankan Australian 6 0.5
Welsh Australian 6 0.5
Prefer not to say 15 1.3
Other (e.g., Afghan, Bulgarian, Korean, Iranian, Turkish, Zimbabwean) 45 3.8
No box ticked 66 5.6
Total 1,177 100.0
Table A5. Age of Participants
Descriptive Measures Value
Mean 54.33
Median 55.00
Range 22–83
Interquartile range 17
Standard deviation 12.19
Total number of participants 1,177
Table A6. Years of Experience as a Practising Counsellor/Psychotherapist/Indigenous Healing Practitioner
Frequency Percent
Less than 1 year 49 4.2
1 to 3 years 179 15.2
3 to 5 years 157 13.3
5 to 10 years 231 19.6
10 to 20 years 313 26.6
More than 20 years 246 20.9
Total 1,177 100.0
Table A7. Are You Currently a Registered (Student, Provisional, Clinical) Psychotherapist, Indigenous Healing Practitioner or Counsellor? Select One Option
Frequency Percent
Certified practising counsellor 98 8.3
Provisionally registered counsellor 174 14.8
Provisionally registered psychotherapist 85 7.2
Provisionally registered psychotherapist and provisionally registered counsellor 44 3.7
Registered clinical counsellor 394 33.5
Registered clinical psychotherapist 208 17.7
Other. Please specify 58 4.9
No box ticked 116 9.8
Total 1,177 100.0
Table A8. Please Indicate Your Main Theoretical Model of Primary Training
Frequency Percent
Humanistic school, i.e., person-centred, existential and gestalt 435 37.9
Constructivist school, i.e., feminist, narrative, solution-focused brief therapy 179 15.2
Eclectic 174 14.8
Psychodynamic school 73 6.2
Behavioural school 68 5.8
Pragmatic school, i.e., CBT, REBT, DBT, mindfulness, ACT 59 5.0
Integrative, i.e., EFT 55 4.7
Family approaches 24 2.0
Systemic 14 1.2
Other 68 5.8
Prefer not to say 7 0.6
No box ticked 21 1.8
Total 1,177 100.0

Note. CBT = cognitive behavioural therapy; REBT = rational emotive behaviour therapy; DBT = dialectical behaviour therapy; ACT = acceptance and commitment therapy; EFT = emotion-focused therapy.

Table A9. Please Indicate Your Main Practice Modality
Frequency Percent
Person-centred counselling 202 17.2
Eclectic 132 11.2
Trauma informed 117 9.9
Integrative 95 8.1
Psychodynamic 83 7.0
Gestalt therapy 61 5.2
Family and couples therapy 49 4.2
ACT 45 3.8
CBT 45 3.8
Emotion-focused therapy 40 3.4
Multi-modal 38 3.2
Art therapy/expressive therapies 33 2.8
Somatic psychotherapy 31 2.6
Solution focused 28 2.4
Narrative 22 1.9
Mindfulness-based approaches 16 1.4
EMDR 13 1.1
Spiritually informed 10 0.8
Other (less than 10) (Aboriginal and Torres Strait Islander healing practices, equine therapy, sensorimotor, etc.) 94 8.0
No box ticked 23 2.0
Total 1,177 100.0

Note. ACT = acceptance and commitment therapy; CBT = cognitive behavioural therapy; EMDR = eye movement desensitization and reprocessing.

Table A10. Type of Workplace/Professional Roles
Frequency Percent
Practitioner in private practice 622 52.8
Practitioner in an agency/organisation 135 11.5
Practitioner in a health care setting 45 3.8
Practitioner in the third/charity/voluntary sector 26 2.2
Practitioner in a high school 23 2.0
Practitioner in an employee assistance program/workplace 11 0.9
Practitioner in another non-private setting 7 0.6
Practitioner in a university 7 0.6
Prefer not to say 3 0.3
Multiple types of employments and roles 298 25.3
Total 1,177 100.0
Table A11. Client Groups (Click All That Apply)
Frequency Percent
Adults 1,043 88.5
Individual clients 747 63.5
Couples 511 43.4
Young people 421 35.7
High school students 338 28.7
Seniors 333 28.3
Families 327 27.8
University students 295 25.1
Groups 222 18.8
Primary school students 199 16.9
Total 1,177 100.0
Table A12. Primary Platform for Working
Frequency Percent
In person and online 566 48.1
In person 419 35.6
Online videoconferencing (e.g., Zoom) 103 8.8
Phone 57 4.8
Online chat 3 0.3
Email 5 0.4
Texting 1 0.1
Therapy in natural settings (e.g., Aboriginal and Torres Strait Islander healing practices, ecotherapy, equine therapy) 10 0.8
Other (e.g., hospital bedside, etc.). Please specify 12 1.0
Left blank 1 0.1
Total 1,177 100.0
Table A13. Annual Income
Frequency Percent
$0 10 0.8
Up to $10,000 80 6.8
$10,000–$20,000 62 5.3
$20,001–$30,000 69 5.9
$30,001–$40,000 87 7.4
$40,001–$50,000 118 10.0
$50,001–$75,000 254 21.6
$75,001–$100,000 214 18.2
$100,001–$120,000 74 6.3
$120,001–$150,000 31 2.6
More than $150,000 36 3.1
Prefer not to say 122 10.4
No box ticked 20 1.7
Total 1,177 100.0
Table A14. At What Stage Do You Use Outcome Measures in Your Practice?
Frequency Percent
At the end of the therapy/counselling process 480 40.7
At the beginning of the therapy/counselling process 434 36.9
At the end of each session 303 25.7
At the beginning of each session 126 10.7
Other. Please specify 367 31.2
No box ticked 157 13.3
Total 1,177 100.0
Table A15. If You Work in an Agency, Are You Required to Use Specific Outcome Measures?
Frequency Percent
No 815 69.3
Yes 278 23.6
No box ticked 84 7.1
Total 1,177 100.0
Table A16. If You Work in an Agency, How Are the Outcome Measures Completed?
Frequency Percent
Electronic 176 63.3
Handwritten 70 25.2
Verbal 14 5.0
Other 18 6.5
Total 278 100.0
Table A17. What Happens to the Data From These Outcome Measures?
Frequency Percent
Entered into agency database 185 66.6
A written report submitted to management 34 12.2
Included in personal note taking 28 10.1
Other. Please specify 29 10.4
No box ticked 2 0.7
Total 278 100.0