The Development and Implementation of an Outcome Monitoring System for Addiction Treatment

By The Canadian Journal of Addiction - Oct 12th 2016

Mary Jean Costello, PhD, Courtney Ropp, MSc, Sarah Sousa, MSc, Wendi Woo, MA, Harry Vedelago, MD, FCFP, ABAM, Brian Rush, PhD


ABSTRACT

Objectives: Routine outcome monitoring is critical for evaluating quality and effectiveness of mental health and addiction (MHA) services. This paper describes the development, design and implementation of an outcome monitoring system (OMS) within an inpatient addiction treatment program, presents feasibility findings from pilot testing and early implementation, and shares lessons learned.

Methods: A logic model, along with data collection tools and protocols, was developed collaboratively with stakeholders including staff, former patients, and external experts. Pilot testing assessed the reliability of the tool’s items. Following implementation, preliminary participation rates were evaluated and early lessons were documented.

Results: The logic model classified recovery outcomes into eleven domains. The OMS was designed to routinely collect self-reported data on each recovery domain from patients (19+ years) at admission and discharge (self-administered tool using electronic software), and post-discharge at 1-, 3-, 6-, and 12-month intervals (via telephone or email). Average completion time was 20.5 minutes via tablet and 18.1 minutes via telephone. Test-retest agreement for key outcome measures ranged from 0.36 to 0.94 (poor to excellent) for categorical items and 0.55 to 0.82 (good to excellent) for dichotomous items. At admission, 41.8% of patients consented to participate and 98% completed the tool. Lessons learned relating to stakeholder commitment, system and tool development, standardized baseline measurement, and use of electronic questionnaires are shared.

Conclusions: Sharing approaches used and lessons learned may inform the development and implementation of similar systems that can be used to evaluate MHA services within other settings.

INTRODUCTION

There is increasing interest in routinely monitoring outcomes within the mental health and addiction (MHA) field, both in Canada and internationally.1-4 Benefits include the ability to continually evaluate treatment effectiveness, consistency and cost-effectiveness; to inform and monitor quality improvement efforts; and to provide accountability information to consumers, administrators, funders and overarching systems.4,5 An effective outcome monitoring system (OMS) can also contribute to MHA service research by illuminating what type or intensity of treatment works for whom, and how long positive effects are sustained.5,6 Ideally, such systems also include processes that feed information back to clinicians to inform treatment decisions, as well as encourage persons to return to treatment if a need is identified.5-8

Outcome monitoring within an inpatient treatment setting ideally includes routine follow up with individuals, post-treatment.4 This involves measuring outcomes that are expected to change as a result of participation in treatment, including symptoms, behaviours and functioning. The most effective OMSs collect standardized patient-level data at admission, followed by repeated measurement at subsequent time points post-discharge.6 However, only a small number of inpatient treatment programs conduct routine follow up with patients for evaluation purposes (e.g., Hazelden Betty Ford Foundation).9

Current outcome measurement practices embedded within the Ontario MHA system are limited insofar as they reflect only the short period of time individuals are in treatment and tend to measure symptom reduction rather than more functional domains of recovery. There is a need to develop OMSs that reflect the current shift in perspective that MHA are chronic conditions requiring ongoing management—much like diabetes or heart disease—rather than the historical view that sees MHA as acute conditions.7,10 Routine outcome monitoring (OM) that extends beyond the end of a single treatment episode and measures recovery based on a number of life domains is critical for evaluating the effectiveness of MHA treatment, as well as advancing knowledge of the recovery process in general.

However, defining recovery for the purpose of measurement is challenging. Within the addiction field the measurement of recovery has almost always been limited to abstinence-centred outcomes.4,11 More recently, recovery from addiction has been conceptualized as a person achieving or maintaining outcomes in a number of life domains, including symptom reduction, behaviour change and improved life functioning (e.g., social relationships, occupation, quality of life, etc.).10,12,13 Moreover, recovery is increasingly being described as a process where one is actively involved in managing his/her addiction or risks having the problem(s) resurface.10 This view is more reflective of how the mental health field conceives recovery, where the goal of treatment is not solely symptom reduction but rather equipping individuals with skills and tools to manage symptoms and improve quality of life.14 Although some experts have suggested which recovery domains may be important to measure,10,12,15-18 there is currently no set of standard outcome measures. Measurement tools that focus on multiple life domains are needed to reflect the shift toward a more holistic definition of recovery within a recovery management paradigm.18

Other challenges with implementing OMSs include extensive costs and resource allocation associated with system development and maintenance, as well as problems contacting patients post-treatment.4,5 Some promising practices have been identified including embedding baseline measures into routine assessment practice,19 integrating follow up data collection as part of recovery management check-ups or continuing care services,4,6,7 and rigorous methods for increasing follow up rates with hard-to-reach populations.3,20 However, only one published study in Canada has examined the feasibility of implementing an OMS within addiction treatment agencies.5,21 Given the paucity of research that examines the feasibility of such OMSs, there remains a need to share approaches used and lessons learned when developing and implementing OMSs within various MHA treatment settings.

This paper describes the development, design, and early implementation of an OMS project within an inpatient addiction treatment program in Ontario; presents feasibility findings from pilot testing and early implementation; and, shares early lessons learned. 

CONTEXT

This project is part of a multi-phase endeavor to develop and implement an OMS across a MHA treatment centre in Southwestern Ontario. The goal of the OMS is to collect information that enables rigorous evaluation of the quality and effectiveness of MHA treatment. The initiative also aims to establish a sustainable infrastructure to collect data that can be used to: (1) improve clinical care, (2) provide accountability, and (3) answer research and evaluation questions.

The project setting is a 105-bed inpatient addiction medicine service (AMS). The program offers residential, group-based treatment to adults (19+) addicted to alcohol and/or other substances (length of stay of 35 days) and specialized programming for treating co-occurring Post-Traumatic Stress Disorder (PTSD; length of stay up to 56 days). Treatment is abstinence-based, with 12-step facilitation, provided by a multidisciplinary team of health professionals including Addiction Medicine Certified physicians. The program focuses on medical stabilization, assessment, and recovery-oriented education and skills training. About 1,050 patients are admitted to the program annually. Treatment cost is covered by an array of funding structures, including provincial, semi-private (e.g., co-payments through private health insurance), and private expenditures (e.g., out-of-pocket payments). More detail on the program and its admission criteria can be found at: http://www.homewoodhealth.com/health-centre/addict...

METHODS
IDENTIFICATION OF OUTCOMES

In collaboration with program staff, a logic model was developed to clarify and articulate the program’s theory of change (i.e., the underlying assumptions that guide program delivery and are believed to contribute to changes and improvements in patients).22,23 Main program activities were linked with anticipated short-, intermediate-, and longer-term changes expected of patients completing the program.24 Focus groups with former patients were held to verify that the outcomes identified were realistic and reflective of the patient experience.25 Content experts were consulted to review and validate the outcomes identified.

QUESTIONNAIRE CONTENT

A review of previously published instruments was conducted and an inventory of candidate measures that addressed each of the main outcomes was created. Where possible, validated tools, sub-scales or individual measures were selected to make up a final set. Primary sources included: Global Assessment of Individual Needs-Q3 (GAIN-Q3);26 Addiction Severity Index Version 6;27,28 Penn Alcohol Craving Scale;29 Alcoholics Anonymous Affiliation Scale;30 International Physical Activity Questionnaire;31 World Health Organization Quality of Life Instruments;32 and Canadian Community Health Survey – Mental Health.33 In several cases, modifications to the wording or response format were necessary to improve clarity and appropriateness of the measure. New measures were created when necessary. Program leaders and content experts reviewed the set of measures for comprehensiveness, appropriateness, redundancies, and face-validity. Together, this final set of measures comprised the Recovery Questionnaire (RQ).

PILOT TESTING

The RQ was pilot-tested with 46 participants who were former inpatients, 19 years of age or older. The purpose was to evaluate the feasibility of administering the instrument (both in-person and over the phone) and assess the test-retest reliability of key items. At Time 1, participants either self-completed the questionnaire using a tablet in a group setting (n=21) or completed the questionnaire over the phone with project staff (n=25). At Time 2, the questionnaire was re-administered over the phone to 38 participants after 3-7 days.

EARLY IMPLEMENTATION

This project received clearance from the Research Ethics Board at Homewood Health Centre in Guelph, Ontario. Eligible participants were registered patients of AMS admitted after April 1, 2015. Patients attended a mandatory group during the first week of admission to the program, facilitated by project staff, where they were informed of the project, invited to provide consent to participate and self-completed the RQ. Follow up locator information was also collected from participants, including phone number(s), email address, and phone number(s) for an alternative contact person. Upon discharge (i.e., during the final week of the program; planned length of stay 35-56 days), patients attended a second mandatory group, facilitated by project staff, where they were re-informed of the project, re-invited to provide consent and self-completed the RQ. In both cases, the RQ was administered via tablets using electronic data capture software. Those who declined participation at admission were still eligible to participate at discharge.

Participants were re-contacted either by phone or email at 1-, 3-, 6- and 12-month intervals post-discharge and asked to complete the RQ again. To test the feasibility of the two follow up methods, participants who provided both an email address and a phone number were randomized to either the email or phone follow up condition. Those who provided only one method of contact were followed up accordingly. All participants were provided with the phone number of a local support service at the end of each questionnaire. Those who disclosed having had suicidal thoughts during the past 30 days were immediately prompted to call a local support service and provided with the telephone number; those in immediate distress were prompted to call 911. For telephone follow ups, project staff also offered to connect participants directly to the support service, or contacted 911 directly if a participant was at immediate risk of harming him/herself or others.
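As a concrete illustration of the allocation step, the sketch below randomizes participants who provided both contact methods to a follow up condition; the field names and fixed seed are assumptions made for reproducibility, not details reported by the project.

```python
# A minimal sketch of randomizing participants to the email or phone
# follow-up condition. Field names and the seed are hypothetical.
import random

def assign_follow_up(participant: dict, rng: random.Random) -> str:
    """Randomize only when both contact methods exist; otherwise use the one given."""
    if participant.get("email") and participant.get("phone"):
        return rng.choice(["email", "phone"])
    return "email" if participant.get("email") else "phone"

rng = random.Random(42)  # seeded so the illustration is reproducible
participants = [
    {"id": 1, "email": "a@example.com", "phone": "555-0101"},
    {"id": 2, "email": None, "phone": "555-0102"},
    {"id": 3, "email": "c@example.com", "phone": None},
]
for p in participants:
    print(p["id"], assign_follow_up(p, rng))
```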

LESSONS LEARNED

During development and implementation, project staff met regularly to reflect, discuss, and document successes and challenges encountered as the project unfolded. Lessons learned were then derived and are shared below.

RESULTS
IDENTIFICATION OF OUTCOMES

Outcomes identified by program staff and former patients were classified into nine domains: substance use; mental health; psychological-, physical-, social-, and occupational-wellness; daily life functioning; engagement in continuing care programs/services; and, overall quality of life or life satisfaction. In addition, two system-level outcome domains (i.e., use of health services and engagement in criminal activity) were identified by content experts.

QUESTIONNAIRE CONTENT

The RQ was designed to be administered at six time points: at admission to gather baseline data against which to assess change over time, as well as collect participant characteristics to describe the sample (e.g., gender, age, ethno-cultural group, education, employment, etc.); at discharge to assess within-program change for select outcomes, as well as collect treatment process measures which may help explain recovery outcomes; and, at 1-, 3-, 6- and 12-months post-discharge to monitor outcomes and assess changes over time. The admission and post-discharge versions consisted of approximately 150 measures each (Table 1).

The discharge version, comprising 129 measures, assessed only a sub-set of the pre-identified outcomes: substance use, mental health, psychological- and physical-wellness, and overall quality of life or life satisfaction. Unique to the RQ at discharge were items measuring therapeutic alliance (based on the 6-item Session Alliance Inventory),34 perception of care (using the 38-item Ontario Perception of Care Tool for Mental Health and Addiction),35 and characteristics of the treatment received, including program stream and participation in specialized groups.

PILOT TESTING

The average time for self-completion via tablet was 20.5 minutes (SD = 5.6; 95% confidence interval [CI] = 18.1–22.9; range = 11.7–34.0), while completion via phone averaged 18.1 minutes (SD = 5.6; 95% CI = 15.9–20.2; range = 10.1–31.9). Test-retest intraclass correlation coefficients (ICCs) for primary patient-level outcome measures ranged from 0.36 to 0.94, indicating poor to excellent agreement for categorical items.36 Some of the weakest items were within the social wellness, occupational wellness, and overall quality of life and life satisfaction domains (ICC < 0.60, poor; ICC = 0.60 to 0.70, poor but acceptable). Cohen’s kappa coefficients for dichotomous items ranged from 0.55 to 0.82, indicating good to excellent agreement (Table 2).37 Modifications to the data collection protocol and some items were made following pilot testing.
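For readers reproducing this type of reliability analysis, the sketch below shows how the two agreement statistics might be computed, assuming paired Time 1/Time 2 responses in long format; the item names and toy values are hypothetical, not pilot data.

```python
# A minimal sketch of the test-retest agreement statistics reported above.
import pandas as pd
import pingouin as pg                      # intraclass correlation
from sklearn.metrics import cohen_kappa_score

# Long format: one row per participant per administration (hypothetical values).
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "time":        ["t1", "t2"] * 4,
    "craving":     [4, 5, 2, 2, 6, 4, 3, 3],   # ordinal/categorical item
})

# ICC treating administrations (Time 1, Time 2) as the "raters".
icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="time", ratings="craving")
print(icc[["Type", "ICC", "CI95%"]])

# Cohen's kappa for a dichotomous item (e.g., any use in the past 30 days).
t1 = [0, 1, 1, 0, 1, 0, 0, 1]
t2 = [0, 1, 0, 0, 1, 0, 1, 1]
print("kappa =", cohen_kappa_score(t1, t2))
```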

EARLY IMPLEMENTATION

By September 30, 2015, 41.8% of individuals admitted to the program over the previous six months had provided consent to participate in the project at admission (n=203), the majority (98.0%) of whom completed the RQ at that time (Figure 1). Of those who were discharged over this same time period (n=151), 145 re-consented, representing a 96.0% retention rate from admission to discharge. Just over half of these participants completed the RQ at discharge (55.9%). In addition, 23 new participants provided consent at discharge, representing a 10.0% participation rate among previous non-consenters; 91.3% of whom completed the RQ. Reasons for refusal of consent were not systematically documented; however, anecdotal evidence recorded by the project team suggested that some patients felt too unwell to participate, others had not yet “bought into” or committed to treatment and therefore felt apprehensive about participating in the project, while others merely appeared disinterested or disengaged. To assess the potential for non-response bias, preliminary analyses were conducted comparing characteristics of those who consented at admission (n=203) to those who did not (n=283). There were no significant differences between groups on gender, age, education, marital status, substance use, treatment stream or presenting diagnoses (Table 3). Participants continue to be recruited into the project and followed up accordingly.
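The non-response comparison described above amounts to chi-square tests on categorical characteristics; a minimal sketch follows, with illustrative counts that are not the study’s data.

```python
# A minimal sketch of the non-response bias check: a chi-square test
# comparing consenters and non-consenters on one categorical characteristic.
from scipy.stats import chi2_contingency

# Rows: consented (n=203) vs. did not consent (n=283); columns: gender.
table = [[120, 83],    # consented:     men, women (hypothetical counts)
         [160, 123]]   # not consented: men, women (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A non-significant p-value is consistent with the "no significant
# differences" finding reported in Table 3.
```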

LESSONS LEARNED

Stakeholder commitment

The most important factor in the successful implementation of the project thus far has been the commitment and investments made by the treatment centre in building an OMS, particularly at the ownership, executive, and program levels. This includes a commitment to accountability, transparency and the rigorous collection of data. Understanding the needs and expectations of stakeholders has been critical during development and implementation, and continues to be as data analysis begins and reports are prepared. Building confidence in the integrity of a system is critical to ensure meaningful data are generated and used by stakeholders.

System and tool development

Development and implementation of an OMS requires significant time and human resources. A dedicated team that can work with the various levels of stakeholders and has expertise in evaluation design, methodology, survey development and data analysis is critical. Collaborating with external leaders in OM to learn from and build on previous work has strengthened the team’s capacity to develop and implement a rigorous system. For example, developing a data collection tool to measure recovery-oriented outcomes is time consuming and complicated. Although previously validated tools exist that can be used to measure various aspects of recovery, as demonstrated by Rush and colleagues,5,21 no one tool was identified that measured all aspects of recovery as defined here. However, building in several validated tools and sub-scales facilitated the development of the current RQ, enabling measurement of each outcome of interest.

Baseline measurement

The current project was initially framed as research, designed and implemented by an external evaluation team. Its purpose was to systematically test data collection tools and methods prior to the organization making a full commitment to implement the system as part of routine practice. However, framing the project as research has meant the need to obtain informed consent to participate. It has also required deliberate efforts to build trust and establish rapport with patients especially during recruitment, as well as address participant concerns about data confidentiality and use. Embedding baseline measures into routine assessment practices would streamline the collection of data at the inpatient time points, increasing the availability of baseline data on all patients, and facilitating its use to inform clinical care, research and evaluation.

Electronic questionnaires

Self-administration via tablets and use of electronic data capture software has facilitated data collection at admission and discharge by reducing the associated time and costs; however, it has required some technical support. Choosing data collection software can be difficult given the variety of features and capabilities each product has to offer. In this case, it was critical that the product adhere to the data storage and security standards within Canada, be able to handle complex skip patterns and a variety of question formats, and offer multiple deployment functionalities, including the ability to function offline (i.e., no internet connection) and via email invitation. The software also needed to be user friendly, both from the perspective of the survey developer and the end-user, and offer ongoing software support as needed. Annual costs of the software and internal staff capacity to work with the software also needed to be considered and weighed against the cost of hiring an external developer to build a custom data collection platform. However, as the OMS eventually moves into routine practice, the need for building a permanent data collection platform and infrastructure must be considered.
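To make the skip-pattern requirement concrete, the sketch below shows one simple way such conditional logic can be expressed; the questions, field names, and rules are hypothetical and do not reflect the RQ or the software actually selected.

```python
# A minimal sketch of skip-pattern logic: later questions are shown or
# hidden based on earlier answers. All content here is hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Question:
    field: str
    text: str
    # Show the question only when this predicate on prior answers is true.
    show_if: Callable[[dict], bool] = lambda answers: True

questionnaire = [
    Question("used_past30", "Any substance use in the past 30 days? (y/n)"),
    Question("days_used", "On how many of the past 30 days did you use?",
             show_if=lambda a: a.get("used_past30") == "y"),
]

answers: dict = {}
for q in questionnaire:
    if q.show_if(answers):                 # apply the skip pattern
        answers[q.field] = input(q.text + " ")
print(answers)
```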

DISCUSSION

The RQ presented here as part of an OMS for addiction treatment covers a broad range of recovery domains. Measuring recovery beyond abstinence is important for gaining a better understanding of the recovery process and its multiple dimensions. Much effort went into defining recovery within this context and developing a tool that could reliably measure each dimension. In most cases, the test-retest reliability of key outcome measures was good; however, agreement for some items was less stable over time. One explanation may be that these less stable items measure phenomena that naturally tend to fluctuate over time (e.g., general level of happiness). Future work will continue to refine the definition of recovery and its measurement so as to reduce such error.

Almost half of the patient population at the time participated in the project. Preliminary analyses found no significant differences between participants and non-participants on basic demographic and clinical characteristics, providing some evidence that the baseline data collected may be generalizable to the broader patient population. Although retention at discharge was high, not all participants completed the RQ at that time point. The poor completion rate was primarily due to restricted access to the unit during periods of infectious disease outbreak, which prevented face-to-face data collection. Employing alternative methods to face-to-face data collection during periods of outbreak would likely improve completion rates at discharge.

The importance of baseline measurement is evident and lessons learned to enhance baseline data collection are worthy of discussion. In this case, the initial contact with patients was crucial for recruitment and required significant investments in the project team’s time to communicate the project’s purpose, its importance, and address participant concerns. Presumably this investment has contributed to the relatively high participation rate and may motivate participants to remain engaged.

LIMITATIONS

Although the RQ collects data on several important covariates including participant characteristics, it does not extensively measure treatment characteristics. Currently, such data are limited to the program stream and which specialized treatment groups the participant may have attended. Collecting more information on the type and duration of specific treatment components (e.g., physician and psychiatrist visits) would assist in evaluating which treatment components, and which combinations, may work best.

Data collected during pilot testing was used to assess test-retest reliability. Although adequate, the sample size was relatively small. Furthermore, at Time 1, data was purposely collected by two different methods (phone and electronic), whereas at Time 2 data was collected by phone only. The inconsistency in data collection methods from Time 1 to Time 2 may have contributed to poorer agreement between responses across the two time points.

Although anecdotal evidence was recorded, reasons for refusal of consent or for declining to complete the RQ were not systematically documented. Systematic documentation may have helped to uncover trends or common reasons for why patients chose not to participate in the project or complete the RQ.

NEXT STEPS

Data analysis

The pre-/post-test design permits the analysis of change in participant outcomes from admission to post-treatment. The degree to which these changes can be attributed directly to participation in the program, however, is limited by the extent to which no other variable or intervention is responsible for the observed change. This, of course, is a major limitation of the design and has implications for how results can be interpreted. Future analyses will control for known predictors of change and investigate how other possibly important factors may mediate treatment effects, including engagement in other MHA programs, services and supports. Both multiple regression38 and multilevel linear modeling techniques for repeated measurement38 will be employed. To account for possible bias in responses due to attrition at follow up, intention-to-treat analysis will also be explored.39 Efforts to measure and evaluate the effects of treatment dosage may be another area of focus for future analyses.
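As an illustration of the planned multilevel approach, the sketch below fits a random-intercept model to simulated follow up data using statsmodels; the variable names and simulated effects are assumptions, not study results.

```python
# A minimal sketch of a multilevel (mixed-effects) model for repeated
# measures: a random intercept per participant, outcome regressed on time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, waves = 60, 4                                    # participants x follow-up waves
df = pd.DataFrame({
    "pid":   np.repeat(np.arange(n), waves),
    "month": np.tile([1, 3, 6, 12], n),
})
# Simulated outcome: improvement over time plus a person-level intercept.
person = rng.normal(0, 2, n)
df["outcome"] = 20 - 0.4 * df["month"] + person[df["pid"]] + rng.normal(0, 1, len(df))

# Random-intercept model of change in the outcome across follow-up waves.
model = smf.mixedlm("outcome ~ month", df, groups=df["pid"]).fit()
print(model.summary())
```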

Enhancing the sustainability of the system

As part of efforts to examine the feasibility of implementing an OMS and inform its sustainability, the actual cost of following up with participants post-discharge via phone or email will be assessed. Both methods have advantages and disadvantages; however, evidence generated by directly comparing the feasibility, costs, response rates, quality of data, and other important indicators will help inform which method may be most cost-effective to build into a permanent OMS. Efforts to embed some baseline measures into routine assessment practices are currently being explored, as is the possibility of coupling post-discharge OM with future continuing care services. Both efforts would greatly enhance the sustainability of the system.
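The planned cost comparison reduces to a cost-per-completed-questionnaire calculation; a minimal sketch follows, with placeholder figures that are purely illustrative.

```python
# A minimal sketch of comparing follow-up methods by cost per completed
# questionnaire. All figures are hypothetical placeholders, not study data.
def cost_per_completion(fixed_cost: float, cost_per_attempt: float,
                        attempts: int, completions: int) -> float:
    """Total cost divided by the number of completed questionnaires."""
    return (fixed_cost + cost_per_attempt * attempts) / completions

phone = cost_per_completion(fixed_cost=500, cost_per_attempt=6.0,
                            attempts=400, completions=180)
email = cost_per_completion(fixed_cost=1200, cost_per_attempt=0.1,
                            attempts=400, completions=120)
print(f"phone: ${phone:.2f} per completion, email: ${email:.2f} per completion")
```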

CONCLUSIONS

There is growing need for the ongoing, systematic collection of data that can be used to evaluate the quality and effectiveness of MHA services, not only at the program level but also at the organization and system levels. Such data are also needed to inform clinical care, providing evidence to support best practices and quality improvement efforts. The OMS described in this paper provides the foundation for a much larger initiative aimed at transforming how one organization collects and uses data to continually monitor, evaluate and inform clinical practice. It also has the potential to help inform and shape how similar systems can be developed and implemented within other programs, organizations or even across the MHA system as a whole.

ACKNOWLEDGEMENTS

We thank Roy Cameron (Executive Director, Homewood Research Institute) and Jagoda Pike (CEO, Homewood Health Centre) who provided joint leadership for this initiative, as well as, the Outcomes Working Group and project staff (Katie Junkin, Kayla Deroux, Chris Ryan and Rachel Wells). Special thanks also to program staff for their support in implementation and program patients for their ongoing participation.

CONFLICT OF INTEREST

Homewood Research Institute is a free-standing, not-for-profit organization that receives private donations, including philanthropic support from the Schlegel family to cover foundational expenses. The Schlegel family owns Homewood Health.

REFERENCES

  1. Health Canada. Drug Treatment Funding Program (DTFP) Framework. 2008. Retrieved from www.hc-sc.gc.ga/hc-ps/drugsdrogues/dtfp-pftt/framework-cadre-eng.php.
  2. Marsden J, Farrell M, Bradbury C, Dale-Perera A, Eastwood B, Roxburgh M, et al. Development of the treatment outcomes profile. Addiction 2008; 103: 1450-1460.
  3. Darke S, Ross J, Teesson M. The Australian treatment outcome study (ATOS): What have we learnt about treatment for heroin dependence? Drug Alcohol Rev 2007; 26: 49-54.
  4. Lennox RD, Sternquist MA, Paredes A. A simplified method for routine outcome monitoring after drug abuse treatment. Subst Abuse 2013; 7: 155-169.
  5. Rush B, Rotondi NK, Chau N, et al. Drug treatment funding program client recovery monitoring project. Centre for Addiction and Mental Health: Toronto, ON; 2013. Retrieved from http://eenet.ca/dtfp/client-outcome-monitoring-project/.
  6. McLellan AT, McKay JR, Forman R, Cacciola J, Kemp J. Reconsidering the evaluation of addiction treatment: From retrospective follow-up to concurrent recovery monitoring. Addiction 2005; 100: 447-458.
  7. Dennis M, Scott CK, Funk R. An experimental evaluation of recovery management checkups (RMC) for people with chronic substance use disorders. Eval Program Plann 2003; 26: 339-352.
  8. Rush B, Martin G, Corea L, Rotondi NK. Engaging stakeholders in review and recommendations for models of outcome monitoring for substance abuse treatment. Subst Use Misuse 2012; 47: 1293-1302.
  9. Stinchfield R, Owen P. Hazelden’s model of treatment and its outcome. Addict Behav 1998; 23(5): 669-683.
  10. McLellan AT, Chalk M, Bartlett J. Outcomes, performance, and quality—What’s the difference? J Subst Abuse Treat 2007; 32: 331-340.
  11. White W, Boyle M, Loveland D. Recovery from addiction and from mental illness: Shared and contrasting lessons. In Ralph RO, Corrigan PW, eds. Recovery in mental illness: Broadening our understanding of wellness. Washington: American Psychological Association, 2005: 233-258.
  12. Kaskutas LA, Borkman TJ, Laudet A, et al. Elements that define recovery: The experiential perspective. J Stud Alcohol Drugs 2014; 75: 999-1010.
  13. Substance Abuse and Mental Health Services Administration (SAMHSA). Working definition of recovery: 10 guiding principles of recovery. Rockville, MD: U.S. Department of Health and Human Services, 2012. Retrieved from http://store.samhsa.gov/product/SAMHSA-s-Working-Definition-of-Recovery/PEP12-RECDEF.
  14. Watson DP, Rollins AL. The meaning of recovery from co-occurring disorders: Views from consumers and staff members living and working in housing first programming. Int J Ment Health Addiction 2015; 13: 635-649.
  15. Advisory Council on the Misuse of Drugs (ACMD). What recovery outcomes does the evidence tell us we can expect? London, UK: Home Office, 2013.
  16. Lal S. Prescribing recovery as the new mantra for mental health: Does one prescription serve all? Can J Occup Ther 2010; 77(2): 82-89.
  17. Simpson DD. A conceptual framework for drug treatment process and outcomes. J Subst Abuse Treat 2004; 27: 99-121.
  18. Groshkova T, Best D, White W. The assessment of recovery capital: Properties and psychometrics of a measure of addiction recovery strengths. Drug Alcohol Rev 2013; 32: 187-194.
  19. Roe D, Gelkopf M, Isolde Gornemann M, Baloush-Kleinman V, Shadmi E. Implementing routine outcome measurement in psychiatric rehabilitation services in Israel. Int Rev Psychiatry 2015; 27(4): 345-353.
  20. Scott CK. A replicable model for achieving over 90% follow-up rates in longitudinal studies of substance abusers. Drug Alcohol Depend 2004; 74: 21-36.
  21. Rush B, et al. Monitoring recovery from substance abuse treatment– Results of the Ontario trial and feasibility assessment. Canadian Journal of Addiction; Current issue.
  22. Chen HT. Theory-driven evaluation. Thousand Oaks, CA: Sage Publications, 1990. 
  23. Lipsey MW. Theory as method: Small theories of treatments. In: Sechrest LB, Scott AG, eds. Understanding causes and generalizing about them. New Directions for Program Evaluation, 1993; 57: 5-38.
  24. Kumpfer KL, Shur GH, Ross JG, Bunnell KK, Librett JJ, Millward AR. Measurements in prevention. Rockville, MD: U. S. Department of Health and Human Services, Public Health Service, Substance Abuse and Mental Health Services Administration, Center for Substance Abuse Prevention, 1993.
  25. Costello MJ, Ropp C, Sousa S, Junkin K, Deroux K, Vedelago H, Woo W. “Being clean doesn’t mean you’re in recovery”: Defining recovery for ongoing monitoring and program evaluation. Paper presented at Issues of Substance Conference 2015; November 16, Montreal, QC.
  26. Titus JC, Feeney T, Smith DC, Rivers TL, Kelly LL, Dennis ML. GAIN-Q3 3.2: Administration, clinical interpretation, and brief intervention. Normal, IL: Chestnut Health Systems, 2013. Retrieved from http://gaincc.org/GAINQ3.
  27. Cacciola JS, Alterman AI, Habing B, McLellan AT. Recent status scores for version 6 of the Addiction Severity Index (ASI-6). Addiction 2011; 106: 1588-1602.
  28. McLellan AT, Cacciola JC, Alterman AI, Rikoon SH, Carise D. The Addiction Severity Index at 25: Origins, contributions and transitions. Am J Addiction 2006; 15: 113-124.
  29. Flannery BA, Volpicelli JR, Pettinati HM. Psychometric properties of the Penn Alcohol Craving Scale. Alcohol Clin Exp Res 1999; 23: 1289-1295.
  30. Humphreys K, Kaskutas LA, Weisner C. The Alcoholics Anonymous Affiliation Scale: development, reliability, and norms for diverse treated and untreated populations. Alcohol Clin Exp Res 1998; 22(2): 974-978.
  31. Booth ML. Assessment of Physical Activity: An International Perspective. Res Q Exercise Sport 2002; 71(2): s114-20.
  32. WHOQOL Group. Development of the World Health Organization WHOQOL-BREF quality of life assessment. Psychol Med 1998; 28: 551–558.
  33. Statistics Canada. Canadian Community Health Survey (CCHS) – Mental Health. Ottawa: Statistics Canada, 2013. Retrieved from http://www23.statcan.gc.ca/imdb/p2SV.pl?Function=getSurvey&SDDS=5015
  34. Falkenstrom F, Hatcher RL, Skjulsvik T, Holmqvist Larsson M, Holmqvist R. Development and validation of a 6-item working alliance questionnaire for repeated administrations during psychotherapy. Psychol Assessment 2015; 27(1): 169-183.
  35. Rush B, Hansson E, Cvetanova Y, Rotondi N, Furlong A, Behrooz R. Development of a client perception of care tool for mental health and addictions: Qualitative, quantitative, and psychometric analysis. Centre for Addiction and Mental Health: Toronto, ON; 2013. Retrieved from http://eenet.ca/dtfp/client-satisfaction-project/.
  36. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979; 86: 420-428.
  37. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas 1960; 20: 37-46.
  38. Tabachnick BG, Fidell LS. Using multivariate statistics 5th ed. Boston, MA: Pearson Education Inc., 2007.
  39. Hollis S, Campbell F. What is meant by intention to treat analysis? Survey of published randomized controlled trials. Brit Med J 1999; 319(7211): 670-674.