PCOMS Research, Mischaracterizations & Limitations
Doing research is hard, especially original research and data collection (as opposed to harvesting existing data), and particularly in real clinical settings. Believe me, I know. And no research is perfect, including ours about PCOMS. There are things you simply cannot control, things that happen during data collection that you didn't count on, and feasibility issues that must be balanced in real clinical settings. But that is what the limitations section of the "Discussion" is for: identifying the confounding variables and alternative explanations that potentially limit the conclusions gleaned from the study. That, you do have total control over. You also have control over how you position your study and how you write your introduction, that is, how you reference and describe existing research. Unfortunately, mischaracterizations occur that potentially misinform readers and promulgate misunderstandings. A fair amount of that is happening with PCOMS.
To illustrate the mischaracterization and limitation issues, consider a study just published in a prestigious journal:
Pauline D. Janse, Kim de Jong, Maarten K. van Dijk, Giel J. M. Hutschemaekers, & Marc J. P. M. Verbraak (2016): Improving the efficiency of cognitive behavioural therapy by using formal client feedback. Psychotherapy Research. DOI: 10.1080/10503307.2016.1152408
Here is the abstract:
Objective: Feedback from clients on their view of progress and the therapeutic relationship can improve effectiveness and efficiency of psychological treatments in general. However, what the added value is of client feedback specifically within cognitive-behavioural therapy (CBT), is not known. Therefore, the extent to which the outcome of CBT can be improved is investigated by providing feedback from clients to therapists using the Outcome Rating Scale (ORS) and Session Rating Scale (SRS). Method: Outpatients () of a Dutch mental health organization participated in either the "treatment as usual" (TAU) condition or the Feedback condition of the study. Clients were invited to fill in the ORS and SRS, and in the Feedback condition therapists were asked to frequently discuss client feedback. Results: Outcome on the SCL-90 was only improved specifically with mood disorders in the Feedback condition. Also, in the Feedback condition, in terms of process, the total number of required treatment sessions was on average two sessions fewer. Conclusion: Frequently asking feedback from clients using the ORS/SRS does not necessarily result in a better treatment outcome in CBT. However, for an equal treatment outcome significantly fewer sessions are needed within the Feedback condition, thus improving efficiency of CBT.
Before I make my points, let me reiterate that doing research is hard, and that this study has many good and interesting things about it. It is a nice finding that PCOMS produced better outcomes on the ORS in general, better outcomes with "mood disorders" as measured by the SCL-90, and greater efficiency.
Regarding positioning the study and the introduction: First, the abstract, intro, and continuing narrative often refer to the use of the ORS and SRS in treatment rather than calling it the Partners for Change Outcome Management System, or PCOMS. PCOMS is a specific, designated evidence-based methodology that involves far more than merely administering the forms; it includes a systematic, transparent conversation with clients based on their responses.
Second, the intro suggests that dropouts have not been previously addressed, but dropouts and retention have in fact been addressed in two PCOMS studies: both Schuman et al. (2015) and Slone et al. (2015) found higher retention rates.
Third, the intro makes the very good and valid point that using different or additional instruments (beyond the ORS) is preferable and can confirm (or disconfirm) the changes noted on the ORS. But the authors surprisingly assert that the ORS is not a valid outcome measure because it is used as an intervention during treatment, and that alternative measures are largely missing in PCOMS research. This is misleading (and quite an assertion regarding the ORS being invalid, one that is repeated in the discussion) because three PCOMS studies have used additional measures and found an advantage for PCOMS on those alternate measures. First, the Norway couple study (Anker et al., 2009) used both separation/divorce rates and the Locke-Wallace Marital Adjustment Test. PCOMS achieved significantly lower separation/divorce rates and trended toward better outcomes on the Locke-Wallace (power was insufficient given the lower number of intact couples). In addition, the ORS was mailed to clients at 6-month follow-up and the feedback effect was maintained; at that point it served purely as an outcome measure.
Next, the group study with Iraq and Afghanistan vets with substance abuse problems (Schuman et al., 2015) found a significant difference in blinded commander ratings for soldiers in the PCOMS condition (as well as in unblinded clinician ratings). Finally, in the UK study of school counseling and PCOMS (Cooper et al., 2013), the Strengths and Difficulties Questionnaire confirmed, with both parents and teachers, all the gains noted on the CORS.
Regarding the unstated limitations in the "Discussion" section: The assertion that the ORS is not a valid outcome measure continues in the "Discussion," and the authors even assert that their study calls into question the interpretation of previous PCOMS studies. Wow. Again, quite an assertion based on this one study, and one that conveniently leaves out the three studies mentioned above that supported the PCOMS gains with other measures. I do agree that more research on this issue is warranted, and we are working on it.
In the limitations discussion, the authors did mention that the intervention-integrity check of the feedback condition was limited to a physical check of the charts. What they didn't mention is that no real-time fidelity checks were conducted to see whether therapists were actually doing PCOMS. Moreover, they also didn't mention that the physical check of the charts revealed that 11% of the charts were missing, and that of the remaining 419, 23.2% had no evidence of PCOMS: no measures, no graphs, no mention. In other words, 97 clients had no PCOMS, and we really don't know for how many of the remaining clients PCOMS was delivered with fidelity. When nearly a quarter of the clients (at least) included in the analysis didn't have PCOMS as part of their therapy, the conclusions are severely limited. This should have been mentioned in the limitations section, and perhaps the analysis should have excluded these clients given that PCOMS was not done.