Occasionally, for reasons unknown, a researcher or researchers decide to make scholarship more of a vendetta than a scientific enterprise. Such is the case with O.K. Østergård and colleagues, investigators from Denmark.
It started with a meta-analysis of the Partners for Change Outcome Management System (PCOMS) that they conducted, which concluded that PCOMS was only minimally effective with mild distress, was not effective with “psychiatric disorders,” and that feedback effects were largely attributable to social desirability and researcher allegiance.
Østergård, O. K., Randa, H., & Hougaard, E. (2018). The effect of using the Partners for Change Outcome Management System as a feedback tool in psychotherapy—A systematic review and meta-analysis. Psychotherapy Research, 30(2), 195–212. https://doi.org/10.1080/10503307.2018.1517949
Jacqueline Sparks and I thoroughly debunk their flawed analyses and ridiculous conclusions in a published response.
Duncan, B. L., & Sparks, J. A. (2020). When meta-analysis misleads: A critical case study of a meta-analysis of client feedback. Psychological Services, 17, 487–496. https://doi.org/10.1037/ser0000398
Bottom Line: Despite the sophisticated statistics, the lack of scholarly scrutiny of the included studies, many of which did not include enough sessions for a feedback effect to occur (<4) and/or did not achieve even moderate adherence to the PCOMS protocol, resulted in a misleading over-attribution of significance to studies of dubious methodology, as well as questionable claims about social desirability and researcher allegiance. Read a summary of our response here.
And they are at it again:
Østergård, O. K., Nilsson, K. K., & Gonzalez, A. D. P. (2021). The Partners for Change Outcome Management System: A pilot effectiveness randomized clinical trial. Nordic Journal of Psychiatry. https://doi.org/10.1080/08039488.2021.1921265
While the peer-review process is not perfect and it is difficult for reviewers to have a grasp of all the different content areas supporting a given article, this article is particularly troublesome in its methodology, scholarship, and misrepresentation of the literature.
First, the Study Itself Contains Numerous Design Flaws
For example, including an unknown (not reported) number of one-session participants in an analysis of feedback is erroneous on its face, given that those participants would have had no opportunity for feedback. Moreover, there is no circumstance in which any intervention could show an effect with 40% adherence, particularly when therapists reported that they did not find the intervention helpful.
Conducting a study where “PCOMS …was met with resistance by many therapists” without addressing these adherence problems in the study design is suspect. PCOMS is intended to be administered in each and every session and, moreover, discussed with clients to collaboratively change treatment if the benefit is not forthcoming.
They excuse this design flaw by analyzing 19 participants with at least 50% adherence. Administering the measures in more than 50% of sessions, without any knowledge of fidelity to the PCOMS protocol of discussing the results with clients, is still inadequate. Given the reported attitude of the therapists, it is unlikely that any such discussion occurred. The literature contains many examples of addressing adherence successfully.
While reviewers cannot be fully aware of all aspects of the literature relevant to a specific study, the authors should not only be aware of it but also report it accurately. The authors cite their own and other meta-analyses as undisputed facts, reiterate their flawed conclusions, and fail to mention any critique of the cited studies or the multiple exceptions to their interpretations.
This is particularly troublesome because they cite the critique article (Duncan & Sparks, 2020), but do not mention the major point of the critique, i.e., that the studies upon which they draw their conclusions, similar to the current study, had significant adherence problems and often did not include enough sessions for a feedback effect to occur.
They ignore the dubious methodology of included studies, all documented in the critique article they cite. Not mentioning the critique is unacceptable scholarship, especially when presenting such strong assertions about an intervention’s effectiveness.
As an example of the authors’ ignoring published exceptions to their claims, also documented in Duncan and Sparks (2020), consider Brattland et al. (2018). They cite this study to support the adherence difficulties their investigation experienced. They do not, however, report that the Brattland study offered a notable exception to their claim that independent measures do not show a PCOMS intervention effect.
Brattland et al. found the same effect size on the BASIS-32 as on the ORS. This is only one example; they report none of the many documented exceptions to their flawed conclusions.
It is difficult not to wonder about the intentions of researchers who misrepresent what is written and ignore published criticisms and exceptions. Their interpretations far overreach the data and are unfortunately amplified to near-“fact” status through the authors’ repetition.
We Had Hopes
We contacted the editor and associate editors of the journal that published this study, the Nordic Journal of Psychiatry. In the interest of upholding the standards of ethical and accurate scholarship, we requested the opportunity to respond to this article in any format. While we realized the risk the journal faced in exposing the flaws of its review process, we had hopes that ethical scholarship would prevail. It didn’t.
Ten randomized clinical trials demonstrate the power of PCOMS to improve outcomes and reduce dropouts, including the latest conducted in an integrated care setting with Better Outcomes Now.