By David Tuller, DrPH
*This post has been corrected. I initially wrote that the protocol did not appear to call for sub-group analyses, but it appears it did. I also included a passage that is from documentation in the supplementary material but identified it as coming from the protocol. I apologize for the errors.
Dutch investigators appear to be on a roll—they are producing so much crap that it is hard to keep track. (I have recently written multiple posts about a Dutch study of CBT for fatigue associated with long Covid.) The latest to come to my attention is a real mess called “Effectiveness of psychosomatic therapy for patients with persistent somatic symptoms: Results from the CORPUS randomised controlled trial in primary care,” published in the Journal of Psychosomatic Research.
As I have noted before, the Journal of Psychosomatic Research appears to be something of a house organ for the psychologizing camp. Professor Michael Sharpe, a psychiatrist and one of the principal investigators of the discredited and arguably fraudulent PACE trial, is on its advisory board. That pretty much says it all. Interestingly, current and previous editors of the journal noted in a commentary published a few years ago that unblinded studies relying on self-reported outcomes are more prone to bias than those including objective measures. That recognition does not seem to prevent the journal from accepting reports about studies that fit this pattern—including the doozy of a disaster discussed in this post.
Persistent somatic symptoms (PSS) is another name for physical complaints that resist easy biomedical explanations. This category has also been labeled as medically unexplained symptoms, somatic symptom disorder, persistent physical symptoms, and bodily distress syndrome, although definitions of these terms can vary somewhat. Recently, the term functional disorders has gained currency. Many in the medical field, including the authors of this study and the journal in which it was published, continue to describe these ailments as psychosomatic, an older term often perceived by patients as derogatory and dismissive. Here is the study’s description of the construct in question: “The term Persistent Somatic Symptoms (PSS) refers to a heterogeneous group of physical symptoms such as chronic widespread pain, headache, dizziness, fibromyalgia, chronic fatigue and irritable bowel syndrome that cannot be directly attributed to detectable underlying diseases or an organic pathology.”
The trial included 169 patients with PSS drawn from 39 “general practices” and 34 “psychosomatic therapists.” Half received the intervention, which consisted of six to 12 sessions of psychosomatic therapy “delivered by specialised exercise- and physiotherapists.” The other half received standard care. In other words, the trial was not in fact “controlled,” as claimed in the title, because the investigators made no apparent effort to “control” for the effect of the time and attention lavished upon those receiving the intervention. The primary outcome was the patient’s level of functioning on three activities that they themselves had selected as important, with separate analyses for each of these three activities. Secondary outcomes included “severity of physical and psychosocial symptoms, health-related quality of life, health-related anxiety, illness behaviour and number of GP contacts.”
What is psychosomatic therapy? Good question. Here’s the description from the trial registration:
“It is a multi-component, stepped-care and tailor-made approach and includes the following modules: (1) psycho-education, (2) relaxation therapy and mindfulness, (3) cognitive behavioural approaches and (4) activating therapy. Psychosomatic therapy is captured in a treatment protocol which allows the therapists to change the intensity, frequency and order of the four modules in order to deliver a tailor-made approach. In the psychosomatic therapy sessions the therapist together with the patient explores and treats somatic symptoms by integrating the physical, cognitive, emotional, behavioural and social dimensions of the symptoms presented.” According to the paper, “the overall aim of the treatment is to improve patients’ functioning by stimulating self-regulation and empowerment to regain control over [their] own health.”
Sounds to me like standard CBT/GET-type material.
Despite including dozens of assessments of the primary and secondary outcomes at five and 12 months, the authors made a crucial decision: their statistical analysis did not include adjustments for multiple testing. Such adjustments are a standard technique for studies loaded with outcomes. Multiple outcomes increase the likelihood that some will yield positive results due to random variation alone. Not adjusting for this factor means that any positive findings must be treated with great caution, which of course undermines the ability to make any authoritative claims. The investigators did not explain why they chose to skip this key step of the analysis—perhaps because there does not seem to be a very good explanation.
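To see concretely why the lack of adjustment matters, here is a quick illustrative simulation. It is not based on the paper's actual data; the figure of 20 outcome comparisons is an assumption chosen purely for illustration. It shows that even when a treatment does nothing at all, testing many outcomes at the conventional p < 0.05 threshold will usually turn up at least one "significant" result, and that a simple Bonferroni adjustment (dividing the threshold by the number of tests) brings the false-positive rate back down.

```python
import random

random.seed(1)

ALPHA = 0.05
N_OUTCOMES = 20      # assumed number of outcome comparisons, for illustration only
N_TRIALS = 10_000    # simulated trials, each with NO true treatment effect

# Under the null hypothesis, each test's p-value is uniform on [0, 1].
def fraction_with_false_positive(threshold):
    """Fraction of simulated trials with at least one p-value below threshold."""
    hits = 0
    for _ in range(N_TRIALS):
        pvals = [random.random() for _ in range(N_OUTCOMES)]
        if any(p < threshold for p in pvals):
            hits += 1
    return hits / N_TRIALS

unadjusted = fraction_with_false_positive(ALPHA)
bonferroni = fraction_with_false_positive(ALPHA / N_OUTCOMES)  # Bonferroni correction

print(f"Chance of >=1 'significant' outcome, unadjusted: {unadjusted:.2f}")
print(f"Same chance with Bonferroni adjustment:          {bonferroni:.2f}")
```

With 20 independent null comparisons, the unadjusted chance of at least one false positive is 1 − 0.95^20, roughly 64 percent; the Bonferroni-adjusted version stays near the intended 5 percent. With dozens of comparisons, as in this trial, a handful of unadjusted "modest benefits" is close to what pure noise would be expected to deliver.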
High expectations, disastrous results
The investigators declared their confidence in the intervention in documentation included in supplementary material: “We expect that psychosomatic therapy for patients frequently attending primary care with medically unexplained symptoms improves daily functioning, decreases severity of the symptoms and the care consumption,” they wrote. *[I initially referred to this passage as coming from the trial protocol. The actual protocol contains similar but somewhat different language.]
Yet none of those expectations were borne out. There were no statistically significant differences between the groups in the primary outcome of patient functioning or in the secondary outcomes of symptom severity and frequency of health care consumption, as measured in contacts with clinicians. Yet these disappointing findings did not discourage the investigators, who concluded anyway that “the psychosomatic therapy appears promising for further study.”
Huh? How’d they derive that conclusion from this wreckage? Well, a few of their very many secondary measures yielded modest statistical benefits. Whether these findings would have survived adjustments for multiple tests is unknown, given the failure to conduct that analysis. Whether any of the results were clinically significant, in addition to having been found to be of modest statistical significance, is another question entirely.
To bolster their argument, the investigators conducted a sub-group analysis by separating the participants into those with “moderate” and those with “severe” PSS and declared that the moderates experienced some benefit from the psychosomatic therapy. But here’s the weird thing. The table with this analysis highlights only one of the three activities included as part of each person’s primary outcome, with no explanation given for excluding the other two. Moreover, the main data point offered in the abstract for these results about moderate sufferers appears under a column heading that, if correct, indicates that all the participants in this group were lumped together, regardless of whether or not they received the intervention. (I assume the column heading is an error.) Unless I’m misreading all this, which is always possible, the effect is the statistical version of gibberish. [I have removed a section from this paragraph to correct the error noted above. See end of post for the deleted passage.]
Of course, perhaps the intervention’s failure is the fault of the participants. As was pointed out by an astute observer on the Science For ME forum, that seems to be the implication of statements like the following: “We may have included patients who did not always expect a psychosomatic approach.” Excuse me? If that were the case, why would they have agreed to be in a trial of a therapeutic intervention that explicitly bills itself as embodying a psychosomatic approach? Are the investigators suggesting that participants were too stupid to read or understand the materials provided to them in the course of enrollment?
And this: “Psychosomatic therapy aims at behavioural change and readiness to change might influence a positive outcome…Some patients may lack this readiness to change.” Again, if they did not harbor sufficient “readiness to change,” why would they have agreed to enter the trial? How much “readiness to change” is required for the psychosomatic therapy to work? Are patients required to be 100% convinced of the intervention’s effectiveness in order for it to be effective? If that’s the case, is there any actual point in spending the money on a clinical trial?
And remember—this was an unblinded study relying solely on subjective responses. It was already prone to significant bias because of this study design. Even with that undeniable built-in advantage, the investigators had to scrape the bottom of the barrel to find anything positive to report. Who peer-reviewed this nonsense, and why did they approve it for publication? Why didn’t editors at the Journal of Psychosomatic Research flag these issues themselves?
Whatever. Notwithstanding the investigators’ plea for more research based on their assurance that the findings were “promising,” there’s no way to pretty up this pig with lipstick. The investigators might be impervious to embarrassment, as seems the case with many in this domain of research. But the journal should be thoroughly ashamed of its failure to conduct appropriate vetting.
Deleted passage: “This sub-group analysis was not mentioned in the trial registration or in what I think was the trial protocol. That suggests that it was perhaps added only after the investigators realized their overall results were disastrous. Such exercises in what is called data-mining are often likely to yield some sort of positive findings, no matter how tenuous.”
(Originally posted on Virology Blog.)