By David Tuller, DrPH
The Journal of Psychosomatic Research (JSR), an influential publication, recently published an article that made a crucial point—in clinical trials, subjective outcomes are at “a greater risk of bias due to any unblinding.” The article, which I wrote about here, was authored by the journal’s current editor and two previous editors, both of whom are still on the journal’s advisory board.
The article addressed whether a well-blinded clinical trial of homeopathy, which the journal had published years earlier, should be retracted. The details were complex and of little consequence here, beyond this: The journal’s decision not to retract the study rested on the assessment that the blinding remained robust despite one investigator’s efforts to undermine it. Given the article’s editorial provenance, it seems fair to assume the following passage represents the journal’s position:
“Reporting on the integrity of the blind has merit and is especially valuable when dealing with subjective outcomes for which there is a greater risk of bias due to any unblinding…Subjective outcomes are frequently used in studies that fall within this journal’s scope, at the interface of psychology and medicine. We recommend assessing the integrity of the blind for any clinical trial, particularly those utilizing subjective outcomes.”
One obvious corollary of this point is that extra care must be taken in interpreting subjective outcomes when intervention assignment is not blinded. Another is that, when blinding is not secure, objective outcomes do not present the same risk of bias as subjective ones.
Unfortunately, at least two members of JSR’s advisory board—Professors Michael Sharpe and Peter White, two of the three lead PACE investigators—do not share this cautious view, at least judging by their research history. Now JSR has provided Professor White and several colleagues yet another opportunity to misinterpret the findings of their own research.
The new study is called “Guided graded exercise self-help for chronic fatigue syndrome: Long term follow up and cost-effectiveness following the GETSET trial.” This was essentially a home-based version of the GET intervention tested in PACE. The primary outcomes were self-reported fatigue and physical function. In 2017, the investigators reported in The Lancet that those in the graded exercise self-help (GES) arm reported modest but positive results at 12 weeks post-randomization, compared to those who received so-called standard medical care (SMC) alone. (The GES arm also received SMC.) Professor White was the senior author.
The study was hyped by the UK’s Science Media Centre, a beehive of support for the biopsychosocial ideological brigades. For the Science Media Centre’s round-up of “expert reaction,” some of the usual suspects presented cheery comments. “This study contributes to a body of evidence that graded exercise can help to improve functioning and reduce fatigue in people with chronic fatigue syndrome,” declared Professor Trudie Chalder, the other lead PACE investigator along with Professors Sharpe and White.
The SMC also solicited a comment from Professor Chris Ponting, a geneticist at the University of Edinburgh. Here’s what he said: “The beneficial effect [of GES] was for fewer than 1 in 5 individuals, for an unblinded trial, and there was no consideration of long-term benefit or otherwise. The study could also have exploited actometers that would have more accurately measured participant activity.”
As Ponting noted, the reported benefits were not impressive. The fact that the study was unblinded, he appeared to imply, raised questions about the credibility of even those meager results. In mentioning the decision to forego actometers, he was also highlighting the risk of bias inherent in relying on subjective outcomes in the context of unblinded research. In effect, he was drawing attention four years ago to a significant problem that the journal’s current and former editors addressed last month.
**********
GETSET Follow-Up Fails Upwards
According to the new study, posted on April 2 with Professor White again as senior author, the GES intervention provided no benefits over SMC at one-year follow-up. Yet here’s how the findings were described in the “highlights” section for the paper on the ScienceDirect site: “Guided graded exercise self-help (GES) can lead to sustained improvement in patients with chronic fatigue syndrome.”
Given the null results, this was deceptive. In the abstract, the conclusion was marginally less dishonest but still unacceptable: “The short-term improvements after GES were maintained at long-term follow-up, with further improvement in the SMC group such that the groups no longer differed at long-term follow-up.” (I have not yet been able to access the full study through the Berkeley library; I’m not sure why. Also, it is not clear to me if the investigators or others involved in the publication process wrote the “highlights” section.)
In sum, the findings were presented as if the study had proven the intervention to be effective over the long term—even though the findings documented the exact opposite. The fact that the non-intervention group caught up is treated as a secondary matter—ignored completely in the “highlights” section and downplayed in the abstract’s conclusion. (Another of the four “highlights” is significant but for some reason is not mentioned anywhere in the abstract: “Most patients remained unwell at follow up; more effective treatments are required.”)
As usual with these people, this is not the proper or transparent way to report the results of a clinical trial. The main outcome of a clinical trial—and the first that should be reported, even in a follow-up study—is the comparison between the intervention and non-intervention groups. So let’s be clear: At one year, the GETSET study produced null results for the only important comparison—between the group that received GES and the group that received SMC alone. The only conclusion possible from this study is that GES had no documented long-term benefits.
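To make the statistical point concrete, here is a minimal sketch in Python with invented numbers (nothing below comes from GETSET or PACE data). It simulates a trial in which the comparison arm has caught up by follow-up: the within-group change that the abstract leans on looks impressive, while the between-group comparison at follow-up, the only result that matters, is null.

```python
# Hypothetical illustration only: invented fatigue scores, not GETSET data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # participants per arm (made-up sample size)

# Baseline fatigue scores, higher = worse
baseline_ges = rng.normal(26, 4, n)  # intervention arm (GES + SMC)
baseline_smc = rng.normal(26, 4, n)  # comparison arm (SMC alone)

# One-year follow-up: both arms improve by roughly the same amount,
# i.e. the comparison arm has "caught up"
followup_ges = baseline_ges - rng.normal(5, 3, n)
followup_smc = baseline_smc - rng.normal(5, 3, n)

# Within-group change in the intervention arm: looks like "sustained improvement"
print("GES within-group change:", round((followup_ges - baseline_ges).mean(), 2))

# Between-group comparison at follow-up: the result that actually matters
t, p = stats.ttest_ind(followup_ges, followup_smc)
diff = followup_ges.mean() - followup_smc.mean()
print(f"GES vs SMC at follow-up: difference = {diff:.2f}, p = {p:.2f}")
```

With numbers like these, the only honest summary is “no difference between groups at follow-up,” however large the within-group change in the intervention arm happens to be.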
Any other way of framing the findings—such as the way the investigators have framed them—is spin. And egregious spin at that. Only investigators aware and perhaps scared that their findings undermine the foundational theories of their entire approach to intervention would try to disguise the bad news in this clumsy and anti-scientific manner.
Incidentally, Professor White and his PACE colleagues used this same silly parlor trick when they faced a similar dilemma a few years ago. The long-term PACE results, published in Lancet Psychiatry in 2015, showed no benefits for the CBT and GET interventions over the two comparison groups. As with GETSET and other follow-up studies of these psycho-behavioral treatments, the non-intervention study participants had caught up. And what did the PACE investigators do? As with GETSET, they declared success based on within-group findings. Then they tried to explain away the disastrous fact that they themselves had uncovered: Their prized intervention had no long-term benefits.
In this case, the GETSET follow-up’s hyping of its null results appeared to diss the journal’s own recent admonition that subjective findings have “a greater risk of bias due to any unblinding.” That common-sense wisdom has apparently not yet tempered the continuing passion of the most devoted biopsychosocial brigadiers—a group that includes Professor White—to make unfounded claims of success.
In addition to its fake-news presentation of the GETSET results, the study also suggests that the intervention might be “cost-effective.” Huh? What does it mean for an intervention that produces null results to be cost-effective? Cost-effective at what? I’ll let others dissect that particular claim.
Comments
9 responses to “GETSET Study Reports Null Results for Self-Help Graded Exercise–but Declares Success Anyway”
I hope you will write to the Journal, David, and ask them to correct the paper on the grounds that the highlights and abstract conclusion directly contradict the actual outcome. White et al should not be allowed to get away with using the same trick as PACE – of pretending the long term follow up results show the treatments work, when they actually show the opposite. The peer review and editorial review processes have clearly failed here. I was going to suggest it should be retracted, but I think it’s important that the results showing the treatment is a failure remain in the literature.
David, did you notice the ‘modified ITT population’ of the follow-up analyses?
In SMC only: 2/104 lost to follow-up, 102 analyzed.
In GES&SMC: 10/107 lost to follow-up, 97 analyzed.
I have no access to the full text either, so I don’t know if they discussed or explained this. I don’t think it is usual for loss to follow-up to be greater in the intervention group, unless dissatisfaction or adverse effects are the reason. Another, here unlikely, reason might be that the treatment was so successful that participants did not have time for follow-up assessments because of work obligations.
I am sorry, I might need to withdraw my comment. I see now that I checked the trial figure from the 2017 paper. I am not sure whether this is valid for the new 2021 paper as well.
Thank you David. It’s getting almost comical if it weren’t so dangerous. GETSET presumably puts a new spin on GET for those who haven’t been following this blog. Lightning Process seems to go through rebranding as soon as someone finds the time to search through the companies register to discover one that hasn’t been taken yet. With the burgeoning cases of Long Covid, I suppose it’s looking like money in the bank for this crowd.
Is GES really cost effective?
I understand from the paper’s conclusion that the usual cost-effectiveness threshold wasn’t reached for GES, although apparently there was some ‘uncertainty’.
But the ‘Highlights’ section seems to indicate that GES is likely to be cost-effective.
So what happened?
(I can’t access the full paper so I’m in the dark as to how these can possibly match up.)
“Journal of Psychosomatic Research”
It seems obvious that the first order of business for such a journal would be to find out whether psychosomatic illness actually exists. So where is the evidence that thinking the wrong thoughts creates disease?
For a genius like Regius Professor Sir Simon it should be child’s play to think the wrong thoughts and give himself an illness. After all, he claims that millions of ignorant patients have done this without even trying.
So how ’bout it, Sir Simon? We know you and your pals pay close attention to your critics and read Dr Tuller’s reports. I dare ya. I double dare ya. Go ahead and think some disease-causing thoughts and prove us wrong.
“Only investigators aware and perhaps scared that their findings undermine the foundational theories of their entire approach to intervention would try to disguise the bad news in this clumsy and anti-scientific manner.”
So that would be fraud then?
It’s time that physical and mental health therapies were subject to a Yellow Card scheme.
As we know, Graded Exercise is harmful for ME sufferers, and the GET/CBT cabal are now talking to insurance companies and psychologising Long Covid in the same way, so they can get funding and jobs. There is no care for the patients; it’s purely career gains.
Thank you David for the forensic work you do. I will be contributing to your fund.
All these weak studies being pumped out by the BPS brigade recently make me think… Why now? I think it’s been a strategic move to try to influence the NICE committee, to try to come up with new documentation that it works.
Even though a lot of these papers were probably submitted before the new draft was published, my guess is that a lot of the BPSers knew that CBT/GET was probably at risk of being cut in the new draft. So they launched a counterattack by submitting studies to journals that were supposed to be published while NICE was in its evaluation period for the new draft, to try to prove the merit of these ineffective and possibly harmful treatments.
I don’t think it’s a coincidence that so many of these have been published now. Some of them with old, weak data that’s been lying around for years and years. Why not publish them when the data was new?