By David Tuller, DrPH
In early April, I wrote about a study published in the Journal of Psychosomatic Research—a one-year follow-up of the GETSET trial of self-help graded exercise therapy for ME/CFS. The investigators had previously reported short-term benefits for the intervention. In this new paper, despite the intervention showing no benefits over regular care at one year, the team reported success because the intervention arm did not score worse than it had at 12 weeks. Needless to say, this is not the proper way to report clinical trial results.
On April 24th, I sent a letter to Professor Jess Fiedorowicz, editor-in-chief of the Journal of Psychosomatic Research. He responded quickly and promised to review the matter with journal colleagues. Given the August deadline for the National Institute for Health and Care Excellence to publish the final version of its revised ME/CFS guidance, I sent a follow-up letter today to try to nudge the journal to respond sooner rather than later.
Below I have posted the exchange.
Dear Dr Fiedorowicz–
The Journal of Psychosomatic Research recently published a study called “Guided graded exercise self-help for chronic fatigue syndrome: Long term follow up and cost-effectiveness following the GETSET trial.” Professor Peter White, a member of the journal’s advisory board, is the senior author.
In this clinical trial, the investigators were testing a self-help graded exercise program, which they reported had shown some short-term benefits. According to the new study, Clark et al, the intervention provided no benefits at one-year follow-up over specialist medical care (SMC). Yet here’s how the findings were described in the “highlights” section: “Guided graded exercise self-help (GES) can lead to sustained improvement in patients with chronic fatigue syndrome.”
Given the null results, this description is troubling. In the study abstract, the conclusion is marginally better but still unacceptable: “The short-term improvements after GES were maintained at long-term follow-up, with further improvement in the SMC group such that the groups no longer differed at long-term follow-up.”
In sum, the results were presented as if the trial had shown the intervention to be effective–even though the one-year findings should have reasonably led to the opposite assessment. The fact that the non-intervention group scored the same at the end is framed as a matter of lesser significance–ignored completely in the “highlights” section and given second billing in the abstract’s conclusion.
(I wrote about the problems with Clark et al in a post on Virology Blog, a science site hosted by Vincent Racaniello, Higgins Professor of Microbiology at Columbia University. I have cc’d him on this letter.)
The main outcomes of a clinical trial are the comparisons between the intervention and non-intervention groups, not within-group comparisons. The main point that should have been made in both the highlights and the conclusion is this: At one year, there were no benefits in the group that received the self-help graded exercise plus SMC over the group that received SMC alone. In sum, the study had null results.
Any other way of framing the findings is inappropriate. The framing of Clark et al is a clumsy effort to gloss over the fact that the intervention failed to show benefits at one year by standard clinical trial metrics.
This is the same problematic strategy Professor White and colleagues used to handle the long-term follow-up to the discredited PACE trial, which at that point also showed no benefits for the interventions over SMC. In Lancet Psychiatry, the authors claimed success because of the “within-group” comparisons and presented the null results for the between-group comparison as an afterthought. That is spin. It is not proper science.
Beyond this issue, Clark et al’s presentation of its results appears to undermine the spirit of an admirable recent commentary in the journal, in which you and two of your predecessors as editor noted that subjective outcomes have “a greater risk of bias due to any unblinding.” (I wrote about this commentary on Virology Blog.)
GETSET was an unblinded trial relying on subjective outcomes—exactly the kind that would be most “at risk of bias,” per your cogent commentary. It is therefore perplexing that the Journal of Psychosomatic Research did not apply a more rigorous approach to evaluating Clark et al than appears to have been the case.
The presentation of what are clearly null results as a success suggests that the peer review was less than rigorous. The trial design itself also stands in apparent disregard of the journal’s own professed position on subjective outcomes and the importance of blinding. Perhaps peer review procedures are different for papers authored by the journal’s advisory board members.
As you might know, the UK’s National Institute for Health and Care Excellence is currently developing new clinical guidance for ME/CFS. The final version is scheduled to be published in August. That means Clark et al still has the unfortunate potential to influence the deliberations. I am therefore cc-ing several clinician and patient members of the NICE ME/CFS guidance committee, to alert them to the concerns about the study.
At this point, the Journal of Psychosomatic Research should correct any statements in Clark et al implying that the most important findings are the intervention arm’s within-group comparisons rather than the null results for the comparisons between the groups. Thank you for your attention to this matter.
David Tuller, DrPH
Senior Fellow in Public Health and Journalism
Center for Global Public Health
School of Public Health
University of California, Berkeley
Thank you for bringing this concern to our attention. I have forwarded this and will discuss with our publisher, Associate Editors, and immediate past Editors and respond following discussion with the broader group.
Best wishes in life, work, and advocacy,
Jess G. Fiedorowicz, M.D., Ph.D.
Editor-in-Chief, Journal of Psychosomatic Research
Adjunct Faculty, Departments of Psychiatry, Epidemiology, and Internal Medicine
The University of Iowa
It has been almost two weeks since I alerted the journal to the problems with the reporting of the GETSET one-year follow-up results. By promoting the within-group comparison for the intervention arm rather than the null results of the between-group comparison–a form of outcome-swapping–Professor White and his colleagues engaged in a deceptive presentation of their data. As I have pointed out, Professor White was already familiar with this strategy, since the PACE trial follow-up paper similarly downplayed null results for the between-group comparisons by highlighting the within-group comparisons first. Perhaps Professor White and colleagues are unaware that this is not an acceptable way to report clinical trial results, even at follow-up.
As you know, there is a hard deadline here–the National Institute for Health and Care Excellence is planning to release the final version of its revised ME/CFS clinical guidance in August, and deliberations are ongoing. It would be fair to assume this paper is being raised by some committee members to push for GET to be re-endorsed, since the draft version released in November recommended against it. The GETSET follow-up reads like a paper designed to influence the NICE debate, although whether that was in fact the goal I have no way of knowing.
The urgency of the matter means the standard academic tendency to examine and debate issues for weeks and months is not appropriate in this instance. Cc-ing a few NICE committee members, as I did with my initial letter to you and am doing again here, is also insufficient to avert the potentially disastrous outcome of having this study taken at face value. The Journal of Psychosomatic Research has an obligation to make it clear sooner rather than later that this follow-up report documented null benefits for GET at one year and should not have been framed as evidence for the effectiveness of the intervention.
Beyond that, the journal should investigate–and explain to the public–how and why a paper with such an elementary flaw passed peer review in the first place. My Berkeley epidemiology colleagues would be dismayed if their students reported study results in this dishonest fashion.
Going forward, the journal’s recent affirmation of the bias inherent in subjective outcomes when blinding is not rigorous should certainly be taken into account during the peer review process. If your laudable editorial on this important issue is to amount to more than pretty phrases, perhaps the journal should refrain in future from accepting any unblinded studies that rely solely on subjective outcomes–even though this is the sort of study design favored by Professor White and other members of your editorial advisory board.
By your own standards, the reported results of such research are inevitably fraught with bias, rendering them of questionable value. Given that these studies can nonetheless impact both health policy and clinical decision-making, routinely peer-reviewing and publishing them would appear to be unwarranted as a scientific matter as well as antithetical to the interests of patients.
I look forward to hearing how the journal plans to address these critical matters involving both the GETSET follow-up and the larger issue of unblinded studies relying on subjective outcomes. Thanks!
6 responses to “My Letters to Psychosomatics Journal About Prof White’s Misleading GETSET Paper”
In any case – i.e. even if the NICE deliberations weren’t in the frame – when patient health and lives are in peril, then problems with the scientific literature need to be exposed, interrogated and corrected post-haste. Thanks for exposing this David, and for spelling out to the editor exactly what the issues are so that his job is made very easy indeed. I fear that some doctors have become so wrapped up in the glory of their academic positions that they’ve forgotten that mistakes cause serious harm to patients and that delays in correcting them increase the harm arising.
Will Professor White the journal advisor go against Professor White the author? Perhaps he is also one of the peer reviewers.
I find it hard to believe that *anybody* takes this journal seriously, and that includes the journal editors and advisors.
They seem to be mocking advocates by publishing obvious rubbish, then laughing at everyone, secure in the knowledge that they are protected by the highest authorities in UK society, as evidenced by the continuous stream of awards and titles.
Their arrogance will be their downfall. And it is going to leave a mark.
In 2015, on the day of the publication of the follow-up results to the PACE trial, I received an email from my sister, who had been continuously sceptical of my 18 years of ME, stating that she had heard on BBC Radio 4 that a trial published in the Lancet, no less, showed ME could be improved with CBT and GET. She attached a copy of the Lancet trial publication, which I proceeded to read. It was immediately obvious to me that the follow-up trial had produced null results. I replied to her email asking if she had read the Lancet article, which she had bothered to seek out in order to send it to me as “proof”. She hadn’t. She took the reporting of it in respected UK media at face value, which is what most members of the public would do. But these members of the public are the family and friends of people with ME, and the devastating impact of these misreported trial results on ME patients’ lives is immeasurable.
Thanks for sharing that Eliza. I think you provide a very good example of the potential for iatrogenic harm that can be caused by this flawed BPS model and its accompanying faulty research. People don’t appreciate the harm that can be done e.g. to people’s relationships and careers when organic symptoms are interpreted as being psychologically based. If patients believe that difficult relationships are causing their symptoms then they may ditch those relationships or take out their frustration on those who seem to be the source of their stress. If the people who would normally support you when you’re ill don’t believe that your symptoms are ‘real’ then they may view you as not pulling your weight, as a party pooper, a hypochondriac or as a ‘head case’. If you’re led to believe that your stressful job is making you ill then you may change jobs and/or sacrifice a successful career for the sake of your health. This is iatrogenic harm in spades with patients’ confidence and support networks being stripped away, and that’s without the biological harms that might arise from being put on inappropriate psych drugs (which is likely to happen if you don’t make any progress with psych therapies). If doctors believe in a truly ‘biopsychosocial’ approach then surely they should be taught the psychosocial harms that can be caused by imposing a false psychosomatic label on patients?
1. Find LYING, EXAGGERATING, INCOMPETENT, and/or IGNORANT weasels.
2. Hold their feet to the fire, publicly if possible.
3. Repeat ad infinitum.
Thank you for all you do.
Thank you David for all the work you do advocating on behalf of patients. I wish these psychiatrists would leave ME/CFS alone. They should have no business in it, yet here we are, in 2021 still battling against their useless (sometimes harmful) GET treatments. We should be treated by neurologists and immunologists, not psychiatrists.