By David Tuller, DrPH
For years, Professor Esther Crawley, the University of Bristol’s methodologically and ethically challenged ME/CFS investigator, has hoovered up millions of pounds from public and private funders to support her misbegotten research. She achieved this success as a grant magnet despite abundant and easily available evidence that she was violating core principles of scientific research.
Now, perhaps, the disastrous results of a much-ballyhooed study—“Graded exercise therapy compared to activity management for paediatric chronic fatigue syndrome/myalgic encephalomyelitis: pragmatic randomised controlled trial”—could help end her long “reign of error.” If so, her dominant impact on the treatment of British kids with the disease will hopefully dwindle or disappear altogether, along with her undeserved reputation as an authoritative and credible voice in this domain. The sooner that comes to pass, the better for families throughout the realm.
It must be said that this new article, in the European Journal of Pediatrics, represents something of a change for Professor Crawley—she and her colleagues report that the trial, nicknamed MAGENTA, yielded null results for the primary outcome of self-reported physical function at six months. Given that leading members of the GET/CBT ideological brigades, like Professor Crawley, long ago adopted a Trumpian strategy toward evidence and truth, this unvarnished acknowledgement that their much-touted treatment approach has proven useless is, frankly, surprising.
In MAGENTA, 123 participants were offered GET and 118 were offered an intervention called “activity management” (AM). In the paper’s explanation, the latter sounds like GET-lite. However, it is also described as a form of “pacing”—which is odd, because AM as described, with a focus on gradually increasing activity, does not conform to the understanding of pacing widely shared by patients. One question about the study is why Professor Crawley believed any difference in outcomes between these two similar-sounding interventions would be detectable.
Maybe things woulda been sorta-kinda okay for Professor Crawley and her team if participants in either arm had, you know, gotten better. But that’s not what happened. “There was no evidence that GET was more effective or cost-effective than AM in this setting, with very limited improvement in either study group evident by the 6-month or 12-month assessment points,” notes the abstract. Oops! Overall, the results for the secondary outcomes—including actigraphy, an objective measure of movement—were similarly disappointing.
Professor Crawley undoubtedly understands that everything she produces these days will come under more rigorous scrutiny in the post-publication period than it has received from the incompetent and cheerleading peer reviewers and journal editors who have routinely approved her nonsense. She presumably knows, for example, that she cannot present a trial as prospective if more than half the participants were recruited before registration. When she pulled that stunt in her 2017 report about the Lightning Process trial, it led to a 3,000-word correction and a 1,000-word editor’s note offering a pathetic rationale for republishing the original findings.
Professor Crawley also likely knows that related professional stumbles—such as accusing me in lectures of “libellous blogging” and then refusing to respond to requests for evidence, or informing me at a public presentation that Bristol had sent me a “cease-and-desist” letter when no such letter had been sent—have further tarnished her reputation. At the time, Bristol’s administration did not distinguish itself in this matter either. The university behaved as if we were in a Sopranos episode, engaging in thuggish threats to my employment in the form of complaints to Berkeley’s chancellor about my “behaviour.” Berkeley determined these complaints were meritless and ignored them.
Who knows what Bristol’s legal department really thinks about Professor Crawley’s antics? Presumably they, and she, hope to avoid further high-profile pratfalls. Maybe that accounts for the unexpected display of honesty in this latest paper.
**********
Long delay from trial to publication
Now, I don’t give Professor Crawley much, if any, credit here for integrity. Had there been a way to make these stinky findings smell like lavender or disappear completely, I imagine it would have been found. In any event, the record does not suggest any particular sense of urgency with regard to publication. The trial recruited participants from 2015 to 2018, but the draft wasn’t submitted to the European Journal of Pediatrics until October of 2023. That’s an extremely long time for a team of investigators to spend analyzing data and writing up results.
What was going on all that time? Were earlier drafts rejected elsewhere because they presented doctored data or because no journals were interested in publishing null findings? Or did Professor Crawley just want to prevent the bad news from becoming public for as long as conceivably possible? She must have known these results would be an enormous embarrassment for her and the entire GET/CBT paradigm. This debacle certainly aligns with the 2021 ME/CFS guidelines from the UK’s National Institute for Health and Care Excellence, which rescinded the agency’s prior recommendation for GET.
This is still an Esther Crawley study, however, so something has to be wrong. Indeed, the conclusion manages to sneak in the unjustified suggestion that the “lack of improvement in physical function may be explained by the low intensity of therapy sessions.” In this context, “low intensity” appears to mean that participants attended fewer sessions than initially expected. The authors believed that participants in both arms would likely seek between eight and 12 sessions. As it turns out, the mean number of sessions was 3.9 in the GET arm and 4.6 in the AM arm.
It seems that the participants did not find the sessions as useful as anticipated. The authors provide no evidence to indicate that greater intensity—that is, more sessions—would have produced signs of improvement. The mention of “low intensity” as a possible explanatory factor is perhaps a subtle appeal for yet more funding to explore ways to encourage participants to increase the number of sessions. Granting any further funding to Professor Crawley for this or any other research would certainly be unwise, but UK funding bodies have shown repeatedly that they are stupid enough to do just that.
This study shared an unusual feature with Professor Crawley’s fraudulent Lightning Process study. In both, Professor Crawley conducted a feasibility study and then folded those participants into a full-fledged trial while changing outcome measures midway through. It is hard to understand why anyone would consider this to be an acceptable way to conduct a trial; it is disturbing that any ethical review board would approve it.
The point of a feasibility trial, if there is one, is to test the feasibility of conducting a full-scale trial. Based on the data received, you then select outcomes and conduct a completely separate and larger trial. You don’t get to wave a wand and somehow transform your feasibility trial participants into full trial subjects while selecting your final outcome measures halfway through. Has anyone ever done such a thing besides Professor Crawley?
In the Lightning Process study, she and her colleagues pretended they hadn’t done what they did; the trial report, as a result, was concocted of lies. It was a flagrant example of research misconduct, so I found BMJ’s decision not to retract the paper an astonishing abrogation of responsibility. Nonetheless, such an episode would be mortifying for any investigator, even in the absence of a retraction, and it made Professor Crawley and her colleagues look like fraudsters.
I assume Professor Crawley might have been inclined to engage in similar shenanigans with the MAGENTA trial report if that previous effort hadn’t been exposed. Instead of lying about the study’s peculiar feasibility-trial-into-full-trial design, she and her colleagues acknowledge it and concede that it is a limitation. Here’s what they write:
“Whilst we registered MAGENTA before starting recruitment during the feasibility phase, we did not confirm the primary outcome measure until the first full trial protocol…; we recognise this invites the accusation that we used the data collected during the feasibility phase to select the primary outcome.”
Well, yes, exactly. Why shouldn’t it invite that accusation? And why didn’t the authors pre-designate their outcome measures in the first place? I mean, a key aspect of a well-designed clinical trial is that you declare what you’re studying before you start. If you collect data before designating your outcomes, it is understandable that you would be accused of using feasibility trial data to select the primary outcome. Indeed, the accusation would have been entirely appropriate and necessary had MAGENTA’s primary outcome produced anything other than null results.
As it is, this unacceptable study design ensured that the many hundreds of thousands of pounds spent would be a waste no matter what the results. Someone at the funding agency, the UK’s National Institute for Health and Care Research, should answer for the perplexing decision to fork over money for such ridiculousness.