By David Tuller, DrPH
Calling out a Trudie Chalder paper is way too easy. It’s also old hat for Virology Blog—going back to 2015 and my initial investigation into the now-discredited PACE trial, of which she was one of three lead investigators. She is a professor of “cognitive behavioural psychotherapy” at King’s College London, so she researches the impact of CBT on whatever illness is at hand. That’s all Professor Chalder does; she’s a one-trick pony.
But even one-trick ponies should be able to adequately execute their one trick. In Professor Chalder’s case, it is still remarkable to me that so much of her work is so riddled with nonsense. She seems to stumble even with her one designated trick.
Professor Chalder’s most recent contribution, “Post stroke intervention trial in fatigue (POSITIF): Randomised multicentre feasibility trial” in the journal Clinical Rehabilitation, does not seem to include the sorts of factual errors and data distortions that have marred her previous research. In this case, she (as senior author) and her colleagues appear to be accurately reporting their unimpressive findings. But they seem unwilling or unable to recognize that the results are disastrous for claims of benefits from CBT.
If the funding world functioned normally, this sort of report about a feasibility study for an intervention would undermine the likelihood of obtaining support for a full-scale trial. Funders presumably want to invest in research that is likely to provide actionable findings. And there wouldn’t seem to be much point in pumping money into a larger, grander version of a study that has produced null results across the board.
But that’s just my own opinion. I assume Professor Chalder’s web of supporters among UK decision-makers will ensure that she continues raking in grants—just like Professor Esther Crawley, Bristol University’s methodologically and ethically challenged pediatrician. It wouldn’t surprise me to learn that Professor Chalder has already or will soon secure money for a full-scale version of POSITIF. (On a side note, can’t someone do something about this need to slap trials with catchy acronyms? Sometimes it’s just so irritating.)
As we know from Professor Chalder’s body of work, she seems to think every form of fatigue is amenable to correction by CBT. The form of CBT deployed in her trials is often said to have been shaped to address the particulars of the illness being treated. In the PACE trial, of course, the form of CBT was designed to impact the “unhelpful” cognitions hypothesized as central to the perpetuation of the fatigue and other symptoms.
In another study for which Professor Chalder was the senior author, the CODES trial of CBT for so-called “psychogenic non-epileptic seizures,” the intervention was described as targeting the psychological factors presumed to be triggering the episodes. That study had null findings for its primary outcome, reduction of seizure frequency. Some secondary outcomes showed modest improvements. The investigators suggested that seizure reduction might not have been the right primary outcome after all, rather than that their intervention might be worthless. (I wrote about the CODES trial here and here.)
With the most recent paper, the target is post-stroke fatigue. True to form, the CBT is said to be adapted for the condition:
In our model of post-stroke fatigue, we proposed that pre-stroke fatigue, depressive symptoms, anxiety, low self-efficacy, passive coping, reduced physical activity, sleep problems, and inadequate social support might all contribute to the development and/or maintenance of post-stroke fatigue. We then developed an intervention, targeting those factors that were potentially reversible. Our intervention was informed by CBT principles.
The intervention consisted of seven telephone-delivered CBT sessions, each lasting up to an hour. Both those who received the intervention and those who did not were given information about fatigue. Here’s how the paper describes the telephone-delivered package:
The intervention we provided in this trial focused on the potentially reversible nature of fatigue, encouraged participants to (a) overcome fears about physical activity, (b) increase physical activity using diary monitoring and activity scheduling, (c) achieve a balance between activities, rest and sleep and (d) address unhelpful thoughts related to fatigue and low mood if present.
Poor results all around
The results were poor. Recruitment and retention were weak, and at the end there were no differences between those who received the specially tailored CBT intervention and those who did not. Apparently patients were not that interested in the intervention. And the intervention itself did not produce benefits in any of the domains measured.
Let’s look at the dismal numbers. Of 886 stroke survivors from three Scottish clinics invited to participate, only 76 ended up in the final sample—less than 10%. Of the 39 assigned to the intervention, only 23 (59%) attended at least four sessions; eight of the 39 attended no sessions. At six months, there were no statistically significant benefits on any of six outcome measures—fatigue, anxiety, mood, quality of life, social participation, and return to work.
In the discussion, the investigators grasp at a few rationales for the intervention’s poor performance—maybe the manual was too long, maybe video would have been better than telephone calls. Or maybe, they suggest, “participants had not understood the need for active participation and ‘homework’ between sessions.” In future trials, the investigators added, “we would need to make it clearer from the outset that active involvement and ‘homework’ between sessions is expected of participants.” This effort to explain away the trial’s failures smacks of desperation and feels like patient-blaming. (But maybe that’s just me.)
Perhaps adjusting some trial parameters would have made a marginal difference. But the reality is that the intervention bombed. Like, completely: the findings hold barely a shadow of a hint of positive news. Tinkering with the intervention around the edges does not seem likely to produce the hoped-for effects. Certainly no one should grant funding for a full-scale trial of this modality, given these reported results.
A rational or objective observer would suggest that it is time to shift gears. But what do Chalder and her colleagues conclude? This, according to the abstract:
“Patients can be recruited to a trial of this design. These data will inform the design of further trials in post-stroke fatigue.”
Wow. Ok. So maybe patients could be recruited to a trial of this design, although the data presented here are clearly not robust on that score. But what would be the point, given the null outcomes? And if the data inform future research, hopefully the message will be that further trials in post-stroke fatigue should pursue a completely different approach and be conducted by researchers less devoted to their own assumptions. However, I doubt that is what Professor Chalder and her colleagues intended to convey.
The conclusion that patients can be recruited to participate in useless trials reminds me of a review I once read about a Toni Collette movie. What an amazing actress! And while the reviewer was really into her, they were much less kind to the movie she was starring in. (I can’t remember the name of the movie or reviewer.) The first sentence of the review made me laugh. It went something like this: “Toni Collette can probably do anything she wants as an actress, but that doesn’t mean that she should.”
To translate for the current context: Patients can probably be recruited for a study like “Post stroke intervention trial in fatigue (POSITIF)”—but that doesn’t mean that they should be.
(Originally posted on Virology Blog.)