By David Tuller, DrPH
I recently wrote about a Dutch study published a few months ago in the journal Clinical Infectious Diseases–“Efficacy of Cognitive-Behavioral Therapy Targeting Severe Fatigue Following Coronavirus Disease 2019: Results of a Randomized Controlled Trial.” The study, nicknamed ReCOVer, found that unblinded trials relying on subjective outcomes will produce modestly positive reports in the group receiving the purportedly helpful intervention. (The senior and corresponding author was Professor Hans Knoop, a member of the CBT/GET ideological brigades.)
In this case, the intervention succeeded in prompting patients to somewhat improve their answers on questionnaires about “fatigue,” as well as about secondary domains. These results were predictable and essentially meaningless, given the bias inherent in an unblinded trial relying on self-reported measures. Nonetheless, the study has been touted by credulous observers as “evidence” that cognitive behavior therapy (CBT) is effective in preventing or reducing a core symptom associated with long Covid. Perhaps these observers are not bothered by the investigators’ decision to withhold the information that the intervention did not increase participants’ level of physical activity—the trial’s one objective outcome.
To recap: For 14 days at baseline and at the end of therapy, as outlined in the trial protocol, ReCOVer participants wore small devices measuring physical activity levels. The omission of these actigraphy results from the published paper indicated that they likely showed no difference in physical activity between the study arms and therefore did not bolster claims that the intervention was effective.
In their response to correspondence, Professor Knoop and his co-authors acknowledged as much–they had null results for actigraphy, although they provided no specifics. But they offered laughable justifications—what I earlier called “dog-ate-my-data” excuses–for having omitted these data from the paper. In the correspondence, they defended their preference for subjective indicators by arguing that “proposed alternative outcomes, like physical activity assessed with actigraphy or physical fitness are no[t] reliable markers of fatigue.”
This argument—that these outcomes are essentially irrelevant in assessing fatigue–is transparently self-serving. If people hold the absolutist position that the only valid measure for “fatigue” is a self-reported questionnaire in an unblinded trial, then of course they will reject as unreliable any null results for objective measures of physical fitness and physical activity. But it’s hard to imagine any serious investigator stooping to such ridiculousness in an effort to explain away inconvenient results. It’s an embarrassment.
In ReCOVer, after all, the CBT program is called Fit After Covid–a name that is itself an acknowledgement that seeking to improve physical fitness is an integral aspect and goal of the intervention. This acknowledgement is inconsistent with the assertion that “physical fitness” and actigraphy are not “reliable markers of fatigue.” Moreover, Fit After Covid includes a module on graded exercise, an approach grounded in the assumption of a relationship between physical activity and fatigue. The insistence on the part of Professor Knoop and his colleagues that results for physical activity have no relationship to fatigue cannot be taken seriously—except as an attempt to downplay or bury findings that raise questions about their claims regarding the effectiveness of CBT.
It is worth pointing out that Professor Knoop himself took the opposite view about the relationship between fatigue and physical activity as a co-author of a 2013 paper titled “Relationship between objectively assessed physical activity and fatigue in patients with rheumatoid arthritis: inverse correlation of activity and fatigue.”
The 2013 study offered this context: “A few other studies have investigated the association between physical activity and fatigue, and none of these studies included patients with RA [rheumatoid arthritis].” The study found that, “among patients with RA, a higher level of daily physical activity was associated with reduced levels of fatigue.” Ok, then!
And here are some other quotes from the 2013 study on RA:
* “It is important to note that decreased physical activity has been associated with increased fatigue in patients with CFS, Sjogren’s disease, and breast cancer.”
* “Fatigue is generally associated with low physical activity in patients with various chronic medical conditions.”
* “Among other patient groups [that is, non-RA patient groups], including patients with multiple sclerosis, increased physical activity (measured objectively) has been associated with decreased fatigue.”
Interestingly, the authors of the study on CBT for long Covid did not cite this 2013 publication co-authored by Professor Knoop when justifying the omission of objective outcome data on the grounds that they were irrelevant in assessing fatigue.
A 2010 study as grounds for dismissing links between fatigue and physical activity
How is it possible to justify the premise that markers of physical activity are unreliable or irrelevant when it comes to fatigue? Well, Professor Knoop and colleagues enshrined this notion as an actual thing—an academic finding!—in a 2010 paper.
In the 2000s, three separate Dutch trials of CBT for what was then being called chronic fatigue syndrome (CFS) reported positive results for subjective outcomes but null results for actigraphy. In all three cases, the initial trial reports omitted mention of the objective results and presented CBT as effective based on the other measures. This selective reporting led to an incomplete–and false–public understanding of the outcomes. When Professor Knoop and colleagues finally revealed the poor actigraphy findings from all three papers in 2010, they dismissed them as having no relationship to fatigue.
The 2010 paper was called “How does cognitive behaviour therapy reduce fatigue in patients with chronic fatigue syndrome? The role of physical activity.” Accepting the positive subjective fatigue findings at face value, the investigators analyzed the lack of comparable benefits on actigraphy. The study concluded that physical activity is essentially irrelevant when measuring fatigue: “Although CBT effectively reduced fatigue, it did not change the level of physical activity…The effect of CBT on fatigue in CFS is not mediated by a persistent increase in physical activity.”
Scientists beyond the grip of this particular form of groupthink might have interpreted the results differently. They would likely have suggested that the null results for actigraphy across all three CBT trials raised questions about whether the self-reported reductions in fatigue were partly or largely artifacts of the bias built into the design of the studies. But these Dutch investigators took the opposite view.
Here’s a key section of their 2010 paper: “Our study was the first one to show that the severity of fatigue in patients with CFS is not reduced by CBT because patients have become more physically active at the end of their treatment. Based on these findings, physical activity programmes can better be understood as a way to facilitate change in other mechanisms which are more directly related to a change in fatigue. Among these mechanisms, a change in illness-related cognitions is likely to play a crucial role in CBT for CFS and should therefore be monitored closely during treatment.”
Per this interpretation, in other words, the level of physical activity doesn’t matter when it comes to fatigue, so physical activity programs should focus less on actual physical activity and more on inducing “a change in illness-related cognitions” and “other mechanisms…more directly related to fatigue.” These recommendations would seem to undermine the rationale for long-standing treatments like graded exercise therapy and physical activity programs. But such contradictions do not seem to trouble Professor Knoop and his team.
It remains perplexing that well-regarded investigators would advance such an untenable argument, whether in 2010 or this year, to justify withholding key objective information from the public record. But that’s where we are. As I wrote last month on Twitter (now X) after having seen a production of The Crucible, the arguments for witchcraft in the play were more persuasive than the gibberish offered by authors of the recent CBT-for-long-Covid study to justify their flawed decision-making.
(Originally posted on Virology Blog.)