Dutch CBT Study for Long Covid Proves that Unblinded Studies with Subjective Outcomes Generate Positive Reports

By David Tuller, DrPH

Three years ago, I wrote a blog post about a problematic Dutch study that had been funded by a major health agency and was being led by Hans Knoop, a professor of medical psychology at Amsterdam University Medical Centers. The study sought to test whether a course of cognitive behavior therapy starting months after a bout of acute Covid-19—rather than years later—could reduce levels of reported fatigue and prevent it from becoming chronic.

Professor Knoop is a long-time colleague of the authors of the now-discredited PACE trial. In a Lancet commentary accompanying the publication of the PACE results, he and a colleague declared that many patients had met “a strict criterion for recovery”—a ridiculous and untrue claim. 

This new study—like so much of the research from Professor Knoop and his colleagues in the world of psychosomatic medicine—was unblinded and relied on subjective, self-reported outcomes. This design is fraught with potential bias. Given the design, the study was destined to produce positive results on these subjective outcomes—and now, not surprisingly, it has, with the results published in the journal Clinical Infectious Diseases.

Also not surprisingly, the results for the one objective outcome included in the protocol—actigraphy to measure levels of physical activity at baseline and right after treatment—were not reported or mentioned in the paper. Professor Knoop has deployed this strategy before—most recently in 2017, when he and colleagues published positive subjective outcomes but failed to report null actometer results in a study of CBT for treating fatigue after Q-fever; those null results were not published until two years later and were largely ignored. And in a similar fashion a dozen years ago, Professor Knoop and several colleagues buried disappointing actigraphy results from three trials of CBT for ME/CFS.

Twitter threads from @lucibee and @anilvanderzee point out some of the major issues with the new study, called “Efficacy of cognitive behavioral therapy targeting severe fatigue following COVID-19: the results of a randomized controlled trial.” The study was also the subject of a lively discussion on the Science For ME forum. The CBT course, called “Fit After Covid,” included online modules along with in-person or online contact with a therapist.

The study’s 114 participants were all suffering from what was identified as severe fatigue three to 12 months after their acute infections. They were randomized into a group receiving the CBT program and a group receiving care as usual (CAU)—a design that undermines the claim in the article title that the study was “controlled.”

Yes, there was a comparison arm. But the mean number of interactions between therapist and patient in the CBT arm was almost 12, and the study did not offer members of the CAU group a comparable amount of time and attention.

If participants know they are receiving an active treatment in a trial—a full course of therapeutic encouragement, for example—and they are told that this treatment has been found to be successful in other circumstances, it stands to reason they would be more likely to report benefits than people who know they did not receive the possibly helpful treatment. The authors mention this imbalance between the groups as a limitation but nonetheless call the trial “controlled,” even though it did not control for this important factor.

**********

CBT targets seven domains of thought and activity

The trial was based on a “cognitive-behavioral model” and the CBT was specifically designed to target seven perceived domains that could perpetuate the fatigue. These were: a disrupted sleep-wake pattern, unhelpful beliefs about fatigue, a low or unevenly distributed activity level, perceived low social support, problems with psychological processing of COVID-19, fears and worries regarding COVID-19, and poor coping with pain. The CBT group received the intervention for 17 weeks. 

Let’s be clear. Everyone who has been sick could benefit from someone—a good social worker, a life coach, grandma, a counselor, or a CBT therapist—offering them common sense advice about sleep and activity levels and the need to find more friends or call their siblings if they’re feeling low. Of course it is useful to help people address fears and worries about the pandemic. All these things are likely to make them feel better, emotionally and physically, especially compared to people who are not offered anything comparable. 

And of course that will make them more likely to answer more positively in general on questionnaires, including fatigue questionnaires. That doesn’t mean you’re treating anyone’s fatigue—just that you’re providing human and social support of the kind we all could benefit from in tough times. It should be expected that this would be reflected in modest improvements in questionnaire scores—especially in an open label trial in which people know they are getting a “treatment” that they believe could help them get better.

The authors acknowledged this limitation but noted that members of the CAU group were able to access other treatments, which could have helped balance the experiences of the two arms. The paper noted, for example, that “most participants in the CAU group received care, including exercise.”

Uh, oh! The study makes no mention of post-exertional malaise (PEM), a core characteristic of ME/CFS and in many cases of long Covid. If some CAU group members had PEM and were being encouraged to exercise, that could explain why they reported worse fatigue at the end. It might also explain why the CBT group had fewer reported adverse events than the CAU group.

Besides fatigue, participants were required, as an entry criterion, to demonstrate limitations in physical function (via a low score on one questionnaire) and/or limitations in “social functioning” on a different scale. This raises the possibility that some were suffering from primary depression rather than being physically disabled. Any such patients might well have benefited from a course of CBT, which is a frequently prescribed treatment for depression.

The trial’s primary outcome was the difference in the means between the two groups right after treatment and six months later on the fatigue severity sub-scale of the Checklist Individual Strength (CIS), a 20-item fatigue questionnaire. The fatigue severity sub-scale has eight items, each rated on a 1-7 scale, with higher scores indicating greater fatigue. The sub-scale score therefore ranges from a low of eight to a high of 56.

**********

No objective findings but modest subjective results

The study showed—wow!—that those who received the intervention said they were less tired than those who didn’t get the intervention. The overall mean for the CBT group’s CIS scores right after treatment and six months later was 8.8 points lower than the mean for the CAU group. This is not a huge gap on a scale that runs from 8 to 56. The mean score for the CBT group at six months—31.5—still represents significant fatigue. The difference between the groups seems well within what one might expect from any bias stemming from the study design.
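For readers who want to see the arithmetic, here is a minimal sketch (in Python, purely for illustration). The item count, rating range, and 8.8-point difference come from the paper as described above; expressing the difference as a share of the possible score range is my own framing, not a calculation that appears in the paper.

```python
# Sketch of the CIS fatigue severity sub-scale arithmetic and the reported gap.
# Scale details and the 8.8-point between-group difference are from the paper;
# the "share of range" figure is my own illustration.

ITEMS = 8                    # items on the fatigue severity sub-scale
MIN_RATING, MAX_RATING = 1, 7

min_score = ITEMS * MIN_RATING        # 8
max_score = ITEMS * MAX_RATING        # 56
score_range = max_score - min_score   # 48 possible points of movement

between_group_difference = 8.8        # reported overall mean difference, CBT vs. CAU
share_of_range = between_group_difference / score_range

print(f"Scores can run from {min_score} to {max_score} ({score_range}-point spread)")
print(f"An 8.8-point difference is about {share_of_range:.0%} of that spread")
```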

Secondary measures—all subjective—also favored the intervention.

But the absence of the actigraphy results casts doubt on even these modest reported subjective benefits. The actigraphy results would have revealed participants’ objective levels of physical activity. The protocol called for participants to wear actigraphs around their wrists to monitor their activity for 14 days, both at the start of the trial and right at the end of the course of therapy. (For unexplained reasons, the protocol did not call for actigraphy at the six-month post-therapy time point.)

The authors not only failed to provide the results; they also didn’t explain why they decided to withhold them. This omission of salient data aligns with past practice by Dutch investigators in this domain—including Professor Knoop. Besides the 2017 Q-fever study, in three previous Dutch studies of psycho-behavioral interventions for ME/CFS that included actigraphy as an outcome, the authors published the positive subjective outcomes in the initial papers but left out the objective results.

Finally, years later, they published the actigraphy results from all three papers in a single article. The actigraphy had null results in all three studies. “Although CBT effectively reduced fatigue, it did not change the level of physical activity,” concluded the authors, including Professor Knoop. In other words, the CBT appeared to improve reporting on fatigue questionnaires but did not lead to any real change in how much people do physically. This apparent conflict between subjective and objective outcomes did not shake the authors’ confidence in the accuracy of the former.

In this case, it is hard to imagine that Professor Knoop would have omitted the actigraphy results from the published report had they documented that CBT conferred objective, measurable benefits. What reason would there be? But if they showed no change in the CBT group, or worse, and Professor Knoop publishes those data two years from now, no one will notice. The study has already been published with modestly positive subjective findings, and those have already sucked up the attention.

(View the original post at virology.ws)

