By David Tuller, DrPH
Slate recently ran a piece by Grace Huckins, a young journalist and Stanford neuroscience graduate student, about purported links between long Covid and mental illness. I found it problematic. For one thing, in the same sentence it linked to both a story of mine in Codastory.com and one from The Atlantic’s Ed Yong, and asserted that both of our articles “suggested that linking depression and long COVID is tantamount to accusing all long COVID sufferers of being malingerers.”
This was not remotely the point I was trying to make; I can’t speak for Ed, but I didn’t read his article that way either. My piece was about physicians smacked by long Covid who have been told categorically that depression, anxiety and what-not are the cause of all their devastating symptoms and that absolutely no pathophysiological processes are implicated. I highlighted this point and a couple of others on Twitter and suggested that the journalist and I meet up to discuss the issues, since we’re both in the San Francisco area. (I would have DM’d her if that had been possible on Twitter.)
In response, she offered to DM me. She also indicated that she had been “deeply troubled by some of your writing (I have read quite a lot of it), which in my eyes goes against the scientific evidence.” As of now, I haven’t heard from her, so I remain curious about what I have written that she views as antithetical to the science.
Perhaps this concern involves my clearly negative view of a Dutch study of cognitive behavior therapy for long Covid. My piece included some harsh words about this study, which was still ongoing at the time. The study results were published earlier this year, and the Slate piece highlighted the reported positive findings as legitimate evidence for the effectiveness of this sort of intervention. I think the study stinks, not least because the authors acknowledged after the fact that they had null results for their sole objective outcome: activity as measured by an actometer worn for a week or two.
In retrospect, I should have made clearer that much of my objection to the study related to its provenance. Professor Hans Knoop, the senior author, is an unreliable narrator when it comes to his own study findings, and there are good reasons not to take his work at face value. Just one example: he and a senior colleague wrote in a 2011 Lancet commentary that PACE participants met a “strict criterion for recovery”—an absurd statement. It was self-evident that the trial was designed in a way that would almost guarantee positive results.
But I certainly did not suggest there was no link between depression and long Covid. You’d have to be dense, clueless or stupid to make that argument. (I am, of course, very capable of being all three of those. Just ask my ex-boyfriends!)
Anyway, after that, I posted a long thread about some other aspects of the Slate article. Someone kindly unspooled it for me, so I’m posting it here.
******
The recent article in @Slate by @grace_huckins attracted a lot of attention. The article highlighted the self-evident links between mood/psychological states and somatic symptoms. No argument there–no one seriously disputes the links. 1/
But the article relies heavily on the construct of functional neurological disorder without noting how FND experts have misrepresented their own field for more than a decade, as I have recently reported. 2/
The top experts in the field have routinely disseminated false information about prevalence from a seminal study in their field, insisting that it showed that 16% of neurology outpatients had FND and that it was the #2 diagnosis. This claim is nonsense. 3/
The 2010 study, Stone et al., found that 5.5% had conversion disorder, now known as FND, not 16%. At that lower rate, it was the 8th-most-common diagnosis, not #2. This is indisputable, as evidenced by the forthcoming correction in a major journal. 4/
This correction will necessitate further corrections in literally dozens of papers. The #2 diagnosis claim has become a meme, even though the cited paper showed no such thing. The other patients included in the larger figure had “functional” disorders but no evidence of the specific Dx of FND. 5/
As the Slate article notes, an FND Dx requires the presence of positive findings on clinical “rule-in” signs–it is purportedly a positive diagnosis, while so-called “functional” disorders are considered diagnoses of exclusion. 6/
The Slate article notes that “there are specific clues that doctors can use to identify FND.” A major problem is that the studies about these clues–the “rule-in” signs–do not tell us very much, as I recently documented about Hoover’s sign. 7/
Hoover’s sign is the “poster sign” for FND, first described a century ago as a way to distinguish hysterical leg weakness/paralysis from the “organic” version. It is routinely claimed to be 100% specific, or close to it. But the main study finding tells us very little about FND. 8/
In this decade-old study, the authors found Hoover’s sign in fewer than 20 patients previously diagnosed with FND (or conversion disorder) and didn’t find it in the comparison group. So why doesn’t this mean it is 100% specific in identifying FND? 9/
Because all the FND patients had a positive Hoover’s sign as part of their diagnostic work-up in the first place–in other words, it was part of why they were given the Dx. So it is not surprising that they would have a second positive Hoover’s sign. 10/
In other words, the study proved that one positive Hoover’s sign predicts another–nothing more. The authors noted the circularity of the argument as a limitation. But it is more than a minor limitation–it renders meaningless the purported specificity of Hoover’s for FND. 11/
The authors themselves called for more studies of Hoover’s sign, including inter-rater reliability studies. But neither they nor others have conducted these further studies. So we are left with proof that a positive Hoover’s sign predicts another positive Hoover’s sign. 12/
Other conditions with established pathophysiological processes are known to produce positive Hoover’s signs. And yet, based on this meager set of data, the sign is also said to be close to 100% specific for FND. And Hoover’s is the most studied of the signs. 13/
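A digression for the methodologically inclined: the circularity problem above can be sketched in a few lines of code. All numbers below are invented purely for illustration–none come from the Hoover’s sign literature. The point is that a sign used to select a cohort will look perfectly consistent when re-checked in that cohort, no matter how weakly it actually discriminates.

```python
import random

random.seed(0)

# Hypothetical illustration: all numbers are invented, not taken from any study.
# Model a stable clinical sign that is positive in 90% of true FND cases but
# also in 30% of patients with other conditions -- i.e. only ~70% specific.
patients = []
for _ in range(10_000):
    has_fnd = random.random() < 0.5
    sign = random.random() < (0.9 if has_fnd else 0.3)  # the sign is a stable trait
    patients.append((has_fnd, sign))

# Circular cohort assembly: only sign-positive patients receive the FND label
# and enter the FND arm of the "validation" study.
fnd_arm = [p for p in patients if p[1]]

# Re-checking the sign in that arm is 100% positive by construction --
# it tells us nothing about how well the sign discriminates FND.
repeat_rate = sum(sign for _, sign in fnd_arm) / len(fnd_arm)
print(f"repeat positivity in selected arm: {repeat_rate:.0%}")

# Meanwhile the sign's actual specificity among non-FND patients is only ~70%.
non_fnd_signs = [sign for has_fnd, sign in patients if not has_fnd]
specificity = 1 - sum(non_fnd_signs) / len(non_fnd_signs)
print(f"true specificity in non-FND patients: {specificity:.0%}")
```

In this toy setup the selected arm is positive on re-examination 100% of the time even though the sign misclassifies nearly a third of non-FND patients–exactly why test-retest agreement in a sign-selected cohort cannot establish specificity.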
It is hard to take at face value the claims of experts who have spent a decade misrepresenting key data from their field of expertise and over-hyping the “high specificity” of clinical signs studied in papers with circular study designs. 14/
A 2021 paper on these signs included this statement: “There is a need to further test the specificity, sensitivities and inter-rater reliability of the growing range of positive functional signs compared to other neurological populations… 15/
…particularly given that statistical properties for some signs have been only tested in a single cohort.” In fact, almost all of the signs identified to test motor FND have been tested in only a single cohort. (My above-linked blog post contains all links and references). 16/
A 2022 paper included a table of 41 “validated positive motor signs” used to rule in motor FND diagnoses. Of these, 34 (83%) were shown as tested in only a single cohort. Five were tested in two studies, and only two signs were tested in more than two. 17/
It is self-evident that mood states/depression/anxiety impact the body in incredibly complicated ways. No one seriously disputes that these can cause and exacerbate a range of conditions. No one can seriously dispute that psychotherapy can be helpful in a great many ways. 18/
But the CBT promoters in the long Covid field, like the senior author of the study cited favorably in Slate, are not honest brokers, just as the CBT promoters for ME/CFS are not honest brokers. A close look at the Dutch study of CBT for long Covid makes that clear. 19/
That study–like almost every CBT study in this domain of illnesses with non-specific symptoms like ME/CFS–relied for its claims of success solely on subjective outcomes. In an unblinded study, relying on subjective outcomes is a recipe for an enormous amount of bias. 20/
It is self-evident, or should be, that patients who receive loving attention from a therapist for months are more likely to respond more positively on questionnaires than patients who received nothing. Hello!! Can anybody seriously argue the opposite? 21/
Anyone who receives a course of CBT from a compassionate person is likely to report some benefit, whether they have an illness or not. To argue from this that modest reported benefits demonstrate the efficacy of the treatment requires a problematic suspension of skepticism. 22/
Beyond that, the senior author has a history of hiding null or poor results on a key objective measure of movement–actigraphy readings from a device worn for days or a week by participants. I reported on this in a recent blog. 23/
Three major Dutch studies of psycho-behavioral interventions for ME/CFS all had positive subjective findings but null objective actigraphy findings. And all the papers were published without the objective findings and touted as proof the treatments worked. 24/
Only years later did these authors, including Knoop, publish their null objective results. But of course by then no one cared or paid attention. In the recent LC study of CBT, the protocol indicated that actigraphy would be done at baseline and three months. 25/
So where are these data? The published report doesn’t mention them. I think it’s fair to assume that if they supported the subjective results, the authors would have included them. Their absence suggests that, as in past CBT studies, they contradict the subjective outcomes. 26/
There are other issues with this study, as noted in a recently published response to it. Citing this as serious evidence that CBT works for long Covid is really unwarranted. 27/
The Slate article criticized an article I wrote about clinicians with long Covid. In that article, I mentioned that this Dutch CBT study was underway and criticized it. The point is not that I reject all research into the links between long Covid and depression/anxiety/etc. 28/
The linkages are obviously there; depression, anxiety and a constant stress response are obviously harmful to physiological processes. But I strenuously object to researchers who have a history of problematic reporting of their results. 29/
That includes investigators who have spent a decade misrepresenting a seminal study in their field of research, who over-hype the specificity and discriminatory value of clinical signs, and who hide salient objective results from their own studies. 30/
This means I also tend to have objections to journalism that relies on these claims. In the end, the Slate article seems to me much more nuanced than related articles in New York and The New Republic, and the journalist appears more open to dialogue. 31/
I continue to be open to having that dialogue with her, and with other journalists engaged with these issues. I certainly hope that in the future those tackling this issue take a sharper look at some of the studies they are citing and at the robust critiques of those studies. 32/
That’s all for now on this. I might have more thoughts on it later.
Oh, one more point–the Slate author makes it clear that association is not causation, that investigations into biomedical causes are critical, etc. In many ways, it is a nuanced piece. But the piece overlooks that the approach of the CBT experts in this domain is different.
In general, their argument has been that anxiety and depression are the sole causes of all the non-specific symptoms. Patients are generally told not that they have associated depression/anxiety but that those are THE causal factors. That’s the issue.