By David Tuller, DrPH
A few days ago, I wrote a post about yet another atrocious paper from Professor Trudie Chalder—this one called “Chronic fatigue syndrome and occupational status: a retrospective longitudinal study.” Professor Chalder and her colleagues seem constitutionally incapable of writing anything that isn’t marred by massive flaws. In this case, as I noted the other day, they not only misrepresented their findings by using the wrong denominators when calculating percentages; they also omitted any reference to the null findings on employment from the PACE trial—an egregious lapse.
They also seemed confused about whether they diagnosed patients using the 2007 guidelines from the National Institute for Health and Care Excellence for CFS/ME, or using the 1991 Oxford CFS criteria. They mentioned NICE in the text but the cited reference was the 1991 paper. The differences between the case definitions are significant, so this confusion is jarring.
Given these failures, the paper itself is an unreliable and unacceptable report on the issue of employment status. The omission of the highly salient PACE data also raises serious questions about the integrity of the research team.
But the paper is full of other methodological problems. Overall, the results indicate that the interventions have little, if any, impact. According to the paper, of the 316 participants who provided both baseline and follow-up data, 88% experienced no change in employment status.
However, these figures are hard to interpret, since the authors lumped full-time employment and part-time employment together in the “employed” category. For all we know, many of those said to be working went from full-time to only part-time employment—but such a trend, if it exists, would be masked by the way the data are presented.
As Mark Vink pointed out on Twitter, many of the outcomes—including fatigue, fear avoidance, catastrophizing, and other domains—yielded null results. Others, like physical function, showed only statistically insignificant changes. The paper does not mention any of these poor findings in the text; they appear only in a table. Keith Geraghty has noted other issues with the paper and its analyses. Certainly the final, post-treatment scores on the Chalder Fatigue Scale and the SF-36 physical function questionnaire represent continuing severe disability.
Here are some other issues that need to be addressed.
The study seems to have had a major loss to follow-up—of the 508 participants at the start, only 62%, or 316, provided follow-up data. That is a huge drop-off, and the authors tell us almost nothing about those who dropped out, other than that they were possibly in worse health than those who provided follow-up data, as tends to be the case with drop-outs.
That in turn means that the baseline averages presented for all 508 participants likely represent worse health than the baseline averages of just the 316 who provided follow-up data. For unexplained reasons, the authors did not provide the baseline data for the 316 alone. Those data would have allowed an assessment of whether, on the various measures, their health actually got worse during the period or simply didn’t get better.
The decision not to include these baseline data for the 316 participants who provided follow-up data is rather unusual. It suggests that the authors might have known that the pre- and post-treatment comparison of outcomes for the 316 participants who provided follow-up data would not look very attractive.
The authors make much of the “optimism” they see in the fact that 9% of the 316 participants were not working at baseline but had returned to work by follow-up. But this number is offset by the 6% who stopped working between baseline and follow-up. And of course, again, we have no idea whether these were full-time or part-time positions, so assessing the changes is challenging. Beyond that, the authors provide no tests of significance for these results—another bad sign.
The authors continue to suggest that “unhelpful beliefs”—that is, patients’ notions that they have an ongoing disease that is exacerbated by excess activity—play a role in perpetuating the syndrome. They can’t stop flogging this dead horse, even after decades.
Here’s what they write: “Unhelpful beliefs such as fear of activity and exercise and concerns about causing damage, combined with all or nothing behaviour and behavioural avoidance, were associated with not working and are specifically targeted in CBT and, to some extent, GET.” They suggest that the interventions should include more focus on employment issues.
Given that the factors presumed to be impeding recovery are already the focus of the interventions that Professor Chalder has advocated for decades, the evidence from this study is clear: These treatments are essentially worthless. Overall, they largely appear to fail to improve health and work outcomes. It is astonishing that Professor Chalder and her colleagues do not seem to realize this.
Professor Chalder and her two PACE besties, Professor Michael Sharpe and Professor Peter White, have recently published another paper purporting to make the case for CBT and GET as effective “evidence-based” treatments. That article is also a piece of crap. Too bad they did not incorporate these disastrous findings into that analysis. Instead, they continue to spout drivel and nonsense. When will this end?
9 responses to “More on that Disastrous Employment Paper from Professor Chalder and Colleagues”
Besties = best friends (for our international friends).
Mixing up CFS, ME, and ME/CFS makes it hard for readers to find all the papers that are relevant and to compare them.
How does Occupational Medicine decide to peer-review – and then accept – papers? Because this journal seems to have missed the whole brouhaha. Are our authors looking for obscure places to drop their little missteps, to pad their resumes and call them current?
Has OM been asked to retract the paper? Will they? If the levees are breached ANYWHERE, the floodwaters rush through the hole and make it wider.
As usual, thanks for the careful statistical analysis. We affected patients need your work.
You say “And of course, again, we have no idea if these are full-time or part-time positions, so assessing the changes is challenging.”
Don’t forget casual work – which is far more sporadic and transient than part time.
It matters how they decide people are employed. If you’re employed but off sick, are you employed or not? It also matters how the work is done. I went from working full time (37.5 hours a week / 7.5 hours a day) out in offices and other locations, rarely working from home, to working part time (17.5 hours a week / 3.5 hours a day) entirely from home, with flexible hours, while lying down. Those are seriously not equivalent, but it sounds like Trudie’s paper would record them as the same.
This study is on a time scale that makes it irrelevant to most people’s ME-related employment issues. I was a number of years into my ME when I went part time at work, and the major relapse I experienced some seven years post onset resulted in over a year’s sick leave, when I was nominally in work, before my consequent enforced ill-health retirement.
Subsequently I have tried other approaches to being self employed over the decades, but each was scuppered by subsequent and possibly resultant relapses of increasing severity.
This paper also does not consider the possibility that reducing work hours, or stopping work altogether, could for some be a positive step: it can prevent significant deterioration, and thereby avoid the higher health and social care costs that result when someone loses health and independence by struggling beyond sustainable activity thresholds. Someone long-term unemployed, perhaps doing some voluntary work but largely able to undertake their own self-care, costs society far less than someone bed-bound requiring round-the-clock support.
Measurements of my employment status even over several year periods would produce totally contradictory results depending on where in the nearly thirty years of my ME they were taken.
This coterie of researchers is constantly seeking to demonstrate short-term improvement on measures that are usually uninterpretable or ambiguous, but its members never admit that the condition can get worse and apparently lack any interest in its overall course and variation. They are more interested in their personal beliefs than in actually studying the condition they claim to want to treat.
I’m so tired of these kinds of people. So arrogant, so lacking in self-insight, always blaming others and loving power. Only psychopaths fit this profile.
Volunteering to closely read and analyze this dreck is above and beyond the call of duty. Thank you Dr Tuller for taking on this onerous chore.
The authors intend for no one to actually read the paper and probably not even the abstract. Only the press release written by their public relations hacks is supposed to be read. It’s like those 2000 page bills voted on by Congress. Nobody knows what is really in there, and that is a feature, not a bug.
Looks like ‘nul points’ to me. This can’t bode well for Blighty on the international stage.
Most scientific disciplines have research methodologies with a key concept at their core: falsifiability. The research is based on a question asked in such a way that the answer can be judged true or false. Unless a hypothesis can be proven false, it is not testable. Competent scientists go out of their way to disprove their own hypotheses, and include ways of demonstrating this in their research plans.
The greater part of a whole year of a postgrad student’s schooling is spent training them as scientists to come up with decent research questions based on this notion—along with why it is important that a question be falsifiable. It’s been that way since the mid-20th century. It doesn’t matter whether the discipline is genetics, psychology, physical chemistry, biology, economics, sociology, or particle physics: whatever legitimate natural or social science is being investigated, it is based on hypotheses that are logically capable of being proven false.
The members of the BPS “coterie” (I like “brigades,” too), however, seem to have skipped this part of grad school. Their methodologies are all over the place. They mix together patients diagnosed under different schemes of diagnostic criteria, so experts really don’t know what disease(s) they have. They rewrite research questions on the fly, based on who shows up to be studied. They select patients who already meet the recovery criteria (cured!) before the research even begins. Here a prominent BPSer is describing a tiny handful of patients who returned to work after exposure to her pet CBT/GET therapies, yet we have no idea whether they returned to a full-time 32-45 hour work week or a two-hour stint performing work they arranged on fiverr.com. She ignores the big story—the 88% who saw no improvement—to lavish attention on the (net) 3% who did.
At this point it’s hard to tell whether they (not just the current example) are showing the consequences of a poor education in research methods, or whether these “research” projects actually flow from an intentional program of antisocial, sociopathic behavior based on lies, dishonesty, and deceit. Maybe it started as a bit of both. Either way, they have been repeatedly warned. I believe it is now appropriate to consider this research misconduct, and it’s time to retract all of their journal articles that do not conform to the methodological standards of any legitimate scientific discipline.
But here we are now. Are they purposely causing sadistic harm to people with ME and/or CFS by continuing to promote GET and gaslit-CBT? Do they get joy out of our suffering? How does one classify people who do that?
The criticism of scientific papers with appropriate detail is a boon to academic progress, just as the eradication of teleological drift is also warranted.
Even if it seems to be impolite!
I like Dr. Tuller’s approach (On The Origin Of The Home Of COVID-19 – 27).