New Study Promotes “Bespoke” Hospital Rehab Program for Kids with ME/CFS

By David Tuller, DrPH

The International Journal of Environmental Research and Public Health recently published a paper called “Key Features of a Multi-Disciplinary Hospital-Based Rehabilitation Program for Children and Adolescents with Moderate to Severe Myalgic Encephalomyelitis/Chronic Fatigue Syndrome ME/CFS.” The investigators retrospectively reviewed the records of 27 children and young people (CYP) who were treated in a ward-based rehabilitation program in 2015 and later discharged from the ME/CFS service.

In laying out the rationale for the study, the paper does reference the new ME/CFS guidelines issued a year ago by the UK’s National Institute for Health and Care Excellence. But it does not mention the core finding of the guidelines—that cognitive behavior therapy and graded exercise therapy should no longer be recommended as treatments for the illness. (This determination represented a complete reversal of the recommendations in earlier guidelines issued in 2007.) Nor does the paper mention that NICE assessed the quality of the evidence for the CBT/GET approach as “very low” or merely “low.”

And that’s a real problem, because this sentence appears right after the reference to NICE: “The evidence in CYP for treatment is limited, although there is some evidence that cognitive behavioural therapy (CBT) may be beneficial.” Hm. Isn’t NICE’s finding relevant to this claim? Why do the authors not mention what NICE concluded? It is of course inappropriate to ignore salient but inconvenient facts. Suggesting that CBT “may be beneficial” without mentioning NICE’s official determination to the contrary is unacceptable and suggests a certain lack of academic integrity.

(Interestingly, the study includes this revealing sentence: “In adults, studies have shown <10% recovery to pre-morbid levels during the period of follow up.” This statement appears to undermine the PACE trial, essentially throwing its bogus claims of 22% recovery rates under the bus. The statement corresponds more closely to the findings of the PACE reanalysis published in BMC Psychology, which found that all four groups had “recovery” rates—per the PACE protocol criteria—in the single digits, with no statistically significant differences between the groups. I was a co-author of that paper.)

At the ME/CFS service, we’re told, “occupational therapy, physiotherapy, specialist and ward nurses, and mental health and medical teams work closely together to provide bespoke, multi-faceted rehabilitation.” In assessing outcomes, the authors examined four domains: mobility/activity, education, sleep, and involvement in social or recreational activities. They developed their own scales by dividing the first three domains into multiple stages of improvement; the fourth was treated as a binary construct. All well and good, but as they themselves note in the paper’s limitations section, this “outcome framework” has not been previously explored or validated. Is it reliable? Who knows?

And here’s another catch: “Outcomes were measured by reviewing and scaling capabilities in these four areas as described by clinicians and therapists during their initial assessments in clinic and at discharge.” In other words, the assessments under this “outcome framework” were based on the investigators’ interpretations of clinicians’ interpretations of what patients said—not directly on what patients themselves reported. That means we have to trust the clinicians’ perspective, with no way to verify that it actually corresponds to how patients answered.

This certainly introduces the possibility, or likelihood, of an unknown amount of bias. After all, clinicians, like patients, are likely to interpret data in ways that match their conscious or unconscious interests—in this case, their interest in believing that their work as clinicians is effective and that their patients do in fact improve. That, of course, is on top of the bias likely to influence patients’ own responses to questions and questionnaires. Given that the reported results are fairly modest, there is plenty of room to question the credibility of the data.

So, what were the results? Here’s what the paper states: “Upon discharge from our service, 23/27 (85%) CYP showed improvement in one or more domains over their period of ward-based therapy. In total, 15 of the 23 (65%) CYP who were not in full-time education showed improvement. Overall, 12 of the 24 (50%) who had difficulties with sleep improved their sleep patterns during treatment. Further, 19/27 (70%) demonstrated improvement in physical ability and 16/27 (59.2%) showed improvement in their socialising abilities. Finally, 8/27 (30%) patients demonstrated improvement in all four areas.”

And the study’s conclusion? “This study demonstrates how our multi-disciplinary approach with day case and admissions for intensive therapy effectively addresses the individual needs of CYP with moderate to severe ME/CFS.”

The study, of course, shows nothing of the kind. Without a control or comparison group, it is impossible to draw any meaningful conclusions. We have no idea whether these purported improvements bear any relationship to the interventions; perhaps the findings simply reflect natural fluctuations of the illness or spontaneous recovery. So any causal language is unwarranted and unjustifiable, including the claim that the findings “demonstrate” that the intervention “effectively addresses” anything at all.

In general, the available data suggest that kids with the illness are more likely to recover than adults. In this case, the results (if credibly reported, and that’s a huge “if” with members of the CBT/GET ideological brigades) indicate that the patients in the sample experienced some overall improvement during the period covered by the retrospective analysis. But the sample itself was just a fraction of the hundreds of patients seen that year by the service. Are they representative of the patient population? We don’t know.

The authors acknowledge the lack of a control group as a limitation. But they do not explain why they did not fashion one. Nor do they mention that the study design prevents them, or should prevent them, from making any claims or inferences about causality. This is an account of researchers’ interpretations of clinicians’ interpretations of treatment outcomes, based on a previously untested schema. As a matter of science and logic, that’s all that can be said about it.

This is not the first time I have critiqued the work of the senior author, Terry Segal, a physician at University College London. Three years ago, in what seemed to be a transparent effort to influence the development of the new NICE guidelines, she was the main author of a review of pediatric treatments for ME/CFS. In the abstract, the review highlighted only one intervention—the woo-woo intervention known as the Lightning Process (LP)—as having been “shown to be effective.” Her citation for this alarming claim was, of course, Professor Esther Crawley’s pediatric LP study—an astonishing and blatant case of research misconduct, in my opinion. That LP paper was packed with false assertions about the study methodology, suggesting it could also be described as potentially fraudulent.

At the time, I wrote to Dr Segal and noted the issues with the LP study. (The journal, Archives of Disease in Childhood, had not yet published its 3,000-word correction, which confirmed what I had documented about the paper.) Dr Segal failed to take account of this information, which to me indicates a serious lack of professional ethics. Anything she writes should be viewed with the utmost caution, and nothing she asserts should be taken at face value.

(View the original post at virology.ws)