The BMJ Corrects REGAIN Study’s Expansive Claims; Results Only Applicable to Post-Hospitalized Long Covid Patients

By David Tuller, DrPH

In February, The BMJ published a study called “Clinical effectiveness of an online supervised group physical and mental health rehabilitation programme for adults with post-covid-19 condition (REGAIN study): multicentre randomised controlled trial.” (Post-Covid-19 Condition, or PCC, is one of many current definitions for Long Covid.) The study, led by a team from the University of Warwick, suffered from some serious flaws. Its claims cannot be taken at face value—as many observers noted and as I outlined here. I also organized a letter of concern to the journal’s editor, co-signed by a dozen other experts.

A key issue was that the trial was unblinded and relied solely on subjective outcomes for its claims of success—a combination of elements that is guaranteed to generate an unknown amount of bias. Beyond that, the authors misrepresented the findings by declaring that the intervention was “clinically effective.” In fact, the results for the primary outcome fell below the currently recommended threshold for the “minimal clinically important difference” (MCID) for that measure, so the intervention should not have been framed as having demonstrated “clinical effectiveness.” The investigators essentially lied about the MCID issue in the Methods section of the paper, and then contradicted themselves deeper in the text.

Another major issue involved a different kind of misrepresentation of the findings—in this case, of how broadly the results could be applied. This was a trial of patients who had been hospitalized for Covid-19. However, the vast majority of PCC, or Long Covid, patients have not been hospitalized—rendering the study of questionable relevance for them. And yet the investigators omitted this key detail from the prominent sections of the paper that readers are most likely to see. The detail was not mentioned in the paper’s title. And in the conclusion of the abstract, as well as in the “What This Study Adds” box that accompanied the text and highlighted key messages, the findings were extrapolated to all PCC patients. (To reiterate, any claim of clinical effectiveness or benefit is itself an overstatement, given that the primary outcome results did not meet the currently recommended MCID threshold for that measure.)

Hospitalized patients often have a different set of subsequent medical issues from those who have not been hospitalized—perhaps because their cases were more severe in the first place, because of the impacts of the hospitalization itself, or for other reasons. Findings from hospitalized patients therefore cannot be easily or automatically extrapolated to those who were not hospitalized. Whether the findings apply to non-hospitalized populations with PCC is a question that might be explored in future research. But it is certainly not appropriate to assume that the answer is positive—nor to disseminate that assumption as if it were an accurate interpretation of the research.

The mismatch between the study sample and the larger PCC population is self-evident, so it is hard to understand how the investigators could have misrepresented their findings so dramatically. The misrepresentation could have been either a deliberate effort to hype and bolster the apparent relevance of the findings or a sign of incompetence and cluelessness about the proper reporting of science.

The letter I organized and sent to the journal editor noted, among our other concerns, that “it is inappropriate for the authors to extrapolate findings from patients who were hospitalized with covid-19 to the much larger number of patients with prolonged symptoms who were not hospitalized.” Similar criticism also appeared in online comments about the article.

In a rapid response posted on The BMJ’s website on April 11th, the investigators rebutted the criticism that they had inappropriately extrapolated their findings by noting that they had provided the correct information in various sections, including the paper’s conclusion, which is not the same as the conclusion of the abstract. Their defense, while technically accurate, missed the point. As far as I know, no one argued that the text of the study did not include the necessary information. The substance of the criticism was that, by omitting the detail in very prominent locations, such as the conclusion of the abstract, the paper had misled readers about the significance of the findings.

Apparently, someone somewhere at The BMJ agreed with this negative assessment. On May 1st, less than three weeks after the investigators’ rapid response, the journal actually published a correction addressing this misrepresentation. The correction did not explain why the journal acted even though the investigators had already rejected the need for any such action. Here’s the text of the correction:

“Several sections of this paper by McGregor and colleagues…have been updated for clarity of the study population. The conclusion in the abstract and the first and second points of the ‘What this study adds’ subsection of the box should have made it clear that in adults with post-covid-19 condition ‘at least three months after hospital discharge for covid-19’ the online, home based, supervised, group physical and mental health rehabilitation programme (REGAIN) showed clinical benefits and lack of harm.”

It’s always laudable when journals and investigators agree to correct errors. But they do not deserve much credit for correcting false or obviously bogus statements that never should have been made in the first place. The study largely framed its findings as widely applicable to all PCC patients, so this major correction knocks the stuffing out of any such expansive declarations. But it does not fix the problems created by the initial publication. As far as I know, the media outlets and online influencers who touted the findings in the first place have not corrected the public record. So the false impression created by the study remains—and unfortunately will have an impact on the advice doctors provide and on the care patients receive.

In fact, the journal does not really seem to have taken the correction seriously. What leads me to that conclusion? An editorial published alongside the trial, which was commissioned by The BMJ, features the same omissions that have now been corrected in the paper itself. The editorial touts the reported findings but only mentions in the last paragraph that the study participants had been hospitalized, with this sentence: “Trial inclusion criteria required a history of hospital admission for covid-19, and it is unknown if findings can be generalised to patients with milder infection who do not require admission.”

As with the paper itself, this salient limitation should have been mentioned in the most prominent parts of the editorial, and especially in the first reference to the study sample. As it is, the detail is presented as something of an afterthought. In other words, the journal is leaving intact in the editorial the same sort of expansive language that has now been corrected in the paper. Shouldn’t the editorial also be corrected, or at least updated, to align with the current version of the paper? Perhaps someone should alert the editor to this discrepancy.

