By David Tuller, DrPH
The recent publication of a review of pediatric “CFS/ME” that promoted the Lightning Process as “effective” has triggered renewed concern–well, ok, I’ve triggered much of that renewed concern–about the 2017 study on which this specious claim is based. That study, from an experienced team at Bristol University, was published by Archives of Disease in Childhood, a BMJ journal, even though the investigators violated multiple ethical and methodological principles of scientific research.
I have been concerned that some members of the committee currently developing new guidelines for ME/CFS, under the auspices of the National Institute for Health and Care Excellence, might somehow consider this “science” involving the Lightning Process to be trustworthy. First, it appeared in a purportedly high-quality journal. Second, it was heavily promoted by the UK’s Science Media Centre, which appears to have widespread credibility among the credulous. People who should have known better–example-in-chief: Professor Dorothy Bishop, the Oxford University neuropsychologist–donated their integrity to the cause by publicly praising the study. Positive articles about these “scientific” findings appeared in many respectable British media outlets.
I first busted the Lightning Process study in a long post on December 13, 2017, after having read some smart comments and observations on the Science For ME forum. Five days later, I posted a blog–“My Questions for the Science Media Centre”–about how the SMC pushed it out into the media-sphere. Now that the paper is under much more intense scrutiny, I thought it would be interesting to re-post that blog. The questions I asked at that point generally seem to hold up. The SMC has never provided answers, as far as I know.
**********
Trial By Error: My Questions for the Science Media Centre
December 18, 2017
On September 20, 2017, a BMJ Publishing Group journal, Archives of Disease in Childhood, published the SMILE trial. This trial investigated an intervention called the Lightning Process as a treatment for kids with CFS/ME (as the study called the disease). The lead investigator was Professor Esther Crawley, the University of Bristol pediatrician and a well-known researcher in the field. The trial’s full title: “Clinical and cost-effectiveness of the Lightning Process in addition to specialist medical care for paediatric chronic fatigue syndrome: randomised controlled trial.”
Not surprisingly, the Science Media Centre played a key role in presenting the SMILE trial findings to the news media, trotting out cautiously supportive statements from experts like Professor Dorothy Bishop, the well-known Oxford University developmental neuropsychologist. The folks at the SMC certainly viewed the trial quite differently than I did, not to mention the many patients, advocates, scientists and others who have criticized it. If the SMILE trial comes under greater scrutiny, as I suspect it might, the SMC could find itself under pressure to answer some tough questions.
Last week, I explained in a post about the SMILE trial how the investigators were able to report that the Lightning Process was an effective treatment. They swapped their original primary and secondary outcomes after more than half of the participants in what became the full sample had already provided data for an earlier feasibility trial. These feasibility trial data had been reviewed to inform the development of the full trial protocol. The data were then analyzed according to this full-trial protocol that was itself based on them.
This circular approach to analysis allowed SMILE to report positive results for self-reported physical function as its primary outcome rather than null results for school attendance at six months. That in turn led to much better press coverage than would have been the case without benefit of the outcome-swapping.
In this post, I’ll briefly review the SMC’s promotional strategy for SMILE, present once again the evidence-based conclusions of my own analysis, and then list some of the questions I have for the SMC. These questions are directed toward the SMC as an institution and also specifically to chief executive Fiona Fox and senior press officer Edward Sykes. Dr. Sykes has a PhD in evolutionary biology and heads the SMC’s mental health and neuroscience operations.
Not incidentally, Dr. Sykes has also been involved with the executive board of the CFS/ME Research Collaborative. Professor Crawley is deputy chair of the collaborative’s executive board and this year spearheaded the group’s valiant but failed bids for two major grants. The SMC and Professor Crawley have significant mutual reputational interests staked on the CMRC’s success. Both the SMC and Professor Crawley also have longstanding and close connections with the PACE investigators and Professor Sir Simon Wessely, who is an SMC trustee.
Over the years, the SMC has worked closely with this group and their colleagues to disseminate the narrative that they are heroic investigators conducting rigorous research in the face of harassment by a dangerous cabal of anti-science patients. In 2016, this approach failed to impress a First-Tier Tribunal panel, which ordered the release of some of PACE’s anonymized trial data. The panel found that the PACE defense team’s “assessment of activist behaviour was, in our view, grossly exaggerated and the only actual evidence was that an individual at a seminar had heckled Professor Chalder [one of the three principal PACE investigators].”
Still, given the continuing strength of the professional bonds between the SMC and this cohort of researchers, it is understandable that the organization could find it challenging to objectively assess the value of Professor Crawley’s work. The available facts suggest that to be the case.
In a recent blog post triggered in part by news coverage of the SMILE trial, Fiona Fox declared that the SMC maintains “a passionate belief in the integrity and power of great scientists communicating top quality research science openly, honestly and without spin.” Yet any serious and comprehensive examination of the full SMILE trial record would yield the evidence-based conclusion that it should not be categorized as “top quality research.”
Moreover, Professor Crawley’s actions and behavior do not resemble those of “great scientists” presenting their findings “openly, honestly and without spin.” The reported findings were dizzy with spin, and both openness and honesty were in short supply.
**********
The SMC’s Promotional Strategy for SMILE
The SMILE trial report was published on September 20, 2017, in Archives of Disease in Childhood. (It was definitely not in the Journal of Archives of Disease in Childhood, as the SMC website repeatedly misnamed the publication.) Professor Crawley and her colleagues reported that the Lightning Process was effective in treating kids with ME/CFS, based on the findings for the primary outcome—self-reported physical function.
In presenting information about SMILE on its website, the SMC mildly noted the controversies surrounding the Lightning Process itself. For some reason, the website chose not to mention that Phil Parker, the creator of the Lightning Process, had previously trained spiritual healers in the use of tarot cards and auras as tools to help diagnose peoples’ problems.
Next, the SMC appeared to be endorsing the perspective that Professor Crawley has embraced in recent public presentations: that she, like her PACE colleagues, is waging a battle against anti-scientific zealots. (Others would call these zealots “critics raising concerns.”) The website gave a nod to the frame advanced by Professor Crawley, explaining that “researchers decided to test the robustness of this treatment so, despite activists trying to stop them, they ran its first ever trial.”
The SMC website further highlighted the researchers’ “strong initial skepticism” that the Lightning Process would prove effective, implicitly praising their apparent willingness to struggle with their own prejudices and preconceptions. However, given the outcome-swapping that occurred after more than half the sample had provided data, it is hard to understand why Professor Crawley and her colleagues would have been surprised at the results they were able to report.
In promoting the trial, the SMC adopted a three-pronged approach that led to generally favorable news coverage. First, Professor Crawley herself presented the findings in a briefing. In addition, the SMC released a round-up of statements from well-known experts. These experts endorsed the findings overall, albeit with various reservations.
In particular, Professor Dorothy Bishop noted that neuro-linguistic programming, a key component of the Lightning Process, “has long been recognized as pseudoscience.” Yet that recognition did not prevent her from declaring that the reported benefits “do seem solid” and that patient allocation and statistical analysis appeared to adhere to “a high standard.”
Poor Professor Bishop! What did she know before she agreed to invest some of her well-deserved reputational capital in saying nice things, however hedged, about the SMILE trial? Did the SMC inform Professor Bishop that Phil Parker, in addition to his expertise in using tarot and auras for spiritual healing, had also previously developed an “ability to step into other people’s bodies over the years to assist them in their healing with amazing results”?
And regarding that purported “high standard” of patient allocation and statistical analysis, did Professor Bishop know that more than half the participants were providing data before the trial was even registered? Did the SMC inform Professor Bishop that the data from these feasibility study participants were being analyzed based on a protocol approach generated after their results had already been reviewed?
Along with Professor Crawley’s briefing and the round-up of expert opinion, the third element of the SMC’s promotional package was a sort of summary and review of the trial methodology. Edward Sykes, the SMC’s lead on mental health and neuroscience, touted this as an “independent stats analysis” in a tweet. This characterization was misleading. The analysis was prepared for the SMC so it could be published on the SMC website, as part of a series slugged “before the headlines.”
Here’s how the SMC explains the series: “‘Before the headlines’ is a service provided to the SMC by members of the Royal Statistical Society (RSS) and Statisticians in the Pharmaceutical Industry (PSI) and experienced statisticians in academia and research.” This description leaves open the question of how an “analysis” with this provenance could reasonably be described as “independent.” “Independent” of what, and of whom?
Moreover, this impressive-sounding “independent stats analysis” does not list any specific authors. The SMC website includes a list of all current contributors to the “before the headlines” series—a list that includes more than 50 names. The decision not to attach any of those 50+ names or any other names to the SMC’s “independent stats analysis” means that no one in particular has to take responsibility for its claims.
As it turns out, the analysis is mostly a rehash of what the trial itself and the investigators have claimed, with minimal evidence that much “independent” judgment was applied in reaching these conclusions. Per these statistics experts, everything in SMILE, including the section on study limitations, appears to have been done just right. As an example, the analysis includes this: “The study has been reported and analysed in line with its own published protocol and statistical analysis plan…All the methods used are appropriate to a trial of this type.”
According to this statement, the statistics reviewers at least read the published protocol in addition to the trial report itself in Archives of Disease in Childhood. This poses a problem. The abstract of the published SMILE trial protocol clearly lists the date of the trial registration as July 31, 2012. (The application for registration occurred the previous month.) Yet the trial report itself carries a start date of September 2010.
This discordance in dates should have alarmed any statistician or researcher who carefully read both the full-trial protocol and the Archives of Disease in Childhood paper. If none of the anonymous analysis writers noticed that the trial officially began almost two years before the trial registration date, then the review can only be characterized as less than thorough. On the other hand, it would be equally damning if someone did notice the discordance in dates but failed to raise questions about it.
This major lapse indicates that the claims made in the “independent stats analysis” cannot be taken at face value. In fact, the analysis offers an excellent demonstration of why in such circumstances those seeking to render judgement should be provided with as many trial-related documents as possible.
In this case, in addition to the published SMILE paper and the full-trial protocol, anyone involved should also have reviewed the feasibility trial protocol, the feasibility trial report, the trial registration, and the application for a substantial protocol amendment. Absent such a review, it should not be surprising that even distinguished experts might make ill-advised statements that cannot withstand scrutiny.
**********
My Own Conclusions
SMILE was an open-label trial with a subjective primary outcome—self-reported physical function. This study design—the same one used in PACE—is highly vulnerable to bias, even if perfectly conducted. In any event, as I reported last week, a review of the SMILE trial documentation reveals that it was fraught with methodological and ethical problems. (As I also mentioned, smart patients noticed the first clues, not me. I had the chance to follow up.)
Here’s a recap of my main findings:
*More than half the participants in the SMILE trial—56 out of 100–were apparently participants in the earlier feasibility study. That means all were recruited and provided data to the researchers before the full trial had an officially assigned registration and before the primary and secondary outcomes were swapped in the published full-trial protocol. Since the researchers lumped together these earlier data with those from participants recruited later, the full-trial report in the Archives of Disease in Childhood* was not an independent investigation of the findings originally generated by the feasibility trial. (*In this sentence, the title was originally miswritten as Archives of Childhood Disease; corrected on 12/19/17)
*Based on the results of the feasibility trial, Professor Crawley swapped her primary and secondary outcome measures. The original primary outcome in the feasibility trial—school attendance at six months—was relegated to the status of a secondary outcome. The subjective measure of self-reported physical function at six months, which was a secondary measure for the feasibility trial, became the primary outcome for the full trial.
*In an unexplained discrepancy, the trial registration listed self-reported fatigue at six months as another primary outcome, along with self-reported physical function. Confusingly, the full-trial protocol listed self-reported fatigue as a primary outcome in the abstract but as a secondary outcome in the text. In the full-trial paper, all the fatigue results were reported as secondary outcomes. These inconsistent changes do not inspire full confidence in the choices ultimately made.
*Swapping the outcomes based on the feasibility study findings while simultaneously converting the feasibility study into the full study could have introduced significant bias in the final paper. How much bias cannot be ascertained at this point, since Professor Crawley has not provided a separate analysis of the feasibility study results for physical function and school attendance. That bias would have added to the bias already generated by the reliance on a subjective outcome—self-reported physical function—in an open-label trial.
*Professor Crawley promised to seek verification of self-reported school attendance by requesting official school attendance records. Although she mentioned this in the protocols for both the feasibility trial and the full trial, these school records are not mentioned anywhere in the full-trial report. Nor did she discuss the feasibility of accessing these records in the logical place–the feasibility trial report. There are two possible explanations for the omission: Either Professor Crawley failed to obtain the records, or she obtained the records and chose, for some reason, to omit them from her reports.
*The trial registration indicated that SMILE was a prospective study. But the registration application date of June 7, 2012, coincided almost exactly with the end of the recruitment time frame for the feasibility trial, which provided more than half of those who ended up being included in the final sample. The full-trial paper did not mention that more than half the participants were from the feasibility study and that their data led to a decision to swap the outcome measures. By definition, a prospective trial must not include data from previously assessed participants. If it does, it should not be registered as a prospective trial.
*Based on the revised primary outcome of self-reported physical function, the full-trial paper reported that the Lightning Process combined with specialist medical care was effective in treating kids with CFS/ME. The full-trial paper also reported that self-reported school attendance at six months produced null results. Thus, the outcome-swapping that occurred after more than half the full-trial sample had already been followed in the feasibility study allowed Professor Crawley to report more impressive results than had she retained the six-month school attendance measure as the primary outcome.
*Not surprisingly, media reports focused largely on the positive results for the self-reported physical function outcome and not the null results for the original primary outcome. Media reports also failed to mention the outcome-swapping. These omissions were not surprising, given the spin from the SMC’s own “independent stats analysis” and the testimonials from the experts.
**********
My Questions for the SMC, chief executive Fiona Fox and senior press officer Edward Sykes
Given the above findings, here are some of the questions about the SMILE trial that I’d like to ask the SMC, Ms. Fox and Dr. Sykes:
*Did the SMC review the supporting documentation mentioned above before deciding to promote this study? Did the SMC provide Professor Dorothy Bishop and the other experts with this documentation, so they could gain a full understanding of the study’s complicated background before commenting? If not, why not?
*Did the statistics experts who prepared the “independent stats analysis” review the supporting documentation beforehand? Given that this report by unnamed authors was prepared for the SMC for publication by the SMC, does the SMC agree that it was misleading to promote it as an “independent stats analysis”?
*Does the SMC understand that its service on the executive committee of the CMRC, of which Professor Crawley is deputy chair, creates concern about a conflict of interest with regards to promoting the SMILE trial? What assurances can the SMC provide that its scientific assessments have not been biased by shared reputational interests at stake?
*Does the SMC stand behind the methodology used in this study, in which feasibility trial participants were folded into the full trial and their data analyzed based on a protocol derived from their own initial results? Does the SMC believe the investigators should have disclosed these interesting details about the study design in the SMILE trial paper?
*Does the SMC think it is appropriate to swap outcome measures in a manner that improves the reported primary outcome results, based on data already provided by more than half of a study’s participants? Does the SMC deny that such an approach would bias the findings?
*Does the SMC believe that it is appropriate to identify a study as “prospective” in a trial registry after more than half the sample has already been recruited and assessed?
*The trial registration also explained that the feasibility trial would be converted into the full trial. Does the SMC agree that this statement contradicts the same document’s claim that the trial was “prospective”? If not, can the SMC explain how a prospective trial can include data from the feasibility study that shaped the full-trial proposal?
*Does the SMC believe it is acceptable for researchers to promise in protocols that they will seek objective data—in this case, official school absence records—and then not mention these data in their published trial reports? What assumptions does the SMC think might be reasonable to draw about such an omission?
*The SMILE trial paper states that study recruitment began in September 2010—that is, the same month as the feasibility study. Yet the trial wasn’t assigned registration until July 31, 2012, and the registration lists the “overall trial start date” as August 1, 2012. Does the SMC think it is a problem for a trial to have two reported start dates?
*Does the SMC agree that the decision to swap the outcome measures made it easier to highlight the positive findings for physical function rather than the null findings for school attendance at six months? Does the SMC agree that this outcome swap therefore made the trial appear more successful than would have been the case otherwise?
*Can the SMC deny that the decision to swap outcomes led to better press coverage of the trial than would have been the case had the outcomes not been swapped?
*Does the SMC believe that last week’s Virology Blog post on the SMILE trial was libelous or defamatory? If so, can the SMC identify errors that require correction? Does the SMC believe last week’s Virology Blog post was vexatious or a form of harassment? If so, on what grounds?
*Does the SMC agree that these methodological and ethical concerns about the conduct and reporting of the SMILE trial deserve a considered and detailed response from Professor Crawley? If so, will the SMC request that she soon produce one?
Comments
Responses to “Re-Visiting My Questions for the Science Media Centre about Bristol’s LP Study”
Thanks for the reminder. These questions certainly need answering and sooner rather than later would be good, for the sake of all the kids who may be at risk in the meantime.