by David Tuller, DrPH
[June 25, 2017: The last section of this post, about the PLoS One study, has been revised and corrected.]
I have tiptoed around the question of research misconduct since I started my PACE investigation. In my long Virology Blog series in October 2015, I decided to document the trial’s extensive list of flaws—or as many as I could fit into 15,000 words, which wasn’t all of them—without arguing that this constituted research misconduct. My goal was simply to make the strongest possible case that this was very bad science and that the evidence did not support the claims that cognitive behavior therapy and graded exercise therapy were effective treatments for the illness.
Since then, I have referred to PACE as “utter nonsense,” “complete bullshit,” “a piece of crap,” and “this f**king trial.” My colleague and the host of Virology Blog, Professor Racaniello, has called it a “sham.” Indeed, subsequent events have only strengthened the argument against PACE, despite the unconvincing attempts of the investigators and Sir Simon Wessely to counter what they most likely view as my disrespectful and “vexatious” behavior.
Virology Blog’s open letters to The Lancet and Psychological Medicine have demonstrated that well-regarded experts from the U.S., U.K., and many other countries find the methodological lapses in PACE to be such egregious violations of standard scientific practice that the reported results cannot be taken seriously. In the last few months, more than a dozen peer-reviewed commentaries in the Journal of Health Psychology, a respected U.K.-based academic publication, have further highlighted the international dismay at the study’s self-evident and indisputable lapses in judgement, logic and common sense.
And here’s a key piece of evidence that the trial has lost all credibility among those outside the CBT/GET ideological brigades: The U.S. Centers for Disease Control still recommends the therapies but now insists that they are only “generic” management strategies for the disease. In fact, the agency explicitly denies that the recommendations are related to PACE. As far as I can tell, since last year the agency no longer cites the PACE trial as evidence anywhere on its current pages devoted to the illness. (If there is a reference tucked away in there somewhere, I’m sure a sharp-eyed sleuth will soon let me know.)
It must be said that the CDC’s history with this illness is awful—another “bad science” saga that I documented on Virology Blog in 2011. In past years, the agency cited PACE prominently and has collaborated closely with British members of the biopsychosocial school of thought. So it is ridiculous and—let’s be frank—blatantly dishonest for U.S. public health officials to now insist that the PACE-branded treatments they recommend have nothing to do with PACE and are simply “generic” management strategies. Nevertheless, it is significant that the agency has decided to “disappear” PACE from its site, presumably in response to the widespread condemnation of the trial.
Many of the PACE study’s myriad flaws represent bad science but clearly do not rise to the level of research misconduct. Other fields of medicine, for example, have abandoned the use of open-label trials with subjective outcomes because they invite biased results; Jonathan Edwards, an emeritus professor of medicine from University College London, has made this point repeatedly. But clearly large segments of the psychological and psychiatric fields do not share this perspective and believe such trials can provide reliable and authoritative evidence.
Moreover, the decision to use the very broad Oxford criteria to identify patients is bad science because it conflates the symptom of “chronic fatigue” with the specific disease entity often known as “chronic fatigue syndrome” but more appropriately called “myalgic encephalomyelitis.” This case definition generates heterogeneous samples that render it virtually impossible for such studies to identify accurate information about causes, diagnostic tests and treatments. Although a 2015 report from the National Institutes of Health recommended that it should be “retired” from use, the Oxford definition remains in the published literature. Studies relying on it should be discredited and their findings ignored or dismissed. But that’s probably as far as it goes.
Many definitions of “research misconduct” exist, but they generally share common elements. In Britain, the Medical Research Council, the main funder of PACE, endorses the definition from Research Councils U.K., an organization which outlines its principles in a statement called “Policy and Guidelines on Governance of Good Research Conduct.” In exploring this question, I will focus here on just two of the planks of the definition cited by the MRC: “misrepresentation of interests” and “misrepresentation of data.”
Let me be clear: I am not trained as a bioethicist. I have never been involved in determining if any particular study involves research misconduct. And I am not making any such claim here. However, when a clinical trial includes so many documented flaws that more than 100 experts from around the world are willing and even eager to sign a letter demanding immediate retraction of key findings, the question of whether there has been research misconduct will inevitably arise. Although people with different perspectives could clearly disagree on the answer, the final and authoritative determination will likely not emerge until the PACE study and the details involved in its conduct and the publication of the results are subjected to a fully independent investigation.
In the meantime, let’s look at how research misconduct is defined and examine some of the possible evidence that might be reviewed. For starters, the cited definition of “misrepresentation of interests” includes “the failure to declare material interests either of the researcher or of the funders of the research.”
I have repeatedly pointed out that the investigators have misled participants about their “material interests” in whether the trial reached certain conclusions—namely, that CBT and GET are effective treatments. The three main investigators have had longstanding links with insurance companies, advising them that rehabilitative approaches such as the interventions under study could get ME/CFS claimants off benefits and back to work. No reliable evidence actually supports this claim—certainly the PACE results failed to confirm it. And yet the investigators did not disclose these consulting and/or financial ties in the information leaflets and consent forms provided to participants.
Why is that a problem? Well, the investigators promised in their protocol to adhere to the Declaration of Helsinki, among other ethical guidelines. The declaration, an international human rights document enacted after WWII to protect human research subjects, is very specific about what researchers must do in order to obtain informed consent: They must tell prospective participants of “any possible conflicts of interest” and “institutional affiliations.”
Without such disclosures, in fact, any consent obtained is not informed but, per Helsinki’s guidelines, uninformed. Investigators cannot simply pick and choose from among their protocol promises and decide which ones they will implement and which ones they won’t. They cannot decide not to disclose “any possible conflicts of interest,” once they have promised to do so, even if it is inconvenient or uncomfortable or might make people reluctant to enter a trial. I have interviewed four PACE participants. Two said they would likely or definitely not have agreed to be in the study had they been told of these conflicts of interest; in fact, one withdrew her consent for her data to be used after she had already completed all the trial assessments because she found out about these insurance affiliations later on and was outraged at not having been told from the start.
The PACE investigators have responded to this concern, but their answers do not actually address the criticism, as I have previously pointed out. It is irrelevant that they made the appropriate disclosures in the journals that published their work; the Declaration of Helsinki does not concern itself with protecting journal editors and journal readers but with protecting human research subjects. The investigators have also argued that insurance companies were not directly involved in the study, thereby implying that no conflict of interest in fact existed. This is also a specious argument, relying as it does on an extremely narrow interpretation of what constitutes a conflict of interest.
Shockingly, the PACE trial’s ethical review board approved the consent forms, even without the disclosures clearly mandated by the Declaration of Helsinki. The Lancet and Psychological Medicine have been made aware of the issue but have no apparent problem with this breach of research ethics. Notwithstanding such moral obtuseness, the fact remains that the PACE investigators made a promise to disclose “any possible conflicts of interest” to trial participants, and failed to honor it. Case closed. In the absence of legitimate informed consent, they should not have been allowed to publish any of the data they collected from their 641 participants.
Does this constitute “misrepresentation of material interests” within the context of the applicable definition of research misconduct? I will leave it to others to make that determination. Certainly the PACE authors and their cheerleaders—including Sir Simon, Esther Crawley, Lancet editor Richard Horton and Psychological Medicine editors Robin Murray and Kenneth Kendler—would reject any such interpretation.
Turning to the category of “misrepresentation of data,” the MRC/RCUK definition cites the “suppression of relevant findings and/or data, or knowingly, recklessly or by gross negligence, presenting a flawed interpretation of data.” One of the PACE trial’s most glaring problems, of course, is the odd fact that 13% of participants met the physical function outcome threshold at baseline. (A smaller number, slightly more than one percent, met the fatigue outcome threshold at baseline.) In the Lancet study, participants who met these very poor outcome thresholds were referred to as being “within normal ranges” for these indicators. In the Psychological Medicine paper, these same participants were referred to as being “recovered” for these indicators.
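The overlap itself is simple arithmetic. Here is a minimal sketch, using the thresholds reported in the trial papers for the SF-36 physical function subscale (trial entry required a score of 65 or less; the “normal range” outcome threshold was 60 or more):

```python
# Sketch of the overlapping PACE thresholds on the SF-36 physical
# function subscale (scored 0-100; higher scores mean better function).
# Thresholds as reported: trial entry required <= 65; the "normal range"
# outcome was >= 60.
ENTRY_MAX = 65
NORMAL_MIN = 60

def disabled_enough_to_enroll(score):
    return score <= ENTRY_MAX

def within_normal_range(score):
    return score >= NORMAL_MIN

# Every score from 60 through 65 satisfies both definitions at once.
overlap = [s for s in range(0, 101)
           if disabled_enough_to_enroll(s) and within_normal_range(s)]
print(overlap)  # -> [60, 61, 62, 63, 64, 65]
```

A participant who entered with a score in that band thus counted as disabled and “within normal range” on the same day, before receiving any treatment at all.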
Of course, it was obvious from the papers themselves that some participants could have met these thresholds at baseline. But the number of participants who actually did meet these thresholds at baseline became public only after the information was released pursuant to a freedom-of-information request. (This was an earlier data request than the one that eventually led to the release of all the raw trial data for some of the main results.)
The decision-making behind this earlier release remains a mystery to me, since the data make clear that the study is bogus. While the bizarre nature of the overlap in entry and outcome thresholds already raised serious questions about the trial’s credibility, the fact that a significant minority of participants actually met both the “disability” and “normal range”/”recovery” thresholds for physical function at baseline certainly adds salient and critical information. Any interpretation of the study made without the benefit of that key information is by definition incomplete and deficient.
Given that meeting an outcome threshold at baseline should be a logical impossibility, it is understandable why the authors made no mention of the fact that so many participants were simultaneously found to be “disabled” and “within normal range”/“recovered” for physical function. Any paper on breast cancer or multiple sclerosis or any other illness recognized as a medical disease would clearly have been rejected if it featured such an anomaly.
The PACE team compounded this error by highlighting these findings as evidence of the study’s success. At the press conference promoting the Lancet paper, Trudie Chalder, one of the three principal investigators, touted these “normal range” results by declaring that twice as many people in the CBT and GET groups as in the other groups “got back to normal”—even though some of these “back-to-normal” participants still qualified as “disabled” under the study’s entry criteria. Moreover, the PACE authors themselves were allowed a pre-publication review of an accompanying Lancet commentary about the PACE trial, written by two Dutch colleagues. The commentary argued that the “normal range” analyses represented a “strict criterion” for recovery and declared that 30 percent of the participants had met this recovery standard.
Yet this statement is clearly preposterous, given that participants who met this “strict criterion” could have had scores indicating worse health than the scores required to demonstrate disability at trial entry. The ensuing headlines and news stories highlighted both Professor Chalder’s statement that CBT and GET were effective in getting people “back to normal” and that 30 percent had “recovered” according to a “strict definition.” This misinformation has since impacted treatment guidelines around the world.
I have previously criticized the authors’ attempts to explain away this problem. They have essentially stated that it makes no difference if some participants were “recovered” on one “recovery” threshold at baseline because the study included other “recovery” criteria as well. Moreover, they point out that the “normal range” analyses in The Lancet were not the main findings—instead, they have argued, the comparison of averages between the groups, the revised primary outcome of the study, was the definitive evidence that the treatments work.
Sorry. Those excuses simply do not wash. The inclusion of these overlapping entry and outcome thresholds, and the failure to mention or explain in the papers themselves how anyone could be “within normal range” or “recovered” while simultaneously being sick enough to enter the study, casts doubt on the entire enterprise. No study including such a bogus analysis should ever have passed peer review and been published, much less in journals presumed to subject papers to rigorous scientific scrutiny. That The Lancet and Psychological Medicine have rejected the calls of international experts to address the issue is a disgrace.
But does this constitute “misrepresentation of data” within the context of the applicable definition of research misconduct? Again, I leave it to others to make that determination. I know some people—in particular, the powerful cohort of PACE supporters—have reviewed the same set of facts and have expressed little or no concern about this unusual aspect of the trial.
[This section about the PLoS One study has been revised and corrected. At the end of the post, I have explained the changes. For full transparency, I have also re-posted the original paragraphs for anyone who wants to track the changes.]
Now let’s turn to the PLoS One paper published in 2012, which has been the subject of much dispute over data access. And yet that dispute is a distraction. We don’t need the data to determine that the paper included an apparently false statement that has allowed the investigators to claim that CBT and GET are the most “cost-effective” treatments from the societal perspective—a concept that factors in other costs along with direct health-care costs. PLoS One, like the other journals, has failed to address this concern. (The journal did post an “expression of concern” recently over the authors’ refusal to share data from the trial in accordance with the journal’s policies.)
The PACE statistical analysis plan included three separate assumptions for how to measure the costs of what they called “informal care” (the care provided by family and friends) in assessing cost-effectiveness from the societal perspective. The investigators promised to analyze the data based on valuing this informal care at: 1) the cost of a home-care worker; 2) the minimum wage; and 3) zero cost. The latter, of course, is what happens in the real world—families care for loved ones without getting paid anything by anyone.
In PLoS One, the main analysis for assessing informal care presented only the results under a fourth assumption not mentioned in the statistical analysis plan—valuing this care at the mean national wage. The paper did not explain the reasons for this switch. Under this new assumption, the authors reported, CBT and GET proved more cost-effective than the two other PACE treatment arms. The paper did not include the results based on any of the three ways of measuring informal care promised in the statistical analysis plan. But the authors noted that sensitivity analyses using alternative approaches “did not make a substantial difference to the results” and that the findings were “robust” under other assumptions for informal care.
Sensitivity analyses are statistical tests used to determine whether, and to what extent, different assumptions lead to changes in results. The “alternative approaches” mentioned in the study as being included in the sensitivity analyses were the first two approaches cited in the statistical analysis plan—valuing informal care at the cost of a home-care worker and at minimum wage. The paper did not explain why it had dropped any mention of the third promised method of valuing informal care—the zero-cost assumption.
In the comments, a patient-researcher, Simon McGrath, pointed out that this claim of “robust” results under other assumptions could not possibly be accurate, given that the minimum wage was much lower than the mean national wage and would therefore have yielded substantially different results in the sensitivity analyses. In response, Paul McCrone, the King’s College London expert in health economics who served as the study’s lead author, conceded the point.
“You are quite correct that valuing informal care at a lower rate will reduce the savings quite substantially, and could even result in higher societal costs for CBT and GET,” wrote Professor McCrone. So much for the paper’s claim that sensitivity analyses showed that alternative assumptions “did not make a substantial difference to the results” and were “robust” no matter how informal care was valued.
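The arithmetic behind that concession is easy to see. Here is a toy sketch with invented figures; none of these numbers come from the PACE data, and they serve only to show why the wage chosen to value informal care can flip the comparison:

```python
# Toy model of "societal cost": direct health-care costs plus informal
# (unpaid family) care valued at a chosen hourly wage. All figures below
# are invented for illustration and do not come from the PACE trial.

def societal_cost(direct_cost, informal_hours, hourly_rate):
    return direct_cost + informal_hours * hourly_rate

# Hypothetical arms: the therapy costs more to deliver but reduces the
# hours of family care needed, relative to the comparison arm.
therapy    = {"direct_cost": 2000, "informal_hours": 100}
comparison = {"direct_cost": 1000, "informal_hours": 200}

for label, rate in [("mean wage", 20.0), ("minimum wage", 7.0), ("zero cost", 0.0)]:
    diff = (societal_cost(hourly_rate=rate, **therapy)
            - societal_cost(hourly_rate=rate, **comparison))
    print(f"{label:>12}: therapy minus comparison = {diff:+.0f}")

# -> mean wage: -1000  (therapy looks cheaper from the societal view)
# -> minimum wage: +300  (the advantage is gone)
# -> zero cost: +1000  (therapy looks more expensive)
```

The qualitative conclusion depends entirely on the valuation rate, which is why dropping the zero-cost analysis promised in the statistical analysis plan mattered.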
Surprisingly, given this acknowledgement, Professor McCrone did not explain why the paper included a contradictory statement about the sensitivity analyses under alternative assumptions. Nor did he offer to correct the paper to conform to this revised interpretation he presented in his comments. Instead, he presented a new rationale for highlighting the results based on the assumption that unpaid informal care was being valued at the mean national wage, rather than using the other assumptions outlined in the protocol.
“In our opinion, the time spent by families caring for people with CFS/ME has a real value and so to give it a zero cost is controversial,” Professor McCrone wrote. “Likewise, to assume it only has the value of the minimum wage is also very restrictive. In other studies we have costed informal care at the high rate of a home care worker. If we do this then this would show increased savings shown [sic] for CBT and GET.”
This concern for patients’ families is certainly touching and, in a general sense, laudable. But it must be pointed out that what they did in earlier studies is irrelevant to PACE, given that they had included the assumptions they planned to use in their statistical analysis plan. Moreover, it does not explain why Professor McCrone and his colleagues then decided to include an apparently false statement about the sensitivity analyses in the paper.
Another patient-researcher, Tom Kindlon, pointed out in a subsequent comment that the investigators themselves chose the alternative assumptions, which they were now dismissing as unfair to caregivers. “If it’s ‘controversial’ now to value informal care at zero value, it was similarly ‘controversial’ when they decided before the data was looked at, to analyse the data in this way,” wrote Kindlon. “There is not much point in publishing a statistical plan if inconvenient results are not reported on and/or findings for them misrepresented.”
Whatever their reasons, the PACE investigators’ inclusion in the paper of the apparently false statement about the sensitivity analyses represents a serious lapse in professional ethics and judgement. So does the unwillingness to correct the paper itself, given the exchanges in the comments. Does this constitute “misrepresentation of data” within the context of the MRC/RCUK definition of research misconduct?
As I have said, I will leave it to others to make that determination. I look forward to the day when an international group of experts finally pursues a thorough investigation of how and why everything went so terribly wrong with this highly influential five-million-pound trial.
A postscript: I did not contact the PACE authors prior to posting this blog. After my initial series ran in October 2015, Virology Blog posted their full response to my concerns. Since then, I have repeatedly tried to solicit their comments for subsequent blog posts, and they have repeatedly declined to respond. I saw no point in repeating that exercise this time around. I also did not try to solicit a response from Professor McCrone, since he has not responded to multiple earlier requests seeking an explanation for why the PLoS One paper contains the apparently false statement about sensitivity analyses.
However, I would be happy to post on Virology Blog a response of any length from any of the investigators, should they decide to send one. I would of course also correct any documented factual errors in what I have written, which is something I have done whenever necessary throughout my journalism career. (June 25, 2017: Of course, I have now made such corrections, per my professional obligations.)
**********
Next post: The Lancet’s awful new GET trial
**********
Explanation for the changes: In the original version, I should have made clear that my concerns involved an analysis of what the investigators called cost-effectiveness from the societal perspective, which included not only direct health-care costs but other considerations as well, including the informal-care costs. I also mistakenly wrote that the paper presented the results only under the assumption that informal care was valued at the cost of a home-care worker. In fact, for unexplained reasons, the paper’s main analysis was based on none of the three assumptions mentioned in the statistical analysis plan but on a fourth: the national mean wage.
In addition, I mistakenly assumed, based on the statistical analysis plan, that the sensitivity analyses conducted for assessing the impact of different approaches included both the minimum wage and zero-cost assumptions. In fact, the sensitivity analyses cited in the paper focused on the assumptions that informal care was valued at the cost of a home-care worker and at the minimum wage. The zero-cost assumption also promised in the protocol was not included at all. I apologize to Professor McCrone and his colleagues for the errors and am happy to correct them.
However, this does not change the fact that Professor McCrone’s subsequent comments contradicted the paper’s claim that, per the sensitivity analyses, changes in how informal care was valued “did not make a substantial difference to the results” and that the findings were “robust” for the alternative assumptions. This apparently false claim in the paper itself still needs to be explained or corrected. The paper also does not explain why the investigators included the zero-cost assumption in the detailed statistical analysis plan and then decided to drop it entirely in the paper itself.
**********
Here is the original version of the section on the PLoS One paper, for anyone who wants to compare the two and track the changes:
Now let’s turn to the PLoS One paper published in 2012, which has been the subject of much dispute over data access. And yet that dispute is a distraction—we don’t need the data to determine that the paper included an apparently false statement that has allowed the investigators to claim that CBT and GET are the most “cost-effective” treatments. PLoS One, like the other journals, has failed to address this concern, despite an open letter about it posted on Virology Blog last year. (The journal did post an “expression of concern” recently over the authors’ refusal to share data from the trial in accordance with the journal’s policies.)
The PACE statistical analysis plan included three separate assumptions for how to measure the costs of “informal care”–the care provided by family and friends. The investigators promised to provide results based on valuing this informal care at: 1) the average wage paid to health-care workers; 2) the minimum wage; and 3) at zero pay. The latter, of course, is what happens in the real world—families care for loved ones without getting paid anything by anyone.
In PLoS One, the main analysis only presented the results under the first assumption—costing the informal care at the average wage of a health-care worker. Under that assumption, the authors reported, CBT and GET proved more cost-effective than the two other PACE treatment arms. The paper did not include the results based on the other two ways of measuring “informal care” but declared that “alternative approaches were used in the sensitivity analyses and these did not make a substantial difference to the results.” (Sensitivity analyses are statistical tests used to determine whether, and to what extent, different assumptions lead to changes in results.)
Yet in the comments, two patient researchers contradicted this statement, pointing out that the claim that all three assumptions would essentially yield the same results could not possibly be accurate. In response, Paul McCrone, the King’s College London expert in health economics who served as the study’s lead author, conceded the point. Let me repeat that: Professor McCrone agreed that the cost savings would indeed be lower under the minimum wage assumption, and that under the third assumption any cost advantages for CBT and GET would disappear.
“If a smaller unit cost for informal care is used, such as the minimum wage rate, then there would remain a saving in informal care costs in favour of CBT and GET but this would clearly be less than in the base case used in the paper,” wrote Professor McCrone. “If a zero value for informal care is used then the costs are based entirely on health/social care (which were highest for CBT, GET and APT) and lost employment which was not much different between arms.” So much for the paper’s claim that sensitivity analyses showed that alternative assumptions “did not make a substantial difference to the results.”
Surprisingly, given these acknowledged facts, Professor McCrone did not explain why the paper included a completely contradictory statement. Nor did he offer to correct the paper itself to conform to his revised interpretation of the results of the sensitivity analyses. Instead, he presented a new rationale for highlighting only the results based on the assumption that unpaid informal care was being reimbursed at the average salary of a health-care worker.
“In our opinion, the time spent by families caring for people with CFS/ME has a real value and so to give it a zero cost is controversial,” Professor McCrone wrote. “Likewise, to assume it only has the value of the minimum wage is also very restrictive. In other studies we have costed informal care at the high rate of a home care worker. If we do this then this would show increased savings shown [sic] for CBT and GET.”
This concern for patients’ families is certainly touching and, in a general sense, laudable. But it must be pointed out that what they did in earlier studies is irrelevant to PACE, given that they had included the alternative assumptions in their own statistical analysis plan. Moreover, it does not explain why Professor McCrone and his colleagues then decided to include an apparently false statement about the sensitivity analyses in the paper.
One of the commenters, patient-researcher Tom Kindlon from Dublin, pointed out in a subsequent comment that the investigators themselves chose the alternative assumptions that they were now dismissing as unfair to caregivers. “If it’s ‘controversial’ now to value informal care at zero value, it was similarly ‘controversial’ when they decided before the data was looked at, to analyse the data in this way,” he wrote. “There is not much point in publishing a statistical plan if inconvenient results are not reported on and/or findings for them misrepresented.”
Whatever their reasons, the PACE investigators’ inclusion in the paper of the apparently false statement about the sensitivity analyses represents a serious lapse in professional ethics and judgement. So does the unwillingness to correct the paper itself to reflect Professor McCrone’s belated acknowledgement of the actual results from the sensitivity analyses, rather than the inaccurate results reported in the paper. Does this constitute “misrepresentation of data” within the context of the MRC/RCUK definition of research misconduct?
**********
If you appreciate my work, please consider supporting my crowdfunding effort, which ends June 30th.
Comments
34 responses to “Is PACE a Case of Research Misconduct?”
Who would be the best person to explore the idea of research misconduct? Would it be a Harvard Lawyer who has a focus on Bio Ethics that might be in a position to consider a class action suit?
I too look forward to the day (hopefully sooner rather than later) “when an international group of experts finally pursues a thorough investigation of how and why everything went so terribly wrong with this highly influential five-million-pound trial.” I just worry for their sanity… whoever has to trundle through all the twists and turns that the PACE PIs have provided will need abs of steel, if only to stop laughing in incredulity. Maybe they should have some CBT therapists available… I fear such exposure might lead to serious PTSD.
But truly… thank you… we need this out there. Tbh the UK needs this out there. The UK is turning itself into an academic laughing stock… our universities used to be the best in the world, and now look at us: ethics committees with no sense of the ‘ethical’, peer review with no sense of ‘review’, journal editors with no sense… just no sense. Certainly no morality. PIs with egos the size of their entourage. Entourages who need Specsavers because they can’t see the Emperor is stark ***** naked.
David Tuller, I do not have the words to even begin to express how very thankful I am for the work you have done to expose all that is wrong with the PACE trial. I have read nearly everything you have written about the trial, & as an ME/CFS patient, it means so very much to have you on our side. What has long felt like a hopeless battle against some very entrenched viewpoints & so-called treatments pushed by a powerful group that many seem to believe can do no wrong is finally getting shown to be blatantly inaccurate. Hopefully, the huge amount of harm that has been perpetrated against some incredibly ill people will finally be understood to be as bad as it is, and will be stopped in the future. Thank you, thank you, thank you!!!!
Thank you once again David for working on this issue and speaking up for us genuine ME sufferers. I come from a sporting background of triathlons, gymnastics, Advance Personal Trainer & Sports Therapist. I have always known and loved sports and still do. Only difference is I now have to watch when health allows instead of competing.
After contracting ME after several bouts of viral meningitis, I was willing to try anything to get my health back. I was given graded exercise and PACE, several weeks into this I had a major relapse and never recovered to my mild ME baseline and became stuck with severe ME.
I am not alone; there are thousands of adults and children saying the same as me. I am incredibly angry that idiots are still recommending PACE to our NHS. When I try to voice concerns, I am told we’re attacking Simon Wessely and his colleagues. This is not the case; we are factual. No one understands this disabling illness better than us long-term sufferers. We should be allowed to sue those responsible who have disabled us with PACE; our voices have been ignored.
Enough is enough: lives are being ruined, health destroyed and futures robbed.
This must end for the sake of future generations.
The infamous PACE trial not only devastated PWME in the UK, where the study was hatched, but its authors were able to sell the theory to every other country. Yes, it was misconduct. Not only did insurance corporations & Big Pharma manipulate it to suit their agenda, but it killed by exacerbating the disease to a point of no return. These authors tried their best to bury Myalgic Encephalomyelitis by using the term Chronic Fatigue Syndrome. I’m mystified how a neurological disease that causes inflammation of the brain & spinal cord could be renamed a psychiatric syndrome. The 2015 IOM million-dollar baby separated ME from CFS and added it to the American ICD under Neurological Diseases as G93.3, cautioning that it be ruled out before placing the patient under the SEID umbrella where CFS was tucked away. They said there was insufficient research to prove inflammation in CFS. 5000 published studies weren’t enough to convince a panel of medical professionals, including some who signed the ICC in 2011. No one but patient stakeholders and their doctors even considered mold-induced CFS when it came time to name a new bucket for an unexplained disease. Their voices went unheard.
If we are not properly diagnosed using high-quality criteria backed up by good science, then we are forced to live in purgatory, tucked out of sight. I have a list of hundreds of victims who died from insufficient medical care, to the point where someone with ME or CFS became too ill to fight heart disease or cancer, or even worse, suicide. I know of one victim who starved to death because he was unable to get out of bed but his doctor said it was all in his head. Another victim in a wheelchair was tipped into a pool and told to swim. A young woman was forcibly removed from her home and cut off from her family, GP & lawyer so psychiatry could decondition her. Another was sanctioned into psychiatric care using GET to the point where she became severely bedbound. She was sent home, where she died two years later.
This study angers the ME community: not only are we forced to educate ourselves, search out good science, advocate from our beds, advise one another and emotionally support ourselves, but the tools of this study, graded exercise therapy and mandatory cognitive behaviour therapy, are weapons of death for us.
We have a lot to say to the Authors of PACE but the most important is “First Do No Harm”.
Thank you very much, David Tuller, for helping those of us suffering with ME/CFS. Too sick to write more.
Thanks so much David! This study broke so many of the rules for conducting trials, as you have uncovered. And how can recovered patients be the same as people who are sick? I don’t know how this has not been resolved. It is ethically wrong and clinically unworthy. And great question about the CDC. I wanted to file a freedom of information request with them to see how they can still promote CBT and GET. I would love to see their clinical data. If they follow the same rules as pharma, you need to submit (it has been a long time) either one or two robust studies to make such claims. So again I shake my head at the lousy data from the CDC and PACE/White. It is so wrong! This would never be acceptable in any other “normal” disease! Thanks for all your hard work! What happened to the medical oath of first do no harm?
Thank you so much. ME/CFS patients desperately need the record to be set straight.
9000 practitioners in my country got the MSD Manual by Merck & Co., which says for CFS (my best attempt at translation):
“Sometimes, formal and structured programs of physical rehabilitation are helpful. Permanent or long-lasting rest definitely needs to be avoided as it makes deconditioning and progressive helplessness worse”
I believe I am bedridden today, even though my ME/CFS started as very mild, because I was always encouraged by my doctors to push and exercise and not let myself rest.
Keep up the pressure, your work is immensely important and deeply appreciated
Thank you again David.
To me, it should be obvious to everybody that the PACE trial authors are guilty of both “misrepresentation of interests” and “misrepresentation of data” (cited by the MRC) and, as such, definitely guilty of research misconduct.
(Also, as far as I can recall, they did not initially disclose their conflicts of interest in their publication until it was pointed out. Then The Lancet either republished the PACE trial acknowledging this or issued a correction. Perhaps somebody can verify this.)
The fact that people have died as a result of their ‘beliefs’ and that it doesn’t bother them one iota tells the type of people one is dealing with here. People of no morals, no ethics and no humanity.
Possibly the most famous person was Brynmor John, the Labour MP for Pontypridd in South Wales. Mr. John was following the exercise regime (the claim that sufferers can exercise their way out of this illness) and he died suddenly after leaving the House of Commons gym. He was 53 years old.
There are lots more people we know of and lots of others who have been left bedridden, all mostly young people.
Their interpretation of their statistics is in their own interests. They’ve gained while patients lost their lives, their livelihoods, their health. They knew their trial was flawed. They knew it was a sham. They knew their statistics were flawed. However, they made their careers out of it; they made their money and prestige, they got honoured, they got titles. They have been a law unto themselves and still think they are. But they are now mistaken.
And it goes further. They ruled the media. There have been so many biomedical studies done which never saw the light of day in the U.K. If they did, it was a couple of lines, followed by a whole article on the types of people these M.E. patients are, malingerers who issued death threats to researchers, and also articles with headlines like: “It’s safer to insult the Prophet Mohammed than to contradict the armed wing of the ME brigade”.
The quote referring to the three types of falsehoods, “Lies, Damn Lies and Statistics”, doesn’t even begin to cover what these ‘humans’ are capable of. But they qualify for all three in that quote.
They are Liars, Damn Liars and high-class Fraudsters. They also have power as can be seen in the fact that The Lancet has still not retracted this debunked trial.
They are definitely guilty of research misconduct and a lot more to boot!
Just in case some don’t know, here in the Republic of Ireland, we have succeeded in having the NICE guidelines removed from our HSE (Health Service Executive) website and it is now under review. Our aim is to ensure that the proper guidelines are put in place.
Thank you so much again David.
Yes, thank you. As I read this blog article I was reminded of the strategy that non-violent resistance uses: the creation of so much tension that the situation cannot be ignored and must be resolved. It seems to me that you’re writing very carefully here and going right up to that line. It’s important that advocates are increasingly bringing up the question of basic human rights. It is on the medical establishment and the governments of countries that have endorsed GET/CBT to fully investigate all cases in which people have reported harms from these. I look forward to your future work, as I understand from your crowdfunding site that you intend to make some of these stories known. I hope, too, that if we’re not at the beginning of the end of PACE-gate, at least we’re at the end of the beginning.
Thank you! Your work is deeply appreciated!
Thanks a million!
If you cannot back up a claim of research misconduct, which you acknowledge above that you can’t, then why mention it at all? All that does is cause distress in a vulnerable group of patients, fuel more ill-informed aggressive M.E. activism, and undermine valid criticism of the PACE trial.
There are various things in this article that are not correct; for a lot of them the authors have already shown where you are incorrect, but the whole section on the economics paper is also a mixture of error and ill-founded speculation. The PLoS One article valued informal care at the opportunity cost based on national mean earnings, not at the average rate of a health care worker as you erroneously state above. You don’t need to conduct a sensitivity analysis to know that the lower the value you assign to family care, the less benefit CBT and GET show in terms of cost effectiveness; that is just common sense, so Professor McCrone is obviously perfectly correct in his response. The paper’s claim that the sensitivity analysis did not show a large impact on cost effectiveness for alternative assumptions is, I think, referring to the alternative assumptions discussed in the actual paper for valuing informal care, i.e. the minimum wage and the unit cost of a homecare worker, and not to the statistical analysis plan (so the zero value was not part of the analysis), so once again your point is ill-founded. The paper was never arguing that there was no difference in costs; it was comparing treatments and establishing which treatment had the greatest probability of being cost effective, so it is likely that changing the value of informal care as they did in the actual paper’s sensitivity analysis did not change these results. Their reporting is therefore correct, and there is no basis to think otherwise.
The authors acknowledge in their outline of limitations that there are uncertainties regarding which is the most appropriate method to value informal care. Are you seeking to argue above that the time families spend caring for people with M.E. should be valued at zero? On what basis do you suggest this?
You are also failing to report the most important comment by Professor McCrone in his responses to comments on the article. His main point was “What should be stressed above all else is that there is uncertainty around all of these cost and outcome estimates and therefore the acceptability curves are the more informative indicators of the relative cost-effectiveness of these interventions.” I acknowledge that it is a technical paper and I don’t understand all of it either but I feel it is highly irresponsible to be discussing it in the manner you are when it is obvious you do not understand it at all.
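The core arithmetic being argued over here can be sketched in a few lines. The figures below are hypothetical, chosen only to illustrate how the hourly value assigned to informal care drives the apparent savings; none of them come from the PACE economics paper.

```python
# Toy sensitivity analysis (hypothetical numbers, not from the PACE paper):
# how the hourly value assigned to informal care changes the apparent
# cost saving of a treatment.

def net_saving(hours_saved, hourly_value, treatment_cost):
    """Saving = value of the informal-care hours avoided minus treatment cost."""
    return hours_saved * hourly_value - treatment_cost

HOURS_SAVED = 200        # hypothetical reduction in informal-care hours
TREATMENT_COST = 1_000   # hypothetical cost of delivering the therapy

for label, rate in [("zero value", 0.00),
                    ("minimum wage", 7.50),
                    ("home-care worker", 15.00)]:
    saving = net_saving(HOURS_SAVED, rate, TREATMENT_COST)
    print(f"{label:18s} (£{rate:5.2f}/hr): net saving £{saving:,.2f}")
```

At a zero valuation the same intervention shows a net cost; at the home-care rate it shows a large saving, which is exactly why the choice of valuation is contested.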
Thank you, your work is deeply appreciated.
The 2017 Cochrane review panel were assisted by Peter White and chose to look ONLY at publications up until 2014, conveniently ignoring all the publications raising the problems with PACE. Cochrane state that GET is not harmful, despite the lack of any study of day-to-day harms, and the Cochrane review states that its findings are consistent with the NICE guideline, which is hardly surprising given that Peter White has been instrumental in PACE, NICE and the Cochrane reviews. Cochrane state that they looked at aerobic exercise, ignoring people too ill to walk, run or climb steps, i.e. the vast majority of patients. Currently NICE ignores all physiological abnormalities, including all the abnormalities listed in the CCC and the ICC. When are we going to demand inclusion and treatment guidance for people with physiological abnormalities? With orthostatic intolerance? With an abnormally low anaerobic threshold? Chronotropic incompetence? Abnormal heart rate patterns on exertion? The Cochrane review will consolidate the harm caused by PACE. NICE also rely on Peter White and the same circular arguments; NICE will most likely say that they support Cochrane and PACE. We know that orthostatic intolerance and heart rate abnormalities are present, and we know that the low anaerobic threshold is consistent with heart rate patterns that look like those of an over-trained athlete (NOT a deconditioned one). The game is moving on as Peter White convinces the mainstream to endorse PACE and NICE. The take-home message for physios from the Cochrane review is that exercise doesn’t harm people with ME/CFS, despite the fact that the physiological harm has NEVER been measured or looked at, nor have patients even been asked about harm in the days after exercise.
“The inclusion of these overlapping entry and outcome thresholds, and the failure to mention or explain in the papers themselves how anyone could be “within normal range” or “recovered” while simultaneously being sick enough to enter the study, casts doubt on the entire enterprise. No study including such a bogus analysis should ever have passed peer review and been published, much less in journals presumed to subject papers to rigorous scientific scrutiny.”
You’ve nailed it again, David. This flaw alone should be enough to discredit the PACE trial’s findings. The issues with patient selection and non-disclosure add insult to this egregious injury – yet PACE still forms the basis of the “gold standard” CBT/GET treatment duopoly here in Australia and so many other countries, while ME/CFS trials using good science pass by virtually unmentioned.
Class action suits are not really a thing in the UK; they are much more limited. Essentially the only bodies that can pursue ‘misconduct’ in this manner are the professional bodies of which the researchers are members. This might be ‘tricky’, given that Simon Wessely is the current head of the Royal College of Psychiatrists. In principle the institution involved could also act, but it spent a quarter of a million pounds to avoid releasing the PACE data.
I note that Andrew Wakefield, who has arguably caused many thousands of deaths indirectly, and who directly performed unauthorised research on children, has as the sole sanction against him the removal of his licence to practise medicine.
He was not fined, prosecuted, or sanctioned in any other manner.
If there is a court in which they face justice, it’s not going to be the courts temporal, but spiritual.
The only other sanction would be their papers all being retracted and them becoming laughing stocks. Which, to be honest, I would be quite happy at.
Thank you, Roger.
I am so grateful for your work. Thank you for doing this for us all. I hope many ME-sufferers and our loved ones support the crowdfunding those last days.
Thank you, your determination is much appreciated.
Surprisingly, given these acknowledged facts, Professor McCrone did not explain why the paper included a completely contradictory statement. Nor did he offer to correct the paper itself to conform to his revised interpretation of the results of the sensitivity analyses. Instead, he presented a new rationale for highlighting only the results based on the assumption that unpaid informal care was being reimbursed at the average salary of a health-care worker.
“In our opinion, the time spent by families caring for people with CFS/ME has a real value and so to give it a zero cost is controversial,” Professor McCrone wrote. “Likewise, to assume it only has the value of the minimum wage is also very restrictive. In other studies we have costed informal care at the high rate of a home care worker. If we do this then this would show increased savings shown [sic] for CBT and GET.”
This concern for patients’ families is certainly touching and, in a general sense, laudable.
That concern obviously did not extend to using a real-world figure, rather than an assumption, for the value placed on informal care.
There is an official value placed on informal care, paid by the government to the carer.
https://www.gov.uk/carers-allowance/overview
The value is currently £62.70 a week, provided you care for at least 35 hours a week and meet other criteria, such as the person being cared for claiming a qualifying benefit.
That works out at roughly £1.79 per hour, or about £3,260 per year.
A home care worker earning the minimum wage of £7.50 per hour for the same 35-hour week would earn £262.50 per week, or £13,650 per year: a difference of nearly £10,400.
Using this figure would have solved his problem, while at the same time providing a more reasonable estimate of value, instead of assuming that the carer somehow managed to earn the same as a professional.
I understand that not everyone, or even most informal carers, can claim the benefit, but I’m pretty sure that they would also see the assumption that they earn the same as a high-rate home care worker as an unbelievable fantasy.
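For what it’s worth, the comparison above can be checked directly, using the rates quoted (£62.70 a week Carer’s Allowance for a 35-hour week, £7.50 an hour for a home care worker) and an assumed 52-week year:

```python
# Check the Carer's Allowance vs home-care-worker comparison.
# Rates as quoted above; 52 weeks assumed for the annual figures.
CARERS_ALLOWANCE_WEEKLY = 62.70   # £ per week, for 35+ hours of care
HOURS_PER_WEEK = 35
HOME_CARE_HOURLY = 7.50           # £ per hour (minimum wage)
WEEKS_PER_YEAR = 52

allowance_hourly = CARERS_ALLOWANCE_WEEKLY / HOURS_PER_WEEK   # ≈ £1.79
allowance_yearly = CARERS_ALLOWANCE_WEEKLY * WEEKS_PER_YEAR   # ≈ £3,260

worker_weekly = HOME_CARE_HOURLY * HOURS_PER_WEEK             # £262.50
worker_yearly = worker_weekly * WEEKS_PER_YEAR                # £13,650

print(f"allowance: £{allowance_hourly:.2f}/hr, £{allowance_yearly:,.2f}/yr")
print(f"worker:    £{worker_weekly:.2f}/wk, £{worker_yearly:,.2f}/yr")
print(f"annual gap: £{worker_yearly - allowance_yearly:,.2f}")
```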
Maybe it is worth discovering whether the NHS is purchasing the CBT/GET rehabilitation interventions, and from whom; some time ago a German company by the name of PRISMA was mentioned. Money is the usual motivation for such things, and perhaps locating and exposing the beneficiaries could be fruitful.
Who in the UK Govt is sanctioning public money for these studies, and is it above board? Why is research funding ONLY being funnelled into psychogenic interventions? Is the Govt fully aware of this, especially when the likes of the NHS declare that ME/CFS could be triggered by numerous factors?
Many are well aware of the CDC wrongfully diverting funds away from ME/CFS. Is the UK Govt approving all this?
We also have to be mindful the NHS are now referring to Medically Unexplained Symptoms (MUS) on their website: http://www.nhs.uk/conditions/medically-unexplained-symptoms/Pages/Somatisation.aspx
and ME/CFS is listed, built upon this erroneous evidence.
Can the NHS do this before the WHO has even created a diagnostic listing and approved making ME/CFS synonymous?
Behind the scenes, their agenda continues, along with the ominous sign MUS will be used to confuse matters further.
“They (the PACE team) changed the recovery measures because they realised they had gone too extreme and they would have the problem that nobody would recover.” (Simon Wessely, speaking at Goldsmiths, University of London, 29/3/17)
If a team of researchers realise that they’re going to get a null result (disproving the theories they’ve built their careers on and damaging their reputations in the process) and they subsequently change the recovery criteria to prevent this from happening – does that constitute research misconduct? Perhaps it doesn’t – but I really think it should.
Just discovered the GETSET trial excluded those with exercise contraindications. So what condition were they researching?
By excluding patients with cardinal ME/CFS symptoms, they weren’t even studying the condition they claim to be. Is this not research misconduct, or perhaps even research fraud?
https://twitter.com/AnilvanderZee/status/878666960363556864
Keep up that good, important and fantastic work you´re doing!
Sincere thanks David for your excellent work as always.
To me it feels that so many people have been caused so much real stress due to the severe misrepresentative influence of PACE, that must itself count as one item on the list of factors to be considered. This factor, though indirect, is not at all insignificant.
Moreover, of all the people who could have most effectively set the record straight for pwme, it would have been the PACE authors. But not a peep, just more of the same. This deliberate failure to at least try to undo the damage caused, I find culpable in itself.
In focusing on the insurance connection I think you’re missing the greater conflict of interest, which is the role of the Department for Work and Pensions (DWP). The PACE trial was instigated and partly funded by the DWP and led by its long-term adviser on the treatment of M.E. patients. The connection between the BPS school, Unum and the DWP is covered here: http://www.midmoors.co.uk/Unum/unum_in_uk.pdf. The failure to fully declare conflicts of interest is most likely due to the difficulty of summarizing the extent of them, but this is hardly an excuse. It’s important to realise that most people in the UK do not have health insurance and might not have been bothered even if informed of this conflict. However, the DWP conflict would be much more relevant to participants’ decisions about whether to be involved.
It’s hardly surprising that the REC (England’s name for IRB) didn’t catch the failure to disclose conflict of interest. The genesis of ethics regulation was the Tuskegee studies. (There are plenty of other examples where research went totally off the rails. PACE is nothing by comparison.) Helsinki was an effort that had its roots in the Nuremberg codes, and things came together after US congress got involved due to public pressure. Prior to that, ethical researchers used the Nuremberg codes, which definitely precluded Tuskegee and other such studies. Committees focus on protection of the health and welfare of subjects. That is their area of expertise.
Ethics committees are not detectives, and they don’t go (and cannot afford the cost of) spelunking into the background of academic researchers to find possible conflicts of interest. And if they did so, how far in the past would it be? Also, academics are presumed free of conflict of interest except if there is commercial gain involved from their product. (In reality, the greatest real conflict of interest in academia is getting the next grant after this one.) In this instance, with no obvious commercial product to sell, why would anyone think they could have a conflict?
Unless the study authors stated that they had such a conflict, the committee would not know, and why would they ask since there is no obvious product, such as a drug? In addition, without seeing the actual submission, it’s not clear if there would be a conflict of interest visible even at the level described. Probably there would be. But, studies start with one set of participants, and people get added or switched out. Happens all the time, and those are handled as notifications that just get stuck into the file. Nobody thinks about them for conflict of interest.
PACE would be considered a low-risk study that did not give drugs as intervention. So the REC probably rubber-stamped it.
I say this because I don’t think we want to get ethics committees into the business of such detective work. All that would do is to clog up research even more, and make everything even more expensive. And it wouldn’t prevent more of this from happening.
There’s another matter with Helsinki. Nations have not accepted all of the revisions since the initial one. Check and find out what the revision of the Declaration of Helsinki is that England’s regs reference. And these days the law in the USA and EU and Japan is Good Clinical Practices (GCP). GCP does incorporate Helsinki by reference, but I don’t remember if there is a version or not. And that happened in the USA in 2008, so it depends on what the regs were at the time of the study’s approval in England when the REC looked at it. These are all technical points, but if you are going to get heavy on technicalities like that, it’s something to validate.
This is in fact very true and extremely relevant. I believe the DWP have never ever funded any other medical research, so why on earth did they help fund PACE?
When estimating the cost to society of CBT/GET, they ought to include the costs to society of all the people who are simply left to rot on ‘benefits’ because of the diversion of both funding and research effort to the psychobabble cult, and because of its power to insist that further real diagnostic testing of people with ‘MUS’ be deterred. To people who really care, this is easy to see as a form of both physical and psychological torture by deliberate neglect and disparagement.
On the valuing of work, it seems odd that a statistician would base calculations on the mean national wage when it is well known that, because of the weight of astronomical earnings at the top end of the range, the average wage is much higher than many people could aspire to. The reports on national earnings that I can find mostly use the median income as a more realistic comparator. However, I have been surprised to discover that I cannot find any presentation that includes the most logical comparator, the mode: the wage earned by the largest number of people. So it rather seems that even our own ONS likes to gild the figures somewhat to make wages look higher than they are. The only case where I can see a “norm” being presented is in comparing wage growth rates, and in that case it is not at all clear what exactly it is the norm of! :/
It’s certainly laudable to put a value on unofficial care by families, but it ignores the real world situation that families tend to care unofficially because they don’t have the money to employ professional carers, and that the most any of them get – if they provide more than 35 hours of care a week – is the Carer’s Allowance benefit (approx. £62 a week). Otherwise, they get zilch. So is a scientific paper really the place for a sentimental gesture in valuing familial care the same as a top-notch professional care worker?
On the other hand, if they were calculating the cost to the economy of the carer’s lost employment hours, then certainly a value has to be given. In which case, given that the cross-section of ME patients stretches across the social spectrum, surely the sensible thing to do would be to use either the mean or median national wage…
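The skew point above is easy to demonstrate with a toy wage sample (the numbers are invented purely for illustration): in a right-skewed distribution the mean sits well above both the median and the mode.

```python
# Mean vs median vs mode in a right-skewed (hypothetical) wage sample.
from statistics import mean, median, mode

# Nine people earning around £20k-£40k and one very high earner.
wages = [18_000, 20_000, 20_000, 20_000, 22_000,
         24_000, 26_000, 30_000, 40_000, 500_000]

print(f"mode:   £{mode(wages):,.0f}")     # the most common wage
print(f"median: £{median(wages):,.0f}")   # the middle earner
print(f"mean:   £{mean(wages):,.0f}")     # pulled up by the top earner
```

The single high earner drags the mean far above what most people in the sample actually take home, which is why the mean national wage flatters the figures.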