FOI Response from Bristol about LP Study; Correction in BJGP about MUS

By David Tuller, DrPH


I have sent the University of Bristol’s FOI office a follow-up request. I cc’d Sue Paterson, the university’s director of legal services. Here’s what I wrote:

Dear FOI Office (and Ms Paterson)–

I appreciate the response to my questions from the above-referenced request. That request concerned the pediatric Lightning Process study conducted by Bristol University investigators, led by Professor Esther Crawley. The study was published by Archives of Disease in Childhood in September 2017 and was called “Clinical and cost-effectiveness of the Lightning Process in addition to specialist medical care for paediatric chronic fatigue syndrome: randomised controlled trial.”

I am not dissatisfied with the responses, so this is not an appeal but rather a follow-up.

In particular, I am following up on the response to one of the questions in my previous request. Specifically, I am seeking the set of responses that the investigators of the Lightning Process study provided to Archives of Disease in Childhood in response to the concerns raised by the journal about the conduct and reporting of the study.

In response to my previous request–that is, FOI19238–you indicated that the investigators had not provided university officials with a copy of these responses to Archives of Disease in Childhood. In that FOI submission, my request for the document itself was predicated on the answer to that question, so I understand why it was not provided.

In this request, I am seeking the document even though it was apparently not formally shared with university officials. The document is obviously held at Bristol by one or more members of the investigative team that conducted and reported on the study. Given that, I am seeking to be provided with a copy, whether or not university officials have formally seen it to date.

Thank you for your response.

Best–David Tuller

David Tuller, DrPH
Senior Fellow in Public Health and Journalism
Center for Global Public Health
School of Public Health
University of California, Berkeley


I recently filed a freedom of information request with the University of Bristol about the pile of dung often referred to as the Lightning Process study. To recap: The investigators, led by Professor Esther Crawley, recruited more than half of the participants before trial registration, swapped outcome measures based on these early results, and then failed to disclose these salient details in the published paper.

Given these violations of core ethical and methodological principles of scientific research, the study has no place in the published literature. As I have repeatedly noted, it needs to be retracted and then investigated as a possible case of research misconduct. So far, both BMJ and Bristol have ignored their oversight obligations, for reasons that I can only assume relate to concerns about reputational damage.

By choosing not to resolve the issues with this paper, these august institutions have demonstrated a singular lack of interest in promoting the health and well-being of children suffering from a debilitating and stigmatizing illness. They have allowed their distinguished brands to be used to endorse and promote a woo-woo intervention created by a Tarot expert who has claimed to diagnose illness through auras and to “step into other people’s bodies” to heal them. Ok, then.

In my FOI request, I asked the following:

1. Did the investigators inform the university that Archives of Disease in Childhood, which published the 2017 paper, had raised serious questions about the study?

2. Did the investigators provide the university with their formal responses to the journal, and if so could the university provide a copy?

3. Did the university launch an investigation or review once it learned of these concerns, and if so could it provide a copy of the findings?

I received a response last week. Here’s what I learned:

1. The university was informed on Feb 8, 2018, that the journal had raised serious questions about the study. (I documented the study’s failings in a post on Virology Blog in December 2017. The following month, Professor Racaniello sent Virology Blog’s open letter, signed by 21 experts, to Archives of Disease in Childhood.)

2. The investigators, led by Professor Esther Crawley, did not provide the university with a copy of their final responses to the journal’s inquiries. That would presumably suggest that the university was not interested enough to ask to see them. (I plan to submit a new request for these responses; whether or not university officials have reviewed the document, it is obviously in the possession of one or more members of the team of Bristol investigators.)

3. Despite the seriousness of the questions raised, the university decided not to conduct its own investigation to determine what happened with this study. (Although the question included two parts, the answer was a single word: “No.” I interpreted that to mean the university did not conduct an investigation, not that it conducted one but was declining to provide me with the results.)

Bristol’s apparent lack of concern about whether faculty members follow proper procedures in conducting their research is seriously disturbing. Around the same time I raised questions about the Lightning Process study, I also documented major problems with Professor Crawley’s 2011 school absence study, published in BMJ Open, which exempted itself from ethical review on the specious grounds that it qualified as “service evaluation.” In other words, in the face of documented methodological and ethical violations in two major studies led by Professor Crawley, Bristol did not think any investigation into the issues was warranted.

As I have previously reported, Bristol has been conducting an investigation into a number of papers from Professor Crawley’s team. These studies, like the school absence project, exempted themselves from ethical review on similarly suspect grounds. But university officials only began that investigation after the Health Research Authority, the National Health Service arm that oversees research ethics, reviewed the matter at my request and pressed them to take action–not when I first raised the issues.

That review panel consists of two experts from Bristol not connected with Professor Crawley and her department, along with a third person with no Bristol affiliation. The review was scheduled to be done by the end of June–that is, by yesterday–and then turned over to the HRA. At that point, based on the findings, a determination will be made as to what happens next.

Last week, the HRA told me it had not yet received the Bristol report but expected to soon. Let’s hope those on the panel have felt an obligation to do their jobs properly and assess the failings objectively rather than protecting those who have engaged in unacceptable research practices. It should be impossible for Bristol to whitewash what happened here—but that certainly doesn’t mean the university won’t try, given its sorry record on this matter.


The British Journal of General Practice has now corrected the false statement in its 2017 editorial on the cost to the NHS of so-called “medically unexplained symptoms” and has appended a notice flagging the correction.

The correction involves a misquotation of Bermingham et al, a seminal 2010 study, which found that the cost of providing care to working-age people in England diagnosed with MUS ate up about 10% of NHS expenditures for that age group. Professor Carolyn Chew-Graham of Keele University, the editorial’s lead author, and some of her colleagues have instead claimed repeatedly that these costs accounted for 10% of the total NHS budget (or 11% in the BJGP editorial)—a dramatic exaggeration. Since these folks have presented themselves as experts in this domain, their inability to accurately cite a key study is perplexing. It also raises questions as to why anyone should bother paying attention to their policy prescriptions about treating MUS.
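The scale of this misquotation is easy to check with back-of-envelope arithmetic. Here is a minimal Python sketch, using the rough figures reported for the Bermingham et al study (an estimate of about £3 billion for 2008/9, against a total English NHS budget of roughly £89.9 billion that year); the numbers are illustrative rather than authoritative:

```python
# Back-of-envelope check: "10% of NHS spending on the working-age group"
# is NOT the same thing as "10% of the total NHS budget".
# Figures (approximate): Bermingham et al (2010) estimated ~GBP 3bn for 2008/9;
# reported total English NHS budget that year was ~GBP 89.9bn.

mus_cost_working_age = 3.0e9        # Bermingham et al's 2008/9 estimate
total_nhs_budget = 89.9e9           # reported total English NHS budget, 2008/9

# The same GBP 3bn, expressed against the *total* budget, is only ~3.3%:
share_of_total = mus_cost_working_age / total_nhs_budget

# What the misquote ("10% of the total budget") would imply instead:
implied_cost_if_10pct_of_total = 0.10 * total_nhs_budget   # ~GBP 9bn

print(f"Actual share of total budget: {share_of_total:.1%}")
print(f"Implied cost under the misquote: GBP {implied_cost_if_10pct_of_total / 1e9:.1f}bn")
```

On those figures, the working-age MUS cost was closer to 3% of the total budget, so reading "10% of spending on one age group" as "10% of the whole budget" roughly triples the apparent cost.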

These proposals include, for example, further expansion of the problematic program known as Improving Access to Psychological Therapies, or IAPT. The presumption behind this expansion is that shunting those who are diagnosed with MUS into cognitive behavior therapy and related interventions is not only medically indicated but will lead to significant savings. But we apparently cannot trust these people to quote statistics properly. Since Professor Chew-Graham ignored my previous letter about the mistake in the BJGP editorial, it is also clear we cannot rely on her and others to correct documented misstatements. So why should we believe that any of their claims about anything are based on valid data?

As I have said previously, I think the BJGP correction should have included an explanation for how this mistake occurred and why Professor Chew-Graham chose not to respond to the issue when I wrote her about it in January–especially since at that time I indicated that I would bring it to the attention of the journal. Nonetheless, I appreciate that Professor Roger Jones, editor of BJGP, did what he said he would do, and that the correction itself is highlighted at the top of the editorial.

I will try to use this development to request corrections of the same mistake in other venues. Quite a number of leaders in this field of inquiry have similarly demonstrated their inability to accurately cite a seminal study in their field of purported expertise. And the BJGP is not the only example of Professor Chew-Graham herself conveying this disinformation about MUS costs to a professional audience. Here, for example, is what she wrote in a blog post on the Keele website:

“MUS actually accounts for a considerably high proportion of NHS activity, with approximately 10% of total NHS expenditure being spent on services for the working age population in England with medically unexplained symptoms.”

Given that Professor Chew-Graham’s editorial has now been corrected, why has she not fixed this similar false statement on her university’s blog? (Why she mentioned 10% here and 11% in the BJGP editorial is unclear.) Professor Chew-Graham’s puzzling lapse stands in stark contrast to the responsible action taken by BJGP. Does she really want me to send out a new round of e-mails, cc’d to multiple people interested in the issue, in which I will have to point out that her apparent decision to leave the blog post untouched suggests she suffers from a deficit of professional integrity? Perhaps so…





5 responses to “FOI Response from Bristol about LP Study; Correction in BJGP about MUS”

  1. Lady Shambles

    I’m rather hoping that in time some figures could be presented which illustrate the difference in £millions the misrepresenting of Bermingham et al makes as a result of Chew-Graham’s (and others) sleight of hand. Because this is all about numbers. This is not about a patient in front of a doctor requesting help and getting it. This is about saving the NHS shed-loads of money. So the detail about how this skewing of the figures impacts that money pot would be good to know. I expect the NHS Chief Executive might be curious too… unless this has been sanctioned from the top…??

  2. Couch Turnip

    Lady Shambles – it’s £BILLIONS we’re talking about here with this error, not £millions. As you said – ‘this is about saving the NHS shed-loads of money’ – but importantly, in the context of the BJGP journal editorial, it seemed to me that this was also about persuading GPs to see their ‘MUS’ patients as a massive drain on the NHS budget, and therefore a likely major cause of the stress that they’re all under. I can’t think of any better sales pitch to get GPs to restrict biomedical care and send their ‘MUS’ patients (including those with ME/CFS) off to psychotherapy instead of investigating them properly and referring them to secondary care…… can you?

    In relation to the other issues David raised here, he really deserves a medal for pursuing this clean-up of medical science, especially in the face of such resistance. At least the BJGP did the right thing when he pointed out the (11%) error to them – well, up to a point.

  3. Couch Turnip

    Lady Shambles said – “So the detail about how this skewing of the figures impacts that money pot would be good to know.”

    How much money did the (now-corrected) mistake in the BJGP editorial (“11% of total NHS spend”) represent?

    I’ve just had a go at working that out. By my calculations and extrapolating to 2017/18 when total NHS spend in England was reportedly £125 billion, this mistake could represent an exaggeration of the MUS cost for that particular year of anything from about £1.25 billion (considering just the use of the incorrect 11% rather than 10% figure) to approaching £9.6 billion (IF the assumption is made that there was zero cost of MUS/somatization arising from the paediatric or elderly populations). In reality, we can probably safely say that the ‘size’ of this mistake (in relation to someone wanting to estimate the cost for 2017/18) lies somewhere in between, but since the literature points to lower MUS rates in the paediatric and elderly populations then I’d hazard a guess that the figure might be considerably up on the minimum figure of £1.25 billion. Whatever it is, it’s a ‘shed-load of money’ (as Lady Shambles would say), and as such I think it would have been good if the BJGP had done more to highlight this grave error to GPs in addition to publishing the correction.
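For anyone who wants to reproduce the range above, here is a short Python sketch using the figures quoted in this thread (£125 billion total NHS spend in England for 2017/18, and Bermingham et al's roughly £3 billion against an £89.9 billion budget for 2008/9); it is a rough check, not a definitive costing:

```python
# Reproduce the commenter's back-of-envelope range for 2017/18.
# Figures from this thread: GBP 125bn total English NHS spend in 2017/18;
# Bermingham et al (2010): ~GBP 3bn for 2008/9 vs a ~GBP 89.9bn total budget.

total_spend_2017_18 = 125e9

# Lower bound: only the 11%-vs-10% slip matters.
low_end = (0.11 - 0.10) * total_spend_2017_18            # GBP 1.25bn

# Upper bound: assume all MUS cost sits in the working-age population, so the
# true share of the total budget stays near the 2008/9 level (~3.3%).
bermingham_share = 3.0e9 / 89.9e9
high_end = (0.11 - bermingham_share) * total_spend_2017_18   # ~GBP 9.6bn

print(f"Low end:  GBP {low_end / 1e9:.2f}bn")
print(f"High end: GBP {high_end / 1e9:.2f}bn")
```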

    I can’t be more precise than that because, as far as I know, the estimated ‘MUS’ costs for the elderly or for paediatrics haven’t been studied. That’s why it was important for the authors of the BJGP paper to state their figures correctly according to what HAD been studied and the estimate that had been arrived at for the working age population in the Bermingham et al paper that they cited. If the mistake was genuine, then I’d have expected the lead/correspondence author to want to correct it as soon as they were made aware of it. (If anyone flagged up my mistakes I’d be grateful and keen to get them corrected.) In this case it appears that the correction came later at the instigation of the journal. So does the lead/correspondence author see a £billion-plus error as something to be concerned about or not?

    Another odd thing is that one of the co-authors, Marta Buszewicz, was among the authors who put the same MUS cost information over CORRECTLY in a February 2017 article –

    Warner, A., Walters, K., Lamahewa, K., & Buszewicz, M. (2017). How do hospital doctors manage patients with medically unexplained symptoms: a qualitative study of physicians. Journal of the Royal Society of Medicine, 110(2), 65–72.


    – that was published BEFORE the article in the BJGP. So how did she fail to spot the error in the later BJGP paper?

  4. deboruth

    Is it impossible to obtain a figure for NHS spend on working age people? One would then know what the 10% or 11% is of this, as compared with the percentage applied to the grander total NHS spend. Perhaps one of the parliamentary researchers could track it down.

  5. Couch Turnip

    The estimation made by the authors of the Bermingham et al (2010) paper was that for the year 2008/9 around £3 billion was spent as a result of somatization in the working-age population. I’ve found figures online for the total English NHS budget for 2008/9 of £89.9 billion and £91.59 billion. If we use the lower figure (to give the highest percentage) that means that the £3 billion represented about 3.34% of total NHS spend for that year, rather than 11% as was originally stated in the BJGP paper. (It seems rather odd to me that the authors of the BJGP paper, and especially those in senior positions, apparently didn’t have a ‘ballpark’ idea of the size of the NHS budget.)

    Because the Bermingham et al (2010) study didn’t estimate the costs in the paediatric and elderly populations we don’t know what these might have been, but I believe it’s mentioned in the literature elsewhere that MUS rates are lower in these populations, so the percentage of the whole NHS budget due to somatization would have been unlikely to reach 10%.