Spotila’s Take on NIH Grant Reviewers

By David Tuller, DrPH

Because of various developments in the UK and elsewhere, I’ve neglected goings-on back home. I’m working on a couple of things now but in the meantime I decided to post something typically insightful that Jennie Spotila published last week on her blog, Occupy M.E.

It’s a frustration with this project that I don’t have the time to look into every aspect I’d like to. So it’s great that people like Jennie are poking around and digging into documents and trying to understand what’s going on. I’m glad to be able to re-post this here (with her permission, of course).

**********

Who Reviews ME/CFS Applications for NIH?

By Jennie Spotila

*Note: After publishing this post, I discovered that I had inadvertently missed one meeting in 2017. This post was updated on February 12, 2019 to reflect all new calculations. The changes are not significant enough to alter any conclusions.

There is no question that NIH’s funding of ME/CFS research has been minuscule relative to the size of the public health crisis. Review of ME/CFS grant applications at NIH has drawn scrutiny from the public as one contributing factor. The public perception is that the grant review panelists have not been ME/CFS experts, and that this has led to the unfair denial of qualified applications.

That first point—that grant reviewers are not ME/CFS experts—has a factual answer. The second allegation—that the lack of experts has negatively impacted funding decisions—is harder to answer with publicly available information. Nevertheless, in 2013 I embarked on a project to gather the evidence and answer these questions.

This article will focus on the first issue: who is reviewing the applications. My analysis of the data points to two main conclusions:

1. A small subset of reviewers (experts and non-experts) wield disproportionate influence because they serve so many times.

2. NIH changed its approach to ME/CFS application reviews in November 2010. Since that date, NIH has primarily appointed ME/CFS experts to evaluate the applications.

Let’s begin by reviewing the basics of NIH’s grant review process.

**********

How NIH Reviews Grant Applications

When a grant application is submitted to NIH, a multi-level review process begins. In the first stage, a review panel of non-federal scientists with relevant expertise evaluates and scores the application on a variety of criteria.

The Center for Scientific Review (CSR) at NIH is responsible for selecting reviewers for the panels. CSR manages hundreds of these panels, which fall into two general categories: standing study sections and special emphasis panels. Special emphasis panels (or SEPs) are composed of temporary members, selected specifically for the applications under review at a single meeting. Most SEPs are used once and then dissolved, but there are a dozen or so recurring SEPs for areas with an ongoing need for review. ME/CFS is one of those topic areas, and its recurring SEP has a new roster for each meeting.

Each study section and SEP is managed by a Scientific Review Officer (SRO). This is not a desk jockey job; the SRO has a substantive impact on the peer review process. The SRO is responsible for selecting scientists for the panel, monitoring potential conflicts of interest, and preparing summaries of the peer review scores and critiques.

Review panel members must have substantial relevant scientific expertise and knowledge of the most current science. SROs look for reviewers who have themselves received major peer-reviewed grants, and who understand the peer review process. The quality of grant application reviews is largely dependent on selecting the right scientists to review them.

The Methods of This Project

The obvious first step for my analysis was to gather all the SEP rosters and look at who served. Study sections and SEPs are federal advisory committees, and as such their membership must be made public. You might think that getting the rosters would be easy. You would be wrong.

In 2013, I looked for the rosters online and found very few. When I asked NIH about it, I was told that the rosters were not posted publicly “due to threats some previous panel reviewers have received.” (That is an interesting story for another time.) I was instructed to file a FOIA request for the rosters. NIH then denied that request, and, to make a long story short, it took me two years of appeals to finally obtain the rosters. For several more years, NIH absurdly required me to file a FOIA request for each roster. It took intervention by Dr. Joe Breen in 2016 to finally change CSR’s policy on publishing the ME/CFS SEP roster.

Since one of my main objectives was to identify how many ME/CFS experts participated, I had to define who qualified as an expert. I did not assume that I knew all the experts and could simply rely on name recognition. For purposes of this analysis, I set the expertise bar very low. I defined an ME/CFS expert as anyone who—at the time they served on the SEP—had at least one publication on ME/CFS or had an NIH grant for ME/CFS research.

I compiled all the roster names for the SEP meetings from 2000 through 2018. I searched PubMed for each person’s ME/CFS publications at the time he or she served on a SEP. I also did my best to identify the scientific specialization of all the members by reviewing their institutional profile pages and CVs. Then I looked for the trends and patterns.
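To make the tallying step concrete, here is a minimal Python sketch of the kind of counting involved, using made-up roster entries. The actual analysis was done against the FOIA-obtained rosters and manual PubMed searches, so the names, dates, and data layout below are purely illustrative.

```python
from collections import Counter

# Illustrative only: hypothetical roster entries of (meeting date, reviewer).
# The real data came from the FOIA-obtained SEP rosters.
roster_seats = [
    ("2009-06-15", "Reviewer A"),
    ("2011-03-24", "Reviewer A"),
    ("2011-03-24", "Reviewer B"),
]

# Date of each reviewer's first ME/CFS publication or NIH grant
# (no entry means the person never met the expert definition).
expert_since = {"Reviewer A": "2010-01-01"}

# How many seats each person filled across all meetings.
seats_per_reviewer = Counter(name for _, name in roster_seats)

# A seat counts as an "expert seat" only if the reviewer already had an
# ME/CFS publication or grant at the time of that meeting
# (ISO-format dates compare correctly as strings).
expert_seats = sum(
    1 for date, name in roster_seats
    if name in expert_since and expert_since[name] <= date
)

total_seats = len(roster_seats)
print(seats_per_reviewer)
print(f"expert seats: {expert_seats} of {total_seats} "
      f"({expert_seats / total_seats:.1%})")
```

The key detail, reflected in the expert_since lookup, is that expert status is assessed as of the date of service on the panel, not retroactively.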

**********

Representation As A Whole

Between January 2000 and December 2018, the ME/CFS SEP met 62 times.* A total of 327 people served as reviewers. Of those 327 panelists, 58 (or 17.7%) qualified as ME/CFS experts under my liberal definition.

Half of all reviewers served more than once, and each roster varied between 5 and 36 members. To calculate the average number of times individuals served, I counted the combined roster seats across all the meetings: 836 seats. Of the total 327 panelists, each person served an average of 2.6 meetings. However, the 58 ME/CFS experts served a combined 207 seats, or 24.7% of the total seats. Those 58 experts served an average of 3.6 meetings each.
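To make the arithmetic behind those figures explicit, here is the same calculation written out in Python, using the seat and reviewer counts reported above:

```python
total_seats = 836       # combined roster seats across all 62 meetings
total_reviewers = 327
expert_reviewers = 58
expert_seats = 207

print(total_seats / total_reviewers)     # ≈ 2.56, the 2.6 average above
print(expert_seats / total_seats)        # ≈ 0.2476, the ~24.7% expert share of seats
print(expert_seats / expert_reviewers)   # ≈ 3.57, the 3.6 average per expert
```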

First finding: Between 2000 and 2018, 17.7% of the reviewers were ME/CFS experts, and they served 24.7% of the total roster seats.

The percentage of ME/CFS experts at each meeting varied between 0 and 100%. Eight meetings included no ME/CFS experts whatsoever, while four meetings were 100% experts. Over the entire time period, ME/CFS experts made up 20% or less of the rosters of 32 meetings.

Second finding: Just over half of the meetings included 20% or less ME/CFS experts, and eight of those meetings included no experts at all.

Of the 327 total individuals who served on the SEP, I identified 65 (20%) who have psychology or psychiatry degrees. Note that this includes researchers who are ME/CFS experts, such as Drs. Jarred Younger and Lenny Jason. Twenty-four people (7.3%) specialize in craniofacial diseases such as Temporomandibular Disorders. Fourteen (4.2%) are sleep researchers. There are six people who appear in more than one of these categories (such as a psychologist specializing in insomnia).

To measure the influence of these specialties, I looked at how many times these individuals served on the SEP. The 65 psychologists served a total of 214 times, or 25.6% of the total seats. Adding in the sleep and craniofacial specialists (and taking the overlaps into account), these three categories combined represent 29% of the total individuals, but 36.7% of the meeting seats.

Third finding: One-third of all reviewers specialize in psychology/psychiatry, sleep, and/or craniofacial areas, and occupied 36.7% of the meeting seats between 2000 and 2018.

As mentioned above, each reviewer served an average of 2.6 times. However, this is a bit misleading because 71% (233 people) served only once or twice, and the remaining 29% served three or more times. The reviewers who only served once or twice occupied just 36% of the review seats. That means 29% of the reviewers (experts and non-experts) occupied 64% of the seats. To be clear, this means just 94 people filled 534 seats between 2000 and 2018 because they served so many times.

Fourth finding: A minority of reviewers (29%) had a disproportional influence on the review process because they served so many times (64% of seats overall).

**********

The ME/CFS Experts

As I stated in the Methods description above, I used a very liberal definition of ME/CFS “expert.” I classified an individual as an expert if he or she had at least one ME/CFS publication or at least one NIH grant for ME/CFS research at the time of service on the SEP. It turned out that there are a few reviewers who served on the SEP prior to having a publication or grant in ME/CFS, and then served again afterward. I adjusted my analysis to take this into account. You can read the entire list of ME/CFS expert reviewers here.

A total of 58 out of 327 reviewers (17.7%) met the expert definition for at least one meeting served. Many of the names will be immediately recognizable as experts, but others may be a surprise. For example, Dr. Ila Singh published on XMRV and then left the ME/CFS field. Dr. Jordan Dimitrakoff co-authored a paper with his colleagues from the CFS Advisory Committee, but he is a pelvic pain specialist and has done no ME/CFS research.

Yet under my liberal definition, both are counted as ME/CFS experts. I was also surprised to find five people who were CDC employees when they served on the SEP: Dr. Jim Jones, Dr. Elizabeth Unger, Dr. Alison Mawle, Dr. Mangalathu Rajeevan, and Dr. Alicia Smith. I do not know if it is unusual for CDC employees to serve on NIH grant review panels.

Fifth finding: Using the most liberal definition of ME/CFS expert, only 17.7% of the reviewers qualified. Multiple people on the list were never involved in much ME/CFS research and/or left the field. Five individuals were CDC employees at the time they served on the SEP.

ME/CFS experts served an average of 3.6 meetings each, but this is misleading because 40% of the group served only once. When I removed the one-timers from the calculation, the remaining 35 reviewers served 184 times, which is 89% of the total number of expert seats. Concentrating grant review assignments among such a small number of scientists is risky. One person’s bias, expectations, preferences, and professional experience can shape the direction of NIH funded research, for better or worse. This is especially true for the reviewers who serve most frequently.

At the very top of that list are:

• Dr. Fred Friedberg, psychologist, 15 times;
• Dr. Jim Baraniuk, MD, pain and fatigue, 14 times;
• Dr. Italo Biaggioni, MD, pharmacology, autonomic dysfunction, 10 times as an expert (17 times overall); and
• Dr. Maureen Hanson, genetics and cell biology, 10 times.

These four reviewers served a combined 49 times, which is 23.6% of the total expert seats. The heavy influence of Dr. Friedberg is an example of the inherent risk of this approach. While he has worked in this field for more than fifteen years, and has received $3.9 million in NIH grants, he is a psychologist.

Proposals that rely on computational biology, cutting-edge imaging, or immunology could be challenging for a behavioral psychologist to properly evaluate. There are other ME/CFS experts, including other psychologists like Dr. Jarred Younger, who may be better positioned to review these applications.

Sixth finding: Just 35 ME/CFS experts have served a combined 184 times (89% of expert seats). Just four experts (Friedberg, Baraniuk, Biaggioni, Hanson) have occupied 23.6% of those seats. They have likely wielded great influence on application scores and critiques.

Before and After November 2010

So far, I have presented my findings based on all the rosters from January 2000 to December 2018 combined. That is not the whole story, however. NIH changed its approach to reviewing ME/CFS grant applications in November 2010.

Prior to November 2010, the SEP reviewed grant applications related to Chronic Fatigue Syndrome, Fibromyalgia, and sometimes Temporomandibular Disorders (TMD). The rosters had titles like “CFS/FM SEP” and “CFS/FMS/TMD.” Beginning with the SEP meeting on November 2, 2010, NIH narrowed the focus of the panel to CFS only. The meeting titles changed to “Chronic Fatigue Syndrome” and “Myalgic Encephalomyelitis/Chronic Fatigue Syndrome.”

The name of the SEP was not the only difference. The types of reviewers appointed to the panels changed significantly. Pain researchers and dentists were out, and ME/CFS experts were in.

                               Before Nov 2010      After Nov 2010
Number of Meetings             36                   26
Number of Seats                605                  231
Meetings with No Experts       8 (22%)              0
Meetings with 1-20% Experts    23 (64%)             1 (4%)
Meetings with 21-50% Experts   5 (14%)              7 (27%)
Meetings with 51-99% Experts   0                    14 (54%)
Meetings with 100% Experts     0                    4 (15%)
Non-expert seats               538 of 605 (89%)     91 of 231 (39%)
Expert seats                   67 of 605 (11%)      140 of 231 (61%)
Psych/sleep/craniofacial       275 of 605 (45.5%)   34 of 231 (14.7%)

As you can see, beginning with the November 2010 meeting the SEP rosters are almost directly opposite to the earlier rosters. The expert representation went from 11% to 61%, while non-expert representation dropped from 89% to 39%. I do not know why the shift was made at that particular time, but there is no doubt that it was. It seems unlikely that this was the sole decision of the SRO at the time, but I have no documentary evidence that points to how the decision was made.

Seventh finding: Beginning in November 2010, the focus and composition of the SEP shifted dramatically and included substantially more ME/CFS experts than any meetings prior to that date.

As good as things look after November 2010, there is one troubling trend. Eight of the 26 meetings had 50% or less ME/CFS experts. Seven of those meetings were held since April 2017, including the panel that reviewed the RFA proposals in July 2017.

The roster for the RFA review went through multiple iterations. The final version included 37% ME/CFS experts. This roster must have been difficult to put together because there were so many experts participating in one or more of the fifteen proposals reviewed at that meeting. The conflict of interest policy would have excluded many of them from service on the panel.

The panels for the meetings since July 2017 may signal a dangerous shift in approach. All four had less than 50% ME/CFS experts, with the April 2018 meeting including only one ME/CFS expert and seven non-experts. All four rosters were overseen by Dr. Jana Drgonova. What her approach will be going forward remains to be seen.

Eighth finding: The SEP that reviewed the RFA proposals included only 37% ME/CFS experts, possibly due to the conflict of interest policy excluding many reviewers. The use of experts on the normal SEP panels declined to less than 50% after July 2017, for reasons unknown.

**********

Summary

Rather than repeat the legend that ME/CFS grant applications are reviewed by dentists and psychologists, I set out to examine the data on who reviews these applications. My analysis points to two main conclusions.

First, there is an inside/outside club of reviewers. For ME/CFS experts and non-experts alike, a small subset wields great influence through service at multiple meetings. Among ME/CFS experts, 60% of the experts occupied 89% of the expert seats.

The top four individuals occupied 23.6% of the expert seats. Among non-ME/CFS experts, 48% of the reviewers occupied 78% of the non-expert seats. Given how these subsets wield outsized influence through repeated appearances, one hopes that this is favoring high-quality reviews and not unreasonably negative ones.

Second, these data show that NIH adjusted its approach in November 2010. The reliance on ME/CFS experts jumped overnight, and the SEP was refocused on ME/CFS applications alone. However, the negative trend to use fewer experts in 2018 bears careful watching.

The real question is how these rosters impacted grant funding decisions. My next article will present that analysis.

**********

Recap of Findings:

1. Between 2000 and 2018, 17.7% of the reviewers were ME/CFS experts, and they served 24.7% of the total roster seats.

2. Just over half of the meetings included 20% or less ME/CFS experts, and eight of those meetings included no experts at all.

3. One-third of all reviewers specialize in psychology/psychiatry, sleep, and/or craniofacial areas, and occupied 36.7% of the meeting seats between 2000 and 2018.

4. A minority of reviewers (29%) had a disproportional influence on the review process because they served so many times (64% of seats overall).

5. Using the most liberal definition of ME/CFS expert, only 17.7% of the reviewers qualified. Multiple people on the list were never involved in much ME/CFS research and/or left the field. Five individuals were CDC employees at the time they served on the SEP.

6. Just 35 ME/CFS experts have served a combined 184 times (89% of expert seats). Just four experts (Friedberg, Baraniuk, Biaggioni, Hanson) have occupied 23.6% of those seats. They have likely wielded great influence on application scores and critiques.

7. Beginning in November 2010, the focus and composition of the SEP shifted dramatically and included substantially more ME/CFS experts than any meetings prior to that date.

8. The SEP that reviewed the RFA proposals included only 37% ME/CFS experts, possibly due to the conflict of interest policy excluding many reviewers. The use of experts on the normal SEP panels declined to less than 50% after July 2017, for reasons unknown.

*There was a meeting scheduled for February 22, 2011 but it was canceled. A meeting was eventually held on March 24, 2011 with a different roster. I have excluded the February meeting from this analysis.


Comments

  1. Sean

    It would be interesting doing a similar review of research in the UK.

    Suspect the review panels for the MRC & NIHR are full of psychiatrists and/or friends of Esther Crawley and the ubiquitous Simon Wessely.

  2. David Tuller

    “It would be interesting doing a similar review of research in the UK.”
    The results would possibly be the ultimate definition of “circle jerk.”