Nature News & Comment

Prestigious European grants might be biased, study suggests

Funding panels are more likely to give prestigious European Union early-career grants to applicants connected to the institutions of some of the panellists, a study of the 2014 funding round suggests.

The effect seems to be limited to the life sciences, social sciences and humanities, and the results have not yet been peer reviewed. But given the high profile of the grants administered by the European Research Council (ERC), “the findings should be taken seriously”, says study co-author Peter van den Besselaar, a social scientist at the Free University of Amsterdam.

Although previous studies have found evidence of favouritism in funding in various European countries, “I was surprised that the phenomenon has been recorded at a level as high as the ERC grants”, says Giovanni Abramo, a technology research director at the National Research Council (CNR) of Italy in Rome.

The preprint¹ was posted on 9 March on the academic networking platform ResearchGate.

ERC ‘starting grants’ are among the most prestigious early-career funding schemes in academia, providing up to €1.5 million (US$1.8 million) over five years. Van den Besselaar and Charlie Mom, a research consultant based in Amsterdam, conducted the latest study as part of a broader ERC-funded project to assess bias in funding allocations. It focused on the 2014 cycle, during which there were 3,207 applicants, of whom 375 received starting grants.

Close to home

The authors examined something they call the ‘nearby panellist effect’ — the influence of having a panellist based at an applicant’s ‘home organization’ (the university or research institution where the applicant currently works) or at the ‘host organization’ where they plan to carry out the research.

Van den Besselaar and Mom found that, across all disciplines, applicants who shared both a home and a host organization with at least one panellist received a grant 40% more often than average. These were mainly cases in which an applicant planned to use the grant at the institution from which they applied. The effect seemed to be discipline-specific: further analysis revealed that the success rate for connected applicants was approximately 80% higher than average in the life sciences and 40% higher in the social sciences and humanities, but no discernible effect emerged in physics and engineering. The effect was also limited to certain countries, including Finland, Sweden, Italy, Germany and the United Kingdom, and was more prevalent in applications from men than in those from women.
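
Those relative figures can be roughly translated into absolute success rates using the round’s overall numbers given above (375 awards from 3,207 applications). The sketch below is back-of-the-envelope only: it assumes the overall baseline applies to each subgroup, which the preprint’s actual per-discipline denominators may not support.

```python
# Back-of-the-envelope translation of the reported relative success rates
# into absolute ones. Assumes the overall 2014 baseline (375 awards from
# 3,207 applications) applies to every subgroup -- an assumption made for
# illustration; the preprint's per-discipline denominators may differ.
applicants, awards = 3207, 375
baseline = awards / applicants  # overall success rate, ~11.7%

reported_uplifts = {
    "connected, all disciplines": 1.4,             # "40% more often than average"
    "connected, life sciences": 1.8,               # "~80% higher than average"
    "connected, social sciences/humanities": 1.4,  # "40% higher"
}

print(f"baseline: {baseline:.1%}")
for group, factor in reported_uplifts.items():
    print(f"{group}: {baseline * factor:.1%}")
```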

The presence of a nearby-panellist effect might not be evidence of favouritism, says Van den Besselaar, because the best applicants tend to be concentrated at certain institutions. To test this, the researchers evaluated whether applicants with an institutional connection to a panellist scored better on measures of academic performance, which they calculated using variables such as previous grants, citations and number of publications. Their analysis showed that successful, connected applicants scored worse on these performance indicators than did funded applicants without such links, and even than some unsuccessful applicants. “This nearby-panellist effect cannot be explained away by pointing at the performance of the applicants,” says Van den Besselaar.
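
To picture the kind of comparison described here, the sketch below builds a simple composite performance score by averaging z-scores over the three indicators named above. The indicator set, the equal weighting and the toy numbers are all assumptions made for illustration; this is not the preprint’s actual scoring, and the data are invented to mirror the reported pattern.

```python
# A minimal sketch of a composite "performance" score built from prior
# grants, citations and publication counts, as a hypothetical stand-in
# for the kind of indicators the study describes. Toy data invented to
# mirror the reported pattern; not the preprint's actual method.
from statistics import mean, stdev

def composite_score(pool: list[dict]) -> list[float]:
    """Average of per-indicator z-scores for each applicant."""
    keys = ("prior_grants", "citations", "publications")
    stats = {k: (mean(a[k] for a in pool), stdev(a[k] for a in pool))
             for k in keys}
    return [mean((a[k] - stats[k][0]) / stats[k][1] for k in keys)
            for a in pool]

pool = [
    {"name": "connected awardee",   "prior_grants": 1, "citations": 300, "publications": 12},
    {"name": "unconnected awardee", "prior_grants": 3, "citations": 900, "publications": 25},
    {"name": "unsuccessful",        "prior_grants": 2, "citations": 600, "publications": 18},
]
for person, score in zip(pool, composite_score(pool)):
    print(f"{person['name']}: {score:+.2f}")
```

Averaging standardized scores is just one way to combine unlike indicators; it keeps any single metric from dominating the comparison.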

By contrast, the connected applicants did seem to publish more often in high-impact journals and to have more collaborations with researchers from high-ranking institutions. However, the authors classified these two measures as markers of reputation rather than of performance.

According to ERC policy, if a panellist works in the same organization as an applicant, the ERC bars them — with some exceptions — from reviewing the proposal and requires them to leave meetings during which it is discussed. Van den Besselaar and Mom did not directly observe panels to monitor compliance with this rule.

In an e-mailed statement, the ERC said that it is unable to comment on the study, because it is not yet peer reviewed.

Differences by discipline

One limitation of the authors’ method, Abramo notes, is that they lumped together applicants from broad disciplines, even though factors such as the number of publications can vary drastically between subfields. For example, he says, blood-disease specialists publish much more frequently than vascular surgeons, so if you measure performance in these groups by the same factors, “you introduce an enormous bias”.
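
Abramo’s point can be pictured with a small sketch: raw publication counts penalize applicants from low-publishing subfields, and dividing by a subfield average (one common correction in bibliometrics) can reverse a ranking. The subfield means and counts below are invented for illustration.

```python
# Illustration of the bias Abramo describes: raw publication counts favour
# high-publishing subfields. Dividing by the subfield mean is one common
# correction. All numbers here are invented for illustration.
subfield_mean_pubs = {"haematology": 20.0, "vascular_surgery": 6.0}

candidates = [
    ("blood-disease specialist", "haematology",      22),
    ("vascular surgeon",         "vascular_surgery",  8),
]
for name, field, pubs in candidates:
    normalized = pubs / subfield_mean_pubs[field]
    print(f"{name}: raw={pubs}, field-normalized={normalized:.2f}")
# Raw counts rank the blood-disease specialist far ahead (22 vs 8);
# the normalized scores rank the surgeon ahead (1.33 vs 1.10).
```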

Another shortcoming, according to Natalia Zinovyeva, an economist at the University of Warwick, UK, lies in how Van den Besselaar and Mom interpreted some of their measures. In some fields, journal impact factor is a clear indicator of research quality rather than mere reputation, she says. “From my perspective as an economist, this evidence seems to indicate that connected applicants are much better.”

Zinovyeva adds that the applicants might have qualities that panellists can pinpoint but that are not accounted for in these types of quantitative measure. For this reason, she says, it is important to look not only at the past success of grant recipients, but also at their future outputs. If a grant recipient who was connected to a panellist ended up performing worse than an unconnected applicant, that would indicate that something other than research quality had factored into the panellists’ decisions.

“Measuring the quality of an applicant is extremely difficult,” Zinovyeva says. These findings are “a warning that we should pay attention to”, she adds, “but I think it requires deeper analysis”.