Most health research in Australia is funded by the ³Ô¹ÏÍøÕ¾ Health and Medical Research Council (NHMRC), which distributes its funding through competitive grant schemes. An additional $650 million a year flows through the Medical Research Future Fund, but this focuses more on big-picture “missions” than researcher-initiated projects.
Ten years ago, around 20% of applications for NHMRC funding were successful. Now, only about 10-15% are approved.
Over the same ten-year period, NHMRC funding has stayed flat in nominal terms while prices and population have increased. In inflation-adjusted, per capita terms, available NHMRC funding has fallen by 30%.
As growing numbers of researchers compete for dwindling real NHMRC funding, research risks becoming “a lottery”. To fix this, we need to spend more on research – and we need to spend it smarter.
More funding
To keep pace with other countries, and to keep health research a viable career, Australia first of all needs to increase the total amount of research funding.
Between 2008 and 2010, Australia matched the OECD average of investing 2.2% of GDP in research and development. More recently, Australia’s spending has fallen to 1.8%, while the OECD average has risen to 2.7%.
When as few as one in ten applications is funded, there is a big element of chance in who succeeds.
Think of it like this: applications are ranked from best to worst, then funded from the top down. If a successful application’s ranking is within, say, five percentage points of the funding cut-off, it might well have missed out had the assessment process been run again – because the process is always somewhat subjective and will never produce exactly the same results twice.
So 5% of applications are “lucky” to get funding. When only 10% of applications are funded, that means half of the successful ones were lucky. But if there is more money to go around and 20% of applicants are funded, the lucky 5% make up only a quarter of the successful applicants.
This is a simplistic explanation, but you can see that the lower the percentage of grants funded, the more of a lottery it becomes.
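To make the arithmetic concrete, here is a minimal Python sketch of that back-of-the-envelope calculation. The five-percentage-point “lucky band” is an illustrative assumption, not an NHMRC figure:

```python
# Toy calculation: what share of funded grants are "lucky" as funding rates fall?
# Assumes applications within 5 percentage points of the cut-off could have
# gone either way on a re-run (an illustrative assumption, not NHMRC data).
lucky_band = 0.05

for funding_rate in (0.20, 0.15, 0.10):
    lucky_share = lucky_band / funding_rate
    print(f"funding rate {funding_rate:.0%}: "
          f"{lucky_share:.0%} of funded grants are 'lucky'")
```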
This growing element of “luck” is demoralising for Australia’s research workforce, pushing academics out of research and fuelling a brain drain.
The ‘application-centric’ model
As well as increasing total funding, we need to look at how the NHMRC allocates these precious funds.
In the past five years, the NHMRC has moved to a system called “application-centric” funding. Five or so reviewers are selected for each application and asked to score it independently.
There are usually no panels to discuss and score applications, as there were under the previous system.
The advantages of application-centric assessment include (hopefully) getting the best experts to assess each application, and a less logistically demanding process for the NHMRC (convening panels is hard, time-consuming work).
However, application-centric assessment has disadvantages.
First, assessor reviews are not subject to any scrutiny. In a panel system, differences of opinion and errors can be managed through discussion.
Second, many assessors will be working in a “grey zone”. If you are an expert in a proposal’s area, and not already working with the applicants, you are likely to be competing with them for funding. This can produce unconscious bias, or even deliberate manipulation of scores.
And third, there is simple “noise”. Imagine each score an assessor gives is made up of two components: the “true score” an application would receive under some unobservable gold-standard assessment, plus or minus some “noise” or random error. That noise probably accounts for half or more of the current variation between assessor scores.
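A small simulation illustrates this two-component model. Here the noise is assumed to be as variable as the true quality of applications, which is purely an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_apps, n_assessors = 1000, 5

# "True score": the unobservable gold-standard quality of each application
true_score = rng.normal(0, 1, size=(n_apps, 1))
# Random error in each assessor's judgement (assumed as variable as true quality)
noise = rng.normal(0, 1, size=(n_apps, n_assessors))
observed = true_score + noise  # what the funding agency actually sees

# Under this assumption, noise makes up about half the variation between scores
print(f"noise share of score variance: {noise.var() / observed.var():.0%}")
```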
Smarter scoring
So how do we reduce the influence of both assessor bias and simple “noise”?
First, assessor scores need to be “standardised” or “normalised”. This means rescaling all assessors’ scores to have the same mean (standardisation) or same mean and standard deviation (normalisation).
This is a no-brainer. A fairly simple Excel model (I have built one) shows this would substantially reduce the noise.
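As a sketch of what that rescaling looks like in practice, here is the same idea in a few lines of Python rather than Excel. The assessors and scores are invented for illustration:

```python
import pandas as pd

# Invented scores: assessor C is a "hawk" who scores everything low
scores = pd.DataFrame({
    "assessor": ["A", "A", "B", "B", "C", "C"],
    "application": [1, 2, 1, 2, 1, 2],
    "raw": [4.0, 6.0, 6.5, 7.0, 2.0, 5.0],
})

by_assessor = scores.groupby("assessor")["raw"]
# Standardised: every assessor's scores shifted to the same mean
scores["standardised"] = scores["raw"] - by_assessor.transform("mean")
# Normalised: same mean and same standard deviation for every assessor
scores["normalised"] = scores["standardised"] / by_assessor.transform("std")
print(scores)
```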
Second, the NHMRC could use other statistical tools to reduce both bias and noise.
One approach would be to take the average ranking of applications across five methods:
- with the raw scores (i.e. as done now)
- with standardised scores
- with normalised scores
- dropping the lowest score for each application
- dropping the highest score for each application.
The last two “drop one score” methods aim to remove the influence of potentially biased assessors.
Applications that make the cut-off rank on all five methods are funded. Those beneath the threshold on all five are not.
Applications that make the cut on some tests but fail on others could be sent out for further scrutiny – or the NHMRC could judge them by their average rank across the five methods.
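A toy version of the whole procedure, with randomly generated scores and a hypothetical cut-off, might look like the sketch below. One caveat: when every assessor scores every application, the raw and standardised rankings coincide; in practice each assessor sees only a handful of applications, which is where the rescaling matters.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Invented data: 100 applications each scored by the same 5 assessors (A-E)
scores = pd.DataFrame(rng.normal(5, 1, size=(100, 5)), columns=list("ABCDE"))

standardised = scores - scores.mean()     # same mean per assessor
normalised = standardised / scores.std()  # same mean and SD per assessor

def mean_dropping(df, which):
    # Average each application's scores after dropping its lowest or highest one
    drop = (lambda r: r.idxmin()) if which == "lowest" else (lambda r: r.idxmax())
    return df.apply(lambda row: row.drop(drop(row)).mean(), axis=1)

summaries = pd.DataFrame({
    "raw": scores.mean(axis=1),
    "standardised": standardised.mean(axis=1),
    "normalised": normalised.mean(axis=1),
    "drop_lowest": mean_dropping(scores, "lowest"),
    "drop_highest": mean_dropping(scores, "highest"),
})
ranks = summaries.rank(ascending=False)  # rank 1 = best, under each method

cutoff = 15  # hypothetical: money for the top 15 applications
fund = (ranks <= cutoff).all(axis=1)    # above the line on every method
reject = (ranks > cutoff).all(axis=1)   # below the line on every method
borderline = ~(fund | reject)           # mixed verdicts: scrutiny, or average rank
print(f"fund {fund.sum()}, reject {reject.sum()}, borderline {borderline.sum()}")
```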
This proposal won’t fix the problem with the total amount of funding available, but it would make the system fairer and less open to game-playing.
A less noisy and fairer system
Researchers know any funding system contains an element of chance. One survey found researchers would be happy with a funding system that, if run twice in parallel, would see at least 75% of the funded grants funded in both runs.
I strongly suspect (and have modelled) that the current NHMRC system is achieving well below this 75% repeatability target.
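That suspicion can be checked with a simulation along these lines. All the numbers here (the size of the noise, the 10% funding rate) are assumptions for illustration, not the NHMRC’s:

```python
import numpy as np

rng = np.random.default_rng(2)
n_apps, n_funded, n_assessors = 1000, 100, 5  # 10% funding rate (assumed)
true_quality = rng.normal(0, 1, n_apps)

def run_round():
    # Fund the top applications by the mean of 5 noisy assessor scores,
    # with each assessor's noise assumed as variable as true quality
    noise = rng.normal(0, 1, size=(n_assessors, n_apps)).mean(axis=0)
    return set(np.argsort(-(true_quality + noise))[:n_funded])

# Repeatability: overlap in funded grants between two parallel runs
overlaps = [len(run_round() & run_round()) / n_funded for _ in range(200)]
print(f"average repeatability: {np.mean(overlaps):.0%}")  # compare with the 75% target
```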
Further improvements to the NHMRC system are possible and needed. Assessors could provide comments, as well as scores, to applicants. Better training for assessors would also help. And the biggest interdisciplinary grants should really be assessed by panels.
No funding system will be perfect. And when funding rates are low, those imperfections stand out more. But at the moment we are neither making the system as robust as we can, nor sufficiently guarding against wayward scoring that slips under the radar.