MAKING SCIENCE FUNDING POLICY FAIR

Jamie Shaw

It is a fascinating time to study science funding policy. Since the end of the Second World War, at least, science funding agencies have used peer review to evaluate proposals and determine which science should be pursued. However, over the past couple of decades, research on peer review has raised suspicions about the practice. One finding of this research is that peer review is unreliable at its primary task: distinguishing promising proposals from those that are far-fetched or ill-conceived. On top of this, peer review consumes an increasingly unsustainable amount of resources. In 2021, one global study found that reviewing grant proposals, which accounts for roughly one-sixth of the total costs of peer review, required 100 million working hours. That’s a lot of time and effort that could be better used elsewhere. Finally, peer review tends to stifle the innovation and creativity that science needs to thrive. The peer review paradigm, therefore, seems to be in a state of crisis.

Another concern that has gained a lot of attention recently is fairness. One troubling finding is that peer review doesn’t seem to work equally well for everybody. Specifically, many biases for and against different social groups seem to pervade peer review despite our best intentions. While we are still figuring out who is discriminated against in peer review (and to what extent), it has become apparent that peer review lacks the objectivity it is often portrayed as having. This is concerning not only for social, ethical, and political reasons; it also negatively impacts the quality of science by decreasing diversity within the scientific community.

Because of these worries, some funding agencies and commentators have experimented with alternatives or supplements to peer review. One of the most popular involves introducing lotteries, or elements of random chance, into science funding allocation mechanisms. Lotteries have been used by many funding bodies, from the Health Research Council of New Zealand to the New Frontiers in Research Fund in Canada. Lotteries, it is frequently argued, avoid the problems that plague traditional peer review. The focus of my BJPS article is the claim that lotteries are fairer than traditional peer review. This means that lotteries, perhaps counterintuitively, are more likely to uphold the original ambitions of science funding policy by ensuring fair access to funds.

While lotteries have generated a lot of excitement, a recent article by Carole Lee, Sheridan Grant, and Elena Erosheva (Lee et al. [2020]) has attempted to deflate the hype that lotteries will be fairer than peer review. The devil, they claim, is in the details. Based on their previous empirical work on peer review at the National Institutes of Health, they claim that lotteries enter too late in the game to address the biases that make peer review unfair. Essentially, their argument runs as follows: Peer review takes place in multiple stages. The most common use of lotteries does not get rid of peer review altogether, but uses it to sort applications into different piles. Some of those piles go into the lottery, while others are rejected. But the biases that impact the fairness of peer review, they argue, are almost entirely introduced in this sorting stage. Lotteries, therefore, do not remove the source of bias and won’t do much (if anything) to make science funding policy more inclusive and fair.

In my article, I respond to this scepticism. I try to show that we can salvage much of the optimism behind the idea that lotteries can make science funding policies fairer. However, to do this, we need to implement lotteries differently than we have so far. In particular, I consider versions of lotteries that skip the sorting phase altogether—sometimes called ‘pure lotteries’—as well as lotteries that give different grant applications different probabilities of winning—sometimes called ‘weighted lotteries’. To be clear, the use of lotteries in science funding policy is extremely new, and at this stage we have more questions than answers about how they work (or don’t). Because of this, I offer no guarantees. But there are theoretical reasons, grounded in empirical research, to be optimistic that we can design lotteries in ways that enhance the fairness of science funding policy.

I entirely agree with Lee, Grant, and Erosheva’s claim that the devil is in the details. Because of this, I spend some time working through how different versions of lotteries might operate and how they might minimize the presence of biases in science funding policy. This analysis of the ins and outs of how lotteries actually work, or could work, in practice is necessary to regain optimism about their legitimacy, at least insofar as fairness is concerned.

But the optimism I offer isn’t absolute. Even with the most carefully crafted lotteries, modified to avoid the worries levelled against them, I still think that biases will arise. A crucial part of this argument is that funding agencies must, by practical, political, and institutional necessity, perform ‘boundary work’, deciding which research proposals fall within their jurisdiction. This boundary work is susceptible to bias, even profoundly so. Recent studies have suggested that one of the major contributors to funding gaps, where science is divided into haves and have-nots along demographic lines, is the choice of research topic. Research topics can be discriminated against during the boundary-work phase, so even pure lotteries are unlikely to escape this source of bias.

I argue that this issue is symptomatic of a deeper, more fundamental problem: science funding policy will always involve some degree of judgement, and that judgement, at least during grant peer review, is bound to be biased. This pessimism is grounded in the track record of attempts to weed out biases at their roots. Efforts by science funding agencies to reduce bias by making scientists take implicit bias tests, training reviewers through ‘diversity’ initiatives, and other similar attempts to make reviewers ‘objective’ or ‘free from bias’ have consistently failed to make noticeable dents in the bottom line. While I stop short of suggesting we abandon these efforts altogether, I argue that we should be pessimistic that lotteries—or any other kind of science funding policy, for that matter—will be entirely fair.

This, or so I argue, does not mean that we should give up on lotteries altogether. Not only are lotteries, at least in theory, more likely to be fair than traditional peer review, but the more fundamental problem of the intrinsically biased nature of science funding policy requires shifting the goalposts. Instead of aiming for a holy grail, a funding allocation mechanism that leaves little to no room for bias, we should complement lotteries with affirmative action in science funding policy. There are many ways this could be done, and more research is needed to figure out how best to proceed. What I provide is the basis for the claim that science funding policies that directly increase the availability of funds for scientists who are usually left out of other funding initiatives are a more fruitful strategy than trying to de-bias peer review or lotteries. Our next goal, then, should be to find the balance between lotteries and affirmative action that best addresses inequalities in science funding. Only then can we be confident that our funding practices are equitable.

Jamie Shaw
Leibniz University Hannover
jshaw222@uwo.ca

References

Lee, C. J., Grant, S. and Erosheva, E. [2020]: ‘Alternative Grant Models Might Perpetuate Black–White Funding Gaps’, The Lancet, 396, pp. 955–56.


FULL ARTICLE

Shaw, J. [2027]: ‘Bias, Lotteries, and Affirmative Action in Science Funding Policy’, British Journal of the Philosophy of Science, 78, <doi.org/10.1086/730218>.

© The Author (2024)