Imagine a world where the flood of research funding applications becomes unmanageable, leaving brilliant ideas stuck in limbo. That's the reality facing the UK's research funding giant, UK Research and Innovation (UKRI). Applications have surged by over 80% in recent years while the number of funded grants has halved, and something has to give. But here's where it gets controversial: UKRI is turning to artificial intelligence for help, specifically exploring whether generative AI can shoulder some of the burden of peer review.
UKRI, responsible for allocating a whopping £8 billion annually to research, is partnering with a team led by data scientist Mike Thelwall from the University of Sheffield. Their mission? To see if AI can accurately predict the scores and recommendations human reviewers would give to grant proposals.
Thelwall's team will have access to a treasure trove of data: the full text of 1,000 to 2,000 grant proposals, both funded and rejected, usually kept under lock and key. They'll feed these proposals into large language models (LLMs) and see if the AI can mimic human judgment.
Here's the catch: the AI won't know the actual scores or funding decisions. If it can consistently predict these outcomes with high accuracy, it could revolutionize the process. As Thelwall explains, "If AI can reliably predict scores, it could speed up reviews or support human reviewers."
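At its core, the test Thelwall describes is a simple one: given AI-predicted scores and the human scores they were hidden from, how often do the two agree? The sketch below is a hypothetical illustration of that comparison, not UKRI's actual methodology; the `evaluate_agreement` helper, the scoring scale, and the toy data are all assumptions made for the example.

```python
# Hypothetical sketch: measuring how closely AI-predicted review scores
# match the scores human reviewers actually gave. The helper, the
# scoring scale, and the data below are illustrative assumptions,
# not UKRI's real evaluation pipeline.

def evaluate_agreement(ai_scores, human_scores, tolerance=0):
    """Return the fraction of proposals where the AI score is within
    `tolerance` points of the human score."""
    if len(ai_scores) != len(human_scores):
        raise ValueError("score lists must be the same length")
    hits = sum(
        1 for ai, human in zip(ai_scores, human_scores)
        if abs(ai - human) <= tolerance
    )
    return hits / len(ai_scores)

# Toy scores on a made-up 1-6 scale for ten proposals.
human = [6, 4, 5, 2, 3, 5, 1, 4, 6, 2]
ai    = [6, 4, 4, 2, 3, 5, 2, 4, 6, 3]

exact = evaluate_agreement(ai, human)               # exact matches
near  = evaluate_agreement(ai, human, tolerance=1)  # within one point

print(f"exact agreement: {exact:.0%}")  # prints "exact agreement: 70%"
print(f"within 1 point:  {near:.0%}")   # prints "within 1 point:  100%"
```

A real evaluation would likely use more robust agreement statistics (rank correlation, or chance-corrected measures like Cohen's kappa), but the exact-match rate is the simplest way to see what a "72% agreement" or a "95% threshold" means in practice.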
But here's the part most people miss: none of this is about replacing human expertise. Thelwall envisions AI as a tiebreaker, an additional reviewer, or a tool for quickly flagging proposals unlikely to succeed. Think of it as a first-pass filter, freeing up valuable time for human reviewers to focus on the most promising ideas.
This isn't Thelwall's first rodeo with AI and peer review. He previously explored its use in assessing research articles for the UK's Research Excellence Framework. While those initial results showed the AI agreeing with human reviewers 72% of the time, Thelwall believes roughly 95% agreement would be necessary before the approach is ready for practical use.
But is AI ready for prime time in grant review? Mohammad Hosseini, an AI ethics researcher at Northwestern University, raises a crucial point: LLMs, trained on existing data, may struggle to identify truly novel ideas. "If AI can't generate groundbreaking concepts," he argues, "how can it recognize them in grant proposals?"
Another concern is transparency. If funding bodies don't disclose the criteria fed into the AI, researchers may feel the system is unfair. Conversely, if criteria are made public, applicants might start tailoring their proposals to game the AI, potentially stifling genuine innovation.
The la Caixa Foundation in Barcelona offers a glimpse into a potential future. They've been experimenting with AI-assisted grant review, with around 90% of proposals still undergoing full peer review by human experts. While the time saved may seem modest, it translates to significant hours freed up for experts to focus on the most promising research.
UKRI's experiment with AI is a bold step towards addressing the growing challenge of managing research funding applications. While questions remain about AI's capabilities and ethical implications, one thing is clear: the traditional peer review process is under strain, and innovative solutions are desperately needed.
What do you think? Can AI be a fair and effective partner in the grant review process, or does it pose too great a risk to the integrity of scientific evaluation? Let's continue the conversation in the comments.