- DEMENTIA RESEARCHER - https://www.dementiaresearcher.nihr.ac.uk -

Blog – What happens in a grant review panel

Grant applications can make or break a researcher’s career, particularly for early-career scientists. But what actually happens when these applications are reviewed? I had the opportunity to sit in on an (online) grant review panel meeting for ARUK. I found the experience extremely insightful, learned a lot, and would recommend it to any ECRs – whether you intend to stay in academia and are looking towards postdoctoral fellowships and beyond, or are a PhD student looking for travel funding.

In this blog, I’ll shed some light on what the process is like in these meetings, and on the typical strengths and weaknesses of applications that came up. The specifics of the applications and applicants are, of course, strictly confidential, so I will only talk about these points in a general sense, without referring specifically to which award panels I was watching or the people involved. I will also clarify that these panels occurred some time ago and have had their outcomes released.

What Happens to Your Grant: A Rapid and Rigorous Process

First, here was the setup: I joined a Zoom meeting with the whole panel (around 15 people), camera and microphone off, simply observing. For any applications which involved people from my institution, or anyone I had identified as a collaborator, I was put back into a waiting room to avoid any conflict of interest. The meeting kicked off immediately – straight to business, no faffing around.

The review process itself was extremely fast-paced, structured, and efficient.

Each application had been assigned to 2–3 members of the panel, based on their areas of expertise, to read in detail prior to the meeting, and was then given a 10-minute slot for presentation, discussion, and scoring during the meeting itself. This time limit was fairly strictly adhered to, with a countdown timer running for each application in the Zoom meeting. For certain grant application categories, the slot was even shorter: just 7 minutes per application.

First, one of the assigned reviewers would give a detailed and objective summary of the proposed project, covering background, aims, methods, and outcomes. The reviewer would then give their own assessment of the strengths and weaknesses of the proposal, with the other assigned reviewers adding any further points as necessary. This would typically take around half of the allotted time and was entirely verbal – that is, the other members of the review panel largely understood the content of the grant proposal through the assigned reviewer’s spoken summary.

Following this, other members of the broader review panel would ask any of their own questions or raise any of their own concerns to the assigned reviewers, and some discussion would ensue. Finally, the assigned reviewers who had read the application in detail would give their opinions on scoring, before the entire panel voted on a final score.

And then, just like clockwork, it would be on to the next grant. A ruthlessly efficient, but not uncaring process. I will admit that as someone who has been through the process of putting together a fellowship proposal which took weeks to assemble, it was mildly confronting to see the rapidity with which these applications are assessed. This perhaps underscores the importance of having a grant application that stands out. Here is what I surmised from all of the comments during the meeting about what makes for a stronger or weaker application.


Alzheimer’s Research UK regularly allows ECRs to observe its grant review boards in action. Keep an eye on our website to find opportunities.

Key Evaluation Criteria:

These were key themes that came up over and over again during the meeting. I’m not sure if they are part of a strict or formal rubric, but this is what reviewers were typically looking for or asking questions about:

Common Strengths in Successful Applications

Certain strengths frequently stood out in successful applications, or were the saving graces in otherwise poorer applications:

Common Weaknesses That Lower Scores

Some common pitfalls that dragged down scores or raised red flags for reviewers were as follows:

[Promotional graphic: “Grant Review Board Insider Tips”, an online Dementia Researcher Salon event on 5th March at 12.00pm GMT, featuring Professor Patrick Lewis (Royal Veterinary College) and Dr Rachael Kelley (Leeds Beckett University).]

Visit our Community Website to watch our Salon Recording with Professor Patrick Lewis and Dr Rachael Kelley sharing their tips as grant reviewers.

Additional Insights from the Review Process

Beyond the above pointers, there were several other insights I gained from the observer position. The first was that, for better or worse, ‘big name’ researchers almost uniformly had very high-scoring grant proposals. This is not necessarily an accusation of bias or favouritism on the part of the review panel. Large, well-established labs typically have lots of pilot data, substantial expertise in advanced techniques, and a track record of good mentorship, and have likely gotten to where they are because they are good at writing grants. It probably doesn’t hurt that many of these figures were well known and well liked, minimising concerns about the feasibility of the proposal and the quality of training, but it should perhaps come as no surprise that the best grants came from some of the best labs and raised few concerns. While this remains a merit-based system, it was interesting to see the cycle of research success firsthand: how it can compound for some over time, and make it harder for others to break in. It also highlighted for me the utility of being in a big lab with a well-known and respected PI. The fact is that in these circumstances, some things do come easier.

Beyond this, I was pleasantly surprised by how much rigour was applied not just to the science of a proposal, but to the training environment of the lab in question whenever students were involved. Plans and funding for secondments and extra training were far more common than I had realised, and supervision teams were given a lot of scrutiny, with the welfare and future success of the student in mind. The quality of a named student could also sometimes offset a project of questionable feasibility. How much a trainee would actually learn that would be both new and useful to their career was also an important deciding factor for the panel, demonstrating that funding bodies are focused not just on producing good science, but on producing a strong pipeline of researchers.

Some of the applications were also resubmissions – here, reviewers didn’t miss a trick, rigorously checking why an application had been returned the first time, and whether any previous concerns had actually been addressed in the resubmission. Nevertheless…

reviewers weren’t always, and in fact, were often not experts in the topic of a particular application.

This made the justification of different methods, models, questions, and approaches particularly important, and made the quality of writing critical.

Final Notes:

Overall, I would strongly recommend sitting in as an observer on a grant review panel if you ever get the chance. Besides gaining insight into what makes a good grant in the eyes of reviewers, it’s also an interesting crystal ball into where the field is going, and what directions other labs are interested in pursuing. The key takeaways for grant writing remain clear, however:

Good luck!


Ajantha Abey Profile Picture

Ajantha Abey

Author

Ajantha Abey is a PhD student in the Kavli Institute at the University of Oxford. He is interested in the cellular mechanisms of Alzheimer’s, Parkinson’s, and other diseases of the ageing brain. He previously explored neuropathology in dogs with dementia and potential stem cell replacement therapies. He now uses induced pluripotent stem cell derived neurons to try to model selective neuronal vulnerability: the phenomenon whereby some cells die while others remain resilient in neurodegenerative diseases.

Follow @ajanthaabey