Guest blog

Blog – What happens in a grant review panel

Blog by Ajantha Abey

Reading Time: 10 minutes

Grant applications can make or break a researcher’s career, particularly for early-career scientists. But what actually happens when these applications are reviewed? I had the opportunity to sit in on an (online) grant review panel meeting for ARUK. I found the experience extremely insightful and learned a lot from it, and would recommend it to any ECR – whether you intend to stay in academia and are looking towards postdoctoral fellowships and beyond, or are a PhD student looking for travel funding.

In this blog, I’ll shed some light on what the process is like in these meetings, and on the typical strengths and weaknesses that came up across the various applications. The specifics of the applications and applicants are, of course, strictly confidential, so I will only talk about these points in a general sense, and not refer specifically to which award panels I was watching or the people involved. I will also clarify that these panels occurred some time ago and have had their outcomes released.

What Happens to Your Grant: A Rapid and Rigorous Process

First, here was the setup: I joined a Zoom meeting with the whole panel (around 15 people), camera and microphone off, simply observing. For any applications which involved people from my institution or anyone I had identified as collaborators, I was put back into a waiting room to avoid any conflict of interest. The meeting kicked off immediately – straight to business, no faffing around.

The review process itself was extremely fast-paced, structured, and efficient.

Each application had been assigned to 2-3 members of the panel to read in detail prior to the meeting, based on their area of expertise, and then was given a 10-minute time slot for presentation, discussion, and scoring during the meeting. This time limit was fairly strictly adhered to, with a timer counting down for each application in the Zoom meeting. For certain grant application categories, this timing was even shorter, only 7 minutes per application.

First, one of the assigned reviewers would give a detailed and objective summary of the proposed project, covering background, aims, methods, and outcomes. The reviewer would then give their own assessment of the strengths and weaknesses of the proposal, with the other assigned reviewers adding any additional points as necessary. This would typically take around half of the allotted time, and was entirely verbal; that is, the other members of the review panel largely understood the content of the grant proposal through the assigned reviewer’s spoken summary.

Following this, other members of the broader review panel would ask any of their own questions or raise any of their own concerns to the assigned reviewers, and some discussion would ensue. Finally, the assigned reviewers who had read the application in detail would give their opinions on scoring, before the entire panel voted on a final score.

And then, just like clockwork, it would be on to the next grant. A ruthlessly efficient, but not uncaring process. I will admit that as someone who has been through the process of putting together a fellowship proposal which took weeks to assemble, it was mildly confronting to see the rapidity with which these applications are assessed. This perhaps underscores the importance of having a grant application that stands out. Here is what I surmised from all of the comments during the meeting about what makes for a stronger or weaker application.


Alzheimer’s Research UK regularly allow ECRs to observe their grant review boards in action. Keep an eye on our website to find opportunities.

Key Evaluation Criteria:

These were key themes that came up over and over again during the meeting. I’m not sure if they are part of a strict or formal rubric, but this is what reviewers were typically looking for or asking questions about:

  • Feasibility: Does the applicant have pilot data? Do they have the necessary expertise, or at least a named PhD student or technician with relevant skills?
  • Novelty: Is the project innovative and pushing boundaries in its field?
  • Training Environment: Does the institution provide good mentorship and infrastructure for any potential students/trainees involved?
  • Data Sharing Plan: How will the applicant handle and share data, and eventually publish the work?
  • Impact Plan: Does the project have the potential to make a meaningful contribution, and do the applicants have a plan to achieve this? 

Common Strengths in Successful Applications

Certain strengths frequently stood out in successful applications, or were the saving graces in otherwise poorer applications:

  • Strong Training for Students and Opportunity for Secondment: Where PhD students were involved, there was a major focus on whether they would receive good training. In this regard, having the opportunity to collaborate or do some kind of internship or secondment was seen extremely favourably.
  • Interlinked but Independent Methods: Stronger proposals were designed such that if one approach fails, the project would not collapse.
  • Named Applicant with Experience: For grants involving a student, having a named and well-credentialed student mentioned in the application was a big plus. Students with some research experience or a publication track record provided a substantial boost in confidence in the feasibility of the project. Applicants who had strong references, first-author papers, and ambition were major standouts.
  • Complementary Skills from Supervisors or Lead Investigators: A lot of attention was paid to ensuring a well-rounded team with different areas of expertise to cover the needs of the project, and the reviewers were typically familiar with applicants and their research areas.
  • Cutting-Edge Technology: Novel techniques are attractive but must be feasible.
  • Power Calculations and Good Experimental Design: Including information about statistical power made for more credible applications, though it was by no means a golden ticket and was sometimes taken with scepticism. Well-designed experiments, and the inclusion of multiple sexes among study participants, for example, were also noted.
  • Flexible Budgeting: Proposals that account for potential adjustments were seen positively.
  • Mitigation Plans: A well-thought-out risk management strategy reassured reviewers.
  • Well-Written Proposal: High-quality, clear writing and structuring of the proposal was frequently noted and appreciated, with clear and well-structured aims singled out in particular.
  • Ethics Approvals in Place: This could be a bit of a tick box rather than a major boost, but proposals without ethics in place already were treated with substantial uncertainty. 

Common Weaknesses That Lower Scores

Some common pitfalls that dragged down scores or raised red flags for reviewers were as follows:

  • Overambition: If the project seemed too extensive for the time and resources available, reviewers were sceptical. This was especially the case for shorter-duration projects and projects that lacked pilot data or existing expertise. That said, there was a clear balance to be drawn: projects that lacked any adventurousness, staying too close to the pilot data, were also poorly looked upon.
  • Interdependent Aims: Having one aim rely entirely on the success of an earlier aim was a frequent pitfall for many projects.
  • Vague Budget: A poorly justified budget raised red flags.
  • Lack of Justification: Whether for techniques, models, or materials, applicants needed to explain their choices.
  • Supervisor’s Job Security: If a PI’s contract was up for renewal, this was a major concern. Sometimes this wasn’t mentioned in the application itself but was known to the reviewers nonetheless.
  • Lack of Expertise in the Supervisory Team: If the mentors didn’t have relevant experience, the project’s feasibility came into question.
  • High-Risk, Expensive Techniques: Costly and unpiloted methods with uncertain success rates made reviewers wary, especially when the research group did not have a substantial track record in that technique, or there wasn’t appropriate budgeting.
  • Lack of Contingency Planning: Reviewers wanted to see how applicants would handle setbacks, and wanted to know there was a plan in place.
  • No Timeline: A Gantt chart or clear timeline helped assess feasibility and reassured the reviewers that an adequate plan was in place.
  • Weak Data Management Plan: This wasn’t usually a make or break issue, but a lack of consideration for how and where data and code would be stored often dragged down scores.
  • Dubious Scientific Premises: If literature contradicted the proposed approach, reviewers noticed.
  • Tenuous Links to Funders’ Priorities: Projects needed to align well with the funding organization’s mission.
  • Lack of Pilot Data: This was crucial. Without preliminary results, feasibility came substantially under question.
  • Unclear Future Plans: What would happen after the grant period ended? Ensuring there was a plan for where the research and applicants might go once the funding finished was surprisingly important.
  • Hard-to-Follow Writing: Clarity matters, especially for non-experts on the panel. Interestingly, some reviewers noted that excessive highlighting / bolding / underlining / italicising of text made writing harder to follow, rather than making important points clearer. 
Promotional graphic: “Grant Review Board Insider Tips”, an online Dementia Researcher Salon event held on 5th March at 12.00pm GMT, featuring Professor Patrick Lewis (Royal Veterinary College) and Dr Rachael Kelley (Leeds Beckett University).

Visit our Community Website to watch our Salon Recording with Professor Patrick Lewis and Dr Rachael Kelley sharing their tips as grant reviewers.

Additional Insights from the Review Process

Beyond the above pointers, there were several other insights I gained from the observer position. The first was that, for better or worse, ‘big name’ researchers almost uniformly had very high scoring grant proposals. This is not necessarily an accusation of bias or favouritism on behalf of the review panel. Large, well-established labs typically have lots of pilot data, substantial expertise in advanced techniques, a track record of good mentorship, and have likely gotten to where they are because they are good at writing grants. It probably doesn’t hurt that many of these figures were well known and well-liked, minimising concerns about feasibility of the proposal and quality of training, but it should maybe come as no surprise that the best grants came from some of the best labs and raised few concerns. While this remains a merit-based system, there was something interesting about seeing the cycle of research success firsthand, how it can compound for some over time, and make it harder for others to break in. It also highlighted for me the utility of being in a big lab with a well-known and respected PI. The fact is that in these circumstances, some things do come easier.

Beyond this, I was pleasantly surprised by how much rigour was applied not just to the science of a proposal, but to the training environment of the lab in question, whenever students were involved. Plans and funding for secondments and extra training were far more common than I realised, and supervision teams were given a lot of scrutiny, with the welfare and future success of the student in mind. The quality of a named student could also sometimes offset a project of questionable feasibility. How much a trainee would actually learn that would be both new and useful to them for their career was also an important deciding factor for the panel, demonstrating funding bodies are not just focused on producing good science but producing a strong pipeline of researchers.

Some of the applications were also resubmissions – here, reviewers also didn’t miss a trick and were rigorous in checking why it had been returned the first time, and whether any previous concerns had actually been addressed in the resubmission. Nevertheless…

reviewers weren’t always, and in fact were often not, experts in the topic of a particular application.

This made justification of different methods, models, questions, and approaches particularly important, and made the quality of writing critical.

Final Notes:

Overall, I would strongly recommend sitting in as an observer on a grant review panel if you ever get the chance. Besides gaining insight into what makes a good grant in the eyes of reviewers, it’s also an interesting crystal ball into where the field is going, and what directions other labs are interested in pursuing. The key takeaways for grant writing remain clear, however:

  • Be Clear and Concise: writing style, structure, formatting, etc.
  • Justify Everything: methods, models, approaches, etc.
  • Think About Feasibility: Ambitious but realistic and supported by pilot data and relevant expertise.
  • Plan for the Worst: Having mitigation and contingency plans can make or break your application.
  • Make Your Data and Impact Plan Strong: Clearly outline how data will be shared and managed, and where the project will take you and your science in the future.
  • Align with Funders’ Goals: Show that your project fits well with the funding organization’s mission.

Good luck!



Ajantha Abey

Author

Ajantha Abey is a PhD student in the Kavli Institute at the University of Oxford. He is interested in the cellular mechanisms of Alzheimer’s, Parkinson’s, and other diseases of the ageing brain. He previously explored neuropathology in dogs with dementia and potential stem cell replacement therapies. He now uses induced pluripotent stem cell derived neurons to try and model selective neuronal vulnerability: the phenomenon where some cells die while others remain resilient to neurodegenerative diseases.

 
