
Cold Comfort – learning from rejection in competitive schemes

As I type this, about 80% of the people who applied for the Round 10 Future Leaders Fellows scheme are coming to terms with the rejection of their proposal. Most of them had already navigated incredibly competitive internal selection processes (and indeed you might be reading this as someone who wasn’t successful at that stage), which would have set the bar extremely high. In recent years I’ve run sessions for people who haven’t been successful with funding (as Head of Researcher Development and more recently as part of the InFrame project), so I thought I’d put down a few thoughts about coping with rejection, but with a focus on LEARNING.

As well as being Director of the FLF Development Network, I’m also a roving panel member. This means I lurk (camera off, slightly sinister) in the panel discussions ensuring that the brilliantly designed FLF selection process is followed. Criteria are clear, peer review is safeguarded, panel responsibilities are clearly spelt out. And I’m there to ensure that this all holds when people are tired, when they are tempted to re-review (forbidden in the FLF because the candidate’s right to reply is a core principle), or when biases or assumptions surface. The FLF model was quite different from other processes (including those run by the Research Councils that make up UKRI), so my first few years as a rover were busy, but panels have become so effective at following the gold standard process that they generally require little “course correction” from the rovers. I say this to reassure those of you who are in the “denial” phase of your processing.

I hope that you are going to pick your brilliant self and your brilliant idea up and have another go. But before you launch yourself at the next scheme, taking the same approach that led to this point whilst expecting a different result, let’s take a breath and see what you can learn from this moment. Whatever you did, it didn’t get you what you wanted. So why do fellowship applications fail?

(I stuck all the notes I took during my panel observations into ELM, the Edinburgh Language Model, our institutional AI assistant designed around academic use, clarity, and institutional needs rather than broad consumer use. This is its summary of what I observed in discussions. For added assurance, there were NO references to individual candidates or applications in the material I put in – these are my notes on process and what influenced decision making.)

Dr ELM, over to you…

One of the clearest messages to come through from this round of panel feedback is that many applications were not ruled out because the underlying idea lacked promise. More often, they fell short because the case for the fellowship was not made strongly enough. In other words, panels were often persuaded that there was something interesting in the proposal, but not always that the application, as a whole, justified support through this particular scheme.

A recurring difficulty was the distinction between a good project and a good fellowship application. Some proposals read more like standard research grants, commercial development plans (SS: remember the FLF is open to commercial and charity-based researchers and innovators) or bids for programme support than applications centred on the development of an individual. Where that happened, panels struggled to see why this funding route was the right one. They wanted a clearer explanation of what the fellowship would do for the applicant, why it was needed at this stage, and how it would create a meaningful step forward in independence.

That issue came up particularly often in applications from more established candidates. Seniority in itself was not treated as a problem, but panels did look closely at whether the applicant had explained what the award would add beyond their existing position, responsibilities or funding. Where that argument was missing, or only hinted at, the application was weakened. The same applied where a proposal looked less like a developmental opportunity and more like support for an already established line of work.

Host support was another area where applications frequently ran into difficulty. In many cases, panels were left uncertain about the level of institutional commitment behind the applicant. Sometimes the host contribution was vague. Sometimes funding arrangements were unclear. In others, there was too little detail about protected time, support structures or longer-term plans. These issues mattered because they affected confidence not just in the project, but in the applicant’s prospects of making the most of the fellowship.

A number of applications were also undermined by the way they were written. Panels felt more than once that proposals assumed too much prior knowledge, relied too heavily on specialist language, or failed to explain the significance of the work clearly enough for a broader audience. That did not necessarily mean the work itself was weak, but it did mean reviewers and panellists had a harder job in understanding its value. In a competitive process, lack of clarity can be costly.

There were also applications where the concerns were more substantive. Some were judged to have methodological weaknesses, unresolved technical problems or plans that were not fully worked through. Others seemed promising but not yet mature enough. In a few cases, the response to reviewers made matters worse rather than better. Panels repeatedly noted that rebuttals often failed to deal directly with the main points raised. Some responses were over-length (SS note – a glitch in the system for commercial applicants meant that they were advised on the length of their response, but it was possible to upload a longer one. Some did this. It did NOT serve them.), but still did not answer the questions that mattered most.

In many of the weaker applications, there was no single fatal flaw. The more common pattern was a combination of concerns: an underdeveloped fellowship case, unclear added value, vague host support, unanswered reviewer criticisms, and a proposal that was difficult to assess with confidence. That accumulation of smaller weaknesses often proved enough to push an application into the lower bands.

The broad pattern is fairly clear. The strongest applications were those that did more than present a good idea. They showed why the fellowship mattered, why it mattered now, what it would make possible for the applicant, and how the host would help make that happen. The weaker ones left one or more of those questions unanswered.

Thank you ELM, the human is back now.

And if you WERE successful, well done. Take a look at the FLF Development Network site for more insights, including a blog on interview preparation written for you AND the people who will be supporting you.

(Image Credit: https://unsplash.com/@derekleej)

