What Is DORA in Research and Why It Matters More Than You Think

Blog from Dr Yvonne Couch

Reading Time: 8 minutes

I’ve got to the stage where I’m no longer imparting advice; I’m just telling you what I’m doing and hoping you’ll find it useful. Sadly, or happily if you’re the guy in charge of this collection of words, my new research assistant actually reads these things, so there are currently no grand plans for a ‘how to manage new staff’ piece because I feel she might worry. So today I’m going to assume you’ve got no idea what DORA is, or how it might impact you, the way you think and the way you work, and take you through it.

The main reason I’m taking you through it today is an embarrassing conversation I had. I shared some feedback with a wonderful colleague, who told me that it ‘wasn’t particularly DORA friendly’. At which point I said ‘erm… what’s DORA?’. This resulted in many messages with multiple question marks asking how and why I had managed to get to this stage of my career without knowing what DORA was, followed by some very helpful links and explanations, which I will now distil for you good people.

Let’s start with what DORA is. DORA stands for the Declaration on Research Assessment. It’s a set of principles created to address issues with how we assess the impact of research. Direct from the DORA website, their mission is ‘to advance practical and robust approaches to research assessment globally and across all scholarly disciplines’. The origins of DORA lie in a 2012 meeting of the American Society for Cell Biology (ASCB), which happened in San Francisco. In fact, the full title of DORA is the San Francisco Declaration on Research Assessment.

To understand why the DORA meeting happened, you’ll need to indulge my fondness for a history lesson and go back a little further, to the days when the internet was just becoming a thing.

Back in the early 90s, theoretical physicists were way ahead of the game. They developed a platform called arXiv, a preprint server designed for them to share manuscripts and work prior to publication. They did this because theoretical physics is a fast-paced and very collaborative field, with lots of people working together all the time to rapidly move things forward. Even before arXiv, people working in the field would often just mail hard copies of new papers to each other for comments and thoughts. The growth of the internet meant that there was now an obvious place to put all of these papers, and in 1991 arXiv was born.

But the rise of the internet affected all fields of science, not just physics. Instead of receiving paper copies of journals to browse, or going to the university library to find the latest Nature Neuroscience, everything was suddenly available online. The computational revolution made not only disseminating papers easier, but writing them easier too. No more ruler-and-graph-paper figures, no more mailing things to your collaborator in the States and waiting two weeks for a reply. This led to a boom in publications in the late 90s, which then precipitated a change in attitudes towards publication.

In the early 2000s there was a general feeling that publicly funded enterprises should not be benefiting private companies. The majority of research is publicly funded, so researchers began to object to it sitting behind paywalls, inaccessible to the very people who had paid for it. The paywalls were also severely impacting libraries. With journal subscription prices rising rapidly, and the number of journals also increasing, costs to libraries were becoming untenable. One of the first groups to react to this crisis was SPARC, the Scholarly Publishing and Academic Resources Coalition, formed by the Association of Research Libraries in America. SPARC is primarily a policy group, and it pushed for an open access publishing model.

SPARC was shortly followed by some of the first fully open access journals. PLoS, the Public Library of Science, was one of the first fully open access publishers, developed specifically for the purpose of disseminating science to the public. The original drive behind PLoS was a petition circulated by Harold Varmus, Patrick Brown, and Michael Eisen in 2000, which asked signatories not to submit their papers to journals that did not make the work freely available to the public soon after publication. The petition got many thousands of signatures, but many of the signatories failed to act on their promise (we’ll come back to this attitude later), so Varmus, Brown and Eisen started PLoS themselves.


But while open access expanded who could read research, it didn’t really change how research quality was evaluated. Academic hiring, promotion and funding decisions still leaned heavily on journal prestige and the journal impact factor as shortcuts for assessing quality. In fact, early open access journals like PLoS ONE were, and sometimes still are, viewed as lower status. As the number of researchers rose in the 2000s and competition intensified, the overemphasis on metrics as a proxy for quality began to warp research culture. Prestige and power went to safe, strategic publishing in CNS journals (Cell, Nature, Science) over novel, left-field or interdisciplinary work; the latter might be ground-breaking, but it was less widely understood and so less likely to ‘make it big’. By the early 2010s, researchers had had enough, particularly early-career scientists, and that is what led to the original DORA meeting.

Now, here is where I became the queen of the cold email. Bernd Pulverer, one of the original DORA crew, sent me a wonderful description of the original meetings and said a lot of it was driven by Stefano Bertuzzi, CEO of the ASCB at the time. He said it started very impromptu, as a discussion in a side room, as these things often do. Then it escalated, and a group of around twelve of them met regularly to try and figure out what could be done, because Stefano said (according to Bernd) that ‘we have to make people understand that it is tacky and stupid to use journal impact factors’. Wonderful. They apparently brought Thomson Reuters in to the next big meeting, who turned up with six people, including statisticians, and refused to back down. But, after many years of back and forth, DORA has emerged.
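For anyone who, like past me with DORA itself, has nodded along without ever seeing it written down: the journal impact factor is just a journal-level average. Roughly, under the standard two-year definition (the one Thomson Reuters, now Clarivate, calculates):

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}{\text{number of citable items published in } Y-1 \text{ and } Y-2}
\]

It’s a mean over a heavily skewed citation distribution, so it tells you almost nothing about any individual paper in the journal, which is exactly why using it to judge individual researchers is, well, tacky and stupid.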

There were some recommendations in the original DORA, aimed at different groups; they’re actually not that long, so feel free to go and eyeball them yourself. Basically, they say that qualitative assessment, and things other than journal-based metrics, should be used to judge quality.

You can see from this idea how more recent advances, like the use of the narrative CV, have come about. But here is where we hit a snag with DORA, and it’s the snag I mentioned above, the one I said we’d come back to.

Everyone signs up to it and says ‘yes, that sounds like a great plan, I’ll definitely do it’ and then they look at your publication record and go ‘hmm….that’s not very good, is it?’.

Or they ask you where you generally publish and you tell them it shouldn’t matter, but that your last paper was… wherever it went… and they grimace and jerk a thumb at the ceiling, indicating you should be ‘aiming higher’. Even I am guilty of this. I have a paper under review at the moment in a journal I like publishing in. I like it because it’s field relevant and I like the work they share, but my first instinct when I submitted my paper was ‘the impact factor of this has gone up, I don’t think I have a hope of getting in here with this quite small and not particularly interesting paper’. This stuff is just ground in, and it’s really hard to wash out.

And new PIs especially, who are trying to prove themselves, feel like landing a Nature paper or a Cell paper is the only way. They have no examples above them of nice, kind people who publish regularly in all kinds of different journals. They have successful PIs who got three Nature papers in a row and now bank millions because of it. So they feel that’s what they have to do too.

From a publications point of view, in theory we can all work together and think about how we evaluate outputs and teach the next generation of scientists. But from a hiring point of view, the DORA guidelines now make things extremely challenging. Don’t get me wrong, I am totally pro-DORA, I think it’s a great idea, but here is where we fall down a giant problematic hole, and it’s largely an issue with the current academic market.

Hiring without metrics is a great idea in principle, but in practice, it’s probably a nightmare.

For my most recent RA position I had over 100 applicants. For me, excluding people was relatively easy because, despite the advert listing essential criteria, a lot of people applied who simply didn’t meet them. A lot of ‘whilst I don’t have any experience with rats, I feel my experience with yeast would benefit me whilst I learn’. Applause for trying, but no. For a lectureship, though, you really just need someone who has taught at university level, brought in some funding and published some papers, and most people five or so years into an academic career will have ticked all those boxes to some degree.

At which point, how do you discriminate between applicants? One option is to start saying ‘your papers or your money aren’t enough’, and we know from many, many studies that those things are dictated by luck and circumstance. The other is to interview all 100 and figure out who you want. With no obvious way to discriminate between multiple candidates of equal worth, and no practical means of interviewing all of them, hiring panels end up using all the things that DORA warns us against: impact factors and money.

So where does that leave us? Stuck between a rock and a hard place. I think the real work we have to do is convincing the next generation of academic scientists that they need to figure out how to assess quality in a metric-free way. Good luck with that one. Until then, DORA remains a good starting point, but an enormous hurdle for those in academia.



Dr Yvonne Couch

Author

Dr Yvonne Couch is an Alzheimer’s Research UK Fellow at the University of Oxford. Yvonne studies the role of extracellular vesicles in changing the function of the vasculature after stroke, aiming to discover why the prevalence of dementia after stroke is three times higher than average. It is her passion for problem solving and her love of science that drive her in advancing our knowledge of disease. Yvonne shares her opinions, talks about science and explores different career topics in her monthly blogs – she does a great job of narrating them too.

@dryvonnecouch.bsky.social
