Blog – The Impact of a Paper

Blog from Dr Yvonne Couch

This week I’m inspired (or not, as you will see shortly) by a new student I have in the lab. From a country that shall remain nameless, this student was telling me about the promotion system for medical doctors. Basically, more papers = more likely to be promoted. This resulted in what the student described as ‘a lot of trash science’. So, this week we’re going to consider the publication industry, how it works, how papers become meaningful and have impact and what that might mean for your next paper. This feels a bit like coals I’ve raked over before but I’ve found some new stats and some new numbers to entertain you with.

The horrendous conversation about the promotion system in the nameless country went on. Into how the authors often didn’t actually have time to understand the subject, so would just get some data and pass it to a junior colleague to turn into a paper. Into how the senior authors, the ones who would benefit from the paper, often didn’t even understand the field they were working in. I quizzed another friend who originates from said nameless country to confirm the student’s description. They expanded, saying that the prestigious journals are the most highly sought after, so new students will be told ‘this is the result we want’ and sent off to make that result.

I was a bit horrified but also depressingly unsurprised. Randy Schekman, Nobel laureate and founding editor-in-chief of eLife, said the idea behind the journal was to try and move away from impact factors, to discourage the paper-mill attitude that seemed to be becoming prevalent in certain areas of science.

And all of this led me to be sat at my desk this morning Googling ‘how many papers are uncited’. Which is a fascinating thing to do, for no reason beyond the fact that the numbers I found in various articles, blogs and studies ranged from 0.3% to 90%. Clearly some of these are wrong, but it was quite challenging to find out which. No offence to the guys on there, but I suspect some of the angry posts on Quora were not necessarily backed up by facts.

A decent place to start seemed to be an article by Richard Van Noorden in Nature from 2017 called ‘The Science That’s Never Been Cited’. In it he points out some obvious things: increasingly niche fields are less likely to be cited than popular ones – sorry engineers, but you’re cited less than biologists – and papers that show something that doesn’t work, rather than an advance, are less likely to be cited. But he also states some stats which I ran away with a little.

The number that most studies seem to have landed on is probably slightly less than 4%. Which sounds great. It’s a small number, single figure and all that. And they got to it by looking over a long period of time. For normal, run-of-the-mill papers, rather than big splashy ones, citations are not going to happen instantly, so they looked at papers that nobody had cited five years after their publication. Still sounds great. But there are over a million papers published every year in the biomedical sciences field. Four percent of that is about 40,000 papers.

Van Noorden also takes a long view and looks at all papers appearing on Web of Science from 1900 to 2015. Findings suggest that up to 21% of all papers are uncited. That’s around 8 million articles with no citations whatsoever. Granted, we now need to take into account that the nineteen papers written as a series between 1948 and 1952, all of which contain one graph (hand drawn) and which are only accessible in some obscure archive on the networks of about three universities, might be more challenging to effectively cite. But still. Eight million is a lot of papers.

So how do papers get cited?

Impact factor is commonly used to evaluate the relative importance of a journal within its field and to measure the frequency with which the “average article” in a journal has been cited in a particular time period. Journals which publish more review articles tend to get the highest IFs.

To be honest, I have no idea. In the Nature article, an economist called Dahlia Remler is quoted as saying “Even highly cited research could be a game that academics play together that serves no one’s interest”. And it’s a Catch-22 game that works if you’re already a highly cited researcher, or you’re good at playing the game and being loud. If you Google ‘stroke extracellular vesicles’ on a random computer (I made a friend who doesn’t work in research do this to make sure Google wasn’t just stroking my ego) my paper is on the first page. Admittedly there are probably tons of algorithms involved here that I don’t understand but the paper has been cited only 39 times. I have reviewed papers on stroke and extracellular vesicles that don’t cite this paper. Don’t worry, I’m not one of those reviewers that goes in and tells the authors they must cite it. But it does make me feel a little depressed.

And this lack of citations, as well as making one feel a little miserable, can be the start of a slippery slope.

If you go and read the Van Noorden article there’s a box in the middle which has ‘tales of the uncited’ in it. The last of which tells a story of a researcher answering a question which was simply not popular. ‘A blind direction’ as he described it. This meant that the work was not cited, and he failed to get funding to carry it on and was, from the sound of it, working as a lecturer until an opportunity arose to carry on the work. Others found that they worked in such a niche field that the peer reviewers assigned didn’t understand the impact of their question and so reviewed it poorly. It was published somewhere ‘small’ and not cited until years later when the field picked up speed. Both stories highlight the need for papers to progress careers, even in countries where this is not necessarily a spoken rule.

All of this pressure, especially in places where publications are seen as essential for jobs, or as a statement of a scientist’s ‘impact’ on the field, can encourage poor behaviour.

We’ve covered the importance of publishing negative data before and it has been highlighted by many excellent researchers. But to get many, and frequent, positive results one can… manipulate things. An article in PLoS ONE by Daniele Fanelli found that almost 2% of researchers admitted to falsifying, fabricating or modifying data. Again, if we go back to our original number of a million papers a year, that means that up to 20,000 papers could contain data that simply isn’t true.

In a Lancet article on how much medical research may be untrue, Richard Horton summarised it eloquently. ‘Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative.’ A symposium report from many UK funders from 2015 echoed this sentiment stating that ‘The scientific community is incentivised to generate novel findings and publish in journals with a high-impact factor, which can introduce further bias. Some evidence suggests that journals with a high-impact factor are particularly likely to overestimate the true effect of research findings’.

And all this is exceedingly depressing. We’ve been harping on about it for years and very little seems to be shifting. If anything, things seem to be getting worse. The falling number of zero-citation papers may simply be down to the fact that papers are easier to find now that the internet exists.

But let’s try and end on a happier note. Impact is often very hard to define, celebrate and reward. Well-supported students often don’t appreciate a good supervisor until they experience a bad one. The easy experiment with the neat data is dismissed in favour of the shiny one that took blood, sweat and tears to make. In academic publishing, bibliometric impact is one of the few measures we have and we’re leaning on it like a crutch. But think of the number of papers where you found an adjustment to a method that helped you with an experiment. The number of papers where you wanted technical details, so you emailed the authors. The number of papers you read and talk about in journal clubs, or mention in a talk as a way to back up your own work. The number of papers you read and simply use as inspiration for your next set of experiments or your next grant.

So, people may not be citing your work, but they may be reading it and absorbing it and finding it useful. Of course, they also may not; what you’re doing might be of interest to nobody but yourself. But right now we have no way of knowing.


Dr Yvonne Couch

Author

Dr Yvonne Couch is an Alzheimer’s Research UK Fellow at the University of Oxford. Yvonne studies extracellular vesicles and their role in changing the function of the vasculature after stroke, aiming to discover why the prevalence of dementia after stroke is three times higher than the average. It is her passion for problem solving and her love of science that drive her to advance our knowledge of disease. Yvonne shares her opinions, talks about science and explores different careers topics in her monthly blogs – she does a great job of narrating too.

 
