For the first time in a long time I opened one of these documents and actually had something to write about that didn’t feel desperate. The trouble I’m having this time is that the thing I have to write about is controversial and as I sit and write this opening, I actually don’t know what my opinion is and I’m hoping that by doing a bit of reading and talking to you, I’ll figure it out. Today we’re going to talk about how academics might be using chatGPT, whether it’s levelling the playing field, whether it’s fair and ethical, whether it’s sustainable and how it might affect science and science communication going forward.
I’ll start by saying I absolutely do not profess to being any kind of AI or LLM expert. In fact, on complaining to a very smart friend recently that I was bored of ChatGPT not being able to do word counts, he explained, quite patiently, that this was not what LLMs were designed to do. But right now, it’s one of my major uses for them. I have 1000 words of narrative CV that need to fit into a box of 250? ChatGPT can fix it. I have an abstract of 300 words that needs to be 250? ChatGPT can fix it.
And this, dear reader, is where we run into my ethical quandary. I actually quite like editing things. I like it when students send me almost-finished documents, or first drafts of things, and I get to sit and tweak and move things around. I like writing a draft of a grant proposal, being quite pleased, then having someone crush me by saying ‘this particular thing really doesn’t come across well’, sulking for two days, and then going back and taking it apart and adding things. I find the art of watching the picture or the story take shape and find its flow incredibly relaxing and rewarding. But now? We’ve got ChatGPT.
I think my problem with this is two-fold: who is using it, and how it is being used.
Let’s start with young people. In theory, with the exception of some of my lovely mid-life friends, I suspect most of you reading or listening are early career researchers. I am of the firm belief that you absolutely should learn to write on your own at the early stages of your career. The meme I saw that best describes my thinking here compared ChatGPT to weights in a gym. When you go to a gym, all you’re really doing is moving weights from one place to another, and frankly the easiest way of doing that is by bringing in a small forklift truck. But that is not the point of going to the gym. The point is to test and stretch yourself, to grow and get stronger.
Once you have grown and gotten stronger, you could dig your garden by hand because you have the muscles for it, but you could also just hire a digger (are you enjoying my mixed metaphors?). Because at this point you know how to write, you know what good writing looks like and you can spot the fakes or the grifters. So I can feed a chunk of introduction I’ve written into ChatGPT and say ‘write me a 150-word lay summary’ and it can do it in the blink of an eye, where it might take me half an hour or so. But the important next step is that I can then read those 150 words, edit, move things around and decide what I think sounds good or doesn’t. Actually, a confession here: the lay summary is the one bit I genuinely enjoy writing, so I would almost certainly never do this. But you can see my point.
It’s making a lot of the grunt work of grant writing slightly lighter. ‘Here are the lines and numbers from my Excel budget spreadsheet, with some minor additional details; please write me a justification of resources section.’ Done. Rapido. No thought whatsoever.
And that is where I am struggling to justify my use of these things.
I’m not saying we all have to suffer for our art or anything melodramatic like that. But grant writing and fellowship writing is (and always has been) quite hard work. It’s why a lot of my contemporaries left academia; they had no interest in doing it. Now, you can basically get a machine to write your grant or your paper for you if you just come up with an idea or feed it the data. What used to take weeks now takes days. Yes, you still have to have the idea and you still have to be able to read and edit the result, but everything is much easier, and what we’re fated to run into is a volume problem.
The first thing this is likely to lead to is a massive increase in paper-mill publishing. For those of you unfamiliar with paper mills, they are illegal businesses that produce and sell fake scientific manuscripts to researchers who want to inflate their publication counts. If you’re interested, Christine Ro and Jack Leeming wrote a great piece in the Nature Careers column investigating paper mills. As a relatively experienced researcher it should be fairly easy to spot these kinds of papers, but they are getting better and more devious at hiding their fabrications, and ChatGPT is only making that easier. Again, if you’re interested in learning how to spot them as a junior reviewer, check out Stephanie Melchor’s article ‘Five ways to spot when a paper is a fraud’ in Nature.
The second thing this excessive use of ChatGPT is likely to lead to is a surge in grant applications. In an already stretched funding system, applicants who are overburdened with teaching, and who might never have been able to find the thirty or so days it previously took to write a grant, can now cut that time down to three to five days, which can easily be flung in as prompts between lectures.
This increase in grant applications is likely to do a couple of things. First, in an already strained UK funding landscape, it’s going to challenge a system that doesn’t have enough money to feed the masses, and it’s going to stretch the funding bodies’ capacity simply to read all of those applications. A friend of mine who sits on a funding board that shall remain nameless said she has seen standard application numbers go from 50-70 to upwards of 130 per round.
But beyond the simple numbers game, there is research to suggest that LLM use might actually shift the kind of science we do. A 2026 arXiv article by Qian and colleagues showed that LLM use in NIH grant applications increased after 2023, and that it seemed to be associated with higher proposal success and more publications from those proposals. But importantly, the authors also found that the productivity gains at the NIH were concentrated in what they described as ‘non-hit’ papers.
Which brings us back to the age-old ‘publish or perish’ model of science, where researchers have to shove stuff out the door as often as possible to look ‘productive’. This, of course, leads us to the ‘quality not quantity’ argument, where people in nice permanent jobs say you have to ‘aim high’ whilst all the chatter in the background says we should not take journal metrics into account for anything. The fundamentals of academia are broken, and ChatGPT is not going to help them get any better.
What it is apparently doing is levelling the playing field a little. Non-native speakers have found their applications and papers to be more successful, and have found themselves feeling more confident about their research now that they can get help with the writing. Indeed, in an Elsevier editorial, Brent Sinclair compares this use of LLMs to things like a spell-checker or a stats package. Yes, they can do the work for you, but if you don’t know whether you’re supposed to be using a one-way ANOVA or a t-test, the package is not actually going to help you with that. Garbage in, garbage out, as the saying goes.
So where am I left at the end of this? Have I made a decision on whether this is good or bad? No, but largely because I think this is a grey issue rather than a black and white one. Sinclair says that LLMs are simply tools, and tools require the ‘users to think carefully and critically about both input and output, and take responsibility for both’. So by all means use it to help you, but read and check what it says, and don’t feed it stuff that doesn’t make sense, because it will absolutely run with it and you’ll end up with very prosaic nonsense at the end of your session.
Also according to Sinclair, the tradition has become to ask ChatGPT to write the conclusion to your article about ChatGPT. So I fed it this stream of consciousness and asked it to do just that, and it concluded the following:
These tools are not going away; they are already folded into our workflows, our grant deadlines, our inboxes at 11pm. They can lighten the drudgery, widen access for those writing in a second language, and accelerate processes that were never especially romantic to begin with. But they also risk amplifying the worst instincts of a system already obsessed with speed, output and metrics. ChatGPT will not fix academia’s structural problems, nor will it single-handedly destroy scholarship. It will, however, magnify whatever values we choose to embed in our use of it. If we treat it as a shortcut to avoid thinking, it will happily oblige. If we treat it as a tool — one that still requires judgement, expertise and responsibility — then perhaps it becomes less a forklift in the gym and more a power tool in the hands of someone who already knows how to build.
Which is, upsettingly, rather poetic and I’m slightly mad I didn’t write it.

Dr Yvonne Couch
Author
Dr Yvonne Couch is an Associate Professor of Neuroimmunology at the University of Oxford. Yvonne studies the role of extracellular vesicles in changing the function of the vasculature after stroke, aiming to discover why the prevalence of dementia after stroke is three times higher than average. It is her passion for problem solving and love of science that drives her in advancing our knowledge of disease. Yvonne shares her opinions, talks about science and explores different career topics in her monthly blogs – she does a great job of narrating too.
