In late 2022, we saw the emergence of Large Language Models, with arguably the most well known being ChatGPT. The technology is a form of Generative Artificial Intelligence, or Gen AI, which facilitates the creation of content, including images, videos, and text. The rapid and widespread availability of Gen AI in academia has raised questions about academic integrity and the potential deskilling of future scientists, but also the need to recognise the benefits these tools offer, such as enabling creativity and improving efficiency. In this blog, I discuss the use of Gen AI in academic research and higher education.
To understand why there have been conflicting views on the use of Gen AI in academia and higher education, we first need to understand how the technology works and what it is capable of. Gen AI tools create new text, image, and video content by analysing existing content online for patterns and structure. The technology then uses this information to simulate characteristics of human intelligence, such as problem-solving. Large Language Models, like ChatGPT, are now widely available and can generate clear responses to a user's input, drawing on their algorithms and the extensive data they have access to. As the use of these tools increases, more data becomes available, and more accurate and coherent responses can be generated. So, over time, these tools will only get better. When Gen AI technology became widely available just over two years ago, many universities, funders, and academic journals were unprepared for the potential impact of these tools on the future of academic research and higher education.
When Gen AI tools became widely available, I had been a lecturer for less than a year. Up until then, little had changed since I was an undergraduate student almost 15 years ago in terms of how academic integrity was maintained and how student assignments were monitored for things like plagiarism and collusion. With Gen AI tools now easily accessible to all students, universities were forced to reconsider policies on what constitutes academic misconduct, to ensure that students continued to produce authentic work and develop the cognitive and creative skills intended from their courses. Particularly in STEM subjects like mine, there has been concern about the potential deskilling of future scientists, who may lack crucial critical thinking skills and become highly dependent on technology for completing research-related tasks. But is there scope for Gen AI to actually enhance student learning rather than have a detrimental effect on it?

Last year in higher education, we saw a shift away from blanket bans on the use of Gen AI in non-exam-based assessments, and a move towards exploring how these tools can be used ethically without compromising academic integrity. Gen AI can be used as a companion for both teaching and learning in a way that provides support for students. For example, students may be given permission to use Gen AI for certain parts of an assignment, but then be required to produce a critique or personal reflection on what the technology has produced. This has encouraged lecturers to create more innovative assessments and could reduce issues around academic misconduct.
The key is to design assessments that Gen AI cannot easily replicate; where its use is permitted, it is essential for students to reference the technology so that they do not breach university regulations.
In addition, students need to be made aware of the potential to breach intellectual property or GDPR rules, because any content a user inputs into a Gen AI tool (such as lecture slides or assignment briefs) can then be used and shared by the tool when responding to other users. There is also no guarantee that information generated by Gen AI tools will be correct, due to their reliance on input from other users, which leaves them vulnerable to reproducing biases and stereotypes. Therefore, where students are permitted to use generated content in an assessment, they should be encouraged to evaluate its credibility by cross-referencing it against other sources. So in higher education we've seen a transition towards finding ways of integrating Gen AI, but what about academia more broadly?
Academics are contending with increasing workloads, with greater teaching and admin responsibilities and less research time, as the sector faces recruitment freezes and mass redundancies to address widespread financial deficits. Technology that can streamline research tasks, enabling more efficient working and higher productivity, is certain to be welcomed, but many of the same issues relating to the use of Gen AI in higher education also apply in academic research. If we consider applying for grants as the beginning of the research pipeline, Gen AI can be a useful tool for developing ideas and encouraging creativity. It can also help speed up the process of writing reviews when assessing grant applications for funding bodies. However, the risks already discussed relating to, for example, originality, data protection, bias, and intellectual property remain, with the potential to compromise the funding application and assessment process. These concerns ultimately led to a joint funder statement being released in 2023, and in 2024 UKRI published its policy on the use of Gen AI in funding applications and the grant assessment review process. As with the advice given to students on using Gen AI for assessments, a blanket ban has been avoided, largely because it would be impossible to police. Instead, researchers are advised to acknowledge if, and where, Gen AI tools have been used in funding applications, whilst abiding by legal and ethical standards to avoid breaches of GDPR and intellectual property regulations. However, researchers involved in the grant review process are advised against using Gen AI tools to generate their peer reviews, because doing so is likely to involve inputting sections of the application into one of these tools, which would compromise confidentiality.
At the other end of the research pipeline, Gen AI tools can be useful for drafting manuscripts for publication. These tools can help reduce inequalities, particularly for researchers whose first language is not English. However, many would agree that it would be unethical to use Gen AI to write an entire paper, which could lead to cases of academic fraud and a proliferation of poor-quality scientific output. Some journals have taken the initiative and published rules on the use of Gen AI in academic publishing. Similar to the approach taken with students in higher education, they require authors to state if, and where, such tools have been used, whilst recognising that these tools cannot be considered 'authors' of any work, as they lack accountability.
In many respects, the main issues around the use of Gen AI in higher education and academic research are comparable to those relating to plagiarism and using paper writing services to get another person to produce written work. When I was an undergraduate, paying someone to write an essay, or copying the work of another person and passing it off as your own, were the biggest issues relating to academic integrity.
Now, Gen AI is that person who can write your essay, or the person whose work you can pass off as your own.
In all cases there’s a failure to attribute credit to the original authors for their work, and this is unethical.
If students or academics choose to use Gen AI in any of the ways discussed, it's important that all content produced is checked for accuracy, that any responses are rewritten or paraphrased, and that, as with all scientific writing, sources are attributed to their original authors with citations and references. So it is up to us how we choose to adapt to an AI-enabled society whilst maintaining scientific rigour and academic integrity, but whether we like it or not, this technology is here to stay, and its use is only going to grow.

Dr Kamar Ameen-Ali
Author
Dr Kamar Ameen-Ali is a Lecturer in Biomedical Science at Teesside University & Affiliate Researcher at Glasgow University. In addition to teaching, Kamar is exploring how neuroinflammation following traumatic brain injury contributes to the progression of neurodegenerative diseases that lead to dementia. Having first pursued a career as an NHS Psychologist, Kamar went back to university in Durham, where she completed her PhD using rodent behavioural tasks, and then worked as a regional Programme Manager for NC3Rs.