The Covid Cohort and large language models

Post-pandemic graduates trickle into the employment pool, bleary-eyed and intellectually compromised

Credit: Joshua Hoehne, Unsplash

15/07/2025
Joseph Quash

Hundreds of thousands of young adults have been donning their mortar boards for a photo finish to their studies this July. My class, whose GCSEs were suspended after the first coronavirus lockdown in March 2020, will soon begin to trickle into the employment pool after five years of educational turbulence. The road to classification has been technologically paved with good intentions – but what effects have academic cyber-crutches such as large language models (LLMs) had on the cognitive abilities of this Covid cohort?

Last month, a clip began circulating of a UCLA student at his graduation ceremony seemingly flaunting his use of ChatGPT to attain his qualification. He holds his laptop open, displaying a conversation with the Generative AI chatbot whilst pumping his fist and shouting “Let’s go!” into the roving camera.

Although the brazen nature of this ostensible admission is a rarity, it is no secret that the use of LLMs by students to assist with their submissions has skyrocketed. A report published earlier this year by the Higher Education Policy Institute (HEPI) tracked changes in LLM usage among full-time undergraduate students between 2024 and 2025, finding that general use had “surged” from 66% to 92% this year, indicating that “almost all students” now utilise the technology. Of these respondents, 18% admitted to including “AI-generated text directly in their work”.

We are repeatedly reminded that the AI revolution is an opportunity to improve efficiency and democratise access to academia. One 2024 study noted that “[t]hese models facilitate more interactive, collaborative, and self-directed learning” and “promot[e] access to knowledge”. Undoubtedly, LLMs can serve these ends – as assistants to, rather than replacements for, intellectual improvement. However, if they continue to be misappropriated at scale, as they have been this academic year with nearly a fifth of undergraduates copy-pasting AI-generated responses into their work, the negative impact will invariably compound.

An MIT paper published last month, entitled ‘Your Brain on ChatGPT’, compared cognitive effects in students during essay writing. One group was assigned an LLM to use, one a search engine, and the third was left ‘brain-only’. The results were startling: “[b]rain connectivity systematically scaled down with the amount of external support”, the report found, and the group that used the LLM demonstrated “weaker neural connectivity” as well as a lower sense of ownership over their work and an impaired ability to recall quotes from their own essays.

Not only this, but these effects worsened over time, in a phenomenon the authors referred to as an “accumulation of cognitive debt”. When LLM users were ultimately invited to write an essay without the support of an external system, they failed to reach the “neural” and “linguistic” levels of their brain-only counterparts. Repeated use of LLMs involves an outsourcing of cognition which, over time, leads to “diminished prospects for independent problem-solving and critical thinking”.

Analytical atrophy through misuse of LLMs is the next stage of what Harry Lambert calls the “self-perpetuating downward spiral of shattering standards” at universities. These models risk becoming the newest iteration of rote learning, albeit a version which involves neither rote nor learning, but one which induces a similar petrification of thought. 

The distinction that historian Henry Newbolt drew in 1921 between “education” and “information” is now muddied by passive deterioration, a process in which students act merely as crumbling conduits between bot and examiner for essays which are largely, according to KCL Lecturer in Geography Dalia Gebrial, “standardised in quality and… content”. If institutional policies regarding GenAI fail to adapt, the oncoming torrent of AI slop risks alienating academics and churning out a generation of intellectually stunted graduates.

Almost everyone at my ceremony will have used an LLM in some capacity, although I don’t expect any public demonstrations of that fact.

© Joseph Quash 2025