The case against academic AI normalization

AI use in universities threatens the value of student-made work

The rise of AI tools like LLMs is eroding students’ skills before they enter the workforce.

With the rising prominence of Generative Artificial Intelligence (GenAI) in academia, universities are becoming increasingly concerned about students' capacity for learning.

Overreliance on AI has begun to overshadow student-made work and threatens to erode the value of human creation in general. What’s worse is that, as Large Language Models (LLMs) continue to get better at mimicking human communication, it’s becoming increasingly difficult for us to tell the difference.

Faculties themselves are already noticing the shift. 

In a 2025 survey conducted by the American Association of Colleges and Universities and Elon University’s Imagining the Digital Future Center, 95 per cent of the 1,057 faculty participants agreed that students’ overreliance on GenAI is on the rise. Additionally, 90 per cent agreed that these tools risk undermining students’ critical thinking skills.

These claims are further supported by a Massachusetts Institute of Technology study on “the cognitive cost of using an LLM in the educational context of writing an essay.” 

Using electroencephalography, researchers recorded the brain activity of 54 college students in three groups: LLM-assisted, search-engine-assisted and brain-only. They found significant differences in each group’s neural connectivity patterns, with the LLM-assisted participants showing weaker overall cognitive engagement. 

Even with its small sample size, the study lays important groundwork: relying on AI to write may come at a real cognitive cost.

The real problem is that universities currently have few practical ways to respond. Faculties often struggle to prove, or even notice, AI use among students. The normalization of academic AI use threatens students who refrain from using it, too, as we are not only graded against our peers, but also against technology.

Students must be prepared to defend the humanity of their work in the face of accusations of AI use and acknowledge that some professors may find their defence insufficient. Since AI detectors have proven unreliable, a return to in-class assessments—written by hand or presented orally—may be one of the few remaining ways to guarantee authenticity. 

A 2024 study testing the detectability of AI-written undergraduate exams at a UK university found that 94 per cent of AI submissions flew under the radar. The researchers also found that these submissions averaged higher grades than those written by real students, a result that should make universities question how academic merit is currently evaluated.

This calls into question how AI-produced work should be evaluated, as standards designed to assess human writing may not be sufficient to judge the quality of AI-generated content.

How should a machine’s human-mimicking creations be valued against the human works on which it was trained? As systems whose internal logic and imaginative capacities grow from the seeds of our data, LLMs strangely fuse social construction with computational performance.
AI now exists as a liminal force between humans and machines, and universities have yet to fully confront what that means for the future of learning.

This article originally appeared in Volume 46, Issue 11, published March 17, 2026.