AI destroys critical thinking in university and beyond

Is the shortcut really worth it?

The overall decline in critical thinking at post-secondary institutions due to the rise of generative artificial intelligence is absolutely outrageous. Graphic: Kasey Lamer

In a recent Q-and-A, OpenAI CEO Sam Altman stated that “people talk about how much energy it takes to train an AI model, but it also takes a lot of energy to train a human.” 

Here is what I have to say to that: fuck no. The implications of his statement are absolutely fucking terrifying. 

We have completely lost the plot and, frankly, it’s getting really embarrassing. I have been in post-secondary education for nearly seven years, and the overall decline in critical thinking due to the rise of generative artificial intelligence is absolutely outrageous. 

I’m not going to pretend that I’m a saint who has never used generative AI in my life. Of course I have. When ChatGPT began gaining popularity a few years ago, I indulged and experimented with this new technology as a support for my studies and beyond.

And, almost immediately afterwards, I felt dumb. It took only a few months of giving AI my painless tasks—“Write me an introduction for this essay” or “Give me a summary of this paper”—before “asking Chat” became an ingrained habit, even for questions well beyond the simple ones.

It is honestly humiliating to be using AI in class instead of looking through the notes and textbooks provided to derive your own meaning from them. You are in university to learn how to think critically and acquire skills. And this is not to mention the cost of post-secondary education, money you are flushing down the toilet with the last of your brain cells. 

Shortcuts feel good until they catch up to you. 

You are doing yourself the biggest disservice by relying on AI tools to coast through, and now this negligence is beginning to appear across all levels of academia. Witnessing it happen in real time is utterly discouraging. I’ve even heard of students using ChatGPT to help write their master’s or PhD theses.

Instead of engaging more deeply, we allow this generative tech to confidently feed us incorrect information. In an age already shaped by digital misinformation, AI risks adding fuel to the fire.

The issue of unrestricted generative AI use matters not just within academia, but for anyone looking for trustworthy information from journalists, policymakers, educators and researchers.

That is not to say that AI doesn’t have potential use as an aid in education and research. AI models can absolutely save time and resources by quickly summarising work, editing and reference checking, or creating study guides. 

But just three years ago, we were all able to summarise work, edit and study just fine without relying on generative AI models.

Why are we suddenly incapable of doing things we did perfectly fine just a few years ago? Writing an email, making a grocery list or composing a birthday message now seems difficult for some people.

The rise of AI has begun to domesticate and tame us—not unlike the way wolves were domesticated—by dulling our minds and living our lives for us. We are, in essence, quickly becoming subservient to this new technology. The crucial question is: does this actually benefit us, or are the people behind it benefiting from our cognitive losses and inability to imagine a life without AI?

If that hasn’t convinced you enough, there’s more. A recent MIT Media Lab study uncovered the cognitive debt accumulated through the use of AI for essay writing.

Researchers used an EEG (electroencephalogram) to record the writers’ brain activity across 32 regions, and found that ChatGPT users had the lowest brain engagement when writing an essay. Now, imagine using AI consistently for four years of university.

So, do you still think your excessive use of AI in school is worth it?