AI-fuelled polarization can amplify division, researchers and activists warn

Research findings echo student concerns that polarization is weakening grassroots organizing

Researchers say that reinforcement-learning bots can intensify polarization on social media. Graphic Olivia Shan

Polarization on social media is no new phenomenon. But researchers and student activists at Concordia University are concerned that artificial intelligence can take it to the next level—and that platforms aren’t prepared to stop it.

“Instead of being shown footage of what’s happening or content from the journalists who are reporting on it, we’re instead seeing overly dramatized AI art of things we should care about politically,” said Danna Ballantyne, external affairs and mobilization coordinator for the Concordia Student Union.

“It really distances people and removes accountability,” Ballantyne added.

Her worry comes as new Concordia research warns that artificial intelligence can supercharge polarization online.

In their paper on maximizing opinion polarization using reinforcement learning algorithms, professor Rastko R. Selmic and PhD student Mohamed N. Zareer found that reinforcement-learning bots can sow division on social media networks with minimal data by pinpointing influential accounts and inserting targeted content.

“Our goal was to understand what threshold artificial intelligence can have on polarization and social media networks,” Zareer said, “and simulate it […] to measure how this polarization and disagreement can arise.” 

The researchers' simulation showed that an AI algorithm doesn't need access to private data: simple signals like follower counts and recent posts were enough for it to learn how to spread division.

“It’s concerning, because [while] it’s not a simple robot, it’s still an algorithm that you can create on your computer,” Zareer explained. “And when you have enough computing power, you can affect more and more networks.” 

This concern goes beyond academic theory. Zareer argued that the findings raise concerns about the responsibilities of platforms and policymakers to safeguard public discourse. 

However, Zareer said, the question remains: who decides what counts as harmful manipulation and what counts as free speech?

“There is a fine line between monitoring and censoring and trying to control the network,” Zareer said.

He added that platforms risk tipping into censorship if they clamp down too aggressively, yet let the problem fester if they do nothing at all.

“If you are trying to detect these kinds of bots from happening, influencing the network, and creating echo chambers, then that’s extremely important,” Zareer said. “However, if you move the control so much, then you will not be able to share [your opinions].”

While researchers warn about AI destabilizing online discourse, student activists on campus worry that AI risks replacing lived realities.

Ballantyne said the shift from seeing real footage of events online to seeing dramatized, political AI representations undermines years of organizing rooted in personal storytelling and community relationships. Online, she said, AI-generated content can create the illusion of awareness without the substance of real engagement.

“AI completely scraps that,” Ballantyne said.

And for some students, the presence of AI on social media is a downright invasion of privacy. 

“[It’s] always there, anytime you want to type a message to someone, it’s always suggesting [something],” said Concordia student Valérie Chénier. “It’s just invasive and annoying.”

Meanwhile, Ballantyne said she believes misinformation generated by AI is advancing so quickly that it can mislead even the average digitally literate student. What once seemed laughably fake is now sophisticated enough to spread confusion, she explained, and that pace of change makes her uneasy about how student engagement and organizing will adapt.

Platforms often benefit when algorithms steer attention in particular directions, sometimes toward extreme, hateful and misogynistic content. For students who already rely on social media to engage with issues, Ballantyne worries this makes it harder to tell what's genuine.

“I kind of hope that people are able to balance that with what they see in the real world, with what they see from the lived experiences of their friends and colleagues,” she said.

This article originally appeared in Volume 46, Issue 2, published September 16, 2025.