Rising use of deepfakes threatens online sex workers

With recent developments in AI, calls for deepfake regulation may fall on deaf ears

Graphic Jude M

In late January, sexually explicit images featuring Taylor Swift’s face superimposed onto pornographic content circulated on X. This type of fabricated media is known as a deepfake.

Deepfaking is the process of taking the image or likeness of one person and superimposing it onto pre-existing video, audio or images of another person to create new, synthetic media.

Although Swift’s team was able to take down the images, swift justice has not been achieved for many other victims of non-consensual deepfaking, as nationwide deepfake regulations have yet to be implemented in Canada.

According to a report by Home Security Heroes, 95,820 deepfake videos were posted online in 2023, an increase of 550 per cent since 2019. It found that 98 per cent of this content was pornographic, with 99 per cent of deepfake pornography targeting women.

This rapidly advancing technology can range from face swaps to the digital removal of clothing. As deepfake technology becomes more accessible, thousands of videos have been uploaded online using pre-existing pornographic content. Advocates say it now threatens the livelihood of online sex workers.

Ali, whose last name was omitted for safety reasons, is an online adult content creator. She is one of thousands of women concerned about the unethical use of their content to create deepfakes.

“When you’re doing this kind of work, you have to be okay with the risk of exposure happening,” Ali said. “But I think things are going to get very bad […] and a lot of people are going to get fucked over.”

Ali has been producing content for the better part of a decade. During her career, she’s witnessed a number of trends within the industry, such as the uploading of personalized content through websites like OnlyFans. However, the advancement of deepfake technology blindsided Ali. “I'm worried about that stuff (deepfakes) a lot more because if there's no way to detect it visually and it gets better, that’s what scares me.” 

Ali has also become a victim of digital content theft. “I’ve been recorded once or twice and put on Pornhub without any consent,” she said. “And facing the reality of being recorded without consent was very jarring.”

For Ali, recent advancements in artificial intelligence (AI) are a constant worry. Her anxiety is not limited to deepfakes, as developments in AI voice replication and art generation have left her feeling like she’s lagging behind.

Deepfaking is built on machine learning, similar to text and image generators. The AI is fed large amounts of data and learns to reproduce the patterns it finds.

Deepfake models are typically trained using generative adversarial networks. These include two separate parts, referred to as the “discriminator” and the “generator.” The generator produces synthetic images, while the discriminator tries to tell them apart from real ones; trained against each other, the two push the final product ever closer to the desired likeness. "It's pretty accessible as well... the public can just download and use it,” said Jackie Cheung, associate scientific director at Mila, the Quebec AI Institute.
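The generator-versus-discriminator training loop described above can be sketched in a few lines of code. The toy example below works on single numbers rather than images, and every name and constant in it is an illustrative assumption, not taken from any real deepfake system: the generator learns to produce numbers that imitate "real" data centred on 4.0, while the discriminator learns to tell the two apart.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

def real_batch(n):
    # "Real" data the generator must learn to imitate: values centred on 4.0.
    return [random.gauss(4.0, 1.0) for _ in range(n)]

# Generator: turns random noise z into a sample, y = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, steps, n = 0.02, 4000, 64
for _ in range(steps):
    # Discriminator step: raise D on real samples, lower it on fakes.
    xs = real_batch(n)
    ys = [a * random.gauss(0.0, 1.0) + b for _ in range(n)]
    dr = [sigmoid(w * x + c) for x in xs]
    df = [sigmoid(w * y + c) for y in ys]
    w += lr * (mean([(1 - p) * x for p, x in zip(dr, xs)])
               - mean([p * y for p, y in zip(df, ys)]))
    c += lr * (mean([1 - p for p in dr]) - mean(df))

    # Generator step: nudge (a, b) so the discriminator mistakes fakes for real.
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]
    grads = [(1 - sigmoid(w * (a * z + b) + c)) * w for z in zs]
    a += lr * mean([g * z for g, z in zip(grads, zs)])
    b += lr * mean(grads)

print(f"after training, fakes are centred near {b:.2f} (real data: 4.0)")
```

In a real deepfake system, the generator and discriminator are deep neural networks and the samples are video frames rather than numbers, but the adversarial back-and-forth is the same.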

Although Ali has yet to encounter anyone who’s had their work used in deepfaking videos, she felt ill-equipped to face the ethical dilemma posed by deepfaking AI. “I'm worried about that stuff a lot more because if there's no way to detect visually that it could look real […] there’s nothing I can do,” she said.

Cheung acknowledged the ethical questions raised by advancing deepfake technology, suggesting that "companies or developers should try to have techniques that detect when it's being applied to sensitive or inappropriate inputs."

This sentiment was echoed by creative technologist Julia Anderson. As a user experience (UX) designer, Anderson works at the forefront of designing ethical AI. “If someone is using the image or the likeness of someone, and is gaining profit from it, that’s essentially like plagiarism [...] and that can be really damaging for [victims’] reputations.” 


Anderson suggested developers create accompanying detection tools for their deepfake generators. Another temporary solution could be watermarking the content. However, Anderson argued that any fundamental change would have to be done on a legislative level.

The Quebec government recently released a report aptly named Ready for AI, which outlines experts’ recommended steps for how the government should approach the technology going forward. At the time of its release, the report’s overseer, Quebec’s chief innovator Luc Sirois, urged that AI “needs a legal framework.” 

On March 13, member states of the European Union approved legislation on AI that labels systems according to their threat level and requires supervision at every stage of their development.

In Canada, several provinces, including B.C., have introduced intimate image protection laws, covering both real and altered images. B.C.'s laws enable victims to seek removal of deepfaked images and pursue damages through a civil tribunal, imposing fines on individuals and websites for non-compliance. Quebec has yet to do something similar.

Pierre Trudel, a professor at the Université de Montréal Law School specializing in information and technology law, believes a solution is coming soon. "It can be expected that Quebec will table a bill on AI in the next few months,” he said.

According to Trudel, a person whose likeness is used without consent already has the option to sue. “Now, actually taking a case [involving deepfakes] to court is an entirely different story,” said Trudel. “It’s easier said than done.”

Although seeking justice through the legal system is an option, it may not be one openly available to sex workers in Montreal who work under the veil of privacy. "The most damaging point about the situation is that most [sex workers] are still hidden,” said Francine Tremblay, a faculty member from the sociology and anthropology department at Concordia University. “How many women in the sex industry are ready to denounce [illegal activities] when they're still criminalized?"


Tremblay has worked closely with Montreal’s community of sex workers for decades. She believes that AI and deepfaking will only compound the complexities of sex work in the digital age. Tremblay explained that sex workers in Montreal currently have limited control over their labour, and that legal recourse to protect one’s content shouldn’t come at a high cost. “The rights applied to celebrities. The speed at which [Taylor Swift] had those posts removed. That should apply equally to sex workers.”

For Ali, prospects of getting justice for her stolen content seemed slim, regardless of the calibre of legal aid. "You can sue someone, but [going to court], it's a whole different battle [...] it's going to be a process to get to a point where that’s my body or that's my face."

Although deepfaking has been connected to exploitation, the technology has also been used to create art. Dr. Suzanne Kite, an Indigenous artist and academic at Bard College, was an early adopter of machine learning, including deepfake technology, which she has used in several of her works. Kite has worked closely with Indigenous communities on interacting with AI and creating works of art. She explained that AI-based exploitation is magnified in a capitalist context where, according to her, AI tools and artists are misused for profit.

"There's always a threat that people are going to abuse these things,” Kite said. “The real problem with it isn't AI, it's the fact that people need to scam each other to survive economically." 

This article originally appeared in Volume 44, Issue 12, published March 19, 2024.