Montreal’s Role at the Dawn of Artificial Intelligence

We Ask Researchers Why Siri Sucks So Much

Graphic Aiden Locke @lockedsgn

Whenever I hear the phrase “artificial intelligence,” I get sucked into a rabbit hole of pop-culture-inspired reveries. Science fiction has, from its very inception, depicted sentient artificial beings capable of empathy, self-awareness, and existentialist struggle.

Science fiction almost always gives us two visions of our future relationship with AI: We’ll either fall madly in love with artificial beings, or our sentient creations will turn on us and destroy us.

In Spike Jonze’s 2013 movie Her, the main character Theodore installs a new computer operating system designed to adapt to him, and later develops a romantic relationship with it. In Ridley Scott’s 1982 cult classic Blade Runner, Rick Deckard falls in love with a replicant named Rachael, whom he’s supposed to kill.

Through the development of AI we’re promised a better future, technologically and economically. This is especially the case in Canada and in Montreal.

Element AI is a Montreal start-up co-founded in October 2016 by leading artificial intelligence academic Yoshua Bengio and entrepreneur Jean-François Gagné. Last June, the company received $102 million USD in funding from a number of investors to research AI and offer “artificial intelligence as a service” to other companies.

At the business conference C2 Montreal last May, Bengio said that the more artificial intelligence advances, the closer we get to the dawn of a new industrial era. Notably, three big names in the technology industry, Facebook, Google, and Microsoft, launched their own artificial intelligence research labs in Montreal last year.

And the craze for AI doesn’t end at the private sector. Politicians like Justin Trudeau and Philippe Couillard have said they also want to promote the development of artificial intelligence.
Last May, the Quebec government announced a $100-million, five-year investment in the field, and last October at the University of Toronto, Trudeau said he wanted to lure Silicon Valley companies to Canada, while boasting about how much of a geek he is.

At Concordia, a new student initiative aims to jump on the hype train. The Artificial Intelligence Society Concordia held its launch event in November.

The society’s president, Abdellatif Dalab, said he wants Concordia to become part of the Montreal AI “ecosystem.”

“Students, when they think about AI, they don’t think about Concordia,” he said. “They think of McGill and the Université de Montréal.”

He said his nine-person team wants to change that by organizing workshops and events at the university. He also said they want to teach students basic ways to use the technology. One of the society’s slogans is “AI is the future.”

When we learn about investments and advances in artificial intelligence technology, we often interpret these events through the assumptions science fiction movies, television series, and novels have ingrained in us.

But then I try to use Siri on my iPhone, and I wonder how on Earth we could get there. Ask Siri for today’s news headlines and she’ll literally do a Google search for “news headlines,” which isn’t very helpful.

But why doesn’t Siri have a personality like the computer operating system in Her? Why doesn’t she even have a little bit of self-awareness like Star Wars’ C-3PO? Why does Siri suck?

I met Concordia professor Dr. Sabine Bergler, an expert in artificial intelligence and machine learning, and asked her that very question. She stared at me.

“Everyone should take a little computer science,” Bergler said. “It would help them not fall for the hype.”

The thing is, what we mean by artificial intelligence today is actually a technology called machine learning, or more precisely, deep learning, Bergler explained. Put simply, machine learning involves training a computer to recognize patterns in large sets of data so that it can later make predictions.

To accomplish this, researchers use algorithms, programs that are set up like the neurons in your brain. The “deep” in “deep learning” means that these artificial neurons are arranged in multiple layers, which allows for more complex operations. As the algorithms are trained on datasets, they tweak themselves to accomplish a task more efficiently. And that’s it.
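
For the curious, here is a minimal sketch of that idea in Python. It is nothing like Siri or any production system; it’s a toy network with two layers of simulated neurons and made-up data, but the loop where it tweaks its own weights is exactly the “training” Bergler describes.

```python
# A toy "deep learning" example: a tiny two-layer neural network that
# teaches itself the XOR pattern. Illustrative only -- real systems use
# millions of neurons and enormous datasets.
import numpy as np

rng = np.random.default_rng(0)

# Training data: four inputs and the pattern (XOR) we want learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights -- the "deep" part: neurons stacked in layers.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: the network repeatedly tweaks its weights to shrink its error.
for step in range(10000):
    hidden = sigmoid(X @ W1)       # first layer of "neurons"
    output = sigmoid(hidden @ W2)  # second layer
    error = output - y

    # Backpropagation: nudge each weight in the direction that reduces error.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

print(np.round(output, 2))  # predictions should approach [0, 1, 1, 0]
```

That’s all the “learning” there is: arithmetic, repeated until the predictions fit the data.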

The technique can be used in many ways, and it’s understandable that investors are flocking to companies that are developing the technology.

On the medical front, a team from McGill announced last August that it had trained an artificial intelligence to detect Alzheimer’s by analyzing brain scan imagery.

In transport, Waymo, Google’s self-driving car division, announced last November that its program had driven autonomously for over four million miles, far more than most people drive in a lifetime. The program learns how to drive better along the way, through the data the cars gather with their array of sensors, which includes cameras and radar.

A computer that detects Alzheimer’s disease more efficiently than doctors might seem intelligent, but we should remember that it is just a machine. It isn’t conscious. All it does is analyze lots of data very efficiently. If Siri were to use machine learning, it would be to better recognize voices or to actually find the top news headlines, not to become more “intelligent” or self-aware.

While you won’t see someone in a matrimonial relationship with their phone next year, AI technology still has giant looming issues that we need to sort out.
Machine learning relies on large sets of data, and huge amounts of it are provided by us, consumers, to private companies through our use of social media, phones, laptops, and fancy internet-connected thermostats. This, of course, raises privacy issues.

Social media’s business model is to sell targeted ads to its users. If a particular ad suits you personally, it’s because you’re exposing more of yourself to advertisers. Companies like Facebook, Twitter, and Google use artificial intelligence to better target you.

Algorithms use your data, and infer even more data from it, just so you tap on an ad for a brand new watch or a pair of sunglasses.
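
To illustrate the kind of inference at work, here is a toy sketch in Python. The users, features, and labels are all invented, and real ad-targeting systems are vastly more complex.

```python
# A made-up example of inferring data from data: given a few behavioural
# signals, a simple model scores how likely a user is to tap a watch ad.
from sklearn.linear_model import LogisticRegression

# Each row is a hypothetical user: [watch-site visits, age, hours online/day]
users = [
    [9, 34, 2.5],
    [0, 19, 6.0],
    [4, 45, 1.0],
    [7, 28, 3.5],
    [1, 52, 0.5],
    [8, 31, 4.0],
]
tapped_ad = [1, 0, 0, 1, 0, 1]  # invented labels: did they tap the ad?

model = LogisticRegression().fit(users, tapped_ad)

# Inference: a score an ad platform could use to decide who sees the ad.
new_user = [[6, 30, 3.0]]
print(model.predict_proba(new_user)[0][1])  # estimated probability of a tap
```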

But more than that, artificial intelligence is also very adept at recognizing faces, because there’s a practically infinite number of pictures available online on which an algorithm can be trained. That can be very useful for governments, and extremely dangerous for activists in authoritarian countries. For computers, identifying demonstrators in street protests could be an easy task.

Two researchers at Stanford University in California developed a proof-of-concept algorithm, trained on profile pictures from a dating site, that could detect homosexuality with 91 per cent accuracy. Imagine a tool like this in the hands of the government of a country where homosexual relationships are illegal.

“AI is like a gun,” Bergler said. “Guns don’t kill people, but they’re a very efficient tool for killing people.”

She added that she never thought she would one day be using the same rationale as the NRA, but she finds herself saying it more and more these days.

Artificial intelligence could be used for good, and it could be used to commit atrocities. “Guns don’t shoot people by themselves, people shoot people.”

The AI we have right now can’t do anything by itself; we’re the ones who give it instructions. It’s a tool, and it’s up to us to use it right, she said.

In Canada, academics and private companies are self-regulating. Notably, the Montreal Declaration for a Responsible Development of Artificial Intelligence, an initiative announced last November by the Université de Montréal Ethics Research Centre, aims to set guidelines for AI development.

The declaration says artificial intelligence should, among other aims, “promote the well-being of all sentient creatures,” “promote the autonomy of all human beings,” “seek to eliminate all types of discrimination,” “offer guarantees respecting personal privacy,” and “protect us from propaganda and manipulation.”

But it is just that: a declaration.

Back in the Cold War, the work being done today by artificial intelligence was done by humans. We called those people spies. Bergler said we should think about regulating the use of data, just as we decided to do for other resources, like forests and mines.

“Right now it’s like the Wild West,” she said.

After all, data is just a resource, a digital one rather than a natural one, and private companies are mining it for free.

A previous iteration of this article misspelled Bergler’s name. The Link regrets the error.