Reality++

Graphic by Clément Liu & Erin Sparks

Concordia Researchers Are Exploring the Future-Morphing Possibilities of Augmented Reality Systems. Get Ready to Play an Eggplant Like It’s a Musical Instrument.

Augmented reality is a bit of a buzzword these days among startups and tech junkies. It’s begun to take the app world by storm, from games that change depending on your location to interactive marketing wrapped in a gimmicky package.

And while these mobile time-killers may seem like a flavour-of-the-month smartphone craze, augmented reality’s potential is far greater than what’s available for mass consumption today.

AR blurs the line between Internet space and lived-in space like never before. While the technology today delivers mostly kitsch value and not-so-reliable functionality, it opens up a future where these two spaces become inseparable in everyday life.

As the name suggests, this heightened world will come to elevate what we know as reality, becoming more pervasive in our day-to-day activity as our technological means grow.

On the mobile front, progress in AR has focused on image-recognition software working in concert with the camera and GPS hardware on your phone. Based on your location and what you’re viewing, relevant data is pulled from the Internet to add to—or replace—elements of real life.

For now, it’s a process of mapping and marking the world so that the apps can recognize the programmed connections, but as these apps grow, the breadth of incoming data does too.
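To make that loop concrete, here is a minimal sketch in Python of how a mobile AR app might pull overlay data once the camera and GPS have done their part. The endpoint, its parameters and the coordinates are all placeholders invented for illustration, not a real service:

```python
import requests

# Hypothetical points-of-interest service; a stand-in for whatever
# backend a real AR app would query. Not an actual endpoint.
POI_ENDPOINT = "https://api.example.com/poi"

def fetch_overlays(lat, lon, recognized_label):
    """Pull nearby data for the user's location plus a label produced
    by on-device image recognition, e.g. 'storefront' or 'statue'."""
    response = requests.get(POI_ENDPOINT, params={
        "lat": lat,
        "lon": lon,
        "label": recognized_label,
        "radius_m": 50,  # only annotate what is plausibly in view
    })
    response.raise_for_status()
    # Each overlay pairs a real-world anchor with the text to draw over it.
    return response.json()["overlays"]

# Illustrative coordinates (roughly downtown Montreal) and label.
for overlay in fetch_overlays(45.497, -73.579, "building"):
    print(overlay["name"], "--", overlay["distance_m"], "m away")
```

The heavy lifting, recognizing what the camera actually sees, happens on the device; this sketch covers only the data-pulling half of the pipeline.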

Microsoft recently filed a patent for augmented reality smart glasses that would give users a heads-up display of player stats while watching a baseball game in real time. Earlier this year, Google unveiled Project Glass, a visor-like headpiece that could deliver the Google services already offered through Android.

Theoretically, in the near future you could get instructions on how to use a tool, or pull up the weather forecast, just by “looking” at something with an AR device. Objects could be uniquely identified from pre-existing data about them online, making your physical world a fully wired system encapsulating everything you do.

“Transmutation”

“It’s as if we made the pinecone out of glass,” said Dr. Sha Xin Wei, playing a video of Concordia electroacoustics grad Navid Navab experimenting with auditory augmented reality.

Using contact microphones, which register vibrations travelling through objects rather than through the air like conventional microphones, Navab processes the natural sound of objects into something more immersive.

“Recording at a sample rate of 44,100 Hz, each tiny moment can be captured and processed, making the transmutation as fluid and complex as your real-life interaction with an object,” explained Sha.
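One plausible way to “make the pinecone out of glass” in software is to run the contact-mic signal through a small bank of resonant filters tuned to glass-like modes, so every tap on the object rings with the glass’s partials instead of its own. The sketch below shows that general technique, not the lab’s actual patch; the mode frequencies and decay times are invented for illustration:

```python
import numpy as np
from scipy.signal import lfilter

SR = 44100  # CD-quality sample rate, as in the lab's recordings

def resonator(signal, freq, decay, sr=SR):
    """Two-pole resonant filter: rings at `freq` and dies away over
    roughly `decay` seconds, like one struck mode of a glass object."""
    r = np.exp(-1.0 / (decay * sr))           # pole radius from decay time
    theta = 2 * np.pi * freq / sr             # pole angle from frequency
    a = [1, -2 * r * np.cos(theta), r ** 2]   # denominator coefficients
    return lfilter([1 - r], a, signal)

def glassify(contact_mic, modes=((1280, 0.9), (2900, 0.6), (5100, 0.3))):
    """Sum a small bank of resonators so taps picked up by the contact
    mic excite glass-like partials instead of pinecone-like ones."""
    out = sum(resonator(contact_mic, f, d) for f, d in modes)
    return out / np.max(np.abs(out))          # normalize to avoid clipping
```

Because the filtering runs sample by sample, how hard and where you strike the object still shapes the result, which is what keeps the transmutation feeling physical.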

Sha, an associate professor in the Design and Computation Arts department, is the director of Concordia’s Topological Media Lab, a multidisciplinary space that combines the technological, the ecological and the philosophical.

Several of the projects underway at the lab employ augmented reality—such as the contact microphone setup—but not in the screen-centric way typically associated with AR today.

“The object transmutes itself depending how we touch it,” said Sha.

“It’s as if, if you were playing a guitar, the strings were getting bigger or thinner, or changing from gut to steel or copper wire, depending on how you’re stroking it. It’s morphing underneath your fingers. That’s what we do.”

And it’s that almost natural relationship between real-world objects and the program’s output that, when done right, makes augmented reality so, well, real.

From vibrations to images to gestures, through sophisticated processing, it’s becoming possible to move through a room “as if the air was made of barbed wire, or beans,” in the words of Sha.

Because of the precision of the programming, the system’s response is enough to fool the senses. In the case of Navab’s work, it can turn something as unmusical as an eggplant into an evolving percussion instrument.
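In rough outline, an eggplant-as-drum could work by spotting each tap in the contact-mic stream and triggering a synthesized hit whose loudness follows the strike, with one parameter drifting per hit so the instrument keeps evolving. Again, this is a sketch under those assumptions rather than Navab’s actual system; the thresholds, pitches and decay envelope are made up:

```python
import numpy as np

SR = 44100

def detect_onsets(signal, threshold=0.2, refractory=0.05):
    """Index of each strike: samples where the (normalized) signal
    jumps past `threshold`, at least `refractory` s after the last hit."""
    onsets, last = [], -np.inf
    env = np.abs(signal)
    for i in np.flatnonzero(env > threshold):
        if (i - last) / SR >= refractory:
            onsets.append(i)
            last = i
    return onsets

def percussion_hit(energy, base_freq, dur=0.4):
    """One synthesized drum hit: a decaying sine whose loudness
    tracks how hard the object was struck."""
    t = np.arange(int(dur * SR)) / SR
    return energy * np.exp(-t * 12) * np.sin(2 * np.pi * base_freq * t)

def eggplant_drum(contact_mic):
    """Overlay a synthetic hit on every detected tap; drift the pitch
    a little per strike so the instrument evolves as you play."""
    out = np.zeros(len(contact_mic) + int(0.4 * SR))
    freq = 180.0
    for i in detect_onsets(contact_mic):
        hit = percussion_hit(abs(contact_mic[i]), freq)
        out[i:i + len(hit)] += hit
        freq *= 1.03  # each tap nudges the drum's voice upward
    return out
```

A real-time version would run the same logic on small buffers of live input instead of a whole recording, but the mapping from gesture to sound is the same idea.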

“Why use a computer? Of course we can play a guitar made of wood; it’s already there. With what we can do here, we can mutate it, on the fly. And we can’t do that with a physical object,” said Sha.

Making Physical Ground

The idea of augmented reality got a publicity boost in 2009, after a TED talk revealed SixthSense, a prototype system developed at the MIT Media Lab, to the world.
The gadget combines a smartphone with a camera, projector and a few other easily found objects, allowing users to pull data from the Internet instantaneously—simply by interacting with an object or person as they already would in real life.

AR has proliferated since then, but for now (at least until a reliable consumer heads-up display enters the market), the phenomenon is limited to being experienced on a screen, processed through the lens of a camera.

“I’m not a fan of how we always have to have a screen between, and I’m not sure how we can overcome that eventually,” said Nikos Chandolias, a graduate computation arts student who works in the TML on responsive processing for sound input. “[But] I’m really interested in how you could actually interact with the object projected.”

Given the amount of work being done with AR, however, things are only going to get more immersive and complex. Progress is bound as much by the limits of our imagination as by technological development.

“Augmenting, to me, is taking an ordinary thing and making it magic,” said Sha, “by adding some computation as you need to.”