Humanity + Technology = 1
Achieving Technological ‘Singularity’ with Google Glass
Before his death in 1957, Hungarian-American mathematician John von Neumann said something that, even though it was only paraphrased, would profoundly shape the world to come.
He was talking to his contemporary Stanislaw Ulam, who recalled his friend speaking to him of an “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
This was the first time in recorded history the term “singularity” was used to describe the meeting of humanity and our technological creations—and like a flame catching kindling, once the spark was lit there was no going back.
Scientists, thinkers and fiction writers alike have posited ideas of what this future singularity—where artificial intelligences can think and feel—might look like, or whether it will exist at all. Concerns about the interfacing of man and machine are numerous—will we be watched, or worse, can we be controlled?
These are perhaps coupled further with underlying fears of being replaced as the dominant species by artificial intelligence, but often these fears and concerns are outweighed by very tangible and desirable benefits. Artificial intelligence doesn’t sound so alarming when it remembers your mother’s birthday and prompts you to buy her flowers on your Google Glass display, does it?
Wearable technology is just one of the arenas where technological advances are reigniting debate around the singularity, and Google Glass is one of the most prominent and publicized wearable tech products coming to market.
Since unveiling Glass last year, Google has been largely quiet on the full uses and applications of their visor display. Speaking at a TED Talk in May, Google co-founder Sergey Brin revealed the company’s philosophy surrounding Glass, positioning the product as the next step in communication following the smartphone.
“We ultimately questioned whether this is the ultimate future of how you want to connect with other people in your life, how you want to connect to information; should it be by walking around looking down?” he explained.
“In addition to potentially socially isolating yourself when you’re out and about with your phone, it’s kind of [a question of] is this what you’re meant to do with your body?
“[…] That was the vision behind Glass, and that’s why we created this form factor.”
Developers were first given access to Glass prototypes last year for app creation, but Google only released a development kit on Nov. 19, unlocking the product so developers could build offline apps and programs that use Glass’s onboard accelerometer and GPS.
However, many of Glass’s features still cannot be accessed, much to the dismay of developers.
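For developers, an app built with the kit is essentially an Android app, so reading the onboard sensors looks much like it does on a phone. The sketch below is a minimal, hypothetical example assuming the standard Android SensorManager API; the activity name and comments are illustrative, not taken from any shipped Glassware.

```java
// Minimal sketch: reading Glass's onboard accelerometer from an Android-style app.
// Names are illustrative; the sensor calls are the standard Android API.
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class AccelerometerDemoActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;
    private Sensor accelerometer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Register for sensor updates; this runs entirely on the device, no network needed.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        // React to head movement here, e.g. update what the display shows.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed for this sketch */ }
}
```

Because the readings stay on the device, nothing in a sketch like this needs a cloud round trip, which is what the offline unlock makes possible.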
Opening Glass
Brandyn White is a 27-year-old Glass developer and CEO of Dapper Vision, a computer science consulting and development firm that focuses on mobile and cloud applications. One of the company’s projects is Open Shades, which facilitates new software development for Glass displays. It’s the kind of thing White says Google is relying on developers to create.
“[Google] is taking things a little bit slow, and they’re staying kind of focused on more phone-like activity. They’re making Glass like an evolutionary step from a phone,” he said.
“As a group we really want to demonstrate and show what the platform is capable of by making our own software and just promoting the concept [of interfacing with wearable technology].”
Some of the programs built through Open Shades include eye tracking and web control applications, as well as augmented reality software that allows a user to incorporate or interact with physical objects using their Glass display.
“We want to get people to be able to prototype really ambitious applications such as those of augmented reality really quickly,” added Open Shades developer Scott Greenwald.
According to White, a PhD student at the MIT Media Lab, this fast turnaround between having an idea and creating it is also propelling development of applications with artificial intelligence.
“We’re working on [context-centred software]; for example you can create scripts and applications that understand what you’re doing right now, so it can essentially be an extension of your current state,” he said.
“You don’t have to say, ‘Oh, well I’m about to leave the house, should I take anything with me?’ It could know you’re leaving the house and [know] that it’s raining outside so it says, ‘Don’t forget to bring an umbrella,’ for example,” he added.
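The behaviour White describes boils down to combining context signals into simple rules. The sketch below is purely illustrative, written in plain Java with hypothetical isLeavingHome() and isRaining() checks standing in for real signals such as a geofence exit or a weather lookup; none of it comes from his project.

```java
// Illustration only: a context rule of the kind White describes.
// Both context checks are hypothetical stand-ins for real signals.
public class ContextRules {

    interface ContextSource {
        boolean isLeavingHome();  // e.g. a geofence-exit event from location services
        boolean isRaining();      // e.g. a cached forecast for the user's area
    }

    /** Returns a reminder to show on the display, or null if nothing applies. */
    public static String umbrellaReminder(ContextSource context) {
        if (context.isLeavingHome() && context.isRaining()) {
            return "Don't forget to bring an umbrella";
        }
        return null;
    }
}
```

The point of the idea is that the user never has to ask; the rule fires from the device’s running model of what they are doing.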
But there are privacy and security drawbacks to technology that monitors you, White continued, which is why he says user data needs to be safeguarded.
“As a device gets a lot more personal and more intimately tied to your day to day activities you have to really trust it,” he said.
“It can’t be something that’s going to be marketed towards you or using your information in ways that are not okay with you—it has to be something that you can trust with everything you’re doing all the time. And if that’s the case, then it can augment everything you’re doing and make it just a little bit better.”
Future Undefined
But while these applications use AI to adapt to their users, Glass is not conscious, nor does it display free will.
Despite how far we have come, the singularity is still a little ways off.
Dr. Osama Moselhi, a building, civil and environmental engineering professor at Concordia, says that while his background in applied sciences keeps him from making the kind of educated guess a computer science expert could, he believes the singularity can’t be too far off in the future.
“From what I know, it will be less than 10 years for sure that will happen, because it’s happening now on an experimental scale,” said Moselhi, who uses AI in his research to mine for data and apply dynamic organization or synchronization to systems used in construction.
As for White, he says devices like Glass will help society at large grow accustomed to human-interfacing technology as we continue our march toward progress.
“This is obviously the closest we’ve ever been to [the singularity] on a mass scale and I think that it definitely puts us so much closer than we were able to be ever before.”