How Google Glass Works
Introduction to How Google Glass Works
According to string-theory advocates, our universe has at least 10 dimensions. But we humans can only directly perceive three spatial dimensions. We also experience the passage of time, a fourth dimension. Beyond that, we only know other dimensions are even possible through theoretical mathematics. Our universe may hold secrets that we will never be able to observe directly.
Even if you discount string theory and the idea of dimensions beyond our perception, our world contains a wealth of information that most of us aren't aware of in our daily lives. When visiting a city for the first time, for example, we may only have our senses to rely on when gathering information. A smartphone or computer can help out, pulling in more data about the city's geography, history, economics, cuisine and other cultural features.
Augmented-reality applications overlay a layer of digital information on top of the physical world around us. With one of these apps on your smartphone, you might hold your phone's camera up to capture the image of a city street and then receive on-screen information about your surroundings.
While these augmented reality apps can be informative and entertaining, the form factor is still a little clunky. We have to hold up the smartphone and look at the screen -- it's like you're on a "Star Trek" away team, and you're the one with your eyes glued to a tricorder instead of drinking in the sights.
Google is one of several companies whose solution to the problem comes in the form of a wearable device. It looks like a pair of glasses with one side of the frames thicker than the other. It's called Google Glass, and it might open your eyes to a new digital world -- or make you look like the nerdy Terminator who's always picked last for robo-dodgeball.
The Birth of Google Glass
One of Google's many divisions is called Google X. Descriptions from visitors make it sound like it's equal parts computer lab and mad scientist's lair [source: Miller and Bilton]. Projects at Google X tackle big problems in engineering. Everything from networked homes to space elevators gets a shot within the lab. One of the many projects to come out of the division is Project Glass.
Back in April 2012, a Project Glass account appeared on Google's social networking platform Google Plus. The account's first post revealed the purpose of the project -- to build a wearable computer that helps you "explore and share your world" [source: Google Glass]. The post included a concept video of what the project -- a pair of glasses -- might be able to do in the future.
In other posts and articles, Google released more details about the glasses. Some versions had no lenses. What all versions did have was a thick area of the frame over the right eye. This is where Google put the screen for the glasses. To look at the screen, you have to glance up with your eyes. The placement was important -- putting the screen in your direct line of vision could result in some serious safety problems.
It wasn't long after Google released the concept video that people got a chance to see a pair of the glasses in real life. Google co-founders Sergey Brin and Larry Page wore the high-tech specs to events in late spring of 2012. And at the Google I/O event on June 27, 2012, Google gave attendees a thrilling demonstration of the technology.
Google I/O took place inside the Moscone Center in San Francisco, but the first part of the demonstration was outside the building. To be more specific, it was a few thousand feet above the building. Google had outfitted a skydiving team riding in a blimp with Google Glass, and had set up a Hangout -- a video chat on the Google Plus platform -- with the team. Footage from the Glass cameras caught all the action. The team jumped from the blimp and maneuvered so that they landed on the roof of the Moscone Center.
The demonstration didn't end there. Expert bicyclists, also wearing Glass, did tricks on top of the Moscone Center's roof until they reached the edge of the building. Then, a man wearing Glass rappelled down the side of the building and handed off a package to another biker. That biker weaved through the conference center to get to the stage and hand the package off to Sergey Brin.
The audience watched the whole thing happen on a giant screen as footage from the various glasses played out in front of them [source: Google Developers]. Afterward, members from the Google X team in charge of the project talked about the philosophy behind the eyewear. Brin then returned to the stage to announce that Google planned to ship a developer pair of the glasses called the Explorer Edition in early 2013 for $1,500. That's still the cost of the Explorer glasses, but it may not be the final price for Google Glass once it hits the general consumer market.
What Google Glass Does
Upon launch of the Explorer beta program, Google Glass owners could use their specs' specs to take photos and videos, hold video chats, run voice searches and get turn-by-turn directions.
As of the publication of this article in early 2014, Glass can't overlay digital information on top of physical locations. But imagine looking at a building and seeing the names of the businesses inside it or glancing at a restaurant and being able to take a peek at the menu. With the right application, you could apply dozens of filters to provide different types of information.
For example, let's say you're in London, sporting your snazzy Glass glasses. You take a look at the new Globe Theatre and ask for more information. You're given choices -- do you want to learn about the history of the original Globe Theatre? Would you like to learn about the new version that opened in the 1990s? Or maybe you just want to see what productions are running this season. Google Glass could potentially provide you all of that information.
Looking even further into the future, you might be able to use Google Glass to help you keep track of the people in your life or learn more about the people you meet. With facial recognition software and social networking, it's possible you could take a look at someone you've just met and see their public profiles on any number of social platforms. (If that sounds potentially creepy to you, you're not alone -- we'll talk about the criticisms of this feature in a bit.)
Google Glass is tightly packed with chips, sensors and feedback devices. Let's take a look under the hood -- or, rather, behind the lens.
What makes Google Glass work?
If you were to take apart a Glass, two things would likely happen: You'd discover the components that make the glasses work and you'd feel a deep sense of regret for ripping apart a $1,500 gadget. Fortunately, other people have already done this on your behalf.
There are a few different ways to control Google Glass. One is the capacitive touchpad along the right side of the glasses. The touchpad generates a weak electrostatic field across its surface; when your finger makes contact with the panel, a controller chip detects the resulting change in capacitance and registers it as a touch. Swiping your finger horizontally lets you navigate menus on the device. Swiping downward on the touchpad backs you out of a choice or, if you're at a top-level menu, puts the glasses in sleep mode.
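The touchpad behavior described above can be sketched as a small state machine. This is a hypothetical illustration -- the gesture names and menu entries are assumptions, not Google's actual API:

```python
# Hypothetical sketch of the gesture-to-navigation behavior described above.
# Gesture names and menu items are illustrative, not Google's real interface.

class GlassMenu:
    def __init__(self, items):
        self.items = items      # top-level menu entries
        self.index = 0          # currently highlighted entry
        self.asleep = False

    def handle_gesture(self, gesture):
        if gesture == "swipe_forward":    # horizontal swipe: next item
            self.index = (self.index + 1) % len(self.items)
        elif gesture == "swipe_back":     # horizontal swipe: previous item
            self.index = (self.index - 1) % len(self.items)
        elif gesture == "swipe_down":     # downward swipe at top level: sleep
            self.asleep = True
        return self.items[self.index]

menu = GlassMenu(["ok glass", "timeline", "settings"])
menu.handle_gesture("swipe_forward")   # highlights "timeline"
menu.handle_gesture("swipe_down")      # puts the device to sleep
```

The point of the sketch is that the touchpad only ever reports simple one-dimensional gestures; all the navigation logic lives in software.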
Another way to control Google Glass is through voice commands. A microphone on the glasses picks up your voice and the microprocessor interprets the commands. You can't just say anything and expect Google Glass to respond -- there's a set list of commands that you can use, and nearly all of them start with "OK, Glass," which alerts your glasses that a command will soon follow. For example, "OK, Glass, take a picture" will send a command to the microprocessor to snap a photo of whatever you're looking at.
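The hotword-plus-command pattern described above can be sketched as a simple dispatcher. The command list and the strings returned are assumptions for illustration; Glass's real speech pipeline is far more sophisticated:

```python
# Minimal sketch of hotword-prefixed command dispatch, assuming a fixed
# command list as described in the text; not Google's actual speech system.

HOTWORD = "ok, glass"

COMMANDS = {
    "take a picture": lambda: "camera: photo captured",
    "record a video": lambda: "camera: recording started",
    "get directions": lambda: "nav: route requested",
}

def dispatch(utterance):
    text = utterance.lower().strip()
    if not text.startswith(HOTWORD):
        return None                       # ignore speech without the hotword
    command = text[len(HOTWORD):].strip(" ,")
    action = COMMANDS.get(command)
    return action() if action else None   # unknown commands are ignored

print(dispatch("OK, Glass, take a picture"))  # -> camera: photo captured
```

Requiring the "OK, Glass" prefix is what keeps the device from misfiring on ordinary conversation: anything that doesn't start with the hotword is simply discarded.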
As of early 2014, the processor in the Explorer version of Google Glass is from Texas Instruments. It's an Open Multimedia Applications Platform (OMAP) chip. These chips belong to a larger class of microchips called systems on a chip, meaning multiple components work together on a single chip -- in this case, an ARM-based microprocessor, video processors and a memory interface. According to Texas Instruments' specifications, the chip can play video at 1080p resolution and 30 frames per second.
The main circuit board also houses a SanDisk flash drive -- 16 gigabytes' worth of storage, though only 12 gigabytes are available to the user. A company called Micron Memory (formerly known as Elpida) supplied the dynamic random access memory (DRAM) chip. The flash memory stores media and apps, while the DRAM provides the working memory the processor needs to run programs on the Glass.
While you can use Google Glass to take photos and videos without having it connect to the outside world, to get the most from the product you'll need to connect to the Internet. The two ways to do that are over Bluetooth (connecting to some other device, such as a smartphone) or WiFi. A single chip inside Google Glass provides support for either type of connection. Another chip, the SiRFstarIV, is a global positioning system (GPS) microchip that allows Google Glass to determine its location via satellite signals [source: Torborg and Simpson].
Cameras, Speakers and Sensors, Oh My!
While the guts of Google Glass are interesting, the most eye-catching component is the prism-like screen. When turned off, it appears to be a clear prism. Viewed from the top, you can just make out a diagonal line that bisects the prism's width. This diagonal line is where the prism has an angled layer that acts as a reflective surface.
Images from Google Glass project onto the reflective surface in the prism, which redirects the light toward your eye. The images are semi-transparent -- you can see through them to the real world on the other side. As of early 2014, the resolution for the display is 640 by 360. It's not exactly high definition, but at such a close distance to your eye it doesn't appear to be low resolution.
If you look just to the side of the display toward the outer edge of the glasses, you'll see a camera lens. According to Google, the camera can take photos with a resolution of 5 megapixels. It can also capture video at 720p resolution.
The speaker on Google Glass is a bone conduction speaker. That means the speaker sends vibrations that travel through your skull to your inner ear -- there's no need to plug in ear buds or wear headphones. Using the camera and speaker together allows you to make video conferencing calls. Just know that the person on the other end of the line will be seeing what you're seeing since there's only a forward-facing camera on the glasses.
Also on board the glasses are a proximity sensor and an ambient light sensor. These sensors help the glasses figure out if they are being worn or removed. You can choose to have your Google Glass go into sleep mode automatically if you take them off and wake up again when you put them on. These sensors can also detect if you take an action such as winking, which opens up the option to send commands such as "take a picture" just by giving a big wink. (Yep, there's absolutely nothing creepy about that.)
One last sensor inside Google Glass is the InvenSense MPU-9150. This chip is an inertial sensor, which means it detects motion. This comes in handy in several applications, including one that allows you to wake up Google Glass from sleep mode just by tilting your head back to a predetermined angle.
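Tilt-to-wake boils down to estimating head pitch from the direction of gravity in the accelerometer readings and comparing it against a threshold. The 30-degree threshold and the axis convention below are assumptions for illustration (the real wake angle is user-configurable):

```python
# Sketch of tilt-to-wake: estimate head pitch from gravity as measured by an
# accelerometer, and wake the device past a threshold angle. The threshold
# and axis convention here are illustrative assumptions.

import math

WAKE_ANGLE_DEG = 30.0   # assumed threshold; the real angle is configurable

def pitch_degrees(ay, az):
    """Head pitch estimated from gravity (readings in units of g): with the
    head level, gravity lies along the y axis; tilting the head back rotates
    it toward the z axis."""
    return math.degrees(math.atan2(az, ay))

def should_wake(ay, az):
    return pitch_degrees(ay, az) >= WAKE_ANGLE_DEG

print(should_wake(1.0, 0.0))   # head level -> False
print(should_wake(0.7, 0.7))   # tilted back about 45 degrees -> True
```

Because gravity is always present in the accelerometer signal, no extra hardware is needed -- the same inertial chip that tracks motion doubles as the wake trigger.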
All of these chips and features need power to work. That power is provided by a battery housed in a wide section of the stem. It fits behind your right ear. It's a lithium polymer battery with a capacity of 2.1 watt-hours. Google says that charging the battery takes just 45 minutes when using the charging cable and plug that come with the glasses [source: Torborg and Simpson].
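A 2.1 watt-hour capacity makes for easy back-of-the-envelope runtime estimates: hours of use equal capacity divided by average power draw. The draw figures below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope runtime from the 2.1 Wh capacity stated above.
# The average power draws are illustrative assumptions, not measurements.

CAPACITY_WH = 2.1

def runtime_hours(avg_power_watts):
    return CAPACITY_WH / avg_power_watts

print(runtime_hours(0.5))   # light use at an assumed ~0.5 W -> 4.2 hours
print(runtime_hours(2.0))   # video recording at an assumed ~2 W -> about an hour
```

The arithmetic explains why early Explorers reported that heavy camera use drained the battery far faster than glancing at notifications did.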
Not everyone has greeted the news of Google Glass with enthusiasm. Some, like Internet security expert David Asprey, have voiced concerns about the product and its implications.
Part of Asprey's apprehension stems from the wording of Google's terms of service for Google Drive, a cloud-storage service. Included in those terms of service is this passage [source: Connelly]:
"When you upload or otherwise submit content to our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. This license continues even if you stop using our Services (for example, for a business listing you have added to Google Maps)."
At first glance, this makes it sound like any data you store in Google Drive effectively becomes Google's property. Google representatives say this is to allow Google to display your data in different ways [source: Google Terms of Service]. Let's say you've uploaded a file to Google Drive and you choose to make it publicly available. When people search for terms related to your file, that file should show up in search results. This part of the terms of service gives Google permission to display part of that file within the search results themselves.
Asprey's point is that the wording of terms like this one seems to give Google more control over user data than it should have. He also points out that with facial recognition software, the glasses could raise privacy issues.
Another concern is that Google could use the eyewear as a platform for collecting personal data and serving ads. As you go about your day wearing these glasses, Google could create a virtual profile. Based on your behaviors and location, Google could potentially serve up what it considers to be relevant advertisements to the screen on your glasses.
And then there are concerns that receiving social networking updates in your field of vision could impair your ability to do other tasks, such as driving. Google's response to this reaction was to point out that the screen in Google Glass eyewear is positioned at the top of the frame, requiring you to look up with your eyes to see it. You shouldn't have to worry about text messages or pop-up ads obscuring your view as you wear the glasses.
Even with the Explorer program in full gear, it's too early to say whether these concerns are warranted. It may be that the glasses never make it to consumer shelves. But privacy advocates warn that we should think about the possible consequences now, before they become real problems later.
Google representatives have already addressed some concerns and say they welcome feedback. It's likely that the consumer version of the glasses -- assuming the project gets that far -- will be different from the prototype versions. Perhaps by then, Google will have found a way to let people dive into a data-rich environment while still protecting their privacy.
I wanted a pair of these glasses as soon as I first heard rumor of them. I'm an information junkie. I love the idea of exploring the world with a pair of glasses that can give me data about every aspect of my surroundings. In the summer of 2013, I got the opportunity to become a member of the Explorers program and jumped at the chance. When I wear them around conventions, other people spend about as much time taking pictures of me wearing Glass as I spend taking pictures with my Glass. It's obvious that I'm not the only person fascinated with this technology!