Google Lens shows the practical, everyday value of augmented reality


The consumer world was introduced to augmented reality (AR) in a huge way last summer through Pokemon GO, a mobile app that allows users to see (and attempt to “capture”) little cartoon figures superimposed onto actual reality through their phone screens.

While interest in Pokemon GO peaked within weeks of its release, it offered a fascinating (though relatively trivial) glimpse into the potential of AR on mobile devices.

At Google’s annual developer conference, Google I/O, the search giant announced a far more practical application of AR on smartphones that should be available to consumers and enterprise users by year’s end.

Google Lens is a technology that will search for information on whatever users are viewing through their smartphone’s camera. In other words, just as Google Assistant responds to “OK Google” voice commands to find information (a weather forecast, a song, a text message, etc.), Google Lens will respond to visual data detected by a smartphone’s camera and present information on top of the actual image.

You can check it out in this video of Google CEO Sundar Pichai’s presentation at I/O. As he explains, “Google Lens is a set of vision-based computing capabilities that can help you understand what you’re looking at and help you take action based on the information.”

Google Lens can be used for something as simple as identifying an object or a location. Let’s say you’re on the road and looking for a good restaurant. As you point your camera at a particular establishment, text is superimposed over the image telling you the name of the restaurant and its average rating. This can save you the trouble of pulling out your Yelp or TripAdvisor app (or asking Google Assistant to do it for you).

This video shows how Google Lens will work within Google Assistant, and it’s pretty cool. For example, if you point your camera at a sign or menu in a different language, Google Assistant will translate it for you and show you the text in your default language. Even more compelling, users can have a conversation with Google Assistant based on the information it finds relative to the image in question.

Google Lens will be available at an unspecified date later this year, Pichai said. I’m looking forward to it.

RELATED LINKS

A rookie’s guide to mobile virtual reality for iPhone

Pokemon GO could fuel demand for augmented reality and virtual reality jobs

5 killer use-cases for augmented reality in the enterprise


Comments

  1. Khaled Soubani says:

    Fascinating. This weekend I had to look at a dozen or so images of blooms to find the name of a plant.


  2. Ronald Sonntag says:

    I recently used a nifty app on a vacation through Canada’s Jasper and Glacier national parks. The app, called PeakLens, did a pretty good job of identifying (with hovering labels) the names of the mountain peaks I pointed my Android phone at. Even a string of mountains in the distance received hovering labels. Think about the terrain maps, compass, GPS, and visual image processing all happening on a device that fits into the palm of your hand!


