Augmented-reality browsers use a mobile phone’s camera to digest a scene and annotate it with web-connected flair: live directions, geotagged tweets, reviews that hover over restaurants. But the current generation of AR browsers lags on geolocation, is content-poor, and offers limited interactivity, according to a recent paper (paywall) in the Proceedings of the IEEE, the flagship journal of the Institute of Electrical and Electronics Engineers.
The researchers behind the paper, from Austria’s Graz University of Technology, argue that AR browsers, which have existed since 2009, are not living up to their potential.
Geographic certitude is at the core of everything an AR browser does. Seamless AR browsing has to fuse sensor data from the GPS (location), the compass (which way the user is facing), and the gyroscope and accelerometer (pose, or where the device is pointed). The Austrian team says that current consumer-grade hardware can’t deliver all of these readings accurately, and that future AR browsers can compensate by being smarter about how they use computational resources.
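The core placement problem those sensors solve can be sketched in a few lines: given the user’s position, a geotagged point of interest, and the compass heading the camera faces, compute where on screen the annotation should hover. This is a minimal illustration, not any browser’s actual pipeline; the coordinates, field of view, and screen width are hypothetical.

```python
import math

def bearing_to(lat, lon, tgt_lat, tgt_lon):
    """Approximate compass bearing (degrees) from the user to a geotagged
    point, using an equirectangular approximation that is fine for nearby
    points of interest."""
    d_lat = tgt_lat - lat
    d_lon = (tgt_lon - lon) * math.cos(math.radians(lat))
    return math.degrees(math.atan2(d_lon, d_lat)) % 360

def screen_x(bearing, heading, fov_deg=60, width_px=1080):
    """Map a world bearing to a horizontal pixel position, given the
    compass heading the camera faces and its horizontal field of view.
    Returns None when the point lies outside the camera's view."""
    offset = (bearing - heading + 180) % 360 - 180  # signed angle, -180..180
    if abs(offset) > fov_deg / 2:
        return None  # annotation is off-screen at this pose
    return round((offset / fov_deg + 0.5) * width_px)

# A hypothetical restaurant due east of a user near Graz; with the camera
# also facing east (heading 90°), the label lands mid-screen.
b = bearing_to(47.07, 15.44, 47.07, 15.45)
print(screen_x(b, heading=90.0))  # → 540
```

Every error in those sensor inputs propagates straight into `screen_x`, which is why drifting GPS or a miscalibrated compass makes hovering labels swim around the scene.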
For example, GPS sensors are prone to drift, which can throw off a browser’s ability to pinpoint location. The Austrian team suggests instead relying on low-resolution GPS data to get the user’s approximate position, then using image-recognition software to fill in the details. They say the browser can build a panorama out of the physical view as the user sweeps the camera around, registering landmarks that each add to the browser’s certainty about where it should place content. Street-view data from Google could also help AR browsers identify what they are looking at.
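The idea of each recognized landmark adding to the browser’s certainty can be sketched as a certainty-weighted estimate update (a scalar Kalman-style filter). This is an illustration of the principle under assumed numbers, not the paper’s method: the GPS variance, landmark variance, and measurements below are all hypothetical.

```python
def fuse(est, var, meas, meas_var):
    """One scalar Kalman-style update: blend the current position estimate
    (est, with variance var) with a new measurement, trusting whichever
    source is more certain. Returns the updated (estimate, variance)."""
    k = var / (var + meas_var)  # gain: higher when the measurement is sharper
    return est + k * (meas - est), (1 - k) * var

# Coarse GPS fix: roughly 120 m east of a reference point, +/- ~30 m
# (variance 900 m^2). Each recognized landmark pins the user down to ~5 m
# (variance 25 m^2). All numbers are made up for illustration.
est, var = 120.0, 900.0
for landmark_pos in (100.0, 102.0, 99.0):  # hypothetical vision fixes
    est, var = fuse(est, var, landmark_pos, 25.0)
print(round(est, 1), round(var, 2))  # → 100.5 8.26
```

Note how the variance shrinks with every landmark: the noisy GPS fix only seeds the estimate, and the visual registrations quickly dominate it.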
The content problem, at least, might not persist for long. At last month’s Mobile Web Conference, the three most popular AR browsers announced an agreement to use a single language, called ARML 2.0 (Augmented Reality Markup Language). The framework was also accepted by the Open Geospatial Consortium, an international group that oversees standardization for location-based services. The language doesn’t specifically address how devices will use sensors, or how users of AR browsers will interact with objects on the screen, but it will open up the platforms to developers who can create shareable, richly interactive content.
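To give a flavor of what such shareable content looks like, here is a simplified, hypothetical snippet in the spirit of ARML 2.0, parsed with Python’s standard XML library: a feature anchored to WGS84 coordinates with a text label as its visual asset. The element names echo ARML 2.0’s feature/anchor concepts but deliberately omit the spec’s GML geometries and XML namespaces, so this is a sketch rather than spec-conformant markup.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified ARML-flavored markup (not spec-conformant):
# a point of interest anchored at a latitude/longitude, plus a label.
doc = """
<arml>
  <Feature id="cafe-1">
    <name>Cafe Annotation</name>
    <Anchor>
      <Point>47.0707 15.4395</Point>
    </Anchor>
    <Label>Reviews: 4.5 stars</Label>
  </Feature>
</arml>
"""

root = ET.fromstring(doc)
for feature in root.iter("Feature"):
    # Pull out the geographic anchor and the visual asset to render there.
    lat, lon = map(float, feature.findtext("Anchor/Point").split())
    print(feature.get("id"), lat, lon, feature.findtext("Label"))
```

The appeal of a shared format is exactly this: any conforming browser can fetch the same document and know where, geographically, each piece of content belongs.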
The race is on to solve the problems with AR, especially now that Google has pulled back the veil on its own AR smartphone. Project Tango, still in prototype, doesn’t yet have a web browser. But since Google is planning to ship 200 of these devices to developers, smart money would bet that the existing AR browser companies are scrambling to get their hands on a development model. A successful company could use Project Tango’s specialized cameras and hardware to push its browser’s scene-building, depth-sensing, and map-making capabilities to the limit.