There are two types of augmented reality, explained Matt Grob, Sr. VP of Engineering and Head of Corporate Research and Development at Qualcomm, this morning at the Summit at Stanford. The most common form of augmented reality
is compass-based AR. It works reasonably well, but it is subject to errors — the compass readings themselves can be off. So while a phone's compass makes augmented reality applications possible, the model has fundamental limitations.
The better model is vision-based AR. To give an example of what Qualcomm is working on, Grob showed a video of a Rock 'Em Sock 'Em Robots app developed in partnership with Mattel.
Vision-based augmented reality uses a smartphone’s camera as its sensor. “Vision is the key enabler," Grob said. "You start with a real-world view with virtual content on top. This enables a more immersive experience.”
“We’re seeing the transition from looking at things on your phone to looking at things with your phone,” Grob said. “Your phone is a portal to a magical world where there’s incredible stuff.”
There are several parts to the process. First, the app scans the scene with the camera, detecting an object’s natural features and patterns. It then compares those features against a database of known images; if the image matches, computer vision techniques determine the object’s XYZ coordinates.
After this, the application renders graphical overlays, with computations running 30 times per second. The tool needs to be low-power and needs to be offered at a reasonable cost.
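The recognition step described above — comparing features detected in the camera frame against a database of known images — can be sketched roughly as follows. This is a simplified illustration using toy binary descriptors (the kind produced by detectors such as ORB or BRIEF) and a Hamming-distance nearest-neighbor match; the function names, data, and threshold are hypothetical, not Qualcomm's actual implementation.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_to_database(frame_descriptors, database, max_distance=10):
    """Return the database image whose descriptors best match the frame.

    `database` maps image names to lists of integer descriptors.
    A frame descriptor counts as a match if its nearest database
    descriptor is within `max_distance` bits.
    """
    best_name, best_score = None, 0
    for name, ref_descriptors in database.items():
        score = sum(
            1 for d in frame_descriptors
            if min(hamming(d, r) for r in ref_descriptors) <= max_distance
        )
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy 8-bit descriptors for two reference images (illustrative only).
database = {
    "robot_toy": [0b10110010, 0b01101100, 0b11110000],
    "cereal_box": [0b00001111, 0b10101010, 0b01010101],
}
# Frame descriptors: near-copies of "robot_toy" with a few bits
# flipped, as sensor noise would produce.
frame = [0b10110011, 0b01101110, 0b11110001]

name, score = match_to_database(frame, database, max_distance=2)
print(name, score)  # → robot_toy 3
```

In a real pipeline the matched image's known geometry would then feed a pose estimation step to recover the object's XYZ coordinates, and descriptors would be hundreds of bits long with thousands per image — the structure of the search, though, is the same.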
“We’re looking at making this scale,” Grob said. “This is very important. We want this to work at very large scale.”
“The trend is towards an architecture that is flexible enough to support all the modes you have to support,” Grob said.