Soon We’ll Control Our Devices With Our Body Language, Not Our Voice


There are limitations to voice-enabled interactions.

Since Siri was introduced in 2010, the world has been increasingly enamored with voice interfaces. When we need to adjust the thermostat, we ask Alexa. If we want to put on a movie, we ask the remote to search for it. By some estimates, 33 million voice-enabled devices had reported for duty in American homes by the end of 2017.

But voice-enabled interactions have their limits. They’re slow, embarrassing when other people are within earshot, and require awkward trigger phrases like “Okay, Google” or “Hey, Siri.”

Thankfully, though, talking into midair is no longer our only—or best—option.

The new iPhone introduced a camera that can perceive three dimensions, recording a depth value for every pixel, and home devices like the Nest Cam IQ and Amazon’s Echo Look now have cameras of their own. Combined with neural nets that improve as they see more training data, these cameras can build a point cloud or depth map of a scene: who is in it, how they are posing, and how they are moving. The nets can be trained to recognize specific people, classify their activities, and respond to gestures from afar. Together, better cameras and neural nets open up an entirely new space for gestural design and gesture-based interaction models.
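To make the point-cloud idea concrete, here is a minimal sketch (not from the article) of how a per-pixel depth frame becomes the kind of 3D point cloud a neural net could consume. The function name and the camera intrinsics are illustrative assumptions; the math is the standard pinhole-camera back-projection.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (in meters) into a 3D point cloud
    using the pinhole camera model. The intrinsics (fx, fy, cx, cy) would
    come from the camera's calibration data."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies across columns, v across rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # horizontal offset scales with depth
    y = (v - cy) * z / fy  # vertical offset scales with depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Hypothetical example: a flat 480x640 depth frame, everything 2 meters away.
# A real frame would come from the depth sensor itself.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3) -- one 3D point per valid pixel
```

A point cloud like this, computed frame by frame, is the raw material a pose- or gesture-recognition model would be trained on.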

Read the rest of the story on Quartz.