Robots of the Future Will Learn Just Like They Do in Star Trek’s Holodeck

When future robots enter the world, they won’t have a learning curve.

Artificial intelligence researchers are creating tools to teach the robots that will assemble our gadgets in factories or do chores around our homes before they ever step (or roll) into the real world. These simulators, most recently announced by Nvidia as a project called Isaac Lab but also pioneered by Alphabet's DeepMind and the Elon Musk-backed OpenAI, are 3D spaces governed by the same physics as the real world, with virtual objects that behave like their physical counterparts.

Virtual spaces are needed because one common way of teaching robots is reinforcement learning, a method in which a robot attempts a task over and over, getting rewarded when it succeeds, until it learns to do the task correctly. In a simulation, training the bots can be done more quickly and cheaply than in real life, because many simulated robots can learn at once. It's also safer: if a robot arm begins to flail around wildly, there's no risk of anyone getting hurt before the error is fixed.
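To make that trial-and-error loop concrete, here is a minimal, illustrative sketch of reinforcement learning in a simulated environment. The toy one-dimensional "arm" simulator, its reward values, and the learning parameters are all assumptions invented for this example; they are not Nvidia's or DeepMind's actual software, but they show the basic cycle the researchers run at far larger scale.

```python
# A minimal sketch of reinforcement learning (Q-learning) in a toy simulator.
# The ToySimulator, rewards, and hyperparameters are illustrative assumptions.
import random
from collections import defaultdict

class ToySimulator:
    """A stand-in physics simulator: a 1-D arm that must reach position 5."""
    GOAL, LIMIT = 5, 10

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: -1 (move left) or +1 (move right)
        self.pos = max(0, min(self.LIMIT, self.pos + action))
        done = self.pos == self.GOAL
        reward = 1.0 if done else -0.01  # small penalty rewards finishing fast
        return self.pos, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    env = ToySimulator()
    q = defaultdict(float)          # Q[(state, action)] value estimates
    actions = (-1, +1)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore occasionally; otherwise take the best-known action.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Q-learning update: nudge the estimate toward the reward plus
            # the discounted value of the best action in the next state.
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

if __name__ == "__main__":
    q_table = train()
    print("Learned to move toward the goal:", q_table[(0, 1)] > q_table[(0, -1)])
```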

Nvidia CEO Jensen Huang told a crowd at the 2017 Computex conference May 30 in Taipei that he took his inspiration from the "Star Trek" TV series, which imagined a room called the Holodeck that acted like a virtual reality headset you could walk into.

“We imagine that one of these days, we’ll be able to go into the Holodeck, design a product, design the factory that’s going to make the product, and design the robots that’s going to make the factory that makes the products. That future is upon us,” Huang said.

Alphabet’s DeepMind has had similar ideas. The AI research lab is best known for applying its AI to games, notably AlphaGo, which continues to beat human world champions at Go, but it has also built AI that plays video games, from classic Atari titles to StarCraft II.

DeepMind last year released an open-source version of its own environment, DeepMind Lab, a 3D space for training AI to complete certain tasks. While Nvidia’s Isaac Lab is meant to help build robots and products that do specific tasks in the real world, DeepMind Lab is geared more toward research: finding ways to build AI that can learn about its surroundings with little input.

OpenAI, the AI research lab co-founded and backed by Musk to develop AI in the open, is also working on this kind of technology; earlier this month it released a video in which a robot learned to stack blocks by imitating a human performing the task in virtual reality.
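That imitation approach is often called behavioral cloning: record what a human did in each situation, then have the robot copy the recorded action whose situation most closely matches its own. The sketch below is a deliberately simplified illustration with made-up demonstration data and a made-up block-stacking observation; it is not OpenAI's actual system.

```python
# A minimal sketch of imitation learning ("behavioral cloning").
# The demonstrations and the observation format are illustrative assumptions.

# Each demonstration pair: (observation, action the human took).
# An observation here is just (gripper_height, holding_block).
demonstrations = [
    ((0.0, False), "close_gripper"),
    ((0.0, True),  "lift"),
    ((0.5, True),  "lift"),
    ((1.0, True),  "open_gripper"),
]

def cloned_policy(observation):
    """Pick the action whose recorded observation is closest to ours."""
    def distance(demo_obs):
        height, holding = demo_obs
        # Mismatched gripper state counts far more than a small height gap.
        return abs(height - observation[0]) + (0 if holding == observation[1] else 10)
    _, action = min(demonstrations, key=lambda pair: distance(pair[0]))
    return action

if __name__ == "__main__":
    # A situation the robot never saw exactly: slightly raised, holding a block.
    # The nearest demonstration says "lift".
    print(cloned_policy((0.4, True)))
```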
