
Facebook has created a new simulator that looks and behaves much like the real world, designed to teach AI-powered robots to navigate human environments and carry out simple tasks.

Called Habitat, it is one of the most advanced systems of its kind to date. Facebook mentioned the system some months ago, but it has now received a full write-up to accompany a paper on the system being presented at CVPR.

Learning to navigate the real world and perform simple tasks can take an AI-powered robot many hours of experience, even years. That is why simulators and virtual worlds are built: they let agents learn before being placed in an actual physical space.

These virtual environments approximate real ones, and the basics can be hashed out as fast as the computer can run the 3D-world calculations. That means thousands of hours of training can be accomplished in just a few minutes of intense computing time.

Habitat is the platform on which these simulated worlds run. It is compatible with several existing environment datasets, including SUNCG, Matterport3D and Gibson, and it is efficient enough that researchers can run it at hundreds of times real-world speed.
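Habitat exposes a Python API in the style of reinforcement-learning "gym" environments. The minimal sketch below follows the habitat-api examples, assuming the package and a PointNav dataset are installed; the configuration path and the random action sampling are illustrative and vary by version.

```python
import habitat

# Load a standard point-goal navigation task configuration.
# (The exact path depends on the habitat-api version and datasets installed.)
env = habitat.Env(config=habitat.get_config("configs/tasks/pointnav.yaml"))

observations = env.reset()  # RGB / depth / goal sensor readings

while not env.episode_over:
    # A real agent would choose an action from its policy;
    # here we simply sample a random action as a placeholder.
    observations = env.step(env.action_space.sample())
```

Because the simulator steps as fast as the hardware allows, a loop like this can be run thousands of times in parallel, which is where the "hundreds of times real-world speed" figure comes from.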

Facebook also wanted to advance the state of the art in the virtual worlds themselves, so it created Replica, a set of photorealistic home environments comprising a kitchen, bathroom, living room, doors and so on. Replica was built by Facebook's Reality Labs and is the result of meticulous photography and in-depth mapping of real spaces.

Just as important is the team's work on the myriad annotations of the 3D data. The environment is not only captured in 3D; the objects and surfaces within it are exhaustively labeled. That's not just a couch, but a grey couch with blue pillows. And depending on the logic of the agent, it might or might not know that the couch is "soft," that it's "on top of a rug," that it's "by the TV," and so on.
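To give a concrete sense of what such annotations amount to, here is a hypothetical record for a single labeled object. The field names are purely illustrative and are not Replica's actual schema.

```python
# Hypothetical annotation record for one object in a scanned scene.
# Field names are illustrative, not Replica's real data format.
couch = {
    "id": 42,
    "category": "couch",
    "attributes": {"color": "grey", "pillows": "blue", "soft": True},
    "relations": [
        {"type": "on_top_of", "target": "rug"},
        {"type": "near", "target": "tv"},
    ],
    "room": "living_room",
}
```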

These comprehensive labels increase the flexibility of the environment, and a full API and task language let agents take on complex multi-step problems like "go to the kitchen and tell me what color the vase on the table is."
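A task like that is typically broken down into sub-goals the agent can execute in sequence. The decomposition below is a sketch of that idea only; it is not Habitat's actual task-specification format.

```python
# Hypothetical decomposition of a multi-step instruction into sub-goals.
# This is an illustration of the idea, not Habitat's task language.
task = {
    "instruction": "go to the kitchen and tell me what color the vase on the table is",
    "subgoals": [
        {"action": "navigate_to", "target": "kitchen"},
        {"action": "locate", "target": "vase", "constraint": "on table"},
        {"action": "report", "attribute": "color", "of": "vase"},
    ],
}
```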

These assistants are meant to help people. Elderly people, for instance, may not be able to get around their homes easily and will need a well-trained assistant. Habitat and Replica are intended to give such assistants the training and learning they need.

However, Habitat does not produce a truly realistic simulated environment. The robots themselves are not rendered realistically: a robot might be tall or short, have wheels or legs, use depth cameras or RGB. Some of these things won't matter; the robot's size doesn't change the distance from the couch to the kitchen. But some will: a small robot might be able to pass under a table, or be unable to see what's on top of it.

Another drawback is Replica's lack of physics and interactivity. For instance, you can ask the assistant to go to the kitchen and open the refrigerator, but there is no way for the refrigerator to actually open. In effect there is no refrigerator, only its appearance; the virtual world lacks that functionality.

On the other hand, simulators like THOR focus more on that physical aspect than on visual fidelity, teaching AIs difficult tasks from scratch, like opening a drawer. The researchers behind THOR praised Habitat for giving AIs a powerful system in which to learn navigation, but emphasized that it lacks interactivity.

What needs to be recognized is that both are needed; today a simulator tends to be strong on either the physical side or the visual one, not both. But Facebook and others in AI research are working hard to create one that is.