Researchers are currently trying to teach artificially intelligent systems every kind of human interaction. Google has successfully taught its neural networks to translate, read images, weed out fake news, and play the popular Chinese board game Go, and is in the process of teaching them numerous other tasks.

In a move to make AI further ubiquitous, the Elon Musk-backed AI research lab OpenAI has today released ‘Universe,’ a new virtual training ground for artificially intelligent systems. This software platform is a collection of video games, browser interfaces, and web applications that can be used to measure and train an AI’s general intelligence. It allows an AI agent to look at screen pixels and operate a virtual mouse and keyboard to perform the tasks a human can accomplish using a computer.

The official blog post describes the primary aim of Universe as follows:

Our goal is to develop a single AI agent that can flexibly apply its past experience on Universe environments to quickly master unfamiliar, difficult environments, which would be a major step towards general intelligence.

Universe is an open-source platform that supports Gym, an OpenAI toolkit launched in April for developing and comparing reinforcement learning (RL) algorithms. The Gym toolkit lets developers train their AI on specific actions using a reward scheme, without needing any special access to program internals, source code, or bot APIs.
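To illustrate the reward-scheme idea, here is a minimal sketch of the observe/act/reward loop that Gym formalizes. `StubEnv` is a hypothetical stand-in written for this article, not a real Gym environment; only the `reset()`/`step()` interface shape mirrors the toolkit.

```python
import random

class StubEnv:
    """Hypothetical toy environment: guess a hidden number.

    Real Gym environments expose the same reset()/step() shape, but
    this class is purely illustrative."""
    def __init__(self, target=7):
        self.target = target

    def reset(self):
        return 0  # initial observation

    def step(self, action):
        # Reward +1 when the agent's action matches the hidden target;
        # the agent never sees program internals, only this reward.
        reward = 1.0 if action == self.target else 0.0
        done = reward > 0
        return action, reward, done, {}  # observation, reward, done, info

env = StubEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(100):
    action = random.randrange(10)  # a random policy, for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
```

The agent here is deliberately trivial; the point is that all learning signal flows through the `step()` return value, exactly the "reward scheme" the paragraph above describes.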

Because it is open source, anyone can contribute to or modify most of the platform. This means it can be used to make an AI learn one app after another, tackling hard new tasks each day.

If we are to make progress towards generally intelligent agents, we must allow them to experience a wide repertoire of tasks so they can develop world knowledge and problem-solving strategies that can be efficiently reused in a new task,

reads the blog post.

In Universe, the AI agent operates a remote desktop (not a simulation or an emulation) by observing the pixels of the display and producing keyboard and mouse commands in response. This is made possible using Virtual Network Computing (VNC). The platform keeps track of the agent’s training to understand which process (or method) helps it score the highest, win the game, or achieve a similar objective.
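The pixels-in, keystrokes-out loop can be sketched as below. The event tuples mirror the keyboard/mouse event shape a VNC-driven agent emits, but the `pixels_to_action` helper and its toy policy are hypothetical, written for this article rather than taken from the library.

```python
def pixels_to_action(observation):
    """Map raw screen pixels to a list of keyboard/mouse events.

    `observation` is assumed to be a 2D list of brightness values.
    The toy policy presses ArrowUp when the top half of the screen is
    brighter than the bottom half, and clicks the center otherwise.
    """
    half = len(observation) // 2
    top = sum(sum(row) for row in observation[:half])
    bottom = sum(sum(row) for row in observation[half:])
    if top > bottom:
        return [("KeyEvent", "ArrowUp", True)]  # press a key
    x, y = len(observation[0]) // 2, len(observation) // 2
    return [("PointerEvent", x, y, 1)]          # left-click the center

# A tiny 2x2 "frame": bright top half, dark bottom half.
frame = [[255, 255], [0, 0]]
action = pixels_to_action(frame)
```

No screen-scraping shortcuts or game APIs are involved: the agent sees exactly what a human player sees and acts through the same input devices.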

Each environment, running in parallel on Universe, is packaged as a Docker image and hosts two servers, VNC and WebSocket, to communicate with the outside world. One of these transmits learning data while the other sends the reward signal for the learning process.
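The two-channel split described above can be sketched with stub classes: one stream carries pixel frames, the other carries reward signals, and the training loop pairs them up per step. These classes are illustrative stand-ins, not Universe's actual client code.

```python
from collections import deque

class PixelChannel:
    """Stand-in for the VNC connection: delivers one frame per step."""
    def __init__(self, frames):
        self.frames = deque(frames)
    def recv(self):
        return self.frames.popleft()

class RewardChannel:
    """Stand-in for the WebSocket connection: delivers one reward per step."""
    def __init__(self, rewards):
        self.rewards = deque(rewards)
    def recv(self):
        return self.rewards.popleft()

pixels = PixelChannel(["frame0", "frame1", "frame2"])
rewards = RewardChannel([0.0, 0.0, 1.0])

# The training loop consumes both channels in lockstep, so each
# observation arrives paired with the reward it earned.
experience = [(pixels.recv(), rewards.recv()) for _ in range(3)]
```

Keeping the reward on a separate lightweight channel means the bulky pixel stream never blocks the learning signal.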

The research lab has the backing and support of Microsoft Studios, Valve, Wolfram, EA, and many others. Previously, AI was trained on a set of just 55 Atari games, the largest RL learning source available. OpenAI now plans to accelerate the education of AI agents by broadening the scope of learning and training resources. It wants Universe to support a single Python process driving 20 environments in parallel at 60 frames per second.
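That last goal, one process driving many environments in lockstep, can be sketched as below. `TickEnv` is a hypothetical stub; the point is only the shape of the loop, where a single process advances every environment one frame per tick.

```python
class TickEnv:
    """Hypothetical stub environment that just counts frames."""
    def __init__(self, env_id):
        self.env_id = env_id
        self.t = 0
    def step(self, action):
        self.t += 1
        # observation, reward, done, info
        return f"pixels[{self.env_id}@{self.t}]", 0.0, False, {}

NUM_ENVS = 20
envs = [TickEnv(i) for i in range(NUM_ENVS)]
actions = [None] * NUM_ENVS  # one action per environment

# One synchronized tick: every environment advances a single frame.
results = [env.step(a) for env, a in zip(envs, actions)]
```

At 60 frames per second, this tick would repeat 60 times per wall-clock second, so the per-step work on each environment has to stay very cheap.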

The research team has already created Gym environments for over 1,000 Flash games, 2,600 Atari titles, and other applications such as Portal, Fable Anniversary, World of Goo, RimWorld, Slime Rancher, Shovel Knight, SpaceChem, and Wing Commander III. This catalog is being expanded to over 30,000 Flash games and bigger titles such as slither.io and GTA V, all accessible via the Universe Python library.

The researchers at OpenAI are already pushing Universe beyond games, into web browsers and the protein-folding apps used by biologists. If AI agents are to pick up the general problem-solving skills of a human, they will have to gather massive amounts of reward data across these environments to work out basic strategies for themselves, a prospect that might soon become a reality.
