
As our dependence on machines grows and artificial intelligence becomes a focal point of research, researchers are steadily trying to make machines think more like actual human brains. This means an AI should not only perform tasks as efficiently as we do but also retain the knowledge it has amassed in the process and learn from it. Google's AI research unit DeepMind seems to have cracked the code.

Usually, artificial intelligence systems are designed to adapt and adjust themselves to solve one particular task. Once that task is complete, the system tends to forget what it has learned, a problem known as catastrophic forgetting. And this is exactly what the researchers at Google's DeepMind are currently trying to fix with neural networks and machine learning algorithms.

To that end, they've now developed a new algorithm called Elastic Weight Consolidation (EWC), which allows an AI to remember previous knowledge and bank on it to learn new tasks more effectively. The highlight of the algorithm, according to the official blog post, is that the AI doesn't forget what it has already learned; that knowledge is not simply overwritten.

Instead, the AI is capable of recognizing how important certain connections in its network are to the tasks it has already learned. It attaches a protection value, proportional to that importance, to each of these connections and draws on this knowledge later on. When the AI is assigned a new task, it leaves the safeguarded connections largely intact while adapting the rest. The official DeepMind blog post describes the algorithm as follows:

A neural network consists of several connections in much the same way as a brain. After learning a task, we compute how important each connection is to that task. When we learn a new task, each connection is protected from modification by an amount proportional to its importance to the old tasks.

Thus, it is possible to learn the new task without overwriting what has been learnt in the previous task and without incurring a significant computational cost.
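Concretely, the penalty described above can be expressed as an extra term in the training loss: after learning a task, each weight is pulled back toward its old value with a strength proportional to an importance score (in the EWC paper, the diagonal of the Fisher information, i.e. the average squared gradient of the log-likelihood). Below is a minimal sketch of that idea in PyTorch; the function names, the simple diagonal-Fisher estimate, and the lambda value are illustrative assumptions, not DeepMind's released code.

```python
# Minimal EWC sketch (illustrative assumptions, not DeepMind's code).
import torch
import torch.nn.functional as F


def fisher_diagonal(model, data_loader):
    """Estimate each parameter's importance as the average squared
    gradient of the log-likelihood over the old task's data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for inputs, targets in data_loader:
        model.zero_grad()
        log_probs = F.log_softmax(model(inputs), dim=1)
        F.nll_loss(log_probs, targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(data_loader) for n, f in fisher.items()}


def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty anchoring each weight to its old value,
    weighted by how important that weight was to the old task."""
    penalty = torch.zeros(())
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return (lam / 2.0) * penalty
```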

The human brain is wired to remember old skills and apply them when learning new tasks. This comes naturally to us, but AI systems have to be taught those first steps before they can perform on their own. And since these systems are built on neural networks, they learn each new task gradually by trial and error; now they have also been handed the job of protecting important connections along the way, adding to their workload.

The technology works but is still rough around the edges. DeepMind researchers tested it by having an agent learn ten Atari games sequentially. Their tests showed that a standard AI could master a game after a number of tries but would forget that progress once it moved on to another game. With the EWC-backed AI, the results were quite different: it retained what it had learned in earlier games even as it learned the next one.
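For a sense of how that sequential setup might look in code, here is a hedged sketch that strings the helpers above into a task-by-task training loop. The supervised loop below is an assumption made for illustration; DeepMind's Atari experiments used deep reinforcement learning agents, not a simple classification loss.

```python
# Illustrative sequential training with EWC (assumed supervised setup;
# DeepMind's Atari experiments used reinforcement learning agents).
import torch
import torch.nn.functional as F


def train_sequentially(model, task_loaders, epochs=5, lr=1e-3, lam=1000.0):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    anchors = []  # one (fisher, old_params) pair per completed task
    for loader in task_loaders:
        for _ in range(epochs):
            for inputs, targets in loader:
                optimizer.zero_grad()
                loss = F.cross_entropy(model(inputs), targets)
                # Protect connections that mattered for earlier tasks.
                for fisher, old_params in anchors:
                    loss = loss + ewc_penalty(model, fisher, old_params, lam)
                loss.backward()
                optimizer.step()
        # Snapshot this task's weights and importance estimates.
        anchors.append((
            fisher_diagonal(model, loader),
            {n: p.detach().clone() for n, p in model.named_parameters()},
        ))
    return model
```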

Our research also progresses our understanding of how consolidation happens in the human brain. The neuroscientific theories that our work is based on, in fact, have mainly been proven in very simple examples. We hope to give further weight to the idea that synaptic consolidation is key to retaining memories and know-how.

This DeepMind research suggests that AIs might eventually have the power to compute and perform like human brains, but that remains a distant reality. The team, on the other hand, believes that these neural networks (which loosely resemble the brain's neurons) will at least give them more insight into how the human brain works.
