
Remember the livestreamed Google Home conversation experiment that saw two AI-powered virtual assistants get a little too real with one another? It was just a silly experiment that produced no conclusive results, only some fresh content. But Google may have taken a page out of that book and conducted similar, more focused research.

Google’s London-based AI research division, DeepMind, believes that artificially intelligent agents will soon be responsible for managing many of our day-to-day systems. Whether they perform that work correctly is secondary, because the researchers are now grappling with a challenge bigger than the automation of services. They are exploring what happens when two AI agents are active on the same playing field: will the agents work together, fall into conflict, or learn to cooperate?

The DeepMind team has already conducted this experiment and published its conclusions in a new study released today, which is also summarized in a concise blog post. The researchers used two different games to understand how AI agents behave in “social dilemmas”.

These are situations in which each individual can profit by acting selfishly, but everyone loses out on that reward if all of them turn selfish. It is similar to the classic prisoner’s dilemma, and thousands of game runs were conducted to reach the final conclusions. Now, let’s look at the results of the two games.
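To make the dilemma concrete, here is a minimal sketch of the classic prisoner’s dilemma payoff structure (the textbook payoff values below are illustrative, not taken from DeepMind’s study): defecting is the best response to anything the opponent does, yet mutual defection pays both players less than mutual cooperation would.

```python
# Classic prisoner's dilemma payoffs, ordered T > R > P > S.
# Tuples are (my payoff, opponent's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # reward R for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker S vs. temptation T
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # punishment P for mutual defection
}

def best_response(opponent_action):
    """Pick the action that maximises my payoff against a fixed opponent."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection strictly dominates, whatever the opponent does...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# ...yet if both players follow that logic, each earns 1 instead of 3.
print(PAYOFFS[("defect", "defect")])  # (1, 1)
```

The tension between the individually rational choice and the collectively good outcome is exactly what DeepMind’s games reproduce with learning agents instead of fixed payoff tables.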

The first game developed by DeepMind is Gathering, in which two agents (a red square and a blue square) share a common space and are tasked with collecting apples (green squares) to earn positive rewards. Each agent can also “tag” the other to temporarily remove it from the game.

Once the experiment began, the researchers observed that the agents learned to coexist peacefully while apples in the playfield were plentiful. As the supply diminished, however, the agents turned aggressive, tagging each other to buy themselves more time to collect the scarce apples. Echoing the prisoner’s dilemma, the rate of tagging also went up as the agents’ computational capacity increased.
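The scarcity effect can be illustrated with a toy cost–benefit model (my own simplified sketch, not DeepMind’s reward function; the parameter names and values are assumptions): tagging trades a few steps of lost collection for temporary exclusive access to the apples. When apples are abundant, both agents already collect at full capacity, so tagging is pure waste; when apples are scarce, monopolizing the supply pays off.

```python
def step_payoff(supply, capacity, shared):
    """Apples an agent collects per step.

    shared=True:  both agents are active and split the supply evenly.
    shared=False: the rival is tagged out, so the agent collects alone.
    Collection is always capped by the agent's per-step capacity.
    """
    share = supply / 2 if shared else supply
    return min(capacity, share)

def tagging_advantage(supply, capacity, tag_duration, tag_cost):
    """Net apples gained by tagging the rival instead of just collecting.

    The tagger forfeits `tag_cost` steps of collection to fire the beam,
    then monopolises the supply for `tag_duration` steps.
    """
    cooperate = (tag_duration + tag_cost) * step_payoff(supply, capacity, shared=True)
    tag = tag_duration * step_payoff(supply, capacity, shared=False)
    return tag - cooperate

# Abundant apples: both agents already collect at capacity, so tagging
# only costs time.
print(tagging_advantage(supply=10, capacity=2, tag_duration=5, tag_cost=1))  # -2.0

# Scarce apples: monopolising the supply outweighs the steps lost tagging.
print(tagging_advantage(supply=1, capacity=2, tag_duration=5, tag_cost=1))   # 2.0
```

In DeepMind’s setup the agents learn this trade-off through deep reinforcement learning rather than computing it directly, but the incentive structure the model captures is the same.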

While Gathering pitted the AI agents against one another, the second game, Wolfpack, challenged two AI agents to catch a third agent, the prey, in a shared space strewn with randomly placed obstacles that block movement. Capturing the prey successfully requires the two agents to work together, flanking and cornering it. That is exactly what the DeepMind researchers witnessed, concluding that higher cognitive capacity pushed the agents to cooperate.

Talking about the outcomes of this experiment, the blog post reads:

In another game called Wolfpack, which requires close coordination to successfully cooperate, we find that greater capacity to implement complex strategies leads to more cooperation between agents, the opposite of the finding with Gathering.

Further, Joel Leibo, the lead author of the paper, added that the behavior of AI agents ultimately depends on the environment in which they operate. Though the experiment has no immediate real-world application, it should help developers build AI agents that can work well together in multi-agent systems such as the economy, traffic systems, or the ecological health of our planet. Such progress would enable humankind to swap out current manual processes for AI-powered solutions.
