Google IT & Web-tech News Research

Google Robots Learning To Grasp, Display Interesting Behavioural Traits In The Process


Machine learning is the next ‘it’ technology, and Google, of course, is not one to stand on the sidelines when something new comes along. However, for all the precision robots offer, teaching them something new is no simple task.

For example, grasping things is something even toddlers can manage, yet researchers around the world are having a hard time getting robots to do the same, largely because it is surprisingly difficult to teach human-like skills to machines. Google, however, may have achieved a breakthrough here.

Initially, the developers begin with how humans learn: by mapping the surroundings, making a plan, and then executing it. It may sound a bit ridiculous, but yes, that's what happens every time you reach for that apple. The difference is that the remarkable feedback mechanisms humans possess, which use sensory cues to correct mistakes and compensate for perturbations, hardly let us notice any time lag.

However, giving robots the same set of capabilities, along with the ability to learn from their mistakes is proving to be quite difficult. Google has decided to ease the process by doing two things.

Firstly, it has dedicated 14 interconnected robots to the task, each capable of sharing everything it learns with the others. So it's rather like 14 minds, or processors, working on the same problem.
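The pooling of experience across the 14 robots can be pictured as a shared buffer of grasp attempts that the nightly training draws from. The sketch below is a hypothetical illustration, not Google's actual implementation; the class and field names are invented for clarity.

```python
import random
from collections import deque

class SharedExperiencePool:
    """Hypothetical shared buffer: every robot's grasp attempts land in
    one pool, so training sees the combined experience of all 14 arms."""

    def __init__(self, capacity=100_000):
        # Oldest attempts fall off once the pool is full.
        self.buffer = deque(maxlen=capacity)

    def record(self, robot_id, image, motor_command, success):
        # One grasp attempt from one robot: what it saw, what it tried,
        # and whether the grasp succeeded.
        self.buffer.append((robot_id, image, motor_command, success))

    def sample_batch(self, batch_size):
        # Training draws uniformly across every robot's attempts.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Each of the 14 robots contributes an attempt to the same pool.
pool = SharedExperiencePool()
for robot_id in range(14):
    pool.record(robot_id, image=None, motor_command=(0.1, 0.2),
                success=(robot_id % 2 == 0))

batch = pool.sample_batch(4)
```

The point of the shared pool is that one robot's failed grasp still teaches all the others, which is why 14 arms learn far faster than one.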

Next is a deep convolutional neural network, a technique that is being widely adopted to improve machine learning everywhere. As per Google,

While initially the grasps are executed at random and succeed only rarely, each day the latest experiences are used to train a deep convolutional neural network (CNN) to learn to predict the outcome of a grasp, given a camera image and a potential motor command. This CNN is then deployed on the robots the following day, in the inner loop of a servoing mechanism that continually adjusts the robot’s motion to maximize the predicted chance of a successful grasp.

In essence, hand-eye coordination. Who knew it could be this difficult, right? Also, note the "potential" before "motor command". It is rather like trial and error, except that the adjustments take place before the attempt does.
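The servoing idea described above can be sketched in a few lines: score many candidate motor commands with the trained CNN, then execute the one predicted most likely to succeed. Everything here is a toy stand-in, assuming a CNN already trained as the quote describes; the scoring function and command format are invented for illustration.

```python
import random

def predict_success(image, motor_command):
    """Stand-in for the trained CNN: scores a candidate motor command
    given the current camera image. (Hypothetical placeholder; a real
    system would run the image and command through the network.)"""
    # Toy score: prefer commands close to a fixed 'good' grasp position.
    gx, gy = 0.5, 0.5
    x, y = motor_command
    return 1.0 - min(1.0, abs(x - gx) + abs(y - gy))

def servo_step(image, n_candidates=64):
    """Inner loop of the servoing mechanism: sample candidate motor
    commands and pick the one the predictor scores highest, so the
    adjustment happens before the motion is executed."""
    candidates = [(random.random(), random.random())
                  for _ in range(n_candidates)]
    return max(candidates, key=lambda cmd: predict_success(image, cmd))

# One servo step: choose the most promising command for the current frame.
best = servo_step(image=None)
```

Running this step repeatedly as new camera frames arrive is what lets the robot continually correct its motion mid-grasp rather than committing to a single open-loop plan.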

Granted, the process is slow, painstakingly slow. Almost 3,000 hours of practice (and 800,000 grasp attempts) later, the robots are still not perfect at picking things up. However, with 14 state-of-the-art machines on the problem, they are almost as good as a two-year-old, a breakthrough when you consider that it is basically pieces of metal and wire we are talking about.

The robots have already achieved a success rate of over 80 percent, which, while impressive, matters less than the fact that they learned it entirely on their own. In fact, they are doing things that were never programmed into them beforehand.

The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group. All of these behaviors emerged naturally from learning, rather than being programmed into the system.

Is it just me who keeps thinking of Skynet while reading this? Jokes apart, the news may be the prologue to a new era in robotics, one where artificial intelligence is actually intelligent enough to develop and grow through experience, a trait that has, until now, been the sole province of living things.

