
Google using custom chip called Tensor Processing Unit (TPU) to speed up machine learning tasks


Google has been working on a secret custom chip, called the Tensor Processing Unit, for the last several years to speed up various machine learning tasks. At the I/O 2016 developer conference, Google CEO Sundar Pichai revealed the chip and said the company has been using it in various AI applications for the past year.

The custom application-specific integrated circuit (ASIC) has been running in Google's data centre racks for the past year.

It powers various Google applications, such as RankBrain (which improves the relevancy of Google search results), Street View (which improves the quality and accuracy of maps and navigation) and the famed AlphaGo artificial intelligence (AI) Go player that beat top-ranked Go player Lee Sedol this year. In addition, Google's voice recognition and Cloud Machine Learning services also run on Tensor Processing Unit chips.

According to the company, these chips deliver “an order of magnitude better-optimized performance per watt for machine learning”. That is roughly equivalent to fast-forwarding technology about seven years into the future, or three generations of Moore’s Law.
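As a back-of-the-envelope check, the two halves of that claim are consistent with each other. The sketch below assumes a Moore's-Law generation doubles performance per watt and arrives roughly every two years; both are simplifying assumptions for illustration, not figures stated by Google.

```python
import math

# How many performance doublings does a 10x (order-of-magnitude)
# gain correspond to, and how many years would that take at a
# ~2-year-per-generation cadence? (Illustrative assumptions only.)
SPEEDUP = 10.0
YEARS_PER_GENERATION = 2.0

generations = math.log2(SPEEDUP)           # doublings needed for 10x
years_ahead = generations * YEARS_PER_GENERATION

print(f"{generations:.1f} generations ~= {years_ahead:.1f} years ahead")
# -> 3.3 generations ~= 6.6 years ahead
```

A 10x gain works out to about 3.3 doublings, or roughly seven years at a two-year cadence, matching the "three generations" framing.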

The TPU is custom built for machine learning applications, which tolerate reduced computational precision and therefore require fewer transistors per operation. As a result, more operations per second can be squeezed out of the chip, and more sophisticated and powerful machine learning models can be applied more effectively.
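The tolerance for reduced precision can be illustrated with a minimal sketch: quantize a vector of 32-bit floating-point "weights" down to 8-bit integers with a single scale factor, and the resulting dot product still lands close to the full-precision answer. This is an illustrative NumPy example of low-precision arithmetic in general, not a description of the TPU's actual number formats.

```python
import numpy as np

# Simulated weights and inputs for a single neural-network layer.
rng = np.random.default_rng(0)
weights = rng.standard_normal(256).astype(np.float32)
inputs = rng.standard_normal(256).astype(np.float32)

# Quantize weights to 8-bit integers: map the float range onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# Compare the full-precision dot product with the quantized one.
exact = float(weights @ inputs)
approx = float(q_weights.astype(np.float32) @ inputs) * scale

print(f"fp32: {exact:.4f}  int8: {approx:.4f}")
```

The quantized result typically differs from the full-precision one only in the low decimal places, while each 8-bit multiply needs far fewer transistors than a 32-bit floating-point multiply.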

Google further plans to expose more machine learning APIs and make the power of these chips available to developers, helping them build intelligent applications for customers.

Our goal is to lead the industry on machine learning and make that innovation available to our customers. Building TPUs into our infrastructure stack will allow us to bring the power of Google to developers across software like TensorFlow and Cloud Machine Learning with advanced acceleration capabilities.

wrote Norm Jouppi, Distinguished Hardware Engineer at Google, in a blog post.

Google, however, is not the only tech company building its own chips for machine learning and artificial intelligence. Microsoft has been using programmable chips called Field Programmable Gate Arrays (FPGAs) to accelerate AI computations, including in its Bing search engine.

IBM has also designed a brain-inspired chip of its own, called TrueNorth, which is under testing at Lawrence Livermore National Laboratory.

Nvidia Corporation, known for its gaming GPUs (graphics processing units), has also been adapting them for artificial intelligence and machine learning applications. In fact, Google used Nvidia GPUs for early testing of the AlphaGo software.


