Today at its I/O event, Google announced that its second-generation Tensor Processing Units are coming to Google Cloud. The addition will significantly bulk up the capabilities of Google Cloud and accelerate machine learning workloads, which means developers looking to train their models can now get through the process that much faster.

Announcing the news, the company said:

We call them Cloud TPUs, and they will initially be available via Google Compute Engine.

Machine learning has made rapid progress over the past few years, and the effect is easily visible across Google’s wide portfolio of products, including Google Translate, Photos and its Go-playing champion AlphaGo. Getting there was not easy and, like all machine learning efforts, required huge computational power. Google got over that hurdle by building its own family of Tensor Processing Units (or TPUs) to take care of those machine learning workloads.

Google first tried scaling up its training capabilities on existing hardware; however, in the face of huge turnaround times, the company realized that it needed an all-new machine learning system, and that is exactly what it designed. The second-generation TPU announced today sits at the center of that very system.

The new TPUs are capable of delivering a massive 180 teraflops of floating-point performance to train and run your machine learning models. And if that amount of power is not enough, you can use multiple TPUs connected together, forming what Google calls TPU pods.

A TPU pod contains 64 second-generation TPUs and provides up to 11.5 petaflops to accelerate the training of a single large machine learning model. That’s a lot of computation!
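If you do the math, the headline figure lines up with the per-device number: 64 TPUs at 180 teraflops each works out to 11,520 teraflops, or roughly 11.5 petaflops per pod.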

In case you are wondering just how much, dig this: a large-scale translation model that takes a full day to train on 32 of the best commercially available GPUs reaches the same level of training in a single afternoon using just one-eighth of a TPU pod.

The company is introducing these TPUs to Google Compute Engine as Cloud TPUs. Users will be able to attach them to virtual machines, mix them with other hardware, and then program them using TensorFlow. Interested already? Well, you can join the Cloud TPU Alpha program by signing up here.
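For a rough idea of what programming a Cloud TPU with TensorFlow looks like, here is a minimal training sketch. It assumes a recent TensorFlow 2.x release and its TPUStrategy API; the TPU node name, the toy model and the synthetic data below are placeholders for illustration, not anything from Google's announcement.

import tensorflow as tf

# Resolve the Cloud TPU attached to this Compute Engine VM.
# 'my-tpu-node' is a placeholder for your own Cloud TPU resource name.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu-node')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across the TPU's cores.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Any Keras model built inside the strategy scope runs on the TPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# A tiny synthetic dataset stands in for real training data.
features = tf.random.normal([1024, 784])
labels = tf.random.uniform([1024], maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(128)

model.fit(dataset, epochs=1)

Everything else about the workflow, from the tf.data input pipeline to the Keras model definition, is the same as it would be on a CPU or GPU backend; only the resolver and strategy setup are TPU-specific.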

The company also announced that it is making 1,000 Cloud TPUs available at no cost to ML researchers via the TensorFlow Research Cloud, to further promote research and innovation in the field.
