
Google has announced that it is finally making GPUs on the Google Cloud Platform available to developers. The announcement builds on one made three months earlier, in which the company said it would offer support for high-end graphics processing units.

Developers will now be able to attach as many as eight GPUs to their own custom Compute Engine machines. Announcing the news, Google said:

Google Cloud Platform gets a performance boost today with the much anticipated public beta of NVIDIA Tesla K80 GPUs. You can now spin up NVIDIA GPU-based VMs in three GCP regions: us-east1, asia-east1 and europe-west1, using the gcloud command-line tool. 
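For those who want to give the beta a spin, a command along the following lines should bring up a K80-backed VM. Treat it as a rough sketch rather than official documentation: the instance name, zone, machine type and boot image below are placeholders, and at launch the accelerator flag may still have required gcloud's beta component.

    # Create a VM with a single Tesla K80 attached. GPU instances cannot be
    # live-migrated, so the maintenance policy has to be TERMINATE.
    gcloud beta compute instances create gpu-demo \
        --zone us-east1-d \
        --machine-type n1-standard-4 \
        --accelerator type=nvidia-tesla-k80,count=1 \
        --image-family ubuntu-1604-lts \
        --image-project ubuntu-os-cloud \
        --maintenance-policy TERMINATE \
        --restart-on-failure

Raising the count on a sufficiently large machine type is how you reach the eight-GPU ceiling mentioned above.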

Each NVIDIA Tesla K80 GPU features 2,496 of NVIDIA’s stream processors and 12 GB of GDDR5 memory, and the K80 board it sits on pairs two of those GPUs for a total of 24 GB of RAM. The company will also be rolling out support for creating GPU VMs from the Cloud Console later this week.
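If you would rather see what is on offer before creating anything, gcloud can list the accelerator types visible to your project along with the zones that carry them; this, too, may have sat behind the beta component at launch.

    # Show the GPU (accelerator) types available to the project, per zone.
    gcloud compute accelerator-types list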

The cloud GPUs are not just for graphics; they can accelerate many different kinds of compute and analysis workloads. Google says you can deploy them to speed up tasks such as “video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high performance data analysis, computational chemistry, finance, fluid dynamics and visualization.”

Google appears to be aiming these GPUs at developers who need to power demanding workloads such as machine learning frameworks.

The company also said that you can accelerate machine learning workloads using its cloud GPUs. The GPUs are tightly integrated with Google Cloud Machine Learning, which lets you scale up model training using the TensorFlow framework.

Now, instead of taking several days to train an image classifier on a large image dataset on a single machine, you can run distributed training with multiple GPU workers on Cloud ML, dramatically shortening your development cycle and letting you iterate quickly on the model.
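As a rough sketch of what that looks like in practice, a training job can be pushed to Cloud ML on a GPU-backed tier from the same gcloud tool. The job name, trainer package, module and output bucket below are placeholders, and the exact command group may have differed slightly during the beta period:

    # Submit a TensorFlow training job to Cloud ML on a GPU-equipped tier.
    # BASIC_GPU runs the trainer on a single GPU worker; custom tiers add
    # parameter servers and multiple GPU workers for distributed training.
    gcloud ml-engine jobs submit training image_classifier_1 \
        --module-name trainer.task \
        --package-path trainer/ \
        --region us-east1 \
        --scale-tier BASIC_GPU \
        --job-dir gs://my-bucket/image-classifier/output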

Meanwhile, the GPUs are priced at around $0.70 per hour each, although the cost varies slightly from region to region.
