Google Cloud Anticipates Machine Learning Growth with GPU

Chris Burt

November 18, 2016



Google is bringing graphics processing unit (GPU) chips to Compute Engine and Cloud Machine Learning to boost performance for compute-intensive tasks like rendering and large-scale simulations. GPUs will be available from Google Cloud Platform worldwide in early 2017, according to an announcement this week.

Separately, the company announced the formation of a new Cloud Machine Learning group, along with a series of moves related to machine learning and artificial intelligence.

Google introduced its new Cloud Jobs API, which applies machine learning to the hiring process, significantly reduced prices for its Cloud Vision API, and unveiled a premium edition of its Cloud Translation (formerly Google Translate) API. A blog post outlining Google's machine learning updates also announced the general availability of the Cloud Natural Language API.
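Of these, the Cloud Natural Language API is the one now generally available; it exposes text analysis such as sentiment scoring through standard client libraries. As a rough illustration only (the client library, module names, and sample text below are present-day assumptions, not part of the announcement), a minimal sentiment-analysis sketch in Python might look like this:

    # Minimal sketch: document sentiment with the Cloud Natural Language API.
    # Assumes `pip install google-cloud-language` and application-default
    # credentials for a Google Cloud project with the API enabled.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()

    document = language_v1.Document(
        content="Google Cloud Platform will offer GPUs worldwide in early 2017.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    # Returns document-level and per-sentence sentiment scores.
    response = client.analyze_sentiment(request={"document": document})
    sentiment = response.document_sentiment
    print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")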

Collectively, the announcements represent a push by Google not just to broaden its compute-intensive cloud services, but to deliver practical services built on them. GPUs are used as accelerators on many clouds, from Peer1 to AWS, though industry players are engaged in an ongoing debate about the scalability and relative efficiency of different approaches.

Google says the GPUs on its cloud will be AMD's FirePro S9300 x2 for remote workstations, and NVIDIA's Tesla P100 and Tesla K80 for deep learning, AI, and high-performance computing (HPC) applications. Google is offering the GPUs in passthrough mode for bare-metal performance, and up to eight can be attached to each VM instance.

“These new instances of GPUs in the Google Cloud offer extraordinary performance advantages over comparable CPU-based systems and underscore the inflection point we are seeing in computing today,” said Todd Mostak, founder and CEO of data exploration startup MapD, which used them as part of an early access program. “Using standard analytical queries on the 1.2 billion row NYC taxi dataset, we found that a single Google n1-highmem-32 instance with 8 attached K80 dies is on average 85 times faster than Impala running on a cluster of 6 nodes each with 32 vCPUs. Further, the innovative SSD storage configuration via NVME further reduced cold load times by a factor of five. This performance offers tremendous flexibility for enterprises interested in millisecond speed at over billions of rows.”

GPU instances take minutes to set up from the Google Cloud Console or the gcloud command line, and are priced per minute.
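For illustration, creating an instance like the one MapD describes might look roughly like the following gcloud command. This is a sketch based on the accelerator flags in later gcloud releases; GPU support was still rolling out at announcement time, so the exact syntax may have differed, and the instance name and zone here are hypothetical:

    # Hypothetical example: an n1-highmem-32 VM with 8 attached Tesla K80 dies.
    # GPU instances require a maintenance policy of TERMINATE because they
    # cannot live-migrate during host maintenance.
    gcloud compute instances create gpu-demo \
        --zone us-east1-d \
        --machine-type n1-highmem-32 \
        --accelerator type=nvidia-tesla-k80,count=8 \
        --maintenance-policy TERMINATE \
        --image-family ubuntu-1604-lts \
        --image-project ubuntu-os-cloud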

Google Cloud Platform also extended its geographic reach earlier this month with a new Tokyo region.

This article was originally posted at The WHIR.
