What Cloud and AI Do and Don't Mean for Google’s Data Center Strategy
A Data Center Knowledge Q&A with Joe Kava, Google's VP of data centers
One of Google's big missions this year has been to prove to the world that it is a serious player in the cloud services market, a player that's capable of taking on Amazon Web Services. The Alphabet subsidiary has been taking big steps to show that it is "dead serious" about its cloud business, to quote Diane Greene, the co-founder of VMware whom Google hired last year to lead this charge.
Hiring Greene was one of the biggest steps. The other was Google's commitment to make a sizable investment in expanding the data center infrastructure necessary to support a global cloud business. The company said in March it would add cloud data centers in 10 new locations around the world before the end of next year.
One of the key people executing this expansion is Joseph Kava, a Google VP who leads the company's data center engineering, design, and operations. This week, before the company kicked off its big annual conference Google I/O, taking place in a concert arena next to its headquarters in Mountain View, California, we sat down with Kava to get a better understanding of Google's data center strategy as its cloud business evolves and to ask him what effects, if any, the rise of the Internet of Things, machine learning, and virtual reality will have on its infrastructure.
Here are the highlights of our conversation, edited for brevity and easier readability:
Data Center Knowledge: Cloud computing has changed how companies think about nearly everything that has to do with data centers. What effects has it had on the data center site selection process for Google as a service provider?
Joe Kava: We are expanding into some other regions that we didn’t have already. We announced that between now and the end of 2017, we’re going into 10 new serving regions for Google Cloud Platform. One of them that is coming up shortly is in Japan, and that is a region we didn’t have a data center in previously.
If you look at where most of our data centers have been, our campuses are not in major metropolitan areas. They’re not in Chicago and New York; they’re in Council Bluffs, Iowa, or Pryor, Oklahoma, where you can get large pieces of land and build for long periods of time. But for the public cloud, we’re going into a lot of the major metro areas that are going to be the biggest regions for cloud, like Tokyo.
Google's data center in Douglas County, Georgia (Photo: Google)
Google famously likes to keep as many engineering tasks in-house as possible, including data center operations. But company executives recently said the big expansion announced earlier this year would include both Google’s own and leased data centers. Does competing in the public cloud mean Google has to compromise on its traditional strategy of keeping data center operations in-house?
Joe Kava speaking at Google's GCP Next event in San Francisco in March 2016
It may not be cost-effective to build your own data center for a small instance in a new region. At some point, that region might be big enough to where having our own data center makes sense. It’s just a total-cost-of-ownership analysis, and the same goes for a large enterprise company. If you need a few hundred kilowatts, you wouldn’t necessarily build your own data center, because you’re going to pay a lot of money for that. It doesn’t mean we’re changing strategy.
Read more: Google to Build and Lease Data Centers in Big Cloud Expansion
We hear from companies specializing in edge data center markets that demand from big cloud providers in those markets is rising. How is Google thinking about the infrastructure that’s necessary to provide cloud services to users in those regions?
We have a huge global network of points of presence, and once we can get our customers onto our network, it really doesn’t matter. We can serve them just fine from our cloud regions that we’ve already established, plus the new ones that are coming online over the next year or so. We also have a lot of edge-style data centers [primarily in colocation facilities] already that we’ve been using for caching, so our use case might be a little bit different than others’.
Digital Realty Trust, one of the biggest data center providers, recently changed its strategy to focus on combining large wholesale-scale data centers with interconnection-rich colocation facilities. The expectation is that cloud providers will take lots of wholesale space near these interconnection points where enterprise customers can access them directly, and hopefully, the enterprises will also take space adjacent to the ecosystem. Do you see a lot of value in being in those big cloud campuses for enterprises and cloud providers?
We all know that as people move to the public cloud, they are developing a hybrid strategy. They are still keeping some of their apps and some of their systems either on-premises or in their colo, and they're offloading a tremendous amount of workloads to the public cloud providers. If they were in a large multi-tenant colo and each of the public cloud providers also had a cluster or something in that large colo, then I'm sure it's very attractive from their perspective, because they have a lot of choice.
But that’s human behavior. It’s just that comfort level. I think it generally doesn’t matter. Wherever customers are, we have enough points of presence for them to get onto our network and take good advantage of our cloud platform. It will probably take some transition time for people to get used to it.
Read more: Digital Realty Leans on IBM, AT&T to Hook Enterprises on Hybrid Cloud
There’s a lot of excitement currently about the Internet of Things. What implications do you think IoT has for Google’s data center strategy?
We’ve already had the Internet of Things. They’re called smartphones. Android has over a billion registered things that are chatting with our data centers all the time. Having the next billion interconnected things doesn’t really worry me, because those devices, whether they’re your refrigerator at home, or whatever those internet-connected things are going to be, they’re generally not going to be as chatty with data centers as your smartphone is. We’ve already dealt with it.
Artificial intelligence and machine learning have been a big focus for Google recently. What are the implications of this focus for your data center decisions, especially now that it's become a core part of the company's cloud services strategy?
Google's Tensor Processing Unit boards fit into server hard drive slots in the company's data centers. TPU is a custom chip Google designed specifically for machine learning applications. (Photo: Google)
Machine learning as a service offered through our cloud platform is a huge offering, and I think more and more companies are seeing the benefits of that. It’s going to be a big growing product in our portfolio, but from the infrastructure side of things, not necessarily a big change.
There are customized hardware platforms that machine learning runs better on. It doesn’t affect the way we design our data centers, because we’ve already been running pretty high-density, high-performance compute systems for many years. We optimize everything from the actual server through the rack and the cooling systems, so it won’t really change our strategy.
Read more: Google Has Built Its Own Custom Chip for AI Servers