Cyxtera Launches AI Hardware as a Service in Its Data Centers

The specialized machine learning IaaS offering runs on Nvidia DGX A100 hardware.

Wylie Wong, Chips and Hardware Writer

August 4, 2020

Nvidia DGX A100 (Image: Nvidia)

Cyxtera on Tuesday launched an Nvidia-powered, compute-as-a-service offering for machine learning, giving its data center colocation customers the option to use AI hardware through a subscription-based, IaaS model.

The AI/ML-compute-as-a-service offering, which runs on Nvidia’s DGX A100 systems, is targeted at two types of customers: enterprises that want to deploy AI quickly and easily, and managed service providers that want to offer AI to their own customers without investing in their own AI infrastructure, said Russell Cozart, Cyxtera’s senior VP of marketing and product strategy.

“It’s not just a pool of resources in a public cloud. It is dedicated, single-tenant DGX systems that they have full control over,” Cozart told Data Center Knowledge.

Cyxtera claims that it’s the first data center operator to offer subscription-based access to Nvidia’s DGX A100 AI hardware systems.

A growing number of data center operators and all the major public cloud providers, including Amazon Web Services, Microsoft Azure, and Google Cloud, offer AI computing infrastructure as a service. Specialist operators such as ScaleMatrix, Colovore, and Core Scientific are among the colocation providers offering AI hardware as a service or AI-ready data center services.


Digital Realty Trust recently announced a partnership with Core Scientific and Nvidia to offer an AI computing service at its five-story, 120,000-square-foot “Cloud House” facility in London. The service is powered by DGX A100 hardware.

TIRIAS Research principal analyst Jim McGregor said every data center operator and service provider will have to offer AI hardware in the coming years because customers are requesting it.

“It’s a complete necessity,” McGregor told us. “If they don’t, they won’t be in business in five years.”

There is huge demand among enterprise customers to run AI workloads – not just for scientific computing or autonomous vehicles, but also for retail and any other business that wants to use AI to operate more efficiently, he said. TIRIAS Research forecasts that 95 to 98 percent of new digital platforms will use some form of AI by 2025 – whether on a device, on a network, in the cloud, or as part of a hybrid infrastructure solution.

Cyxtera says the AI/ML-compute-as-a-service offering is available today in its data centers in three markets: Northern Virginia, Dallas-Fort Worth, and London. The company, which has 62 data centers in 29 markets, plans to expand the offering globally based on market demand, Cozart said.


The three-year-old company also offers traditional bare-metal configurations using HPE servers and hyperconverged infrastructure through Nutanix and VMware. Customers can access the new AI/ML-compute-as-a-service offering with easy point-and-click provisioning, Cyxtera said.

“Customers can self-service subscribe through APIs or a web-based portal and consume it as a service,” Cozart said.
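
The article does not document that API, but a self-service subscription call of this kind might look roughly like the Python sketch below. The endpoint, field names, and token handling are illustrative assumptions made for the sake of example, not Cyxtera’s actual interface.

```python
# Illustrative sketch only: the base URL, endpoint, fields, and token below are
# assumptions for the sake of example, not Cyxtera's documented API.
import requests

API_BASE = "https://api.example-colo.com/v1"   # hypothetical base URL
TOKEN = "YOUR_API_TOKEN"                       # hypothetical bearer token


def subscribe_dgx_a100(market: str, term_months: int) -> dict:
    """Request a dedicated, single-tenant DGX A100 subscription in a given market."""
    resp = requests.post(
        f"{API_BASE}/ai-compute/subscriptions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "hardware": "nvidia-dgx-a100",
            "market": market,            # e.g. "northern-virginia", "dallas", "london"
            "term_months": term_months,  # subscription-based, OpEx-style term
            "tenancy": "single-tenant",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    order = subscribe_dgx_a100(market="northern-virginia", term_months=12)
    print("Provisioning request accepted:", order)
```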

Tony Paikeday, senior director of product marketing for Nvidia’s artificial intelligence systems, believes Cyxtera’s AI service, offered through a flexible OpEx model, will be attractive to both enterprises and managed service providers.

Many enterprises have invested in data scientists who have built AI models, but they lack the scalable infrastructure needed to deploy those models and take advantage of them in production, Paikeday said.

“Cyxtera is plugging that gap,” he said. “With this offering, enterprise IT leaders can look at it as their own private AI infrastructure cloud without having to do it themselves. They have experts running it for them, and with that in place, we will see much more of their models go from prototype to production, and that’s great news.”

Paikeday said the DGX A100, released this spring, is Nvidia’s fastest, latest-generation system. DGX A100 systems are purpose-built appliances equipped with data science tools and an AI software stack, which makes it easy for data scientists to build models and run experiments, he said.

Previous DGX models focused on AI training. The new DGX A100 systems are designed for training, data analytics, and inference, the stage that puts a trained model to work to draw conclusions or make predictions.
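
For readers less familiar with that distinction, the sketch below contrasts a single training step, which updates a model’s weights, with an inference step, which runs a frozen model to produce predictions. It uses a generic PyTorch toy model as an illustrative assumption and is not tied to Nvidia’s DGX software stack.

```python
# Minimal sketch of training vs. inference with a generic PyTorch model;
# illustrative only, not specific to Nvidia's DGX software stack.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)  # toy model: 16 input features -> 2 classes
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: forward pass, compute loss, backpropagate, update weights.
x_train = torch.randn(8, 16)
y_train = torch.randint(0, 2, (8,))
model.train()
optimizer.zero_grad()
loss = loss_fn(model(x_train), y_train)
loss.backward()
optimizer.step()

# Inference: no gradients, no weight updates -- the trained model just predicts.
model.eval()
with torch.no_grad():
    x_new = torch.randn(1, 16)
    prediction = model(x_new).argmax(dim=1)
    print("Predicted class:", prediction.item())
```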

“What is new with the DGX A100 is it integrates the full lifecycle of AI development into one appliance,” Paikeday said.

About the Author

Wylie Wong

Chips and Hardware Writer

Wylie Wong is a journalist and freelance writer specializing in technology, business and sports. He previously worked at CNET, Computerworld and CRN and loves covering and learning about the advances and ever-changing dynamics of the technology industry. On the sports front, Wylie is co-author of Giants: Where Have You Gone, a where-are-they-now book on former San Francisco Giants. He previously launched and wrote a Giants blog for the San Jose Mercury News, and in recent years, has enjoyed writing about the intersection of technology and sports.
