Azure Arc Manages Kubernetes Clusters Anywhere, Brings ML on Premises

The key capability opens up options for applying an ML algorithm to a data store regardless of where each is located.

Scott Fulton III, Contributor

March 5, 2021


At the virtual version of its Ignite developers’ conference Tuesday, Microsoft released into general availability a key function of Azure Arc that gives the service something resembling a genuine purpose: the ability to manage ordinary Kubernetes clusters wherever they may reside.

“More than ever customers are building modern applications using containers with Kubernetes,” remarked Jeremy Winter, Microsoft’s director of Azure Management. “I’m happy to announce that Azure Arc-enabled Kubernetes is now generally available and ready for production downloads.” This comes in addition to last fall’s preview release (a kind of “use-at-your-own-risk” warning for early adopters) of so-called “Azure Arc-enabled data services” for deployment in Kubernetes workload containers.

One very lucrative use case for this new capability is bringing machine learning services into customer data centers, eliminating communications-induced costs and latencies by letting them run next to data stores. In another era we called this “installing software on the same system as the database.”

Microsoft senior program manager Saurya Das demonstrated this feature with Azure Machine Learning Studio during a recorded Ignite conference session. The user of the Studio portal can connect to any Kubernetes cluster made accessible through Azure Arc. The user does need to click a button marked Kubernetes via Azure Arc first, but that’s the only added step.

This way, the ML workload (located on any cluster) can access the ML data store (located on any cluster), and if they happen to be side-by-side rather than in separate clouds, that’s more than fine.
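As a rough sketch of what that pairing could look like from the command line, an Arc-connected cluster can be attached to an Azure Machine Learning workspace as a compute target. All resource names below are placeholders, and the exact flags vary across Azure CLI and ML extension versions; treat this as an illustration, not a recipe.

```shell
# Hypothetical sketch: attach an Azure Arc-connected cluster as an
# Azure Machine Learning compute target. Names are placeholders.

# Look up the Arc-connected cluster's Azure resource ID.
CLUSTER_ID=$(az connectedk8s show \
  --name my-onprem-cluster \
  --resource-group my-rg \
  --query id -o tsv)

# Attach it to an Azure ML workspace as a Kubernetes compute target,
# so training jobs can be scheduled next to the on-premises data.
az ml compute attach \
  --type Kubernetes \
  --name arc-compute \
  --resource-id "$CLUSTER_ID" \
  --workspace-name my-aml-workspace \
  --resource-group my-rg
```

Once attached, the cluster shows up in the Studio portal alongside Azure-hosted compute, which is the button-click Das demonstrated.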

In late 2019 Microsoft began rolling out an Azure-based management extension platform that could incorporate servers outside the Azure cloud, potentially on customer premises. Azure Arc has been a way of centralizing hybrid cloud management inside the Azure cloud, as well as a means of wedging Microsoft back into the server management landscape now that “Windows Server” and “legacy” are for many IT operators becoming somewhat synonymous.

Azure’s latest move brings its hybridization capabilities more in line with Google’s Anthos, that company’s management platform for Kubernetes clusters. Last September Google extended Anthos’ management capabilities to cover AWS’s and Azure’s Kubernetes platforms as well as its own. Anthos, which is built on open source Kubernetes, can run outside the cloud, in containers or on bare metal; for now, the Azure Portal requires Azure cloud infrastructure, though in theory Azure Stack can supply that on premises.

If all that sounds like too many “A” words in one paragraph, we’re sorry.

Where Did the Azure Cloud Go Now?

Cloud-based management of on-premises clusters sounds a lot like a sandwich made up of sandwiches, so a word of explanation is in order. Although public cloud-delivered services are typically described as “cloud-based,” one of the trends that make up the “multi-cloud” phenomenon is the ability to deploy workloads that are delivered by public cloud services on platforms outside the cloud itself, including on premises, in customer-owned or operated data centers.

The most lucrative reason anyone would want to do that is to avoid the most cost-intensive process in cloud computing today: data transfer, especially for high-volume data such as machine learning datasets and streaming video. Containerizing and orchestrating workloads via Kubernetes has always made the reverse theoretically possible: transporting database managers and applications off the cloud and onto the enterprise networks where these huge data stores already reside.

Up to now Azure Stack has been Microsoft’s method for enabling enterprises to move Azure workloads into their own facilities. Its tactic has been to recreate Azure server infrastructure as a portable, consumable product. This way, facilities can reserve separate servers or server racks for Azure, manageable through an almost identical portal.

But that was necessary back when Azure’s fundamentals were based on Windows Server. Although that operating system still exists – and is even due for a 2022 update – its usage share worldwide is widely believed to be waning. In June 2019 Linux kernel maintainer Sasha Levin (then with Microsoft, now with Google) admitted in an open forum for the first time that Linux’s usage share exceeded that of Windows Server, even within Microsoft’s own cloud clusters.

So the long-term necessity for an essentially rebranded Windows Server cluster to be extended into the enterprise, when the cloud that would extend services to that enterprise cluster is itself moving toward Linux, merits scrutiny.

“You can have visibility over all your Kubernetes clusters and applications in that same single pane of glass that you have for your servers,” explained Winter. “IT teams can leverage the control plane to designate self-service access to cluster admins and developers. Cluster administrators can then use Azure policy for zero-touch config and partnering with developers to build out their DevOps practices based on GitOps.”

By “GitOps” Winter was referring to automated Kubernetes management workflows driven by Git, the distributed version control system Linus Torvalds originally created to manage Linux kernel development. In a GitOps workflow, the desired state of a cluster lives in a Git repository, and an agent in the cluster continuously reconciles the running state against it.
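The GitOps hookup Winter describes can be sketched with the Azure CLI: an Arc-connected cluster is pointed at a Git repository, and an in-cluster operator applies whatever manifests it finds there. The cluster name, resource group, and repository URL below are placeholders, and flag names have shifted across CLI versions, so this is a sketch of the pattern rather than a definitive invocation.

```shell
# Hypothetical sketch: bind an Arc-connected cluster to a Git repo
# holding its desired configuration (GitOps). Names are placeholders.
az k8s-configuration create \
  --name cluster-config \
  --cluster-name my-onprem-cluster \
  --resource-group my-rg \
  --cluster-type connectedClusters \
  --operator-instance-name cluster-config \
  --operator-namespace cluster-config \
  --repository-url https://github.com/example-org/cluster-manifests \
  --scope cluster
```

The appeal for the admin is that pushing a commit to the repository, rather than touching each cluster, becomes the deployment act.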

The payoff here is minimizing the transactional bulk between the public cloud and private servers in a hybrid data center environment.

Is This Really Anything New?

Multi-cloud Kubernetes cluster management through the Azure Portal was feasible before now, but only with the help of third-party services. It has also been possible to deploy Kubernetes clusters through Azure, AWS, and Google platforms, and to manage them all simultaneously through an independent management tool called Kublr. To be fair, Tuesday’s announcement brings everything together in one brand, following a development pattern Microsoft used to employ for Windows.

Although some online Ignite attendees may have been wary of Winter’s use of the phrase “Azure Arc-enabled Kubernetes,” he later clarified that any CNCF-certified Kubernetes distribution can serve as the infrastructure for Kubernetes clusters (whether or not those clusters host Azure Arc services), which can then be overseen and managed through the central Azure Management platform console.
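In practice, bringing one of those CNCF-certified clusters under Arc management is a registration step run against the cluster itself. A minimal sketch, assuming the Azure CLI with its connectedk8s extension and a kubeconfig already pointing at the target cluster (names and region are placeholders):

```shell
# Hypothetical sketch: register an existing CNCF-conformant cluster
# (on premises or in another cloud) with Azure Arc.
az extension add --name connectedk8s

# Deploys the Arc agents into the cluster the current kubeconfig
# points at, and creates the corresponding Azure resource.
az connectedk8s connect \
  --name my-onprem-cluster \
  --resource-group my-rg \
  --location eastus
```

From then on the cluster appears in the Azure Portal like any other managed resource, which is the “single pane of glass” Winter was selling.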

Azure Stack and Azure Stack HCI (the software-based deployment option introduced in 2019) still play unique roles, and will likely continue to do so, as on-premises cloud platforms that can be provisioned and managed like the Azure cloud itself.

“You can run your virtual machines, Kubernetes apps, as well as Azure Arc-enabled services like data and machine learning on top of Azure Stack HCI,” commented Winter. “With the native Azure Arc integration you can manage Azure Stack HCI deployments and workloads right from the Azure portal.”

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
