2019 Brought Red Hat Enterprise Linux 8, k3s and HPC-in-a-Box
What were the most important stories concerning "compute engines" in 2019? Here are three that we think will affect the direction of IT for years to come.
December 23, 2019
This year saw the release of Red Hat Enterprise Linux 8, Red Hat's server operating system; k3s, a tiny Kubernetes distribution from Rancher Labs; and AI Anywhere, a high-performance computing system that deploys out of the box. What these releases have in common is that each is poised to bring important changes to the IT landscape for years to come.
Red Hat Enterprise Linux 8 Removes DevOps Pain Points
The May release of Red Hat Enterprise Linux 8 was noteworthy because it was the first major release of Red Hat's flagship Linux operating system in five years. It also received attention at the time because, with the closing of the IBM deal only months away, it would be the company's last release as an independent open source vendor. Under the hood, Red Hat Enterprise Linux 8 pointed in the direction that IBM and Red Hat had already indicated their marriage would take: hybrid cloud plus the modern cloud-native deployment pipeline.
Red Hat's hybrid cloud push is nothing new. It goes back to before RHEL 7, which was released just as containers were becoming a thing and before "DevOps," "agile" and "microservices" became buzzwords – all of which Red Hat Enterprise Linux 8 supports in spades.
Most interesting, however, is the distribution's additional focus on making life easier for DevOps teams, something that every cloud-native vendor has since taken to touting.
With Red Hat Enterprise Linux 8, this came in the form of Web Console (based on the open source Cockpit project) – which Gunnar Hellekson, Red Hat's senior director of product management, described as "a graphical interface to point and click your way through some basic systems management tasks, hopefully lowering the barrier of entry for people who are new to Linux." In addition, the release expands Ansible System Roles to take the pain out of upgrading to the next version of the operating system.
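Getting started with Web Console is a low-effort affair, since it ships with RHEL 8 and only needs its socket enabled. The commands below are a minimal sketch of that setup on a stock RHEL 8 host; the firewall step assumes firewalld is in use:

    # Enable the Web Console (Cockpit) socket so it starts on demand
    sudo systemctl enable --now cockpit.socket

    # Open the firewall for the cockpit service (listens on port 9090)
    sudo firewall-cmd --permanent --add-service=cockpit
    sudo firewall-cmd --reload

Once the socket is active, the console is reachable at https://<hostname>:9090 with a regular system login.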
Rancher Shrinks Kubernetes to Fit Edge and IoT
It might have been a ho-hum moment for some when Rancher Labs announced in March that it was readying a tiny version of Kubernetes for release, but folks deploying at edge locations or embedding in IoT devices sat up and took notice.
Called k3s (because it's "smaller than k8s," a common shorthand for Kubernetes), the minified software weighs in at 40 MB and needs only 512 MB of RAM to run, which makes it ideal for use in compute-constrained situations. And because "Kubernetes is Kubernetes," it's easy to integrate into a larger ecosystem based on the full-sized version, which requires at least 15 GB of disk space to install and run.
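For a sense of how small that footprint is in practice, Rancher's documented quick start boils installation down to a single command. The sketch below assumes a Linux host with curl and root access:

    # Download and install k3s as a single-node server
    # (the installer also registers k3s as a systemd service)
    curl -sfL https://get.k3s.io | sh -

    # Confirm the node is ready using the kubectl bundled into the k3s binary
    sudo k3s kubectl get nodes

Because standard Kubernetes manifests apply unchanged, clusters built this way can be managed alongside full-sized ones with the same tooling.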
Within a day of announcing that k3s had reached general availability at KubeCon in November, Rancher CEO Sheng Liang was telling ITPro Today that the new release had already seen 10,000 downloads. This wasn't much of a surprise since we'd been hearing for several months that companies were already using beta versions in production.
Some companies are already finding unorthodox ways to take advantage of tiny Kubernetes. Liang told us of one database vendor that has embedded k3s in its deployment scripts for an easier and quicker installation.
Plug 'n' Play Supercomputers
Modern artificial intelligence applications require something akin to a supercomputer to crunch data at petaflop speeds, which pushes per-rack power density north of 30 kilowatts (an average load is three to five kW per rack) and introduces more heat than most data centers are equipped to handle. This necessitates expensive renovations to existing on-prem data centers, or the use of colocation facilities specifically designed to accommodate the high-density requirements of AI workloads.
Things just got easier for operators wanting to deploy AI workloads. In November, a consortium that includes AI workload data center operator ScaleMatrix; chipmaker Nvidia; and Microway, a provider of computer clusters, servers and workstations for HPC and AI, announced the general availability of AI Anywhere, a complete high-performance computing solution that ships with built-in cooling.
According to ScaleMatrix CEO Chris Orlando: "All we need is a roof, floor space and a place to plug the appliance in, and we can turn on an enterprise-class data center capable of supporting a significant artificial intelligence or high-performance computing workload."
AI Anywhere employs Nvidia's DGX supercomputing hardware and is available in two models: a 42 kW system delivering 13 petaflops and a 43 kW system delivering 8 petaflops. Both versions include the Nvidia DGX software stack, deep learning and AI framework containers, NetApp ONTAP storage and Mellanox switching, and both are cooled by ScaleMatrix's proprietary closed-loop, chilled-water-assisted, forced-air cooling system.
As market opportunities in AI and ML continue to grow, expect to see similar offerings from other vendors in the near future.