Cisco Advances CliQr’s Tech for Automation
Four months after its acquisition of data center automation platform maker CliQr, Cisco is making its strongest case yet for a hyperscale automation system that blurs the boundaries between clouds and customer premises, and hides from workloads the mechanisms that contain them.
“The old world for our customers, for your business, of hard-wiring applications to all the nuances of different infrastructure environments, just doesn’t scale anymore,” declared Cisco Senior Director for CloudCenter Business Development David Cope, during his Day 2 address to the Cisco Live conference. Cope’s comments came by way of introducing a new server platform for Cisco that incorporates high-level workload automation with the company’s new Tetration network analytics system.
“There needs to be a way to really turn the proposition around,” Cope continued. “Instead of getting apps to work for the infrastructure, there needs to be a way to get the infrastructure to dynamically conform to the needs of the application, providing complete portability and manageability across any of the environments.”
Homogenizing Heterogeneity
The dream of server and networking vendors, including Cisco, is to produce a single solution that fills all the gaps in nearly everyone’s data center. In 2011, the overwhelming need for scalability led Facebook to organize the Open Compute Project — an effort to define the most homogeneous, undistinguished “bare metal” a company could buy or build in bulk.
Hardware makers, including the very short list of companies that produce CPUs nowadays, saw OCP as a threat to their ability to craft a value proposition. If data centers everywhere relied upon tens of thousands of expendable, basic boxes, margins could plummet, and value-added servers could follow desktop PCs into obscurity.
Along came Docker and containerization, and the hardware community saw a glimmer of hope. Suddenly they foresaw a future where applications were completely decoupled from their underlying infrastructure. Applications could perceive the data center as completely homogeneous, when in fact it could be multifarious.
Up to now, the boldest effort by a server maker to capitalize on this trend has been HPE’s promotion of what it calls composable infrastructure. As HPE’s SVP and general manager for data center infrastructure Ric Lewis laid out for us last March, the basic concept there is to stage workloads on whatever infrastructure is best suited to them at the moment, and to separate that decision entirely from each workload’s scheduling, execution, and maintenance.
“We worked with customers and partners like yourselves to develop a single infrastructure,” said Lewis, as he introduced HPE’s composable infrastructure last December, “that... has fluid resources that can dynamically flex to the specific needs of an application, in any deployment model — whether it [be] virtual, physical or bare metal, or even in containers.”
The platform Cisco’s Cope introduced at Cisco Live pursues many of the same goals. But in Cisco’s case, accomplishing its version of dynamic infrastructure involves the implementation of a kind of general-purpose manifest for workloads, which Cope described as a composite topology.
“It starts with graphically capturing the topology and dependencies of the application, in an application profile or blueprint, which contains the topology, the dependencies of the application, and related policies,” he told his audience, “again, completely agnostic of infrastructure.”
The rendering of a topology looks a bit like a genealogy tree. As Cope described, a topology may include first-generation virtual machines (VMware, KVM, Xen, etc.), containerized workloads, or symbols for PaaS services provided through the public cloud. A composite topology would include meta-applications, if you will, that combine elements from all three.
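The blueprint idea Cope describes can be sketched in a few lines. This is an illustration only — the class names, node kinds, and policy fields below are invented for the example, not CloudCenter’s actual schema. The point is that a single infrastructure-agnostic profile can mix VM, container, and PaaS elements, which is what makes the topology “composite.”

```python
from dataclasses import dataclass, field

# Hypothetical node kinds drawn from the article: first-generation VMs,
# containerized workloads, and public-cloud PaaS services.
@dataclass
class Node:
    name: str
    kind: str  # "vm" | "container" | "paas"
    depends_on: list = field(default_factory=list)  # names of parent nodes

@dataclass
class Blueprint:
    """An application profile: topology, dependencies, and related
    policies -- with no reference to any specific infrastructure."""
    nodes: dict
    policies: dict

    def add(self, node: Node):
        self.nodes[node.name] = node

    def is_composite(self) -> bool:
        # A composite topology mixes more than one workload class.
        return len({n.kind for n in self.nodes.values()}) > 1

bp = Blueprint(nodes={}, policies={"tier": "production"})
bp.add(Node("db", kind="vm"))
bp.add(Node("api", kind="container", depends_on=["db"]))
bp.add(Node("queue", kind="paas"))
```

Nothing in the blueprint names a hypervisor, a cluster, or a cloud region; binding those choices is left to the orchestrator at staging time, which is the portability Cope is claiming.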
With an Asterisk Next to Docker
But “completely agnostic,” as Cope and his Cisco colleagues demonstrated on stage, does not mean forgoing the advantages of modern infrastructures when they become available. In a very convincing demo involving Docker, Cisco showed how its platform generates labels that may be used in staging containerized workloads, steering them through the network.
Specifically, a label can be applied when spinning up containers with Docker Engine. The label has no direct purpose for Docker itself, but as it travels with the workload, it associates the application inside with an SDN path that Cisco’s orchestration produced for it.
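A rough sketch of the mechanism, with all names invented for illustration: the orchestrator mints an opaque label, records which computed path it stands for, and hands the label to Docker at launch time (Docker’s real `--label` flag accepts arbitrary key=value metadata and ignores its meaning, which is exactly the property being exploited).

```python
import uuid

# Hypothetical registry mapping a workload's label to the SDN path the
# orchestrator computed for it. Docker never interprets the label; it
# simply travels with the container (e.g. docker run --label sdn.path=...).
path_registry = {}

def stage_workload(image: str, hops: list) -> dict:
    """Mint a label, remember its path, and describe the container launch."""
    label = f"sdn.path={uuid.uuid4().hex[:8]}"
    path_registry[label] = hops
    # In practice this step would invoke Docker Engine, roughly:
    #   docker run --label "sdn.path=<id>" <image>
    return {"image": image, "labels": [label]}

container = stage_workload("web:latest", hops=["dc1-gw", "core-7", "dc2-gw"])
label = container["labels"][0]
```

The network side then needs only to read the label off the workload and look up the path; the container itself stays unaware of any of it.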
Segment routing, as Cisco Director of Engineering Brook Crossman showed the audience, uses Tetration to help determine the optimum route for an application as it traverses the network. “We’ve actually extended Docker,” Crossman explained, “to take advantage of this functionality, so that we can present some really interesting opportunities to solve your problem.”
Crossman’s demo clearly showed three segment routing labels representing the path between the gateways of two data centers. Each label referred to a segment of the network whose access policies were governed by Contiv, the company’s policy enforcement engine.
This way, packets travel over paths that have already been secured, and whose access policies are already being enforced, without arbitration having to take place for each individual packet.
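The per-path, rather than per-packet, enforcement described above can be sketched as follows. Segment names and policies here are invented for the example: each segment’s access policy is checked once, when the path is established, and packets afterward simply follow the approved label stack.

```python
# Illustrative policy table: which workload tiers each network segment
# admits. In the article's terms, this is what Contiv would govern.
SEGMENT_POLICIES = {
    "dc1-gw": {"allowed_tiers": {"web", "app"}},
    "core-7": {"allowed_tiers": {"web", "app", "db"}},
    "dc2-gw": {"allowed_tiers": {"web"}},
}

def establish_path(segments: list, tier: str) -> list:
    """Validate every segment's policy up front; return the label stack
    if the whole path is admissible, else refuse. No per-packet checks."""
    for seg in segments:
        if tier not in SEGMENT_POLICIES[seg]["allowed_tiers"]:
            raise PermissionError(f"{seg} rejects tier {tier!r}")
    return segments  # the pre-approved label stack

path = establish_path(["dc1-gw", "core-7", "dc2-gw"], tier="web")
```

The design trade is clear: policy evaluation moves from the data plane to path setup, so forwarding stays fast while enforcement happens exactly once per path.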
While Cisco’s goal is for its new infrastructure to behave with absolute agnosticism, it’s clear that newer classes of virtualized infrastructure will exhibit behaviors that any staging platform will need to take into consideration, especially if it wishes to continue presenting the appearance of integration.
“I think you’ll find, as these containerized applications move into mainstream production,” said Cisco’s David Cope, “they will almost always have non-containerized dependencies — reaching out to a PaaS, reaching out to a database. Cisco’s CloudCenter is unique, in that it allows you to create and manage these composite topologies.”
As software continues to take over the architecture of data centers, look for hardware manufacturers to find clever ways to take the lead in pursuing software-defined evolutionary paths.