Explaining Knative, the Project to Liberate Serverless from Cloud Giants
Today, using serverless means choosing a cloud platform to lock yourself into. The open source project expected to fix that is approaching prime time.
January 27, 2020
One of the stars of the show at November's KubeCon, the big annual Kubernetes conference held this past year in San Diego, was a new technology called Knative. Almost ready for prime time, it's an open source project Google started with the aid of IBM, Pivotal, Red Hat (now owned by IBM), and others to make vendor-agnostic serverless functions available to commercial users in their own data centers or across any combination of public clouds.
The project's intended users are both developers and platform operators, be they cloud providers or in-house corporate IT shops. DevOps teams using it will probably find that in addition to bringing serverless functions to their data centers, Knative will simplify running Kubernetes, Richard Seroter, VMware's senior director of technical marketing and developer relations, told Data Center Knowledge. In its raw form, Kubernetes is notoriously complex and not user-friendly.
"It's not just a function platform, it is a kind of a Kubernetes native app runtime," Seroter said. Vendors like Pivotal (which VMware recently acquired) might use the platform to put a "dev experience" on top of Kubernetes that "developers who don't want to know all the plumbing of Kubernetes can use."
In 2018, Google turned over operational control of the Kubernetes project to the Cloud Native Computing Foundation. It was expected to do something similar with Knative. However, a couple of months before the last KubeCon, the company surprised developers by announcing that it would keep development of Knative under its thumb for the foreseeable future.
Grokking Serverless
To understand Knative, it’s first necessary to understand serverless, which is often misunderstood because its name implies that functions do the impossible by somehow running without a server. They don't.
Serverless, or functions-as-a-service, is a cloud-native technique that’s been available since at least 2014, when Amazon Web Services introduced Lambda, offering a new way of looking at how software systems could be designed. Lambda allowed developers to create small functions, or tasks, that could be fired up in containers when called, scaled as needed, and shut down once a task completed, with the user paying AWS for only the few seconds the function is up and running.
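In practice, such a function can be little more than a few lines of code. The Python sketch below uses the handler(event, context) entry-point signature Lambda expects for Python functions, though the event payload shown here is a made-up example:

    import json

    # AWS Lambda's Python entry point: the platform calls handler(event, context)
    # each time a trigger fires and may freeze or discard the container afterward.
    def handler(event, context):
        name = event.get("name", "world")  # the payload shape is illustrative only
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }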
The idea of only paying for the service while it’s being used proved to be popular with cloud customers -- popular enough to prompt Microsoft Azure to roll out a copycat feature called Functions two years later.
Serverless is often confused with another cloud-native technique: microservices. Microservices is a way of building an application as a collection of loosely coupled services as opposed to a single monolith. Unlike serverless functions, which are only deployed when triggered by an event, microservices are typically always active.
“Those types of tasks [the ones serverless functions address] don’t need to run all the time, they just need to run when an event happens,” Brian Gracely, Red Hat OpenShift’s senior director of product strategy, told Data Center Knowledge. “Maybe a database table got updated for an order, so I’m going to kick off a small job to send a text message to the customer with this updated status. There are lots of things like that which go on in IT with applications all the time, so this became something that a subset of developers found really appealing.”
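In code, the kind of short-lived job Gracely describes might look something like the minimal Python sketch below, where the send_sms helper and the shape of the incoming event are hypothetical stand-ins for whatever messaging API and database trigger a real deployment would use:

    def send_sms(phone: str, text: str) -> None:
        # Hypothetical stand-in for a real messaging API call (Twilio, SNS, etc.).
        print(f"SMS to {phone}: {text}")

    def on_order_updated(event: dict, context=None) -> None:
        # Runs only when the orders table changes; the platform spins the
        # function up, it works for a few seconds, and it scales back to zero.
        order = event["record"]  # the event shape here is made up for illustration
        send_sms(order["customer_phone"],
                 f"Order {order['id']} is now {order['status']}.")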
Simplifying Serverless
The trouble with serverless is, again, complexity. Outside the big cloud providers' offerings, DevOps teams find it difficult and time-consuming to wrangle.
And while the public cloud providers make it relatively easy to get serverless functions up and running, there are drawbacks to their offerings, which essentially boil down to vendor lock-in. None of their solutions works and plays well with the others, making it next to impossible to use serverless across hybrid or multi-cloud environments. That leaves DevOps teams to either confine serverless functions to a single cloud or complicate their infrastructure by treating functions differently depending on whether they run on AWS or Azure.
This is what Knative developers are solving for.
Because serverless is a container-based technique, Knative integrates with Kubernetes and a service mesh (usually either Istio or Linkerd), with Kubernetes doing the heavy lifting and the mesh providing the routing. It works in part by creating a new Kubernetes "deployment type," which is how Kubernetes understands the characteristics or patterns of different types of applications. An app that runs for a long period is a "Deployment," for example, while a database that has to maintain a stable network identity when it fails over is a "StatefulSet."
“Knative essentially becomes another pattern or deployment type built around the idea that it’s event-driven, short-running, auto-scaling type of stuff,” Gracely said.
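To sketch what that pattern looks like in practice, the snippet below declares a Knative Service, the custom resource Knative Serving adds to a cluster, and submits it with the Kubernetes Python client. The container image is one of Knative's published hello-world samples, and the snippet assumes a cluster that already has Knative installed:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a kubeconfig for a Knative-enabled cluster

    # A Knative Service: Serving turns this one resource into a deployment,
    # a route, and an autoscaler that can take the app all the way to zero.
    knative_service = {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": "hello", "namespace": "default"},
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"image": "gcr.io/knative-samples/helloworld-go"}
                    ]
                }
            }
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        namespace="default",
        plural="services",
        body=knative_service,
    )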
Knative has a modular design, with two components that are being developed somewhat independently: Eventing and Serving.
“Events are the things that trigger your application code to work,” Gracely explained. “There’s a new entry in the database, there’s a new file in storage, or there’s a stream of data that’s now coming in… Those are going to be the things that trigger your function to start. 'Serving' means actually start the application, do whatever computation I need to do, and then I have an action that comes out of that.”
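From the receiving side, the split is easy to see: Knative Eventing delivers events to an application as CloudEvents over HTTP, so a function ends up being little more than an HTTP handler. The standard-library Python sketch below reads the attributes Eventing places in Ce-* headers; port 8080 is Knative's default, and the rest is illustrative:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EventHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # In CloudEvents "binary" mode the event's attributes arrive as
            # Ce-* HTTP headers and its payload as the request body.
            event_type = self.headers.get("Ce-Type", "unknown")
            source = self.headers.get("Ce-Source", "unknown")
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print(f"Received {event_type} from {source}: {body!r}")
            self.send_response(200)  # a 2xx tells Eventing the event was handled
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), EventHandler).serve_forever()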
When work on Knative first began, there was also a third component, Build, a CI/CD pipeline function that was quickly spun off as a stand-alone project called Tekton.
“The Knative community said, that’s probably not something we should do specific to Knative,” Gracely explained about the spin-off. “That’s more of a generic function that could be useful to a lot of people, but it should be very Kubernetes-knowledgeable."
Getting Ready for GA
Although Knative isn't yet ready for prime time, the production-ready release is just around the corner and will arrive in two stages. Gracely said the Serving component is targeted to reach general availability in March. Eventing, which is lagging behind, should reach beta at about the same time, with GA coming several months afterward.
"Eventing is a broader-scoped part of the project with lots of different use cases to address in terms of what types of events can trigger the Serving function," he explained. "Serving is much closer to a use-case in Kubernetes, hence gets to GA faster."
When prime-time Knative arrives, he added, it will be an integrated part of Red Hat's OpenShift Kubernetes platform and included in its core subscription.