Tech Primer: Clarity on Containers
Expect IT departments to finally come to a greater understanding of container technology and how it can realistically and appropriately be used for IT operations alongside virtual infrastructure.
February 28, 2017
Kong Yang is Head Geek at SolarWinds.
Container ecosystems from the likes of Google, Docker, CoreOS, and Joyent are among the more intriguing IT innovations in the enterprise and cloud computing space today. In the past year, organizations across all major industries, from finance to e-commerce, took notice of containers as a cost-efficient, portable, and convenient means to build applications. The prospect gave organizations an exciting new model to compare and contrast with virtualization.
But for all the hype, many organizations and IT professionals still struggle to understand both the technology itself and how to take advantage of its unique benefits—especially as Docker, the market leader, begins to expand the use cases for containers into the stateful architectural landscape that is common to enterprise applications. This technology primer aims to provide clarity on containers and arm you with what you need to know to successfully leverage the technology.
Containers 101: Back to Basics
To start, one of the main misconceptions about containers is that they are drop-in replacements for virtual machines (VMs).
Despite some early enterprise adopters implementing them as such, that is not the case. In a nutshell, a container consists of an entire runtime environment—an application, its dependencies, libraries and other binaries, and the configuration files needed to run it—bundled into one package designed for lightweight, short-term use. When implemented correctly, containers enable much more agile and portable software development environments, because they abstract the application away from the underlying server and operating system rather than virtualizing the hardware itself.
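As a minimal illustration (assuming Docker is installed, and using the public nginx image purely as a stand-in for an application bundled with everything it needs), a prepackaged image can be pulled, run, and discarded in seconds:

  # Pull a prepackaged image: the application, its libraries, binaries, and
  # default configuration travel together as a single artifact
  docker pull nginx:1.11

  # Start a container from that image; there is no guest OS to install,
  # because the host kernel is shared and only the bundled user space runs
  docker run -d -p 8080:80 --name demo-web nginx:1.11

  # Tear it down when finished, in keeping with lightweight, short-term use
  docker stop demo-web && docker rm demo-web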
Virtualization, on the other hand, includes a hypervisor layer (whether Microsoft Hyper-V or VMware vSphere) that segregates virtual machines and their individual operating systems. Virtualization abstracts the resources of the underlying hardware infrastructure, such as servers and storage, so that VMs can draw on these pools of resources. VMs can take considerably longer than containers to prep, provision, and deploy, and they tend to stay in commission much longer than containers. As a result, VMs tend to have much longer application lifecycles.
Therefore, a key difference is that containers are not intended to be long-term environments the way VMs are; rather, they are designed to (ideally) be paired with microservices so that each one does one thing very well and then moves on. With this in mind, let’s discuss some of their benefits.
First, as mentioned, containers spin up much more quickly and use less memory, ultimately leaving a smaller footprint on data center resources than traditional virtualization. This is important because it drives process efficiency for the development team, which in turn leads to much shorter development and quality assurance testing cycles. With containers, a developer could write and quickly test code in two parallel container environments to understand how each performs and decide on the best code fork to take. Docker builds an image automatically by reading the specific set of instructions stored in the Dockerfile, a text file that contains all the commands needed to build a given image. The upshot is that containers are meant to be ephemeral: they can be stopped, changed, and rebuilt with minimal setup and configuration.
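As a rough sketch of that workflow (the Node.js base image, file names, ports, and “fork” tags below are hypothetical), a developer might build and run two forks of the same service side by side:

  # A Dockerfile (hypothetical contents) might hold instructions such as:
  #   FROM node:6                (base image providing the runtime)
  #   COPY . /app                (add the application code)
  #   RUN npm install            (install dependencies)
  #   CMD ["node", "server.js"]  (command that starts the service)
  #
  # docker build reads those instructions and produces a tagged image
  docker build -t shop-api:fork-a .
  docker build -t shop-api:fork-b -f Dockerfile.fork-b .

  # Run both forks in parallel (the app is assumed to listen on port 3000),
  # compare their behavior, then discard them
  docker run -d --name test-a -p 8081:3000 shop-api:fork-a
  docker run -d --name test-b -p 8082:3000 shop-api:fork-b
  docker rm -f test-a test-b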
Containers can also support greater collaboration among multiple team members who are all contributing to a project. Version control and consistency of applications can be problematic with multiple team members working in their own virtual environments. Think about all the different combinations of environment configurations. Containers, on the other hand, drive consistency in the deployment of an image; combining this with version control (for example, GitHub for the Dockerfiles) and a shared image registry allows for quick packaging and deployment of consistently known-good images. The ability to quickly spin up mirror images of an application allows various members of the same development team to test and rework lines of code in flight, within disparate but consistent image environments that can ultimately synchronize and integrate more seamlessly.
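For instance (the repository and tag names here are purely illustrative), a tested image can be shared through a registry such as Docker Hub so that every teammate runs identical bits:

  # Tag the tested image and push it to a shared registry
  docker tag shop-api:fork-a myteam/shop-api:1.4.2
  docker push myteam/shop-api:1.4.2

  # A teammate pulls the exact same image and gets an identical environment
  docker pull myteam/shop-api:1.4.2
  docker run -d -p 8080:3000 myteam/shop-api:1.4.2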
Interestingly, Docker has begun evolving container technology beyond the typical test-dev model, both to deliver additional business value and to lower the barrier to adoption for enterprises. Docker Engine runs on major desktop platforms, including Windows, Linux, and macOS, which allows organizations to gain experience with containers while demoing and testing a few use cases on laptops. Additionally, Docker’s recent acquisition of Infinit, a distributed storage vendor, underscores the company’s intention to expand and support enterprise needs. The integration of Infinit’s technology will allow developers to deploy stateful web architectures and legacy enterprise applications. This combination of technologies aims to persuade organizations saddled with technical debt and legacy applications to adopt containers.
Virtualization or Containers: Which is Right for You?
So, how do you decide when to leverage containers? It starts with a fundamental understanding of your application architecture and its lifecycle—from development to production to retirement. Establishing this baseline will help you decide whether a given application is an ideal candidate for containers or better left on VMs.
An e-commerce site, for instance, might decide to transition from several VMs, each executing multiple functions, to a container-based model in which the tiered “monolithic” application is broken down into several services distributed across public cloud or internal infrastructure. One container image would then be responsible for the application client, another container image for the web services, and so forth. These containers can be shipped to any number of host machines with identical configuration settings, so you can scale and drive consistency across your e-commerce site.
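A hedged sketch of that decomposition on a single Docker host (the image names are placeholders): each tier runs from its own image on a shared, user-defined network so the services can find one another by name.

  # One user-defined network lets the services resolve each other by name
  docker network create shop-net

  # Each tier ships as its own image and can be scaled or replaced independently
  docker run -d --net shop-net --name shop-frontend -p 80:80 myteam/shop-frontend:2.0
  docker run -d --net shop-net --name shop-api -p 8080:3000 myteam/shop-api:1.4.2
  docker run -d --net shop-net --name shop-catalog myteam/shop-catalog:3.1

  # The same commands, with the same configuration, work on any Docker host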
However, even though some applications in your environment might be prime candidates to shift to containers, the cost of evolving people, processes, and technology remains a large obstacle for most organizations that currently run virtualization. Heavy investment in vSphere, Hyper-V, or KVM virtualization solutions, not to mention the accumulated technical expertise and processes that support it, is a key reason why businesses today struggle to adopt container technology.
Despite this, organizations should look for opportunities to gain experience with container technology. As demonstrated by Docker’s expansion into more enterprise capabilities, containers can certainly begin to play a larger role in the modern data center, where web scale and mobile rule. The following best practices will help businesses better prepare themselves to work with and manage containers:
Getting There: Best Practices for Working with Containers
Adopt strategically – As mentioned, there are a few barriers to adoption for many organizations (cost, technical debt, the need to build up operational expertise, etc.), so a move to integrate containers requires thoughtful consideration. To ease your way into containerization, your organization should look for the low-hanging fruit, which is typically test-dev environments. Aim to leverage Docker’s compatibility with Windows, Linux, and macOS to gain experience with some of the simpler use cases, such as normalizing development environments, as sketched below. This will help you better understand how containers could play a larger role in your organization’s delivery of more complex applications or workloads.
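One minimal sketch of such a normalized development environment (the image, command, and paths are illustrative): the team’s standard toolchain runs in a throwaway container while the source code stays on the developer’s laptop.

  # Mount the locally checked-out code into a standard toolchain container;
  # every developer, on Windows, Linux, or macOS, gets the same environment
  docker run --rm -it -v "$(pwd)":/src -w /src node:6 npm test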
Monitor as a discipline – To determine how best to integrate container technology into your existing environment, IT professionals should leverage a comprehensive monitoring tool that provides a single point of truth across the entire IT environment and application stack. The resulting performance and behavioral baseline supplies the data from which subject matter experts can determine which workloads are candidates for containers and which belong on VMs. At the end of the day, companies expect both performance guarantees and cost efficiency, and the best way to meet that expectation is with monitoring tools that show how your applications change over time and track the actual requirements of each application and its workload.
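Docker’s built-in tooling offers only a crude starting point for that baseline, but it illustrates the kind of per-container data a fuller monitoring platform would collect and trend over time (the container name is a placeholder):

  # A point-in-time snapshot of per-container CPU, memory, network, and I/O
  docker stats --no-stream

  # Configuration, resource limits, and restart policy for one container
  docker inspect shop-api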
Automate and orchestrate your application workflow – Containers aim to drive scalability and agility by normalizing the consistency of configurations and application delivery, so automation and orchestration become key to successful container efficacy. Organizations leverage containers to automate the provisioning of resources and applications, whether to run a service in production or to run and test it beforehand, and to do so at web scale. Once you’ve reached that kind of scale, you need to orchestrate the workload to take advantage of the collaboration efficiency between all development team members.
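No particular orchestrator is prescribed here, but Docker’s built-in swarm mode gives a feel for the declarative, scale-out model (the service name and image are placeholders):

  # Turn this host into a (single-node) swarm manager
  docker swarm init

  # Declare the desired state: three replicas of the web tier behind port 80
  docker service create --name shop-web --replicas 3 -p 80:80 myteam/shop-frontend:2.0

  # Scaling out is one command; the orchestrator converges on the new state
  docker service scale shop-web=10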
A security state of mind – Because they share the same operating system kernel and associated system memory, containers can be extremely lightweight and easy to provision. However, this also means that any user or service with root access to that kernel can see and reach every container sharing it. With the cadence of data breaches showing no sign of slowing down, organizations that choose to work with container technology will need to create a security framework and set of procedures that is consistently evaluated and updated to prevent attacks. Examples of these preventive measures include reducing the container attack surface and tightening user access control.
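As one hedged example of shrinking that attack surface with standard docker run options (the image name is a placeholder, and the exact flags depend on what the application actually needs):

  # Drop all Linux capabilities, run as an unprivileged user, keep the
  # container filesystem read-only, and block privilege escalation
  docker run -d \
    --cap-drop ALL \
    --user 1000:1000 \
    --read-only \
    --security-opt no-new-privileges \
    myteam/shop-api:1.4.2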
Conclusion
In the year ahead, I expect IT departments will finally come to a greater understanding of container technology and how it can realistically and appropriately be used for IT operations alongside virtual infrastructure.