How Reducing Technical Debt Improves Sustainability
ITOps teams can become more environmentally friendly by reducing their technical debt. One way to do that is to run apps inside containers instead of VMs.
You know that technical debt — meaning inefficient code or processes that run repeatedly — can distract IT teams and cost the business money. For years, IT engineers have been advised that rooting out technical debt is a key step toward improving operational efficiency.
But here's another reason why you should loathe technical debt: It's bad for the environment.
After all, inefficient processes tend to result in higher energy consumption. And higher energy consumption is bad from a sustainability standpoint. If you want to embrace a green computing strategy, eliminating technical debt should be part of your plans.
To illustrate the point, let's examine a classic example of technical debt in the real world, and how it impacts power consumption and energy efficiency.
Technical Debt Example: VMs vs. Containers
A technical debt scenario that many ITOps engineers can appreciate today is running applications inside virtual machines when containers would be a better choice.
Virtual machines are less efficient because they require running a full-blown guest operating system. In contrast, a containerized application shares resources with the host server's operating system, rather than requiring its own guest operating system.
Containers' ability to share resources translates to significantly lower energy consumption, among other benefits.
To find out just how much energy containers can save, I did a quick experiment to compare the energy consumption of Ubuntu Linux running in a VM and Ubuntu running as a container.
I deployed the VM using the Kernel Virtual Machine:
kvm -m 2048 ubuntu-22.04-live-server-amd64.iso
And I deployed an Ubuntu Docker container with:
docker run -it ubuntu
No other VMs or containers were running on the system, and no applications were running in either the Ubuntu VM or the Ubuntu container.
After giving both the VM and the container a couple of minutes to settle after initial launch, I ran PowerTop, a tool that reports the power consumption of running processes on Linux. Here's what it showed:
Power est.   Usage        Events/s   Category   Description
467 mW       33.7 ms/s    103.6      Process    [PID 119643] qemu-system-x86_64 -enable-kvm -m 2048 ubuntu-22.04-live-server-amd64.iso
44.7 mW      55.0 µs/s    11.3       Process    [PID 1488] /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
33.3 mW      155.0 µs/s   8.4        Process    [PID 1557] /usr/bin/containerd
23.6 mW      131.0 µs/s   5.9        Process    [PID 1001] /usr/bin/containerd
22.7 mW      34.4 µs/s    5.7        Process    [PID 120396] runc
18.8 mW      75.4 µs/s    4.7        Process    [PID 1021] /usr/bin/containerd
9.69 mW      32.8 µs/s    2.4        Process    [PID 1017] /usr/bin/containerd
9.26 mW      12.1 µs/s    2.3        Process    [PID 1905] containerd-shim
(In the output above, I excluded lines that were unrelated to the virtual machine or container processes.)
As you can see, PowerTop reports that the virtual machine is using 467 mW of power. Meanwhile, the processes related to the Docker container are consuming 162.05 mW of power total.
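You can sanity-check that container-side total by summing the per-process power estimates from the PowerTop table yourself:

```shell
# Sum the PowerTop power estimates (in mW) for the dockerd, containerd,
# runc, and containerd-shim processes listed in the table above.
printf '%s\n' 44.7 33.3 23.6 22.7 18.8 9.69 9.26 \
  | awk '{ sum += $1 } END { printf "%.2f mW\n", sum }'
# Prints: 162.05 mW
```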
From an energy consumption perspective, this makes the containerized instance of Ubuntu roughly 2.9 times as energy-efficient as the VM.
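That ratio and the absolute savings fall straight out of the two measurements:

```shell
# Efficiency ratio and absolute savings implied by the PowerTop figures.
awk 'BEGIN {
  vm = 467        # VM power estimate, mW
  ct = 162.05     # combined container-related power estimate, mW
  printf "ratio: %.2fx, savings: %.0f mW\n", vm / ct, vm - ct
}'
# Prints: ratio: 2.88x, savings: 305 mW
```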
Keep in mind, too, that I had only one container running. It's likely that total electricity savings would be even higher if I were comparing multiple containers to multiple VMs because some of the container processes that PowerTop is tracking could be shared among containers. But with KVM, each VM would have to run as its own heavyweight process.
How Much Energy Does Technical Debt Really Waste?
Obviously, we're talking here only about milliwatts' worth of difference in power consumption. In my experiment, the power saved by using a container instead of a VM (about 305 milliwatts) amounts to roughly 1/28 of the power drawn by a standard LED light bulb (about 8,500 milliwatts). That's pretty negligible from a sustainability perspective.
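The light-bulb comparison is simple division on the numbers already measured:

```shell
# Compare the ~305 mW saving to a standard ~8,500 mW LED light bulb.
awk 'BEGIN { printf "1/%.0f of an LED bulb power draw\n", 8500 / 305 }'
# Prints: 1/28 of an LED bulb power draw
```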
But at scale, the difference could add up. If you are deploying hundreds or thousands of applications on a continuous basis, being able to run them in containers – which, as we've seen, can reduce energy consumption by a factor of something like 2.8 – could save very large amounts of electricity.
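As a back-of-the-envelope illustration (assuming, hypothetically, that the roughly 305 mW single-instance saving held constant across 1,000 always-on workloads):

```shell
# Scale the ~305 mW per-instance saving to 1,000 always-on workloads,
# then annualize it (8,760 hours per year).
awk 'BEGIN {
  n = 1000; mw = 305
  watts = n * mw / 1000
  printf "%.0f W continuous, ~%.0f kWh per year\n", watts, watts * 8760 / 1000
}'
# Prints: 305 W continuous, ~2672 kWh per year
```

Real fleets would not scale so linearly, but the exercise shows how milliwatt-level differences compound into meaningful electricity use.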
That's especially true if you factor in the ancillary energy costs (like cooling) associated with energy-hungry resources, in addition to the direct wattage consumption of each process.
Limitations in Comparing VM and Container Energy Consumption
Admittedly, the results above are subject to some limitations. For one, I booted the KVM virtual machine to an Ubuntu Server live ISO image, which is not identical to the Ubuntu container base image that I used when running the container. So it's not quite an apples-to-apples comparison. It's also possible that I could have improved the energy efficiency of the VM by experimenting with different memory allocations or turning off unnecessary processes inside the guest operating system. A VM that is actually installed to a virtual disk may also consume less energy than one running as a live system based on an ISO file.
That said, it seems pretty unlikely that any of these variables would affect the overall results of my experiment. No matter which tweaks you apply, it's almost certainly the case that containers are more energy-efficient — and, hence, more sustainable — than VMs.
Conclusion
Although most ITOps teams know that running applications inside containers is more efficient, they may put off the migration to containers because they don't have time to refactor applications or learn complex container technology.
But in failing to embrace the more efficient technology, these teams end up with technical debt. Not only does that bloat costs and create operational inefficiencies (like slower startup times for VMs as opposed to containers), but it also has a clearly negative impact on sustainability in the form of higher energy consumption.
Christopher Tozzi is a technology analyst with subject matter expertise in cloud computing, application development, open source software, virtualization, containers and more. He also lectures at a major university in the Albany, New York, area. His book, "For Fun and Profit: A History of the Free and Open Source Software Revolution," was published by MIT Press.