Strategies for the Containerized Data Center
Unlike traditional data centers where there is one core layer and all of the resources are managed as a whole, modular data centers use a distributed core architecture. These containerized data centers can deliver flexibility, reliability, and scalability at a lower cost.
September 8, 2011
Bala Pitchaikani is Vice President, Product Line Management, at Force10 Networks (recently acquired by Dell), where he is focused on delivering open, standards-based products and solutions for high-performance data centers.
Whether they are seeking to distribute their data center assets to geographies with lower power and cooling costs or to deliver computing resources that better support local populations, many large organizations are looking at containerized data centers as a solution. But building a containerized data center isn’t the same as building a conventional data center. In this article, we’ll explore the key differences between containerized and conventional data centers, and look at the requirements for containerized data centers.
Containerized data centers vs. traditional data centers
A containerized data center is a self-contained module consisting of compute, storage, and networking resources, designed to fit into a shipping container. Local experts build and configure from one to several racks of equipment inside a container, and the container is then shipped to a remote location to serve a specific set of customers, or to take advantage of lower space, power, and cooling costs. The containers can be placed in parking garages or other properties where power can be supplied.
Unlike traditional data centers where there is one core layer and all of the resources are managed as a whole, modular data centers use a distributed core architecture.
There are several key advantages to a containerized data center:
Scalability – A containerized data center approach allows the total computing power to be scaled up by simply adding more containers. Each module plugs into the architecture like a Lego block. You don’t have to schedule a maintenance window to add capacity (or provision unused and expensive spare capacity) as you would in a traditional data center; that downtime can cost thousands or even hundreds of thousands of dollars, depending on which applications you’re running. In a high-frequency trading application or a Web 2.0 gaming portal, for example, downtime is extremely expensive.
Management – Unlike traditional data centers, a containerized data center features lights-out management – each separate ‘virtualized’ container is managed remotely from the company’s network operations center. This is much more cost-effective than having management personnel on site.
Troubleshooting – In a modular data center approach, faults are confined to each data center module, so it is easier to isolate and diagnose problems. Containerized data centers are designed to be maintained by simply replacing a failed server, storage array, or (partial) rack, so less expertise is required to perform on-site maintenance at a remote location than in a conventional data center.
Reliability – Because the containerized approach creates and leverages several self-contained data center modules, a total failure of the core in one module does not bring down the entire infrastructure, as it would in a traditional data center.
Requirements for building a containerized data center
Since gaining the benefits of a containerized data center approach relies heavily on a distributed core and manageability, the focal point for containerized data center architecture is the top-of-rack switch. The top-of-rack switch can provide rack-, server-, and storage-level management and administration capabilities, giving the flexibility needed to serve various customers from the container.
There are several requirements for building a containerized data center:
Distributed Core – The data center must have a distributed core so it can be extended in an elastic fashion while providing a resilient, non-blocking core. Key technologies for delivering a distributed core are an ultra-scalable Layer 3 architecture, including support for 64-way equal-cost multi-path routing (ECMP), and/or TRILL-like Layer 2 architectures. (A simplified illustration of ECMP path selection appears after this list.)
Open Standards, Open Ecosystems – Adherence to open standards is a fundamental requirement for a containerized data center, because the user needs the flexibility to build and replace components by mixing and matching if necessary. The requirement for standards adherence extends beyond Ethernet and Fibre Channel to specific protocols such as EVB (Edge Virtual Bridging) and TRILL, while conforming to usage models defined by consortia such as OpenStack and the Open Data Center Alliance.
Self-healing and self-management – The modular data center should be able to automatically work around failed components such as server blades or storage arrays. Such a failure should not bring down the whole data center; it should be absorbed by the overall infrastructure until the bad component can be replaced. The top-of-rack switch should detect these failures and steer traffic around them. (A rough sketch of this quarantine-and-continue behavior follows the list.)
Self-orchestration – The top-of-rack switch should have the ability to provision groups of servers or storage arrays and assign them to specific customers. Using a drag-and-drop interface, the remote management team should be able to provision any number of physical and virtual servers and a specific number of terabytes of storage, and make them available to one or more customers. (A toy allocator illustrating this kind of provisioning is sketched after the list.)
Appliance-level services – The containerized data center must have discrete services such as firewalling, load balancing, and network optimization. With an application-aware top-of-rack switch, these functions can be provided directly on the switch rather than on separate appliances.
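To make the distributed-core requirement more concrete, here is a minimal sketch of how 64-way ECMP spreads traffic: a switch hashes each flow’s 5-tuple and uses the result to pick one of the equal-cost uplinks, so different flows fan out across the fabric while any single flow stays on one path. The code is illustrative Python with invented names (NEXT_HOPS, ecmp_next_hop), not a representation of any particular switch’s implementation.

```python
import hashlib

# Hypothetical table of equal-cost uplinks toward the distributed core (up to 64-way ECMP).
NEXT_HOPS = [f"core-uplink-{i}" for i in range(64)]

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops=NEXT_HOPS):
    """Pick one equal-cost path by hashing the flow's 5-tuple.

    Hashing keeps every packet of a given flow on the same path (avoiding
    reordering) while spreading different flows across all available paths.
    """
    flow_key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(flow_key).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

# Two different flows to the same destination usually land on different uplinks.
print(ecmp_next_hop("10.0.1.5", "10.0.9.7", "tcp", 43122, 443))
print(ecmp_next_hop("10.0.1.6", "10.0.9.7", "tcp", 55012, 443))
```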
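The self-healing requirement can be pictured as a health-check loop like the one below. This is a toy illustration, assuming a hypothetical probe() that a real system would replace with checks against a blade’s management controller; the point is simply that a failed component is quarantined while the rest of the container keeps serving traffic until someone swaps the bad unit.

```python
# Hypothetical inventory for one rack inside the container.
servers = {"blade-01": "healthy", "blade-02": "healthy", "blade-03": "healthy"}

def probe(server_id):
    """Placeholder health probe; a real system would check the blade's management port."""
    return server_id != "blade-02"  # simulate one failed blade

def heal(inventory):
    """Quarantine failed components instead of failing the whole container."""
    for server_id, state in inventory.items():
        if state == "healthy" and not probe(server_id):
            inventory[server_id] = "quarantined"  # removed from load-balancing pools
            print(f"{server_id} quarantined; workload shifts to the remaining blades")
    return sorted(s for s, state in inventory.items() if state == "healthy")

print("serving traffic from:", heal(servers))
```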
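Finally, the self-orchestration requirement boils down to carving a container’s free pool of servers and storage into per-customer allocations. The sketch below is a stand-in data model with made-up names (Container, provision); whatever drag-and-drop interface a real management system exposes would ultimately perform this kind of bookkeeping.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    """Free resources in one containerized module (toy model)."""
    free_servers: int = 96
    free_storage_tb: int = 500
    assignments: dict = field(default_factory=dict)

    def provision(self, customer, servers, storage_tb):
        """Assign a group of servers and a block of storage to one customer, if available."""
        if servers > self.free_servers or storage_tb > self.free_storage_tb:
            raise ValueError("not enough free capacity; deploy another container")
        self.free_servers -= servers
        self.free_storage_tb -= storage_tb
        self.assignments[customer] = {"servers": servers, "storage_tb": storage_tb}
        return self.assignments[customer]

module = Container()
print(module.provision("customer-a", servers=24, storage_tb=100))
print(module.provision("customer-b", servers=16, storage_tb=50))
print("remaining:", module.free_servers, "servers and", module.free_storage_tb, "TB")
```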
While there is a level of comfort provided by having all of a company’s data center assets in one building where they can be closely managed, containerized data centers can deliver greater flexibility, reliability, and scalability at a lower cost.