
Navigating Data Center Performance Challenges With KPIs (Part 1)

The path to effective infrastructure management begins with global visibility and metrics, writes David Appelbaum of Sentilla. In Part 1 of a two-part series, he examines which Key Performance Indicators (KPIs) data center managers need to measure.

Industry Perspectives

February 12, 2013

4 Min Read

David Appelbaum is vice president of marketing at Sentilla Corporation, and has worked in software marketing roles at Borland, Oracle, Autonomy, Salesforce.com, BigFix, and Act-On.


DAVID APPELBAUM
Sentilla

Today's data center is characterized by sprawling, complex, difficult-to-understand infrastructure that, once installed, never leaves. Data center professionals must address constant demands for new services and rapid data growth, along with rising demand and escalating availability requirements. To make smart decisions about their infrastructure, they need visibility into what's happening across all their data centers and colocation facilities, from individual assets up to the whole estate. The path to effective infrastructure management begins with global visibility and metrics.

A survey of 5,000 data center professionals conducted in the second half of 2012 highlighted the Key Performance Indicators (KPIs) and metrics that data center professionals need for smart infrastructure decisions. It also tracked the kinds of tools respondents were using, and how effective those tools were at delivering meaningful KPIs.

Here is the first group of findings from this survey, with the rest to be unveiled in a second post coming soon.

Good Metrics Are Essential for Flight

An experienced pilot is comfortable flying a small plane by sight in good weather. But put that same pilot in a jumbo jet in foggy weather, and he or she will need cockpit instrumentation, a flight plan, and air traffic control support.

Today's data center is more like the jumbo jet in the fog than the small plane. There are many moving parts. New technologies like virtualization add layers of abstraction to the data center. Applications, and especially data, grow exponentially. Power, space, and storage are expensive and finite, yet you need to keep all the business-critical applications running.

To plan for tomorrow's data center infrastructure, you need visibility into what's happening today. Relevant metrics show not only what's happening, but also how it relates to other parts of the data center and to your costs (a simple calculation sketch follows this list):

  • What’s the ongoing operational and management cost of one application versus another?

  • What’s your data center’s power capacity versus utilization?

  • Do you have enough compute, network, storage, power, and space to add this new application?

  • Which applications aren't using much of their capacity, and can you reclaim capacity by virtualizing them?
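
To make those questions concrete, here is a minimal sketch, in Python, of how asset-level figures might be rolled up into capacity-versus-utilization KPIs. The rack records, field names, and thresholds are hypothetical placeholders, not output from any particular DCIM or monitoring product; in practice the inputs would come from whatever inventory and power-monitoring data you can export.

# Minimal sketch of the capacity-versus-utilization questions above.
# All field names and figures are hypothetical placeholders.

racks = [
    {"id": "A01", "power_capacity_kw": 8.0, "power_draw_kw": 5.6, "ru_total": 42, "ru_used": 31},
    {"id": "A02", "power_capacity_kw": 8.0, "power_draw_kw": 2.1, "ru_total": 42, "ru_used": 12},
]

def utilization(used, capacity):
    """Return utilization as a percentage, guarding against zero capacity."""
    return 100.0 * used / capacity if capacity else 0.0

total_power_capacity = sum(r["power_capacity_kw"] for r in racks)
total_power_draw = sum(r["power_draw_kw"] for r in racks)
total_ru = sum(r["ru_total"] for r in racks)
used_ru = sum(r["ru_used"] for r in racks)

print(f"Power: {utilization(total_power_draw, total_power_capacity):.1f}% of capacity in use")
print(f"Rack space: {utilization(used_ru, total_ru):.1f}% of rack units occupied")

# Lightly loaded racks are candidates for consolidation or virtualization.
for r in racks:
    if utilization(r["power_draw_kw"], r["power_capacity_kw"]) < 40:
        print(f"Rack {r['id']} is lightly loaded and may be a consolidation candidate")

The same roll-up pattern extends to compute, network, and storage, which is what lets you answer the "do we have room for this new application?" question before committing to it.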

In the absence of good information, you have to make decisions based on hunches, trends, and incomplete, disconnected data. The safest choice may be over-provisioning. This kind of 'flying blind' is a risky and expensive way to run a data center.

The Heart of the Problem: Capacity Limits

While over-provisioning might seem like the safest course of action for preventing performance and availability problems, it can also lead to serious capacity shortages in other places. For example, if you keep adding servers to an application, you may run out of rack space and hit power limitations.

The aforementioned 2012 survey indicates that many data centers are running into capacity constraints. Nearly a third of respondents had used more than 70 percent of their available rack space.

Many face storage constraints as well: 40 percent of respondents had used more than half of their available storage, and another 36 percent did not know their disk space utilization.

Many organizations find themselves in a cycle in which they’re constantly planning and building out new data centers – without having optimized the efficiency of the existing infrastructure.

Cloud technologies seem to offer an answer, but in reality cloud computing only moves the capacity problem from one location to another. To make intelligent decisions about what to move to the cloud, you need insight into application utilization, capacity, and cost: whether an application resides in your data center, a private hosted cloud, or the public cloud, you are still paying for capacity. A rough cost comparison is sketched below.
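
To make that point concrete, here is a minimal sketch, in Python, of a per-application monthly cost comparison. Every figure and field name is a hypothetical placeholder (none of it comes from the survey or from Sentilla); in practice you would substitute your own utilization measurements and pricing data.

# Hypothetical monthly cost comparison for a single application.
# All figures are placeholders, not survey results.

on_prem_monthly = {
    "power_and_cooling": 220.0,   # application's share of facility power, USD
    "space_and_racks":    90.0,   # amortized rack and floor space, USD
    "admin_and_licenses": 400.0,  # ops labor and software, USD
}

cloud_monthly = {
    "instances": 610.0,  # compute for the same workload, USD
    "storage":   120.0,
    "egress":     45.0,
}

avg_utilization = 0.35  # fraction of provisioned on-prem capacity actually used

on_prem_total = sum(on_prem_monthly.values())
cloud_total = sum(cloud_monthly.values())

# Over-provisioning shows up when you price the capacity you actually use.
print(f"On premises: ${on_prem_total:,.0f}/month for capacity that is {avg_utilization:.0%} utilized")
print(f"Cloud:       ${cloud_total:,.0f}/month for the same workload")

The utilization figure is the point of the exercise: an on-premises application that uses only a third of its provisioned capacity is effectively paying for the unused two-thirds as well, and that is exactly the kind of insight the comparison depends on.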

How Do You Address Capacity Constraints?

Data center teams are employing several strategies to address these capacity constraints:

  • Migrate applications to cloud resources (either private or public clouds). Moving to a public cloud platform such as Amazon Web Services may require re-architecting the application.

  • Consolidate applications, servers and even data centers to gain efficiencies. The 2012 survey shows that there’s still plenty of room for more server virtualization in most data centers. More than half of respondents had virtualized 50 percent or less of their data center applications.

To make smart decisions about where to invest your resources and efforts, you need insight into what’s happening in the data center today. This includes information about capacity, utilization, and cost. Stay tuned for Part 2 to see the rest of the findings.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
