Latency, Bandwidth, Disaster Recovery: Selecting the Right Data Center

Selecting the right data center means evaluating not only the physical elements of the facility but also the workloads it will deliver.

Bill Kleyman, CEO and Co-Founder

September 1, 2015

Inside a CenturyLink data center. (Photo: CenturyLink)

When selecting a colocation data center, administrators must plan their deployment and strategy thoroughly. That means involving more than just the facilities team in the planning stages: the selection process has to weigh not only the physical elements of the facility but also the workloads that will be delivered.

Are you working with web applications? Are you delivering virtual desktops to users across the nation? Several key considerations hinge on the type of data or applications an organization is trying to deliver through the data center.

Network Bandwidth and Latency: As more traffic moves across the internet, demand grows for more bandwidth and lower latency. As discussed earlier, it’s important to have your data reside close to your users as well as to the applications or workloads being accessed. And where traffic may not have fluctuated much in the past, current demands are far more variable.

  • Bandwidth Bursts. Many providers now offer bandwidth bursting, which allows the administrator to temporarily increase the amount of bandwidth available to the environment based on immediate demand. This is useful for seasonal or highly cyclical industries, where certain periods of business operation require more bandwidth to deliver the data. In those cases, look for partners who can dynamically increase that capacity and then de-provision the resources once they are no longer needed.

  • Network Testing. Always test both your own network and the provider’s. Examine their internal speeds and see how your data will behave on that network. That also means looking at the various ISP and connectivity options the colocation provider offers. A weak internal network often cannot handle a large organization’s big data needs even when the facility’s internet connection is fast, and without good QoS and ISP segmentation some data centers can become saturated. Look for partners with good, established carrier connections and guaranteed speeds (a simple latency probe sketch follows this list).

  • Know Your Applications. One of the best ways to gauge data requirements is to know and understand the underlying application or workload. Deployment best practices dictate a clear understanding of how an application functions, the resources it requires, and how well it operates on a given platform. By designing around the application’s needs, there is less chance of assigning improper resources to that workload (a basic profiling sketch also follows this list).
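
To make the network-testing advice concrete, here is a minimal latency probe: a sketch assuming Python 3 and test endpoints you supply yourself. The hostnames below are placeholders, and a serious evaluation would pair something like this with purpose-built tools such as iperf.

```python
import socket
import statistics
import time

# Placeholder endpoints inside candidate colocation facilities;
# substitute hosts the provider allows you to test against.
CANDIDATES = {
    "colo-east.example.com": 443,
    "colo-west.example.com": 443,
}

def tcp_connect_latency(host, port, samples=10, timeout=3.0):
    """Time repeated TCP handshakes as a rough round-trip proxy."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            continue  # failed attempts simply don't contribute a sample
        results.append((time.perf_counter() - start) * 1000.0)
    return results

for host, port in CANDIDATES.items():
    ms = tcp_connect_latency(host, port)
    if ms:
        print(f"{host}: median {statistics.median(ms):.1f} ms "
              f"over {len(ms)} successful connects")
    else:
        print(f"{host}: unreachable")
```

Run it from each office or user region that matters. Timing TCP handshakes only approximates round-trip latency, but it is enough to compare candidate facilities before commissioning formal testing.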

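In the same spirit, knowing an application can begin with simply measuring it. The sketch below assumes the third-party psutil package and a process name of your choosing (the name here is hypothetical); it samples CPU and resident memory so colocation resources can be sized from observed behavior rather than guesswork.

```python
import time

import psutil  # third-party: pip install psutil

TARGET = "myapp"  # hypothetical process name; substitute your workload

def sample_process(name, interval=1.0, samples=60):
    """Print total CPU and resident memory for all processes matching name."""
    for _ in range(samples):
        cpu, rss = 0.0, 0
        for proc in psutil.process_iter(["name", "cpu_percent", "memory_info"]):
            if proc.info["name"] == name:
                cpu += proc.info["cpu_percent"] or 0.0
                rss += proc.info["memory_info"].rss
        print(f"{name}: {cpu:.1f}% CPU, {rss / 2**20:.0f} MiB resident")
        time.sleep(interval)

sample_process(TARGET)
```

Sampled over a representative business cycle, numbers like these are exactly what a colocation conversation about CPU, memory, and bandwidth sizing should start from.
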
Balancing the Workload, Continuity and Disaster Recovery: Selecting a colocation provider goes well beyond choosing among their internal features and offerings. Companies looking to move to a provider platform must know what they are deploying, understand the continuity metrics of their infrastructure, and incorporate disaster recovery into their planning.

  • Workload Balancing. When working with a data center provider, design your infrastructure around a well-balanced workload model. This means that no single server is over-provisioned and that each physical host can absorb the workload of another host should an event occur. Good workload balancing ensures that no one system is ever over-burdened, and this is where a good colocation partner can help: monitoring tools can often see inside the workload to verify that the physical server running an application is operating optimally, and some providers offer dynamic workload balancing as a feature. If that is a requirement, make sure to have that conversation with your colocation partner (a placement sketch follows this list).

  • Business Continuity. In a business continuity model, the idea is to keep operations running optimally, without disruptions to the general infrastructure. One of the best ways to understand business continuity metrics is, again, to conduct a business impact analysis (BIA). With documentation showing which workloads or servers are most critical, measures can be taken to ensure maximum uptime.

  • Disaster Recovery. A core function of many colocation providers is the ability to act as a major disaster recovery component. When working with a partner, select a design that can withstand a major failure while still recovering systems quickly. There is really no way to tell which components are more critical than others without conducting a BIA; without that assessment, an organization can miss vital pieces and severely lessen the effectiveness of a DR plan. Once the DR components are established, an organization can work with a colocation provider to develop a plan that ensures maximum uptime for those pieces, which is where clear communication and good DR documentation really help. The goal is to recognize that a major event has occurred and to recover from it as quickly and efficiently as possible. A good DR plan will have a price associated with it, but from a business uptime perspective, it’s worth it (a sketch translating BIA output into a recovery order follows this list).
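
To make the balancing idea testable, here is a minimal sketch of least-loaded placement with an N+1 failover check; the host capacities and workload demands are invented for illustration, and real schedulers or provider monitoring suites do this continuously rather than once.

```python
# Greedy least-loaded placement plus an N+1 survivability check.
# Capacities and demands are illustrative units (e.g., vCPU equivalents).
HOSTS = {"host-a": 32, "host-b": 32, "host-c": 32}
WORKLOADS = {"web": 10, "db": 14, "cache": 6, "queue": 8, "reports": 5}

def place(hosts, workloads):
    load = {h: 0 for h in hosts}
    placement = {}
    # Placing the largest workloads first helps the greedy heuristic.
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        host = min(load, key=lambda h: load[h] / hosts[h])
        if load[host] + demand > hosts[host]:
            raise RuntimeError(f"no capacity for {name}")
        load[host] += demand
        placement[name] = host
    return placement, load

def survives_single_failure(hosts, load):
    """Check that any one host's load fits in the others' spare capacity.

    Aggregate check only; it ignores per-workload packing constraints.
    """
    return all(
        load[failed] <= sum(hosts[h] - load[h] for h in hosts if h != failed)
        for failed in hosts
    )

placement, load = place(HOSTS, WORKLOADS)
print(placement)
print("N+1 capable:", survives_single_failure(HOSTS, load))
```

A static sketch like this turns “no one host over-burdened” from a slogan into a property you can check before and after any move.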

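To connect the continuity and DR items above, here is a minimal sketch of how BIA output can drive recovery priorities; every system name, criticality rank, and recovery objective below is hypothetical, standing in for figures a real assessment would produce.

```python
from dataclasses import dataclass

@dataclass
class BiaEntry:
    system: str
    criticality: int   # 1 = most critical to the business
    rto_hours: float   # recovery time objective: how fast it must be back
    rpo_hours: float   # recovery point objective: tolerable data loss

# Hypothetical BIA results; a real assessment produces these figures.
BIA = [
    BiaEntry("order-db", criticality=1, rto_hours=1, rpo_hours=0.25),
    BiaEntry("web-frontend", criticality=2, rto_hours=4, rpo_hours=4),
    BiaEntry("reporting", criticality=3, rto_hours=48, rpo_hours=24),
]

def recovery_order(entries):
    """Restore the most critical, tightest-RTO systems first."""
    return sorted(entries, key=lambda e: (e.criticality, e.rto_hours))

for rank, e in enumerate(recovery_order(BIA), start=1):
    print(f"{rank}. {e.system}: restore within {e.rto_hours}h, "
          f"max data loss {e.rpo_hours}h")
```

A table like this is also the natural artifact to hand a colocation partner when negotiating the DR plan and the price attached to it.
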
Various technologies affect how well a data center performs. The distance data has to travel and the amount of bandwidth a colocation provider supplies can mean the difference between a great user experience and a failed colocation deployment. Cloud computing has created a greater dependency on WAN technologies, and virtualization has enabled significantly more powerful servers and denser storage. With these new technologies come new considerations around how data is stored and delivered. When selecting the right colocation provider, make sure their infrastructure is capable of growing with the needs of your organization.

About the Author

Bill Kleyman

CEO and Co-Founder, Apolo

Bill Kleyman has more than 15 years of experience in enterprise technology. He also enjoys writing, blogging, and educating colleagues about tech. His published and referenced work can be found on Data Center Knowledge, AFCOM, ITPro Today, InformationWeek, Network Computing, TechTarget, Dark Reading, Forbes, CBS Interactive, Slashdot, and more.
