Five Best Practices for Managing Remote Data Center Resources

The key point to remember is that resources are always finite and can become a costly expense when they’re not managed properly.

Bill Kleyman, CEO and Co-Founder

November 14, 2016


Brought to You by The WHIR

Today’s IT infrastructure has evolved from a single localized environment into a distributed data center architecture. Administrators can use remote colocation facilities to expand their existing environments and accomplish more business-related IT tasks. Organizations are using cloud technologies and dedicated WAN links to grow their existing data centers and to tap outside resources for DR purposes, expansion, additional user workloads, or even testing and development.

When these resources become available, the tendency is to consume them all. When an environment is local, it is relatively easy to manage and keep an eye on existing resources within the data center.

However, what happens when branches or other remote environments require monitoring as well? The key point to remember is that resources are always finite and can become a costly expense when they’re not managed properly.

Remote Data Center Resource Best Practices

Much like working with a local data center environment, administrators must continuously monitor their infrastructure to ensure optimal performance. There are five considerations to keep in mind when working with remote data centers, though many of them will resemble those for local environments.

Planning

Prior to deploying any environment in the cloud or remotely, considerable planning must happen with all team members involved. For example, even if the remote data center is only hosting a testing environment, project managers must still include the appropriate teams to help build and drive the project. Active Directory authentication, storage allocation, and other vital resources may be spread across various teams. Without proper planning an environment may still be launched, but it will probably be mismanaged. When planning excludes specific team members, administrators quickly lose sight of potential inefficiencies within their environment. A good example is excluding the security team just because the environment is deemed “non-critical.”

A high level of preparedness must be in place, since access to corporate workloads is delivered over the WAN. Even in a low-priority environment, all necessary considerations and planning steps must be taken.

If an organization is planning on using a service provider for a pay-as-you-go data center model, take the time to develop an appropriate SLA. This is a crucial part of the planning process that unfortunately often goes overlooked. Take the time to understand uptime requirements, service metrics, and how overages will be charged. The key here is to make sure the environment makes sense and continues to work in favor of the organization.
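When negotiating uptime requirements, it helps to translate an SLA percentage into a concrete annual downtime budget. The short Python sketch below shows the arithmetic; the SLA tiers are illustrative and not tied to any particular provider:

```python
# Translate an SLA uptime percentage into an annual downtime budget.
# The SLA tiers below are illustrative, not from any specific provider.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_budget_hours(uptime_pct: float) -> float:
    """Maximum allowable downtime per year for a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {downtime_budget_hours(sla):.2f} hours/year")

# 99.0%  -> 87.60 hours/year
# 99.9%  ->  8.76 hours/year
# 99.99% ->  0.88 hours/year
```

The difference between tiers is dramatic: a 99% guarantee permits more than three and a half days of downtime per year, which may be fine for a test environment but not for production workloads.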

Managing Resources

Over time, resource management has become easier with better tools and more visibility into a given environment. Nevertheless, managing these resources is still a crucial part of remote data center management. Visibility into the remote data center is vital and must be approached proactively. There are two ways to look at remote data center resource management:

  • Controlled: In this situation, an organization has direct control over its data center and must take responsibility for managing the environment. In these cases, engineers must use remote data center management tools to observe and proactively act upon the needs of their infrastructure.

  • Service-driven: Oftentimes an organization will outsource an entire branch data center to a third-party vendor and allow it to manage the resources. Even in these scenarios it’s still very important to keep an eye on existing workloads. Service provider contracts will have stipulations covering overages on RAM, CPU, and WAN usage. Proper workload and VM management will help prevent the additional costs of going over on resources, as the sketch following this list illustrates.
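To make the overage risk concrete, here is a minimal Python sketch that estimates a month’s overage charges against a contract. All allowances, usage figures, and rates are hypothetical; real contracts define their own units and pricing:

```python
# Hypothetical overage estimate for a service-provider contract.
# Allowances, usage, and rates are illustrative only; real contracts vary.

contracted   = {"ram_gb": 256,  "vcpu": 64,    "wan_gb": 5000}
used         = {"ram_gb": 310,  "vcpu": 70,    "wan_gb": 6200}
overage_rate = {"ram_gb": 4.00, "vcpu": 12.00, "wan_gb": 0.05}  # $ per unit over

total = 0.0
for resource, limit in contracted.items():
    over = max(0, used[resource] - limit)
    charge = over * overage_rate[resource]
    total += charge
    print(f"{resource}: {over} over -> ${charge:.2f}")

print(f"Total overage this month: ${total:.2f}")
```

Even modest overruns across several resource categories add up quickly, which is why keeping an eye on outsourced workloads still pays off.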

Using Native Tools

We’ve looked at the importance of planning a remote environment and managing its resources. Now it’s important to look at how this can be accomplished. Modern data center design has grown to rely heavily on virtualization. Hypervisor technology has matured greatly over the past few years, to the point where granular information is available at the user’s fingertips. Native hypervisor tools provide powerful, proactive features capable of granular visibility into an environment.

Alerts can be set up per host or per VM depending on the needs of the environment. It is always important to monitor aspects beyond just memory: setting up alerts for storage, CPU, and networking is also a very important consideration.
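As an illustration of the idea, here is a minimal per-VM threshold check in Python. The get_vm_metrics function is a hypothetical placeholder for whatever your hypervisor’s SDK or monitoring agent actually exposes, and the thresholds are illustrative:

```python
# Minimal per-VM threshold-alerting sketch.
# get_vm_metrics() is a hypothetical stand-in for a real hypervisor API;
# the threshold values are illustrative, not recommendations.

THRESHOLDS = {"memory_pct": 90, "cpu_pct": 85, "storage_pct": 80, "net_mbps": 900}

def get_vm_metrics(vm_name: str) -> dict:
    """Placeholder: return current utilization figures for a VM."""
    return {"memory_pct": 93, "cpu_pct": 40, "storage_pct": 62, "net_mbps": 120}

def check_vm(vm_name: str) -> list:
    """Return a human-readable alert for each metric over its threshold."""
    metrics = get_vm_metrics(vm_name)
    return [
        f"{vm_name}: {metric} at {value} exceeds threshold {THRESHOLDS[metric]}"
        for metric, value in metrics.items()
        if value > THRESHOLDS[metric]
    ]

for alert in check_vm("branch-sql-01"):
    print(alert)  # in practice, route to email, paging, or a monitoring system
```

Native hypervisor consoles implement this same pattern with richer triggers and notification options; the sketch simply shows the logic behind a per-VM alert.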

Being aware of how resources are allocated and used within a remote data center helps not only in preventing downtime; it also helps prevent the under- or over-allocation of resources. When resources are managed properly, administrators will know how much they can assign to each workload, saving money by accurately sizing their VMs.
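One simple way to turn monitoring data into right-sized VMs is to take the observed peak usage and add a headroom margin. The sketch below assumes a 25% headroom factor, which is an illustrative rule of thumb rather than a standard:

```python
# Right-size a VM from observed peak usage plus headroom.
# The 25% headroom factor is an illustrative rule of thumb, not a standard.

import math

HEADROOM = 1.25  # keep 25% of capacity above the observed peak

def right_size(peak_vcpu_used: float, peak_ram_gb_used: float) -> dict:
    """Suggest a VM allocation from observed peak usage."""
    return {
        "vcpu": math.ceil(peak_vcpu_used * HEADROOM),
        "ram_gb": math.ceil(peak_ram_gb_used * HEADROOM),
    }

# A VM allocated 16 vCPU / 64 GB that actually peaks at 4.2 vCPU / 18 GB:
print(right_size(4.2, 18))  # {'vcpu': 6, 'ram_gb': 23} -- far below the allocation
```

In a pay-as-you-go remote data center, reclaiming that gap between allocated and observed usage translates directly into lower monthly costs.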

Using Third-Party Tools

Non-native, third-party tools give you the ability to granularly examine remote data centers and view how their resources are being used — for example, drilling into one specific remote location to see its current resource usage as well as its historical statistics.

Further capabilities of these toolsets allow the administrator to view multiple sites at the same time. This type of visibility creates an environment where resources are properly managed and distributed as needed.

Utilizing third-party tools can truly expand the capabilities of a distributed environment. Oftentimes, highly dispersed data center environments require this type of granular visibility where native hypervisor and monitoring tools fall short. Since every environment is unique, it’ll be up to the IT managers to truly decide which approach is best for their distributed data center infrastructure.

WAN Management

Every well-run organization will have considered its WAN links when designing a distributed environment. Since the goals of each organization are unique, bandwidth requirements will always depend on the demands of the infrastructure. When working with a provider, administrators must work within their SLA to ensure optimal performance. Prior to moving to any WAN provider, it’s important to know and understand the demands of the distributed environment.

This means testing existing workloads and their bandwidth requirements. SQL clusters, SAN-to-SAN replication, and application networking are all considerations when choosing the right amount of bandwidth between locations.
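A quick back-of-the-envelope calculation can translate one of these workloads into a minimum link size. The figures below — nightly change volume, replication window, and a link-efficiency factor — are illustrative assumptions:

```python
# Back-of-the-envelope WAN sizing for SAN-to-SAN replication.
# Data volume, window, and efficiency factor are illustrative assumptions.

def required_mbps(data_gb: float, window_hours: float, efficiency: float = 0.7) -> float:
    """Minimum link speed (Mbps) to move data_gb within window_hours.

    efficiency discounts the link for protocol overhead and contention.
    """
    megabits = data_gb * 8 * 1000          # GB -> megabits (decimal units)
    seconds = window_hours * 3600
    return megabits / seconds / efficiency

# Replicate 500 GB of nightly changes within a 6-hour window:
print(f"{required_mbps(500, 6):.0f} Mbps minimum")  # ~265 Mbps
```

Running this kind of estimate for each cross-site workload, then summing the concurrent ones, gives a defensible starting point for sizing the link before committing to a provider.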

Although each environment will have its own needs, there is a good set of best practices that can be followed for each type of site.

Regardless of the type of distributed infrastructure, administrators must always be prepared to manage their data center resources. As environments continue to evolve, it will be up to the IT managers to know and understand the type of visibility that is required to keep their organization functioning properly.

This article was originally posted at The WHIR.

About the Author

Bill Kleyman

CEO and Co-Founder, Apolo

Bill Kleyman has more than 15 years of experience in enterprise technology. He also enjoys writing, blogging, and educating colleagues about tech. His published and referenced work can be found on Data Center Knowledge, AFCOM, ITPro Today, InformationWeek, Network Computing, TechTarget, Dark Reading, Forbes, CBS Interactive, Slashdot, and more.
