Allocating Storage to VMs and Extending to Cloud
As your organization grows, find out how to properly manage your storage requirements and how to extend into the cloud
Storage is one of the hottest IT topics today. Acquisitions are happening regularly as more users move to flash and to new types of storage controller ecosystems. We’re seeing powerful hybrid systems emerge, along with a growing push to extend environments to cloud storage. Throughout all of this, organizations must understand how to utilize these new types of storage resources and where they fit in their data centers.
The challenge to virtualization and storage engineers is this: How do you manage and work with all the new storage capabilities? Even more important, how can you dynamically manage workload storage requirements within a virtual environment?
Understanding Virtual Machine Storage Requirements
It goes without saying that planning is everything, and everything depends on the unique variables of the environment. Since each data center is different, certain questions must be answered prior to a storage initiative:
The business needs to understand the scope of virtualization in the environment. Will the majority of its systems be virtualized, or will only a few VMs be running?
More users, more services and more applications will all impact computing resources, and you’ll need to accommodate that growth. What are your business goals for the environment? That is, have you planned for the future?
Once a plan is established, the engineering team needs to understand what type of storage solution they will be rolling out. Some VMs require a set parameter for their storage requirements, while others can operate more dynamically. Consider these two options:
Pre-allocate the entire storage for the virtual disk upon creation
In this scenario, the virtual disk is deployed either split across a collection of flat files (typically 2GB each), collectively called a split flat file, or as a single large flat file. The pre-allocated storage architecture is also commonly known as “thick-provisioning.”
Dynamically grow the storage on demand
Here, the virtual disk can still be implemented using split or single files, with one core exception: storage is allocated on demand. This type of dynamic-growth storage is also known as “thin-provisioning” (a term that both VMware and Citrix helped popularize). A minimal sketch of the difference between the two options follows.
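To make the difference concrete, here is a minimal, vendor-neutral sketch in Python. The Datastore and VirtualDisk classes are hypothetical illustrations, not any hypervisor’s API: a thick disk reserves its full provisioned size at creation time, while a thin disk only consumes what the guest has actually written.

```python
from dataclasses import dataclass, field

GB = 1024 ** 3  # bytes per gigabyte


@dataclass
class VirtualDisk:
    name: str
    provisioned: int          # size presented to the guest, in bytes
    thin: bool = False        # thin-provisioned disks grow on demand
    written: int = 0          # bytes the guest has actually written

    @property
    def consumed(self) -> int:
        """Space the disk actually occupies on the data-store."""
        return self.written if self.thin else self.provisioned


@dataclass
class Datastore:
    capacity: int
    disks: list[VirtualDisk] = field(default_factory=list)

    def add_disk(self, disk: VirtualDisk) -> None:
        # A thick disk must fit entirely at creation time;
        # a thin disk only needs room for what has been written so far.
        if self.free < disk.consumed:
            raise RuntimeError(f"not enough free space for {disk.name}")
        self.disks.append(disk)

    @property
    def consumed(self) -> int:
        return sum(d.consumed for d in self.disks)

    @property
    def free(self) -> int:
        return self.capacity - self.consumed


if __name__ == "__main__":
    ds = Datastore(capacity=500 * GB)
    ds.add_disk(VirtualDisk("db01", provisioned=200 * GB))       # thick: reserves 200 GB now
    ds.add_disk(VirtualDisk("web01", provisioned=200 * GB,
                            thin=True, written=20 * GB))         # thin: occupies only 20 GB
    print(f"free: {ds.free / GB:.0f} GB")                        # 280 GB, not 100 GB
```

The key consequence is visible in the final line: with thin-provisioning, the data-store reports far more free space than the total size promised to the VMs, which is exactly why the monitoring practices below matter.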
Storage is Finite, so Plan it Out
Almost any experienced IT engineer will confirm that storage is a valuable asset in any environment, and it is one of the biggest problems facing any virtual deployment. To be clear, the issue is not a lack of storage; it’s how quickly the available capacity gets consumed. Oftentimes, an IT manager will purchase several terabytes of disk space only to see it used up very quickly. After about three months of active usage, engineers start to notice that almost 70 percent of the originally purchased space has been utilized. So, what happened?
Once storage becomes available, it tends to be used up very quickly, and needed space often gets allocated without much planning. The point is, if you have a SAN with ample amounts of space, use it wisely and plan out its usage. By knowing and understanding what the VMs and workloads in an environment require, an IT engineer can make the storage infrastructure last much longer and run more efficiently. However, with virtualization constantly growing and the need to migrate old physical servers to VMs, allocating storage becomes a daunting task. This is where powerful hypervisor and virtualization technologies can really help.
VMware, Citrix’s XenServer and Microsoft Hyper-V, for example, ship with sophisticated graphical user interfaces that provide a great deal of information. An administrator can see the connected storage repository, how it is being utilized and the space requirements for each VM. Each new update to these hypervisors expands this storage-link capability to include more vendors, more features and more control over storage directly at the GUI level. In fact, new features like VMware’s vSAN technology take the storage conversation into the software-defined layer.
Using the hypervisor’s own GUI, administrators can now monitor, allocate, and manage their space requirements for all VMs. When thin-provisioning (dynamic storage allocation) is utilized for virtual disks, it’s very important to keep track of the unused space in the storage resource pool or data-store.
Over-allocating disk space becomes a problem when IT engineers are not keeping track of their free storage space. By keeping track of unallocated resources, engineers can apply best practices and either free up space in the existing resource pool or increase the size of that pool before application disruption or downtime occurs. To avoid any system downtime, track space usage over time and set alerts or alarms that will call attention to a pending out-of-space issue.
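One simple way to turn “track space usage over time” into an actionable alert is to project the current growth trend forward and estimate how many days remain before the pool fills. The sketch below is an illustrative assumption rather than any monitoring product’s feature; the sample history and the 30-day warning window are made-up values you would replace with your own data and policy.

```python
from datetime import date

GB = 1024 ** 3


def days_until_full(samples: list[tuple[date, int]], capacity: int) -> float | None:
    """Estimate days until the pool is full from (date, bytes_used) samples.

    Uses a simple average daily growth rate; returns None if usage is flat
    or shrinking, meaning no out-of-space date can be projected.
    """
    if len(samples) < 2:
        return None
    samples = sorted(samples)
    (first_day, first_used), (last_day, last_used) = samples[0], samples[-1]
    elapsed = (last_day - first_day).days
    growth_per_day = (last_used - first_used) / elapsed if elapsed else 0
    if growth_per_day <= 0:
        return None
    return (capacity - last_used) / growth_per_day


if __name__ == "__main__":
    capacity = 10 * 1024 * GB  # hypothetical 10TB pool
    history = [                # hypothetical monthly usage samples
        (date(2024, 1, 1), 6_000 * GB),
        (date(2024, 2, 1), 6_600 * GB),
        (date(2024, 3, 1), 7_300 * GB),
    ]
    remaining = days_until_full(history, capacity)
    if remaining is not None and remaining < 30:   # assumed 30-day warning window
        print(f"WARNING: pool projected to fill in ~{remaining:.0f} days")
    else:
        print(f"OK: ~{remaining:.0f} days of headroom" if remaining else "OK: usage flat")
```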
Remember, dynamic space allocation is nothing new. This feature has been available in most leading hypervisors for a few versions now. However, there are certain best practices to doing this the right way.
Set an alarm for your space requirements
Adding additional space is not difficult; in many cases it can be accomplished with a few mouse clicks. The challenge is knowing how much space there is to allocate and whether the environment is running out. To resolve this problem, an engineer should set alarms within the hypervisor to properly manage thin-provisioning. These alarms can be customized to trigger alerts at certain thresholds so that an IT administrator can take the actions required to prevent an out-of-space issue. Alarms can be set on a data-store for a percentage full as well as a percentage overcommitted.
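Those two metrics translate directly into a couple of ratios. The following vendor-neutral sketch (the check_datastore_alarms helper and its threshold values are assumptions, not a hypervisor API) shows the idea: percentage full compares space actually consumed against capacity, while percentage overcommitted compares the total size promised to all virtual disks against capacity.

```python
GB = 1024 ** 3

# Assumed example thresholds; real values belong in your monitoring policy.
FULL_WARN_PCT = 80.0
OVERCOMMIT_WARN_PCT = 150.0


def check_datastore_alarms(capacity: int, consumed: int, provisioned: int) -> list[str]:
    """Return alarm messages for a data-store; all sizes in bytes.

    consumed     -- space actually written to the data-store
    provisioned  -- total size promised to all virtual disks (thin + thick)
    """
    alarms = []
    pct_full = 100.0 * consumed / capacity
    pct_overcommitted = 100.0 * provisioned / capacity
    if pct_full >= FULL_WARN_PCT:
        alarms.append(f"datastore {pct_full:.0f}% full (threshold {FULL_WARN_PCT:.0f}%)")
    if pct_overcommitted >= OVERCOMMIT_WARN_PCT:
        alarms.append(f"storage {pct_overcommitted:.0f}% overcommitted "
                      f"(threshold {OVERCOMMIT_WARN_PCT:.0f}%)")
    return alarms


if __name__ == "__main__":
    # Hypothetical 2TB datastore: 1.7TB written, 3.5TB promised to thin-provisioned VMs
    for alarm in check_datastore_alarms(2048 * GB, 1740 * GB, 3584 * GB):
        print("ALERT:", alarm)
```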
Document and monitor the environment
Every major hypervisor’s GUI is advanced enough that any IT engineer should be able to look at the storage repository and have a solid idea of where they stand on space. Working with space requirements is a never-ending process that requires attention at all times. Running out of space is not a pleasant issue to deal with, and it can be avoided for the most part by auditing and maintaining the storage environment.
Keep the storage and hypervisor infrastructure updated
Watching over the workload is an important ongoing task – keeping an eye on the storage hardware and hypervisor software is just as vital. New hardware and software releases promise better support and feature sets that help IT engineers manage their environments. Small changes can go a long way in managing space needs.
It’s important to remember that every data center and business is unique, and therefore space requirements can vary widely. However, there are some key best-practice tips and notes of caution that every IT engineer should keep in mind.
Nothing is ever set in stone. Modifying the size of a VM is very common. Some VMs cannot be changed because their space requirements are preset either by the IT manager or by the vendor, but these cases are few. For the most part, a VM running in a storage pool can have its storage space modified, and administrators can add disk space as needed (see the sketch after this list).
Always monitor your VMs. As mentioned earlier, it’s important to know which resources VMs are using at any given moment. Watching VMs perform over time and seeing when storage demands fluctuate allows an engineer to properly distribute resources where they are needed.
Know your workloads. Never assume that an application or workload will always run the same. With service packs, additional users and changes in the overall environment, certain workloads can require more storage at any given time.
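As a concrete illustration of the resize point above, here is a hypothetical pre-flight check (not any hypervisor’s API): growing a thick disk immediately takes the extra space out of the pool, so it must fit in current free space, while growing a thin disk consumes nothing up front but raises the data-store’s overcommitment.

```python
GB = 1024 ** 3


def can_grow_disk(new_size: int, current_size: int, thin: bool,
                  datastore_free: int, datastore_capacity: int,
                  total_provisioned: int) -> tuple[bool, str]:
    """Sanity-check a virtual disk resize before applying it.

    All sizes are in bytes; total_provisioned is the sum of sizes promised
    to every disk on the data-store before the resize.
    """
    if new_size <= current_size:
        return False, "shrinking or no-op resize not handled here"
    delta = new_size - current_size
    if not thin:
        # Thick: the extra space is reserved immediately.
        if delta > datastore_free:
            return False, (f"needs {delta / GB:.0f} GB but only "
                           f"{datastore_free / GB:.0f} GB free")
        return True, "fits in current free space"
    # Thin: nothing is consumed yet, but the promise (overcommitment) grows.
    overcommit_pct = 100.0 * (total_provisioned + delta) / datastore_capacity
    return True, f"ok, but datastore will be {overcommit_pct:.0f}% provisioned"


if __name__ == "__main__":
    ok, reason = can_grow_disk(new_size=300 * GB, current_size=200 * GB, thin=True,
                               datastore_free=280 * GB, datastore_capacity=2048 * GB,
                               total_provisioned=3584 * GB)
    print(ok, "-", reason)
```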
In today’s ever-fluid IT infrastructure, it’s more critical than ever to manage our resources. The hypervisor is the gateway into the cloud, and with it comes the full span of our storage and data. Allocating storage to your critical workloads is an important task that helps support your business. At the heart of the cloud sit various storage repositories, all working hard to manage your virtual machines and data pools. It’ll be up to the talented storage architects, and the tools they use, to ensure these storage ecosystems continue to run optimally.