
Virtual Visibility: Why You Need X-Ray Vision to Back Up Virtual Machines

All of that new-found agility that makes the virtual machine teams ninja-like in their ability to deliver IT as a service comes with a back-end challenge: backup.

Industry Perspectives

November 30, 2011


Sean Regan, senior director of product marketing, Symantec Information Management Group.

This is part one in a two-part series on virtualization.

Virtualization makes almost everything easier for IT, except backup. Through virtualization, IT organizations can provision new machines almost instantly and perform disaster recovery even if they don’t have matching hardware. Furthermore, server cost reduction as a result of virtualization has proven to be one of the fastest and strongest ROI arguments around. If there is one key challenge for the virtualization team, it is backup.

All of that new-found agility that makes the virtual machine (VM) teams ninja-like in their ability to deliver IT as a service comes with a back-end challenge. As more and more mission-critical applications and systems go virtual, how can these teams make sure they deliver the same or better SLAs for backup? Virtualized systems and data are not second-class workloads anymore; they are prime time.

Get on the Fast Track

What if you could accelerate your virtualization efforts with backup? Some backup vendors have improved their existing products to support VMware and Hyper-V backups brilliantly. Others have attempted to apply traditional backup models to virtual machines, with less success.

Through these innovations, many organizations have been able to successfully extend their existing backup infrastructure to VMware and Hyper-V estates. This helps mitigate the operational expense of setting up a new backup infrastructure, such as learning new systems and establishing new procurement, maintenance and support processes. And because it is an integrated platform that connects the existing physical infrastructure, the underlying host servers and the VM estate, backup becomes a platform instead of a conglomeration of point solutions. The result is lower CapEx, since storage can be managed more efficiently via global deduplication, and lower OpEx, since existing processes for recovery, backup policy management, support, maintenance and training can be leveraged.

When a company evaluates backup, deduplication and storage management solutions that can protect only a small part of the backup environment, it isn't seeing the whole picture. It is a bit like a doctor who can only see part of an X-ray.

[Figure: two X-ray images, a blocked view and a full view. Images courtesy of Symantec.]

Start Treating Your Virtual Data like Real Data

In 2011, Symantec saw customers taking shortcuts in protecting information in physical and virtual environments, as if virtual systems were less important and companies were willing to lose them. Because 2011 saw such great progress as companies moved past the test and development stages of virtualization and began virtualizing mission-critical apps, I expect that 2012 will see some major failures in the recovery of information if something doesn't change.

There are some simple steps organizations can take to centralize backup and recovery of their physical and virtual server environments.

  • Unify Physical and Virtual! A common software platform enables organizations to centrally schedule backup jobs, manage recoveries, and monitor the success and failure of backup jobs, and it gives backup administrators a single console from which to administer every backup job.

  • Leverage the Backup Team to Protect VMs: Help keep the virtualization project humming by engaging the existing storage and backup teams to support your goals. Their involvement helps the organization reach the goal of 50 percent or more virtualization faster.

  • Implement Automated and Centralized Monitoring and Reporting: An individual can now create a virtual machine in just a few minutes, and VM creation can be automated through scripting. The backup or VM admin often won't even know a new VM exists, so organizations should use centralized backup software that can monitor and report on the creation of these VMs (see the first sketch after this list).

  • Choose a Platform with Deep API Integration: Integration with leading virtual server platform features, such as the vStorage APIs in VMware vSphere, should be viewed as a prerequisite for successfully protecting and recovering these platforms. For example, the vStorage APIs introduced Changed Block Tracking (CBT), which can identify the used blocks in a VMDK file so that only those blocks are backed up, rather than the entire file. The same feature supports incremental backups: only used blocks that have changed since the previous backup need to be copied (see the CBT sketch after this list).

  • Dedupe Everywhere: Organizations should implement deduplication on all backup data, at all levels, across the enterprise's physical and virtual environments. Deduplicating just VM backup data helps shrink data stores, but deduplicating all backup data, across all jobs from both physical and virtual machines, yields even higher storage savings (see the dedupe sketch after this list).

  • Get Granular on Granular Recovery: Granular recovery is where the rubber meets the road when it comes to integration with VMware and Hyper-V. Those that have done it well can recover the single file they want, when they want, without the VM or backup administrator being forced to traverse directories and files. They can instantly recover an email message without having to recreate and trawl through the entire email repository (see the granular-restore sketch after this list).
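To make the monitoring point concrete, here is a minimal sketch, assuming the pyVmomi library and vCenter credentials, that compares the live VM inventory against a list of names exported from the backup catalog and flags VMs that nobody is backing up. The host name, credentials, file name "protected_vms.txt" and the helper function are all hypothetical stand-ins, not any particular product's interface.

```python
# Sketch: flag VMs that exist in vCenter but are missing from the
# backup catalog. Assumes pyVmomi is installed; the catalog file
# (one protected VM name per line) is an illustrative assumption.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_unprotected_vms(host, user, pwd, catalog_path):
    # Lab-only TLS shortcut; use a verified context in production.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        inventory = {vm.name for vm in view.view
                     if vm.config and not vm.config.template}
        view.DestroyView()
    finally:
        Disconnect(si)
    # Names already covered by a backup job, exported from the
    # backup software (file name and format are hypothetical).
    with open(catalog_path) as f:
        protected = {line.strip() for line in f if line.strip()}
    return sorted(inventory - protected)

if __name__ == "__main__":
    for name in find_unprotected_vms("vcenter.example.com",
                                     "administrator@vsphere.local",
                                     "secret",
                                     "protected_vms.txt"):
        print("UNPROTECTED:", name)
```

Run on a schedule, a report like this closes the gap between how fast VMs appear and how fast the backup team learns about them.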
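For the API-integration item, this is a hedged pyVmomi sketch of QueryChangedDiskAreas, the vSphere API call behind Changed Block Tracking. It assumes CBT is already enabled on the VM and that a snapshot has been taken for the backup; the default device key 2000 (typically the first virtual disk) and the helper name are assumptions for illustration.

```python
# Sketch: ask the vSphere API which disk extents actually need to be
# read, instead of streaming the whole VMDK.
from pyVmomi import vim

def changed_extents(vm, snapshot, device_key=2000, change_id="*"):
    """change_id="*" -> all allocated blocks (a full backup).
    A changeId saved from the prior run -> incremental changes only."""
    # Read the disk's capacity from the snapshot's config so we know
    # when the whole disk has been walked.
    capacity = next(
        dev.capacityInKB * 1024
        for dev in snapshot.config.hardware.device
        if isinstance(dev, vim.vm.device.VirtualDisk)
        and dev.key == device_key)
    extents, offset = [], 0
    while offset < capacity:
        info = vm.QueryChangedDiskAreas(
            snapshot=snapshot, deviceKey=device_key,
            startOffset=offset, changeId=change_id)
        extents.extend((a.start, a.length) for a in info.changedArea)
        # The server answers for the region starting at info.startOffset
        # and spanning info.length bytes; continue from its end.
        offset = info.startOffset + info.length
    return extents
```

A backup built on this reads only the returned (start, length) ranges from the snapshot's VMDK and saves the disk's changeId so the next run can request just the blocks changed since then.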
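The dedupe item can be shown with a toy example. This is an illustrative sketch, not any vendor's engine: fixed-size blocks are fingerprinted with SHA-256 and kept in one global pool, so blocks shared between a physical server backup and a VM backup (a common OS image, say) are stored exactly once.

```python
# Sketch: why one global dedupe pool beats per-silo pools. A block
# already in the pool is stored once, no matter which backup job
# (physical or virtual) sent it.
import hashlib

BLOCK = 4096  # fixed-size chunking for simplicity

class DedupePool:
    def __init__(self):
        self.blocks = {}  # fingerprint -> block bytes

    def ingest(self, data):
        """Store one backup stream; return its manifest of fingerprints."""
        manifest = []
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            fp = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(fp, chunk)  # unique blocks stored once
            manifest.append(fp)
        return manifest

    def restore(self, manifest):
        return b"".join(self.blocks[fp] for fp in manifest)

if __name__ == "__main__":
    base_os  = b"\x90" * 40960                 # blocks shared by many machines
    physical = base_os + b"payroll-db" * 400   # physical server backup
    virtual  = base_os + b"web-logs--" * 400   # VM backup
    pool = DedupePool()
    m1, m2 = pool.ingest(physical), pool.ingest(virtual)
    print("logical blocks:", len(m1) + len(m2))
    print("stored blocks :", len(pool.blocks))  # far fewer: the shared
                                                # OS image is kept once
```

Split that pool into separate physical and virtual silos and the shared blocks get stored twice, which is exactly the savings the bullet above is pointing at.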
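Finally, to illustrate granular recovery, this sketch builds on the DedupePool above: the backup keeps a per-image file index mapping each path to its block manifest, so restoring a single file is a direct lookup rather than a traversal of the whole VM image. The index format and function names are hypothetical.

```python
# Sketch: granular recovery from the dedupe pool above. The per-image
# file index (path -> manifest of fingerprints) lets one file come
# back without mounting or walking the whole VM.

def backup_files(pool, files):
    """files: {path: bytes}. Returns the image's file index."""
    return {path: pool.ingest(data) for path, data in files.items()}

def restore_file(pool, index, path):
    return pool.restore(index[path])

if __name__ == "__main__":
    pool = DedupePool()  # from the previous sketch
    index = backup_files(pool, {
        "/var/mail/inbox.mbox": b"important message " * 300,
        "/etc/hosts": b"127.0.0.1 localhost\n",
    })
    # Pull back a single file without touching anything else:
    print(restore_file(pool, index, "/etc/hosts").decode())
```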

By implementing a data protection strategy that provides improved virtualization visibility, and by setting up the right processes, organizations can not only realize the initial benefits that server virtualization delivers but also avoid some of the challenges that can accompany it.

X-ray vision isn’t just for doctors. IT managers need to see what’s going on inside virtual and physical machines, because they can’t recover what they can’t see.

In part two of this two-part series, we’ll cover how improving virtualization visibility and managing physical and virtual environments together can improve storage utilization rates. Given that most companies have historically had storage utilization rates as low as 40 percent to 50 percent, the value of addressing this problem is even more evident. Gartner recently predicted that the popularity of server virtualization technology would increase storage capacity needs by 600 percent! We’ll talk about how to improve storage utilization by improving visibility.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
