Five Best Practices to Optimize Virtualization and Cloud

As you move your organization closer to the cloud, you'll experience even greater levels of virtualization and data abstraction

Bill Kleyman, CEO and Co-Founder

December 16, 2015


At this point, almost every modern data center has worked with some type of virtualization technology. A recent Cisco report noted that cloud workloads are expected to more than triple (grow 3.3-fold) from 2014 to 2019, while traditional data center workloads are expected to see a global decline for the first time, at a negative 1 percent CAGR over the same period.

Traditionally, one server carried one workload. However, with increasing server computing capacity and virtualization, multiple workloads per physical server are common in cloud architectures. Cloud economics, including server cost, resiliency, scalability, and product lifespan, along with enhancements in cloud security, are promoting migration of workloads across servers, both inside the data center and across data centers (even data centers in different geographic areas).

With this in mind, it’s important to note that the modern hypervisor and cloud ecosystem have come a long way. VMware, Microsoft, Citrix, and others are paving the way with enterprise-ready technologies capable of consolidating an infrastructure and helping it grow harmoniously with other tools. Today, many systems are designed for virtualization and cloud readiness. In fact, best practices have been written around virtualizing heavy workloads such as SQL, Oracle, Exchange, and so on. Taking advantage of these cloud-ready platforms will make your data center more agile and more capable of meeting market demands.

As cloud and virtualization continue to grow and impact more organizations, let’s pause and examine some key considerations and best practices around these technologies.

  1. Use virtualization and cloud for business resiliency. Remember, from a disaster recovery (DR) and efficiency perspective, it’s always easier to provision a new VM than it is to rebuild a physical piece of hardware. You can create snapshots and backups, and even replicate entire virtual workloads between data center and cloud ecosystems (a minimal snapshot-and-replication sketch follows this list). When configured properly, virtualization and cloud computing can form the backbone of a strong disaster recovery and business continuity (DRBC) strategy.

  2. Virtualization and cloud help you shift data center economics. Many environments are running hardware that is now approaching its end of life (EOL). In these situations it’s very important to take a good look at how virtualization technologies, in conjunction with cloud and unified architectures, can help an environment consolidate and expand. Remember, better hardware with more intelligence built in means more VMs per host and greater density, so more users can be handled with less hardware and fewer physical resources (the consolidation sketch after this list shows the basic arithmetic). New server and data center ecosystems allow you to dynamically provision resources and allocate users, and a good hardware platform can create great cloud economics. All of this translates into cost savings in the form of power, HVAC usage, space requirements, and hardware utilization within the data center.

  3. Cloud and virtualization give you powerful controls around resources, VMs, and users. Automation, workflow creation, and control over entire cloud instances are now part of the management toolset within cloud and virtualization environments. For example, by using virtual images, administrators can move their workloads between distributed sites, ensuring the resiliency of their data. Creating highly replicated hot sites becomes easier with mature technologies built directly into the hypervisor. Integration with storage systems, both onsite and remote, is now normal practice, with data deduplication and backup coming standard in a given feature set. Furthermore, integrating with intelligent virtualization and cloud management controls makes working with a virtualized data center much easier and more efficient.

  4. Always plan around capacity, growth, and business alignment. Although virtualization is widely adopted, some areas still need careful attention. First of all, sizing and scaling an environment will always be important, and the initial planning stages are crucial to making the right hardware and resource decisions. Not having enough resources to support your user count is much more costly to resolve after a system has gone live. Remember, as with any physical resource, the capabilities of your data center are finite, so administrators must carefully watch how their virtual workloads are operating and where their resources are going. Too often, administrators over-provision a VM only to see most of its resources go unused (the right-sizing sketch after this list shows one way to flag such VMs).

  5. With such a fluid architecture, cloud and virtualization require regular testing. For any cloud or virtualization ecosystem supporting critical applications, testing and maintenance are essential. Regularly review logs, VM health, and accessibility, and perform off-hours DR testing to ensure production systems stay live (a simple off-hours availability check appears after this list). Creating runbooks and documenting changes helps resolve issues quickly and helps administrators understand their environments. The density and segmentation capabilities of cloud and virtualization also let administrators carve out portions of their environments for testing and development, so you can test “production” workloads in a safe ecosystem. Take advantage of this, understand your workloads, and continuously optimize how you deploy your content. Testing applications, virtualization, and your entire cloud environment will create a much more proactive data center and cloud platform.
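
To make the first point concrete, here is a minimal sketch of the kind of nightly snapshot-and-replicate job described above. The management URL, the /vms routes, and the "dr-protected" tag are hypothetical placeholders rather than the API of any particular hypervisor; a real deployment would use your platform's own SDK, CLI, or replication feature.

```python
"""Minimal sketch of a nightly snapshot-and-replicate job (best practice #1).

The management endpoint, /vms routes, and 'dr-protected' tag are hypothetical
placeholders, not the API of any specific hypervisor or cloud platform.
"""
import datetime

import requests

MGMT_URL = "https://virt-manager.example.com/api"  # hypothetical management endpoint
DR_SITE = "dr-datacenter-east"                     # hypothetical replication target


def snapshot_and_replicate(vm_id: str) -> None:
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d-%H%M")
    # 1. Take a point-in-time snapshot of the running VM.
    requests.post(f"{MGMT_URL}/vms/{vm_id}/snapshots",
                  json={"name": f"nightly-{stamp}"}, timeout=30).raise_for_status()
    # 2. Replicate the workload to the DR site so it can be powered on there.
    requests.post(f"{MGMT_URL}/vms/{vm_id}/replicate",
                  json={"target": DR_SITE}, timeout=30).raise_for_status()


if __name__ == "__main__":
    # Protect every VM tagged for disaster recovery.
    vms = requests.get(f"{MGMT_URL}/vms", params={"tag": "dr-protected"},
                       timeout=30).json()
    for vm in vms:
        snapshot_and_replicate(vm["id"])
```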
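
The consolidation economics in the second point come down to simple arithmetic. The sketch below runs back-of-the-envelope numbers for an illustrative environment; the server count, VM density, and power figures are assumptions to be replaced with your own inventory and facility data.

```python
# Back-of-the-envelope consolidation math (best practice #2).
# Every input below is an illustrative assumption; substitute your own
# inventory, density, and power figures.

legacy_servers = 120        # aging one-workload-per-box servers nearing EOL
vms_per_new_host = 25       # density achievable on a modern virtualization host
watts_per_server = 450      # average draw per physical server
power_cost_per_kwh = 0.10   # electricity cost in dollars
hours_per_year = 24 * 365

# Ceiling division: how many new hosts are needed to absorb the old workloads.
new_hosts = -(-legacy_servers // vms_per_new_host)
servers_removed = legacy_servers - new_hosts

annual_power_savings = (servers_removed * watts_per_server / 1000
                        * hours_per_year * power_cost_per_kwh)

print(f"{legacy_servers} legacy servers consolidate onto {new_hosts} hosts")
print(f"Estimated power savings: ${annual_power_savings:,.0f} per year "
      "(before cooling, space, and licensing effects)")
```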
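
For the fourth point, spotting over-provisioned VMs is largely a matter of comparing allocated resources against observed peaks. The sketch below shows one way to flag right-sizing candidates; the VmStats records and the CPU and RAM thresholds are illustrative stand-ins for data you would pull from your monitoring or capacity-management tool.

```python
# Minimal sketch of an over-provisioning check (best practice #4).
# The VmStats records and thresholds are illustrative stand-ins for data
# pulled from a monitoring or capacity-management tool.

from dataclasses import dataclass


@dataclass
class VmStats:
    name: str
    vcpus_allocated: int
    peak_cpu_pct: float       # peak CPU utilization over the sample window
    ram_gb_allocated: int
    peak_ram_gb: float        # peak memory use over the sample window


CPU_THRESHOLD_PCT = 25.0      # flag if peak CPU never exceeds this
RAM_HEADROOM_RATIO = 0.5      # flag if peak RAM stays under half of allocation


def flag_overprovisioned(vms: list[VmStats]) -> list[str]:
    """Return a report line for each VM that looks like a right-sizing candidate."""
    flagged = []
    for vm in vms:
        if (vm.peak_cpu_pct < CPU_THRESHOLD_PCT
                and vm.peak_ram_gb < vm.ram_gb_allocated * RAM_HEADROOM_RATIO):
            flagged.append(f"{vm.name}: peaks at {vm.peak_cpu_pct:.0f}% CPU and "
                           f"{vm.peak_ram_gb:.0f} of {vm.ram_gb_allocated} GB RAM, "
                           "candidate for right-sizing")
    return flagged


if __name__ == "__main__":
    sample = [
        VmStats("sql-prod-01", 16, 72.0, 64, 58.0),   # healthy utilization
        VmStats("file-srv-02", 8, 11.0, 32, 9.5),     # likely over-provisioned
    ]
    for line in flag_overprovisioned(sample):
        print(line)
```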
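
Finally, the fifth point's off-hours DR testing can start with something as simple as verifying that replicated services answer on their expected ports and recording the result for the runbook. The hostnames and ports below are placeholders for your own hot-site endpoints.

```python
# Minimal sketch of a scheduled off-hours DR availability check (best
# practice #5). Hostnames and ports are placeholders for the services
# your replicated hot site is supposed to keep reachable.

import datetime
import socket

DR_ENDPOINTS = {
    "web-frontend": ("dr-web01.example.com", 443),
    "database":     ("dr-db01.example.com", 1433),
}


def check_endpoint(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the replicated service succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat(timespec="seconds")
    for name, (host, port) in DR_ENDPOINTS.items():
        status = "UP" if check_endpoint(host, port) else "DOWN"
        # Record the result so the runbook documents every test run.
        print(f"{stamp}  {name:<14} {host}:{port}  {status}")
```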

Just like any tool, cloud and virtualization must be properly maintained and optimized. The fluid nature of modern users and the data they access requires administrators to know how their data center is performing. New tools let you see, at a granular level, how resources are allocated between the local data center and distributed cloud locations. With greater visibility come better support and management. The overall goal should be continuous optimization that aligns with your IT teams, your users, and the business.

About the Author

Bill Kleyman

CEO and Co-Founder, Apolo

Bill Kleyman has more than 15 years of experience in enterprise technology. He also enjoys writing, blogging, and educating colleagues about tech. His published and referenced work can be found on Data Center Knowledge, AFCOM, ITPro Today, InformationWeek, Network Computing, TechTarget, Dark Reading, Forbes, CBS Interactive, Slashdot, and more.
