HPC Virtualization Use Cases and Best Practices
The power of virtualization has now expanded into the HPC world. Find out how new virtual systems can impact your HPC environment and your business!
The use of high-performance computing continues to grow. The critical nature of information and increasingly complex workloads have created a growing need for HPC systems. Through it all, compute density plays a big role in how many parallel workloads we’re able to run. So how are HPC, virtualization, and cloud computing playing together? Let’s take a look.
One variant of HPC infrastructure is vHPC ("v" stands for "virtual"). A typical HPC cluster runs a single operating system and software stack across all nodes. That works well for scheduling jobs, but what happens when multiple people and groups are involved? What if a researcher needs their own piece of HPC space for testing, development, and data correlation? Virtualized HPC clusters enable sharing of compute resources while letting researchers “bring their own software.” You can archive team images, test against them, and preserve each team’s ability to fully customize its OS, research tools, and workload configurations.
Effectively, you are eliminating islands of compute and allowing the use of VMs in a shared environment, which removes another obstacle to centralization of HPC resources. These benefits can have an impact in fields like life sciences, finance, and education, to name just a few examples.
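As one illustration, archiving a team’s customized image can be a single snapshot call. Here is a minimal sketch using the open source libvirt Python bindings (vSphere exposes equivalent operations through its own SDK); the domain and snapshot names are hypothetical:

```python
# Minimal sketch: archive a research team's customized VM image so it can be
# restored or cloned later. Illustrated with the libvirt Python bindings;
# the VM name "genomics-node-01" and snapshot details are hypothetical.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>genomics-tools-v2</name>
  <description>Team image: OS plus validated research tool stack</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")          # connect to the local hypervisor
dom = conn.lookupByName("genomics-node-01")    # the team's customized VM
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)  # archive the current state
print("Archived snapshot:", snap.getName())
conn.close()
```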
Combining Cloud and HPC. When end users deploy a virtual HPC cluster, they do so from a pre-validated architecture that specifies the required machine attributes, the number of VMs, and the critical software to include in each VM. Basically, you allow full customization to their requirements. The same architecture also lets the central IT group enforce corporate IT policies: by centralizing data, virtual resources, and user workloads, security administrators can, for example, enforce security and data-protection policies.
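To make that concrete, here is a hedged sketch of what such a pre-validated cluster definition might look like, with a policy gate run before anything is provisioned. Every field name and the validate_policy helper are hypothetical stand-ins, not any vendor’s actual schema:

```python
# Hypothetical pre-validated vHPC cluster definition: machine attributes,
# VM count, and required software are declared up front, and central IT
# policy is checked before self-service deployment proceeds.

CLUSTER_SPEC = {
    "name": "bio-research-dev",
    "vm_count": 16,
    "machine": {"vcpus": 8, "memory_gb": 32, "network": "low-latency"},
    "software": ["team-os-image", "openmpi", "team-analysis-stack"],
    "policies": {"encryption_at_rest": True, "isolated_vlan": True},
}

def validate_policy(spec):
    """Reject specs that violate corporate IT policy (hypothetical rules)."""
    required = {"encryption_at_rest", "isolated_vlan"}
    missing = required - {k for k, v in spec["policies"].items() if v}
    if missing:
        raise ValueError("Policy violations: " + ", ".join(sorted(missing)))
    return spec

validate_policy(CLUSTER_SPEC)  # central IT gate before provisioning begins
```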
Virtualizing Hadoop (and Big Data Engines). Not only can you virtualize Big Data clusters, you can now let them scale into the cloud. Project Serengeti, for example, VMware’s open source project for virtualizing Hadoop on its hypervisor, allows the virtual system to be triggered from VMware vCloud Automation Center, making it easy for users to self-provision Hadoop clusters. So why introduce an extra level of indirection between the MapReduce tasks and the storage? Here are a few reasons:
Speed: the virtualization ecosystem can provision and tear down Hadoop clusters very quickly
Elasticity: the compute portion of the cluster can grow and shrink on demand because it is decoupled from storage
Multi-tenancy: multiple tenants can share access to the underlying HDFS file system, which is owned and managed by the DataNodes
Other key benefits to consider:
Simplified Hadoop cluster configuration and provisioning
Support for multi-tenant environments
Support for Hadoop in existing virtualized data centers
VMware packages these Serengeti capabilities commercially as vSphere Big Data Extensions.
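To see why decoupling compute from storage pays off, here is a sketch of a simple elasticity rule: compute-only nodes scale with queue depth while the DataNodes that own HDFS stay put. The scale_compute_nodes and get_pending_tasks helpers are hypothetical stand-ins for whatever provisioning API your stack exposes (Serengeti drives this through vCloud Automation Center in VMware’s case):

```python
# Sketch of compute/storage separation: compute-only Hadoop nodes are added
# or removed on demand while the HDFS DataNodes that own the data stay put.
# Both helper functions below are hypothetical, for illustration only.

def scale_compute_nodes(cluster, target):
    """Hypothetical: grow or shrink only the compute (task) tier."""
    print(f"Scaling {cluster} compute tier to {target} nodes "
          f"(DataNodes untouched)")

def get_pending_tasks(cluster):
    """Hypothetical: queue depth reported by the job scheduler."""
    return 1200  # placeholder value for illustration

cluster = "analytics-prod"
pending = get_pending_tasks(cluster)

# Simple elasticity rule: one compute node per 100 queued tasks,
# with a floor of 4 nodes and a cap of 32.
scale_compute_nodes(cluster, min(32, max(4, pending // 100)))
```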
There are two critical aspects to look out for when virtualizing HPC:
Low-latency apps. Kernel bypass is the standard way to achieve the highest bandwidth and lowest latency in bare-metal HPC environments, and it is critical for many MPI applications. The cool part here is that VMware can do the analog of this in a virtual environment: using VMDirectPath I/O, the hardware device (e.g., an InfiniBand adapter) is made directly visible to the guest, which gives the application direct access to the hardware just as in the bare-metal case. This capability is available now in ESXi.
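A quick way to validate that claim in your own environment is a standard MPI ping-pong microbenchmark run between two guests. The sketch below uses mpi4py and NumPy (both assumed installed); launch it across two ranks with mpirun -np 2:

```python
# Minimal MPI ping-pong latency test, e.g. to check that passthrough
# networking is delivering near bare-metal latency between two guests.
# Run with: mpirun -np 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
assert comm.Get_size() == 2, "run with exactly 2 ranks"

buf = np.zeros(1, dtype="b")   # 1-byte message to measure pure latency
iters = 10000

comm.Barrier()
start = MPI.Wtime()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    else:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - start

if rank == 0:
    # Each iteration is one round trip; half of that is one-way latency.
    print(f"One-way latency: {elapsed / iters / 2 * 1e6:.2f} us")
```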
Not all HPC workloads are made to be virtualized. We need to be realistic here. Before you virtualize an HPC cluster, make sure it’s the right thing to do: develop a use case for your application and confirm that the performance you require is achievable on a virtualized cluster. Use cases can revolve around:
Research requirements
Volume of data being processed
Sensitive nature of the information
Specific hardware requirements
Your user base
Location of the data
Here’s one more thing to think about: getting started with virtual HPC isn’t as hard or as risky as it may seem. Why? Because it’s virtual. You can build a small proof of concept, snapshot it, test against it, and tear it down without committing permanent hardware.