HPC Customers Get a Cloud Computing Option
February 5, 2010
Six months after launching its Penguin on Demand (POD) cloud computing service, high-performance Linux cluster provider Penguin Computing said as many as 200 of its roughly 2,000 customers are using the on-demand offering, including some new customer wins in the life-sciences sector.
POD is a complete high-performance computing (HPC) system in the cloud, but it uses no virtualization technology: Penguin wanted to give each of its HPC customers dedicated servers to ensure optimum performance.
HPC users, particularly academic researchers, are naturally drawn to the cost benefits of dynamic compute environments. That appetite is evident in the many shared networks run by academic and research organizations, such as ESnet, a high-speed network managed by Lawrence Berkeley National Laboratory and used by thousands of Department of Energy scientists and collaborators worldwide. Some researchers have also been drawn to public clouds, including Amazon Web Services, which offers free usage credits to educators, academic researchers and students.
HPC on Azure, Too
Microsoft reports that HPC customers are using traditional HPC systems in conjunction with its Azure hosted platform. RiskMetrics, a financial risk management firm, used Azure to help it measure and model complex financial instruments. According to Microsoft, RiskMetrics "anticipates developing increasingly seamless and scalable applications that span Windows Azure and Windows HPC Server 2008 ... to deliver both on-premises and cloud computing capacity as needed."
Charles Wuischpard, CEO of Penguin Computing, said some POD customers had tested Amazon's cloud services alongside their existing Penguin Linux clusters. The performance, however, wasn't sufficient for HPC users, Wuischpard said, and Amazon didn't provide the level of service required by the scientific community.
"The main problem with running HPC tasks on conventional clouds is that conventional clouds are geared toward supporting general-purpose applications and services – short transactional workloads such as Web applications and database tasks," writes William Fellows, principal analyst at The 451 Group in a report on POD. "These are heavily dependent on the need to be processed serially and within an infrastructure geared toward supporting inter-process communication.
"HPC tasks, by contrast, are mostly complex, long-running algorithms processed in parallel, with the result of one task not dependent on the outcome of another. Processing threads are brought together at the end of the activity."
Dedicated Servers vs. Virtualized Instances
Wuischpard said one customer's heart modeling simulation took 18 hours to run on EC2 but just 30 minutes on POD using the same 64 cores, roughly a 36-fold speedup. He attributes the gain to the use of dedicated servers rather than virtualized instances that may be distributed across geographies. POD jobs run on supercomputers connected by Gigabit Ethernet and InfiniBand, with servers housed in the same location to maximize performance, according to Penguin.
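Why colocation and a fast interconnect matter is clearest in tightly coupled jobs. The following sketch, using the mpi4py binding (an assumed example, not code from the article), shows a pattern where every iteration ends in a collective exchange, so wall-clock time is gated by the slowest network link between nodes; a round trip that takes microseconds on a local InfiniBand fabric can take milliseconds between geographically scattered virtual instances, and that penalty compounds over thousands of steps.

```python
# Hypothetical example: a tightly coupled MPI job whose per-iteration
# allreduce makes it sensitive to interconnect latency between nodes.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

total = 0.0
for step in range(10_000):
    # Each rank works on its own slice of the problem...
    partial = sum((rank + i) ** 0.5 for i in range(1_000))
    # ...then every rank must exchange results before the next step,
    # so each iteration pays the full cost of the network round trip.
    total = comm.allreduce(partial, op=MPI.SUM)

if rank == 0:
    print(f"final reduced value: {total:.2f}")
```

Run with, for example, `mpirun -n 64 python job.py` on a cluster with mpi4py installed.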
Despite the use of high-performance equipment, Wuischpard says POD's pricing is comparable to the high end of Amazon's pricing plans.
Wuischpard says HPC cloud offerings will coexist with the traditional supercomputing market and predicts that POD could represent 20% of Penguin's overall revenue.