LANL Selects DataDirect Networks to Support Petascale Collaboration
Los Alamos National Laboratory selected DataDirect Networks to deliver high-performance storage to support its institutional computing program.
November 14, 2013
DataDirect Networks (DDN) announced that it has been selected by Los Alamos National Laboratory (LANL) to deliver high-performance storage to support its institutional computing program, which encompasses a broad range of unclassified, collaborative scientific efforts, including the study of biology, earth science, physics, oceans and cosmology.
“In supporting LANL’s broad range of scientific computing projects, it’s imperative that our scientists, researchers and colleagues have instant access to the data they need to analyze results and improve scientific outcomes,” said Bob Tomlinson, institutional computing program manager at Los Alamos National Laboratory. “With DDN’s high-performance storage and site-wide file system approach, LANL will be equipped to support our compute-intensive demands both now and in the future.”
The collaborative effort brings the power of nearly 70,000 computing cores and more than a petaflop of processing power to every LANL scientist and engineer as well as research colleagues around the world.
LANL selected DDN Storage Fusion Architecture (SFA) high-performance storage and DDN’s EXAScaler Lustre file system appliance to meet the compute-intensive demands of 11 separate computing clusters, delivering 4.3 PB of storage capacity and up to 40 GB/s of I/O performance via the Lustre file system.
The storage will allow users to store, access and share massive amounts of data across LANL’s diverse and distributed community of scientists and researchers, while providing the flexibility to connect to different HPC platforms across the laboratory’s common computing environment.
SC13 Sneak Peek
DDN also gave a sneak peek at new technology that it will announce next week at Supercomputing 2013 (SC13) in Denver, Colorado. At the event, DDN will introduce a proprietary new technology designed to significantly accelerate file systems, extract the best performance efficiency across the I/O hierarchy and drive down storage costs. The "burst forward" approach of this new technology aims to bring HPC customers within reach of the exascale "Holy Grail."