SuperNAP: Wiring the Las Vegas Economy

When casino services firm Global Cash Access sought a data center with industrial-strength security and connectivity, it didn't have far to look. GCA chose the SuperNAP in Las Vegas to host infrastructure powered by Cisco's Unified Computing System.

John Rath

July 19, 2011

4 Min Read


A high-density T-SCIF (Thermal Separate Compartment in Facility) enclosure inside the SuperNAP in Las Vegas.

When you process $19 billion in transactions a year, security and speed matter. Global Cash Access processes ATM transactions for many leading casinos, providing the back-end infrastructure that supports much of the gaming activity in Las Vegas. When the company began seeking a data center facility to provide industrial-strength security, reliability and connectivity, it didn't have far to look.

The SuperNAP is a huge data center built by Switch Communications with exactly these kinds of requirements in mind. The 407,000 square foot facility provides high-density colocation services for government and enterprise customers, and is emerging as a key hub for cloud computing services. For Global Cash Access (GCA), it was the right environment in which to house an upgraded IT infrastructure running atop Cisco's Unified Computing System.

Connectivity, Cooling Build Customer Base

Switch has been building a series of colocation centers in Las Vegas since 2000. Its business got a boost in December 2002, when it acquired a former Enron broadband services facility initially designed to host a bandwidth exchange. Switch leveraged that connectivity to build and fill six data centers in the Las Vegas area.

In 2008 Switch came out of stealth mode as it unveiled the SuperNAP and cooling systems that could support up to 20 kilowatts per cabinet. Earlier this year it announced plans to build another 1.6 million square feet of space on adjacent land to complement the existing SuperNAP facility.

The combination of location, massive power, and big pipes has helped the SuperNAP address the scalability and density challenges of customers like Global Cash Access. During the 2011 Cisco Live! event, Cisco sponsored a tour of the SuperNAP and showcased GCA's transition to its new Cisco-based infrastructure. Less than a year ago, GCA and its technology partner Nexus IS set out to create a hot/hot data center configuration in which the company's existing data center at its headquarters would be supplemented by an installation at the SuperNAP.

GCA counts a who's who of casino businesses around the world as clients. In the time it took GCA, Nexus and Switch to give their presentations, GCA delivered $525,000 to casino floors worldwide. Nexus helped deliver a network design that met GCA's requirement for a single unified infrastructure. This was accomplished with Cisco UCS and VCE Vblock, supported by a Cisco Nexus network fabric, Cisco ASA firewalls, EMC storage and Cisco MDS storage switches. All of it was built to meet GCA's requirement of 100 percent uptime.

Gaming Ecosystem at the SuperNAP

Similar to the financial trading communities that have emerged in peering data centers in New York and Chicago, GCA saw that many of its casino clients were also in the SuperNAP, making it easy to set up a cross-connect within the facility to conduct business. Additionally, GCA takes advantage of the SuperNAP's blended bandwidth offering, Switch CORE (Combined Ordering Retail Ecosystem). The core network at Switch is powered by Cisco equipment.

Today Switch has 100 megawatts of power available at the SuperNAP, with 150 megawatts on tap for expansion and hopes of reaching 500 megawatts when the campus is completed. A total of 50 generators rated at 2.8 megawatts each can supply 140 megawatts of backup power in the event of a utility outage. The SuperNAP is second only to the Hoover Dam on the priority list for emergency diesel fuel deliveries, with contracts in place with multiple suppliers.

Power is delivered to the cabinets from a central corridor using a system-plus-system (2N) design. A color-coding scheme for the power equipment, borrowed from the Navy, helps reduce human error during change windows. The more than 31,000 cabinets inside the SuperNAP range from a few kilowatts up to as high as 30 kW.

SuperNAP High-Density Cooling

Supporting a 68 to 72 degree temperature range at the cabinet is an efficient cooling system developed by Switch founder and CEO Rob Roy. The WDMD (Wattage Density Modular Design) units outside the facility can switch between four different cooling methods depending on current weather conditions. A total of 202,000 tons of cooling is delivered at 22,000,000 cubic feet per minute to the T-SCIF containment systems, where cold air is supplied to the cabinets and the contained hot air is returned through the ceiling plenum to the cooling units.

With government and multinational enterprise clients, the level of security employed at the facility cannot be overstated. The Cisco tour buses were greeted by two Switch security vehicles (Hummers, of course) and led into a secure, gated area. Armed guards patrolled the perimeter, and more were stationed every few feet inside. The guards are all Switch employees, primarily military veterans. The usual man-trap and multi-factor authentication systems were in place as well, with the guards coordinating entry for tour members.
