An Epic Super-Sizing for the SuperNAP
Switch plans to build an additional 1.6 million square feet of high-density mission-critical space on land adjacent to its SuperNAP, creating an immense data center hub in Las Vegas spanning more than 2 million square feet.
January 10, 2011
One of the T-SCIF high-density cooling enclosures at the SuperNAP in Las Vegas. Switch is expanding its Vegas campus to 2 million square feet of data center space.
The SuperNAP is about to become the Super-DuperNAP. The 407,000 square foot Switch SuperNAP in Las Vegas is already one of the world’s largest data centers. But it turns out that the mammoth facility was just a first glimpse of a much larger vision for the future.
Switch now plans to build another 1.6 million square feet of mission-critical space on land adjacent to the SuperNAP, creating an immense data center hub spanning more than 2 million square feet. In March the company plans to break ground on the first of a series of additional facilities, each between 200,000 and 500,000 square feet. The expanded campus will be known as SuperNAP-West.
Largest Data Center Project Yet
The company envisions a 500-megawatt Las Vegas campus that will house 31,000 cabinets of servers upon its completion, supported by 200,000 tons of cooling. In an era when data center projects are getting bigger and bigger, the Switch expansion would be the largest single-company campus yet.
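Those headline numbers can be put in perspective with a quick back-of-the-envelope calculation. The figures come from the article itself; the per-cabinet average and the ton-to-kilowatt conversion are my own rough arithmetic, not design figures from Switch.

```python
# Rough scale check using the campus figures above.
# KW_PER_TON is the standard refrigeration-ton conversion;
# the per-cabinet average is illustrative only.

TOTAL_POWER_MW = 500    # planned campus power capacity
CABINETS = 31_000       # planned server cabinets
COOLING_TONS = 200_000  # planned cooling capacity
KW_PER_TON = 3.517      # 1 refrigeration ton ≈ 3.517 kW

kw_per_cabinet = TOTAL_POWER_MW * 1000 / CABINETS
cooling_mw = COOLING_TONS * KW_PER_TON / 1000

print(f"Average power per cabinet: {kw_per_cabinet:.1f} kW")
print(f"Cooling capacity: {cooling_mw:.0f} MW of heat rejection")
```

At roughly 16 kW per cabinet on average, the plan assumes densities well above the few kilowatts per rack typical of conventional colocation at the time.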
Switch CEO Rob Roy said the scope of the project reflects the emergence of cloud computing, which is driving “unprecedented demand” from major technology companies seeking large amounts of data center space. “The client needs that are coming to us are moving to this huge scale,” said Roy. “Cloud changes the entire business model.”
Cloud computing is driving more efficient use of server capacity, allowing companies to fill server racks to capacity. In the data center, full racks translate into higher power densities. Roy says this shift toward high-density deployments has boosted business for Switch, which has been a leader in high-density colocation.
Roy said customers planning for long-term growth want to scale their operations without constantly running out of power and space, and scouting new data center locations. "The expansion of SuperNAP-West demonstrates our unique ability to facilitate client growth without current or future concerns around power, cooling, connectivity or space," said Roy.
Bucking Conventional Wisdom
The scale of the expansion is ambitious, even in an environment where the supply of quality data center space remains tight. But Roy and his Switch team haven’t been afraid to buck conventional wisdom in their approach to data center design and scale.
The company was little known outside its Las Vegas home base prior to 2008, when Switch unveiled its plans for the SuperNAP and its innovative approach to high-density cooling.
Switch has been building a series of colocation centers in Las Vegas since 2000. Its business got a boost in December 2002, when it acquired a former Enron broadband services facility out of bankruptcy. Enron had been seeking to build a commodity bandwidth exchange, and had arranged excellent connectivity for its Las Vegas center.
Connecting Carriers and Customers
That connectivity plays a central role in the network effect driving Switch's approach to a campus ecosystem. Switch aggregates the telecom buying power of its customers through SwitchCORE (Combined Order Retail Ecosystem), a carrier-neutral purchasing consortium. Bandwidth providers then compete for business requirements posted by SwitchCORE. Switch says this approach provides volume deals for its carriers, and favorable rates for its colocation customers.
Switch has also developed two custom-built cooling technologies to support power loads of 1,500 watts per square foot:
A heat containment system known as T-SCIF (Thermal Separate Compartment in Facility). Overhead cooling ducts drop chilled air into the cold aisle, which sits on a slab rather than a raised floor. T-SCIF systems encapsulate each rack, leaving the front open to the cold aisle. The enclosure uses a chimney system to deliver waste heat back into the ceiling plenum, where it can be returned to the cooling units. (See our video tour of a T-SCIF unit.)
Cooling is powered by custom units known as WDMD (Wattage Density Modular Design) that sit outside the building and can switch between four different cooling methods to provide the most efficient mode for changing weather conditions. (See our video overview of the WDMD).
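The WDMD's weather-driven mode switching can be sketched in pseudocode-style Python. The article says only that the units switch among four cooling methods for efficiency; the mode names and temperature/humidity thresholds below are illustrative assumptions, not Switch's actual control logic.

```python
# Hypothetical sketch of weather-based cooling-mode selection,
# in the spirit of the WDMD's four switchable methods.
# Mode names and thresholds are assumptions for illustration.

def select_mode(outdoor_temp_f: float, humidity_pct: float) -> str:
    """Pick the most efficient cooling mode for current weather."""
    if outdoor_temp_f < 60:
        return "outside-air economizer"   # free cooling in cool weather
    if outdoor_temp_f < 85 and humidity_pct < 40:
        return "direct evaporative"       # dry desert air favors evap
    if outdoor_temp_f < 95:
        return "indirect evaporative"     # cool without adding humidity
    return "mechanical refrigeration"     # hottest conditions
```

In a desert climate like Las Vegas, low humidity makes the evaporative modes attractive for much of the year, which is the efficiency argument behind multi-mode units.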
Switch doesn't normally disclose the identity of individual customers, but has a client base of military and government customers and large Internet companies. Many customers have chosen Switch because of its focus on cooling high-density racks.
To support those demands, the SuperNAP has a power capacity of 100 megawatts. The site has an additional 150 megawatts available for a current capacity of 250 megawatts, which will be expanded to 500 megawatts as Switch builds additional facilities.
Switch also plans to build up to 500,000 square feet of additional office space at its campus. Here's a look at a Switch illustration of what the final result will look like.
An illustration of the plans for the expanded Switch campus, with multiple data centers and office space surrounding the current site of the SuperNAP.
For additional coverage of Switch, see our Switch SuperNAPs channel.