The Facebook Data Center FAQ
Find everything you ever wanted to know about Facebook’s data centers.
September 27, 2010
With 2.98 billion monthly active users, Facebook is the third-busiest site on the internet, and has built an extensive infrastructure to support this already massive and still growing user base. The social network was launched in February 2004, initially out of founder Mark Zuckerberg’s dorm room at Harvard University, running on a single server. The company’s servers are now housed in numerous gigantic data centers around the world. Facebook has not stopped building new data centers and seeking new data center sites since it launched its first company-built and operated server farm in Prineville, Oregon, in April 2011.
Each data center houses tens of thousands of computer servers, which are networked together and linked to the outside world through fiber optic cables. Every time you share information on Facebook, the servers in these data centers receive the information and distribute it to your network of friends.
We’ve written a lot about Facebook’s infrastructure and have compiled this information into a series of Frequently Asked Questions. Here’s the Facebook Data Center FAQ (or “Everything You Ever Wanted to Know About Facebook’s Data Centers”).
How big is Facebook’s internet infrastructure?
Facebook requires massive storage infrastructure to house its enormous stockpile of photos, which grows steadily as users add hundreds of millions of new photos every day, and as it expands its platform capabilities to support video and, more recently, 360-degree video. In addition, the company’s infrastructure must support platform services for more than 1 million web sites and hundreds of thousands of applications using the Facebook Connect platform.
To support all of that activity, Facebook has built four huge data centers, with two more sites under construction as of September 2016, and leases additional server space from data center providers in several locations inside and outside the US.
The company’s massive armada of servers and storage must work together seamlessly to deliver each Facebook page. “Loading a user’s home page typically requires accessing hundreds of servers, processing tens of thousands of individual pieces of data, and delivering the information selected in less than one second,” the company said.
Before it started building its own server farms, Facebook managed its infrastructure by leasing “wholesale” data center space from third-party landlords. Wholesale providers build the data center, including the raised-floor technical space and the power and cooling infrastructure, and then lease the completed facility. In the wholesale model, tenants can occupy their data center space in about five months, rather than the 12 months needed to build a major data center from scratch. This allowed Facebook to scale rapidly to keep pace with the growth of its audience. But the company has since developed powerful in-house data center engineering capabilities, which it has used to build one of the largest data center portfolios in the world.
Where are Facebook’s data centers located?
In January 2010 Facebook announced plans to build its own data centers, beginning with a facility in Prineville, Oregon. This typically requires a larger up-front investment in construction and equipment, but allows greater customization of power and cooling infrastructure. The social network has since expanded capacity in Prineville and built data centers in Forest City, North Carolina, Lulea, Sweden, and Altoona, Iowa. It has been expanding capacity in each of those locations continuously by building additional data center facilities. The company is also building data centers in Fort Worth, Texas, Clonee, Ireland, and Los Lunas, New Mexico.
Facebook has publicly acknowledged leasing data center space only in Ashburn, Virginia, but sources have told Data Center Knowledge that it also leases capacity in Singapore.
How big are Facebook’s server farms?
As Facebook grows, its data center requirements grow along with it. The data center in Oregon was announced at 147,000 square feet. But as construction got rolling, the company announced plans to add a second phase to the project, which added another 160,000 square feet, bringing the total size of the campus to 307,000 square feet, larger than two Wal-Mart stores. Last year, Facebook secured permits to build another 487,000-square-foot data center in Prineville.
How many servers does Facebook have?
“When Facebook first began with a small group of people using it and no photos or videos to display, the entire service could run on a single server,” said Jonathan Heiliger, Facebook’s former vice president of technical operations.
Not so anymore. Facebook doesn’t say how many web servers it uses to power its infrastructure. Technical presentations by Facebook staff suggested that as of June 2010 the company was running at least 60,000 servers in its data centers, up from 30,000 in 2009 and 10,000 back in April 2008.
It’s unclear what that number is today, but it’s bound to be in the hundreds of thousands. In its most recent annual SEC filing, Facebook reported that it owned about $3.63 billion in “network equipment” as of the end of 2015 — up from $3.02 billion in 2014.
What kind of servers does Facebook use?
In a marked departure from industry practice, Facebook has disclosed the designs and specs for its data centers and nearly all of the hardware they house. In April 2011 the social network launched the Open Compute Project, through which it is releasing the details of its energy efficient data center design, as well as its custom designs for servers, network switches, power supplies and UPS units.
Facebook’s servers are powered by chips from both Intel and AMD, with custom-designed motherboards and chassis built by Quanta Computer of Taiwan and other original design manufacturers. It has also experimented with ARM-powered servers.
“We removed anything that didn’t have a function,” Amir Michael, former hardware engineer at Facebook, said in a past interview. “No bezels or paints.”
The cabling and power supplies are located on the front of the servers, so Facebook staff can work on the equipment from the cold aisle, rather than the enclosed, 100-degree-plus hot aisle.
Facebook’s servers include custom power supplies that allow servers to use 277-volt AC power instead of the traditional 208 volts. This allows power to enter the building at 480/277 volts and come directly to the server, bypassing the step-downs seen in most data centers as the power passes through UPS systems and power distribution units (PDUs). The custom power supplies were designed by Facebook and built by Delta Electronics of Taiwan and California-based Power One.
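The 277-volt figure follows directly from the 480-volt three-phase distribution: line-to-neutral voltage is line-to-line voltage divided by the square root of three. The sketch below works through that arithmetic and uses assumed, illustrative conversion efficiencies (not Facebook’s published figures) to show why skipping the UPS and PDU step-downs saves power.

```python
import math

# Line-to-neutral voltage in a three-phase system is line-to-line voltage / sqrt(3).
line_to_line_volts = 480.0
line_to_neutral_volts = line_to_line_volts / math.sqrt(3)
print(f"Line-to-neutral voltage: {line_to_neutral_volts:.0f} V")  # ~277 V

# Assumed, illustrative efficiencies for a conventional power chain
# (double-conversion UPS plus a PDU transformer); not measured values.
ups_efficiency = 0.94
pdu_transformer_efficiency = 0.97
conventional_chain = ups_efficiency * pdu_transformer_efficiency
direct_277v_feed = 1.00  # no intermediate AC conversions in this simplified model

print(f"Power reaching the server, conventional chain: {conventional_chain:.1%}")
print(f"Power reaching the server, direct 277 V feed: {direct_277v_feed:.1%}")
```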
Facebook contemplated installing on-board batteries on its servers, but settled on in-row UPS units. Each UPS system houses 20 batteries, with five strings of 48 volt DC batteries. Facebook’s power supplies include two connections, one for AC utility power and another for the DC-based UPS system. The company has systems in place to manage surge suppression and deal with harmonics (current irregularities).
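For the battery math implied above: 20 batteries split into five strings works out to four batteries per string, and four 12-volt blocks in series give the 48-volt DC strings described. The string count and total battery count come from the description above; the 12-volt block size is an assumption for illustration.

```python
# Figures from the description above; the 12 V block voltage is an assumption.
total_batteries = 20
strings_per_ups = 5
assumed_block_voltage = 12.0  # volts per battery block (assumed, typical VRLA size)

batteries_per_string = total_batteries // strings_per_ups      # 4 per string
string_voltage = batteries_per_string * assumed_block_voltage  # 48 V DC
print(f"{batteries_per_string} batteries per string -> {string_voltage:.0f} V DC per string")
```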
For a comprehensive list of servers, network switches, and other hardware Facebook has designed and open sourced through the Open Compute Project, visit our Guide to Facebook’s Open Source Data Center Hardware.
What kind of software does Facebook use?
Facebook was developed from the ground up using open source software. The site is written primarily in the PHP programming language and uses a MySQL database infrastructure. To accelerate the site, the Facebook Engineering team developed a program called HipHop to transform PHP source code into C++ and gain performance benefits.
Facebook has one of the largest MySQL database clusters anywhere, and is the world’s largest user of memcached, an open source caching system. Memcached was an important enough part of Facebook’s infrastructure that CEO Mark Zuckerberg gave a tech talk on its usage in 2009.
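To show the role memcached typically plays in this kind of stack, here is a minimal, hypothetical sketch of the cache-aside pattern: check the cache first and fall back to the database on a miss. It is written in Python using the pymemcache client against a local memcached instance, with a placeholder fetch_from_mysql function; it is not Facebook’s actual code, which is primarily PHP.

```python
import json
from pymemcache.client.base import Client

# Hypothetical setup: a memcached instance running locally on the default port.
cache = Client(("localhost", 11211))

def fetch_from_mysql(user_id: int) -> dict:
    # Placeholder standing in for a real MySQL query via a database driver.
    return {"id": user_id, "name": "example user"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: skip the database
    user = fetch_from_mysql(user_id)              # cache miss: go to MySQL
    cache.set(key, json.dumps(user), expire=300)  # keep the result for five minutes
    return user

print(get_user(42))
```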
Facebook has built a framework that uses RPC (remote procedure calls) to tie together infrastructure services written in any language, running on any platform. Services used in Facebook’s infrastructure include Apache Hadoop, Apache Cassandra, Apache Hive, FlashCache, Scribe, Tornado, Cfengine and Varnish.
The company’s engineers continue building new infrastructure software components, contributing many of them as open source projects.
How much does Facebook spend on its data centers?
Facebook has invested billions in the infrastructure that powers its social network, which now serves about 1.13 billion daily active users around the globe, most of them (84.5 percent) outside of the US and Canada. It also spends huge amounts of money every year to operate these facilities.
The company doesn’t say exactly how much it spends on data centers, but some of the numbers make it into public filings and announcements. The data center it is building in Fort Worth, Texas, for example, is expected to cost about $500 million initially, and as Facebook expands the site, its investment may reach as much as $1 billion. It is expected to invest about $250 million in the initial build-out of its data center in Los Lunas, New Mexico.
Facebook reported $2.52 billion in capital expenditures on data centers, servers, network infrastructure, and office buildings in 2015. That number was $1.83 billion in 2014, and $1.36 billion in 2013.
In 2015, Facebook spent $480 million more on operational expenses related to its data centers and technical infrastructure than it did in 2014, according to an SEC filing. The company told regulators it expected this expense to grow further in 2016 as it continues expanding its data center capacity.
What does it look like inside a Facebook data center?
Here’s a photo tour of the Facebook data center in Lulea, Sweden. And here’s a video tour of the same facility, produced by Bloomberg.
How energy efficient are Facebook’s data centers?
Facebook’s Prineville data center, the first facility the company designed in-house, operates at a Power Usage Effectiveness (PUE) of 1.06 to 1.08 for the entire facility, and the company said its North Carolina data center would have a similar efficiency profile. The PUE metric compares a facility’s total power usage to the amount of power used by the IT equipment, revealing how much is lost in distribution and conversion. A PUE of 2.0 indicates that the IT equipment receives only about 50 percent of the power delivered to the building.
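The arithmetic behind the metric is simple; the sketch below uses made-up facility loads (not measurements from any Facebook site) to show how PUE is computed and what a 1.06 to 1.08 range implies about overhead.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example loads, purely for illustration.
it_load_kw = 10_000.0
examples = {
    "typical legacy facility (PUE 2.0)": pue(20_000.0, it_load_kw),
    "Prineville-class facility (PUE ~1.07)": pue(10_700.0, it_load_kw),
}

for label, value in examples.items():
    overhead = 1 - 1 / value  # share of total power lost to cooling, conversion, etc.
    print(f"{label}: PUE = {value:.2f}, overhead = {overhead:.0%} of total facility power")
```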
The cool climate in Prineville allows Facebook to operate without chillers, which are used to refrigerate water for data center cooling systems but require a large amount of electricity to run. With the growing focus on power costs, many operators are now designing chiller-less data centers that use cool outside air instead of mechanical refrigeration. On hot days, the Prineville data center is designed to use evaporative cooling instead of a chiller system.
In its cooling design, Facebook adopted the two-tier structure seen in several recent designs, which separates the servers and cooling infrastructure and allows for maximum use of floor space for servers. Facebook opted to use the top half of the facility to manage the cooling supply, so that cool air enters the server room from overhead, taking advantage of the natural tendency for cold air to fall and hot air to rise – which eliminates the need to use air pressure to force cool air up through a raised floor.
Oregon’s cool, dry climate was a key factor in Facebook’s decision to locate its facility in Prineville. “It’s an ideal location for evaporative cooling,” said Jay Park, Facebook’s Director of Datacenter Engineering. The temperature in Prineville has not exceeded 105 degrees in the last 50 years, he noted.
The air enters the facility through an air grill in the second-floor “penthouse,” with louvers regulating the volume of air. The air passes through a mixing room, where cold winter air can be mixed with server exhaust heat to regulate the temperature. The cool air then passes through a series of air filters and a misting chamber where a fine spray is applied to further control the temperature and humidity. The air continues through another filter to absorb the mist, and then through a fan wall that pushes the air through openings in the floor that serve as an air shaft leading into the server area.
“The beauty of this system is that we don’t have any ductwork,” said Park. “The air goes straight down to the data hall and pressurizes the entire data center.”
Testing at the Prineville facility has laid the groundwork for adopting fresh air cooling extensively in North Carolina, even though the climate there is warmer than in Oregon. “Comparing our first phase of Prineville with how we plan to operate Forest City, we’ve raised the inlet temperature for each server from 80°F to 85°, 65% relative humidity to 90%, and a 25°F Delta T to 35°,” wrote Yael Maguire on the Facebook blog. “This will further reduce our environmental impact and allow us to have 45% less air handling hardware than we have in Prineville.”
The Delta T is the difference between the temperature in the cold aisle and hot aisle, meaning the hot aisles in Facebook’s new data center space will be as warm as 120 degrees – not a pleasant work environment for data center admins. Mindful of this, Facebook designed its Open Compute Servers with cabling on the front of the server, allowing them to be maintained from the cold aisle rather than the hot aisle. The contained hot aisles in Prineville are unlit, as the area was not designed to be staffed.
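As a quick check on that figure, using only the numbers quoted above: an 85°F server inlet temperature plus a 35°F Delta T puts the hot-aisle air at roughly 120°F.

```python
# Figures from the Forest City operating envelope described above.
inlet_temp_f = 85.0   # cold-aisle / server inlet temperature, in °F
delta_t_f = 35.0      # temperature rise across the servers, in °F

hot_aisle_temp_f = inlet_temp_f + delta_t_f
print(f"Hot-aisle temperature: {hot_aisle_temp_f:.0f} °F")  # 120 °F
```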
What’s different about Facebook’s data center in Sweden?
Facebook took a different approach to the electrical infrastructure design at its data center in Sweden, reducing the number of backup generators by 70 percent. Facebook says the extraordinary reliability of the regional power grid serving the town of Lulea allows the company to use far fewer generators than in its U.S. facilities.
Using fewer generators reduces the data center’s impact on the local environment in several ways. It allows Facebook to store less diesel fuel on site, and reduces emissions from generator testing, which is usually conducted at least once a month.
Local officials in Lulea say there has not been a single disruption in the area’s high voltage lines since 1979. The city lies along the Lulea River, which hosts several of Sweden’s largest hydro-electric power stations. The power plants along the river generate twice as much electric power as the Hoover Dam.
“There are so many hydro plants connected to the regional grid that generators are unneeded,” said Jay Park, Facebook’s data center design architect. “One of the regional grids has multiple hydro power plants.”
Park said Facebook configured its utility substations as a redundant “2N” system, with feeds from independent grids using different routes to the data center. One feed travels underground, while the other uses overhead utility poles.
Technical Presentations:
For those interested in more detailed information, here are links to PDFs and videos of presentations about Facebook’s Infrastructure and operations from members of the Facebook Engineering team.
Facebook Engineering Front-End Tech Talk (August 2010)
A Day in the Life of A Facebook Engineer (June 2010)
IPv6 at Facebook (June 2010)
Rethinking Servers & Datacenters (November 2009)
High Performance at Massive Scale at Facebook (Oct. 2009)
Facebook’s Bandwidth Requirements (Sept. 2009)
Memcached Tech Talk with Mark Zuckerberg (April 2009)