November 29, 2024
Equinix, as one of the largest data center operators globally, faces the ongoing challenge of maintaining consistent uptime across its vast infrastructure. Outages caused by severe weather, elevated temperatures, or technical issues like UPS failures underscore the operational complexities inherent in managing such a scale. However, these incidents also highlight Equinix’s commitment to rapid incident response and continuous improvement, minimizing impacts and reinforcing resilience across its facilities.
Here is a collection of the most notable incidents:
Equinix's SG4 data center in Singapore. IMAGE: EQUINIX
October 2023: DBS, Citi’s Banking Services Resume After Data Center Disruption
Last year, DBS and Citigroup’s payment services went down for several hours, with customers unable to use their cards, access online banking, or withdraw cash. The downtime was traced to an incident at an Equinix data center in Singapore, where a spokesperson said a technical issue caused temperatures inside the facility to rise. Read more...
Inside the Equinix DC12 data center in Ashburn, Virginia. IMAGE: EQUINIX
March 2018: Equinix Power Outage One Reason Behind AWS Cloud Disruption
In March 2018, a power outage at an Equinix data center in Ashburn, Virginia, caused connectivity problems for some Amazon cloud customers – including disruptions for Atlassian, Twilio, and Capital One. Severe weather had caused widespread power outages across the East Coast, and Equinix said its contingency plans were unable to prevent service disruptions for some customers. Read more...
January 2017: Equinix: London Data Center Outage Affects Clients Without Redundant Connections
A brief data center outage at an Equinix facility in London caused a short period of downtime for some customers. Reports suggest the loss of power resulted from “routine maintenance” carried out by the facility engineers. SSP, a technology solutions provider that hosts its equipment at the facility, apologized to its customers, noting that it was working with Equinix to establish the root cause. Read more...
July 2016: Equinix Data Center Outage in London Blamed on Faulty UPS
Studies show that UPS failure is the most common cause of data center outages – and this was the cause of a 2016 outage at one of the Telecity facilities in London. The company did not say what exactly went wrong with the UPS, but the outage caused connectivity problems for many BT internet subscribers. A BT spokesperson noted that roughly one in every 10 attempts by its users to reach a website failed during the outage. Service was restored within minutes. Read more...
Exterior of the 111 8th Avenue building in Manhattan. IMAGE: TACONIC PARTNERS
November 2012: Temperatures Soar at Data Center Inside 111 8th Avenue
On November 1, 2012, temperatures soared inside a Zayo Group data center as problems with a generator shared by Zayo and Equinix forced Zayo to power down its onsite cooling systems. Customers remained online throughout the utility outage, which was resolved once the company stabilized temperatures and brought the cooling systems back online. Read more...
January 2012: Equinix Outage Means Downtime for Zoho
A power outage in January 2012 at a California Equinix data center caused problems for many customers, most notably Zoho. Although power to the data center was restored within seconds, a sudden loss of power is problematic for database-driven applications, and even short outages can translate into hours of recovery time for services. That was the case for Zoho. Equinix acknowledged the incident but did not provide details on the cause of the outage. Read more...
Inside Equinix's PA10 data center. IMAGE: EQUINIX
August 2009: Equinix Paris Facility Hit by Cooling Outage
In August 2009, an Equinix data center near Paris experienced a cooling outage, which left some customers offline for hours. As temperatures in Paris soared, multiple chillers failed, and the standby chiller did not start in time to absorb the load, a spokesperson confirmed. Read more...
July 2009: Equinix Hit by Outages in Sydney, Paris
In July 2009, a power outage affected 200,000 customers across five Australian states; downtime lasted approximately 12 minutes. Thirty minutes later, another outage at the Equinix PA2 Saint-Denis IBX Center affected several prominent customers, including DailyMotion and ClaraNet. According to Equinix, the Paris outage was caused by human error while a vendor was conducting routine maintenance on a UPS system. Read more...
July 2006: MySpace Outage Blamed on Data Center
In 2006, the then-popular social network MySpace was offline for hours as a result of a data center power outage. MySpace’s equipment was located at the Equinix facility in El Segundo, and the company blamed the downtime on a power outage at that data center. However, other Equinix customers did not report any issues. Read more...