How Western Express Recovered After a Tornado Wiped Out Its Data Center

When a tornado hit Nashville in March, it wiped out the transportation company's headquarters, including the data center hosting systems crucial for its operations.

Christine Hall

November 16, 2020

A building on the campus of Western Express's corporate headquarters in Nashville after being destroyed by a tornado on March 3, 2020. (Image: Western Express)

Disaster preparedness is one of the central organizing concepts in the data center industry, and at large operators it's company policy. Smaller, private data centers, on the other hand, often have some parts of their operations covered by policy while putting off others. Disaster preparedness frequently falls in the latter category.

Natural disasters, of course, don't wait until you've covered all your bases. They're happy to arrive just months before you've put your final plan in place, and they tend to come without warning.

This was a lesson learned by Western Express, a major national trucking outfit in the US, when a tornado hit Nashville about half an hour past midnight on March 3 and wiped out the company's three-building headquarters there, including nearly all its IT hardware, leaving the company sitting dead in the water -- almost literally.

Western Express's on-premises IT infrastructure wasn't huge, but it was crucial for its operations. It centered around an IBM Power 9 server running its legacy transportation management system, the ERP system that did all the operational heavy lifting. There were also about 14 commodity Linux servers, mainly running Windows virtual machines on VMware.

David Sivils, VP of IT at Western Express, told DCK that the company did have disaster recovery in place for the IBM box running the ERP software but not for the VMware servers. Luckily, those servers had been backed up at midnight, meaning only about 30 minutes' worth of data was lost.


"We did have DR in place in Atlanta, where it was being replicated," he said.

Because of that, he said, his team was able to get the legacy ERP software up and running fairly quickly, although it would temporarily run on a server in Atlanta, adding latency that proved problematic for some of the VMs.

"I got the call at two in the morning from the president, and in about seven hours we had most of our workforce up and going on the ERP side," Sivils said.

That included getting hooked up to the server in Atlanta and getting three or four mission-critical vendors connected to the new temporary network.

Sivils got on the phone to try to find local colocation space and replacement servers for the machines that ran the VMs. Eventually, he found available colocation space at vXchnge's downtown Nashville data center.


Nashville offices of Western Express, destroyed by the March 3, 2020 tornado

Matt Donnelly, Sivils's initial go-to account executive at vXchnge, had also suffered from the disaster. The tornado destroyed the apartment building he lived in.


"That day I received word from one of my partners that Western Express was in need of some help," Donnelly told DCK. "So I reached out to David Sivils, and he's basically like, 'Matt, we're standing here in a puddle of water. All of my servers are drenched. I don't know what I'm going to do. Can you help me?' So, I worked with him directly and let him know that we would absolutely be there to help; that we would get the environment stood up in our data center as quickly as possible."

As for servers to put into the colocation space?

"We dried out a couple," Sivils said. "We borrowed some servers from Dell ..., so we were able to get the VMware side of the house up and going. It did take a little bit more time, but it wasn't as pressing as just getting [the] whole company up."

The servers running the VMs were up and running by early that evening.

"To put that in perspective, the cycle of time that we worked through there versus normal negotiations alone -- assessing their environment, figuring out what they need from a space and power perspective, and then contract negotiations -- typically would take us over a month," Pete McPeters, the director of operations for vXchnge's central region in St. Louis, told us. "Then we would have a week or two of installation time, because we would have to order the power whips, get an electrician to come in to install them, order the PDUs [power distribution units] to put inside each cabinet, and all those things. We were able to overcome that, all in about a six-hour time frame."

Part of that was because vXchnge put in some extra effort to accommodate the situation, according to McPeters.

"They were able to find the specific type of power whips that Western Express needed existing somewhere else in the data center and get those moved without having to bring an electrician in, which would have been very difficult following a major disaster like that," he said. "We also were able to find some older PDUs that we were able to repurpose and put in the racks to get them up and running.

"The time frame under which we did this was pretty monumental," he added. "I've been doing this for close to 20 years now, and I think this was probably the fastest install I've ever seen."


Nashville offices of Western Express, destroyed by the March 3, 2020 tornado

Ironically, the biggest roadblock Western Express had to overcome was in an area where Sivils thought he had all his disaster-recovery ducks in a row, and which led to the troublesome latency issues that lasted until August 14.

"That's strictly due to our disaster recovery company in Atlanta being so slow in getting a new production box through IBM back to us," he said. "I think that took exorbitantly long. It took nearly six months and should not have taken that long, so I'm not happy."

Sivils said that when the disaster occurred, he had already been looking for a disaster recovery solution for the servers running VMware. He now has one in place in the colocation space, which he plans to keep.

"We're covered on both sides now," he said.

"We also had an [on-premises] phone system, which was destroyed. However, we had been working on a cloud-based telephony system, but we were only about 60 percent ready for that, so a lot of March also was slamming in a new phone system and getting it all set up and ready to go for our internal workforce. That was a lot of time and effort. We were in the process of moving to the cloud solution for exactly that reason, disaster recovery, but we weren't quite there yet. So timing is everything."

Coincidentally, the disaster left the company well-prepared for the arrival of the COVID-19 pandemic, which was lurking just around the corner.

"Just because of the tornado, we had to have remote workers, so we ordered a lot of new computers and new servers, but we also ordered USB wireless devices, MiFi hotspots, headsets, new monitors, so we had all of that in place before COVID actually hit," he said. "When COVID hit, we had all of that remote capability ready, and everything like monitors, USB wireless, displays, keyboards, mice, and headsets, which were on backorder for most of the nation, we already had. We were already prepared for COVID-19 because of the tornado."

About the Author

Christine Hall

Freelance author

Christine Hall has been a journalist since 1971. In 2001 she began writing a weekly consumer computer column and began covering IT full time in 2002, focusing on Linux and open source software. Since 2010 she's published and edited the website FOSS Force. Follow her on Twitter: @BrideOfLinux.
