Making Your Own Servers Wasn’t Always Sexy
Rackspace CTO on difference between DIY servers 15 years ago and today
Initiatives like Facebook’s Open Compute Project and IBM’s OpenPOWER Foundation are perceived as ecosystems composed of people doing some of the most cutting-edge thinking about computing at massive scale. DIY server design is what cool kids like Google and Facebook do, but it hasn’t always been that way.
“It wasn’t sexy,” Rackspace CTO John Engates says, remembering the company’s early years. “Today it’s sexy to build your own servers. Back then it was cheap and scrappy to build your own servers.”
Like many other tech-startup entrepreneurs of the late ’90s and early 2000s (Google’s founders included), Rackspace’s founders relied on the local PC shop that sold cheap computer parts as their primary “server” vendor. “And what we were really building was PCs that we were using as servers,” Engates recalls. “You build a PC and slap Linux on it and call it a server. That was how we started our company.”
The company went the DIY server route simply because it was cheaper. Once it started growing and talking to enterprise customers who understood what a real server looked like, it began buying off-the-shelf enterprise-class servers. Dell was its first real server vendor – a relationship that started around 2002. Around the same time, Rackspace also started buying “grown-up” storage systems from the likes of NetApp and EMC.
Matching Hardware to Workload
Today, the company is back to designing its own servers – though its staff don’t actually put them together – and it’s doing so for very different reasons. There are still cost advantages, but it’s more about better matching infrastructure to computing needs. It’s about shaping the roadmap of a server platform so you don’t have to simply adapt to whatever enterprise server vendors put on the market. “You want to be able to design it,” Engates explains.
Finding the best hardware for a particular workload is the reason so many companies are looking into Open Compute, OpenPOWER, or ARM server technologies, he says. Having multiple suppliers compete for a contract to supply the same server – the server the customer needs – also doesn’t hurt.
Rackspace has bought its version of Open Compute servers from Taiwanese manufacturer Quanta, as well as from HP and Dell, Engates says. Other vendors have been in the mix too.
First Worthy Challenger to x86
The company got involved with OpenPOWER last December for similar reasons. “OpenPOWER represents more opportunity to have a supply-chain advantage and to have economics that are potentially better than what you get off the shelf today,” he says.
OpenPOWER and Open Compute are very different beasts, however. Open Compute is literally open source. The specs and designs available through the project can be used by anyone for any purpose. OpenPOWER provides a way for companies to license the POWER processor architecture from IBM to do development on it.
In the world of processor intellectual property, that’s about as open as it gets today; the U.K.’s ARM Holdings similarly licenses its architecture to chip makers. IBM’s consortium has some heavyweight members, including Google, NVIDIA, Samsung, and Hitachi, among others. Google has already designed its own server based on POWER.
Rackspace’s participation is starting to bear fruit too. Earlier this week the company announced a server spec that for the first time combines OpenPOWER and Open Compute. The design is optimized to run OpenStack, the open source cloud platform nearly all of Rackspace’s cloud offerings are built on. The company was one of the key forces behind OpenStack’s birth and development.
POWER is a different kind of alternative to Intel’s x86 architecture than ARM. With ARM, users trade performance for energy efficiency. That’s not the case with POWER, which is competitive with x86 on both performance and price, Engates says. “It’s the first thing that’s shown a lot of promise as an alternative to x86.”
Not Swearing Off ‘Incumbent’ Gear
To be sure, Rackspace-designed Open Compute servers are not the only kind of hardware running in the company’s data centers. They support its public cloud and bare-metal services, while many of its other, more traditional services (things like VMware virtual machines) run on traditional enterprise infrastructure.
One reason is the legacy of cross-certification among incumbent vendors. If you want EMC, Oracle, and Cisco to cooperate with each other in supporting an enterprise IT environment running in your data center, that environment better consist of components the vendors have certified to work together.
Another reason is there are lots of enterprises that simply prefer the traditional hardware route. Enterprise customers tend to be more inclined to “buy off-the-shelf enterprise-class gear, because it’s already completely qualified, optimized, and tuned, pre-integrated and ready to go,” Engates says. They know that equipment and want their service provider to offer it to them.
This is a telling dynamic. The “incumbent” gear Rackspace does buy is purchased to satisfy enterprise customers who aren’t in the custom, DIY mindset the service provider itself is in. As an end user buying hardware for its own needs – the infrastructure underpinning its cloud services – it has chosen Open Compute because it simply makes better sense.
As Open Compute matures, however, and more enterprise data center users get comfortable with it – and there are signs that this has started to happen – it is a real possibility that companies like Rackspace will find themselves buying fewer and fewer off-the-shelf boxes, replacing them with custom-designed commodity infrastructure optimized for their specific requirements. After all, DIY servers are now sexy.