Standardize the Datacenter: Single-Source for Success

“[S]tandardize on a homogeneous infrastructure” is hard advice for me to hear. Nearly my whole career has involved some aspect of porting or abstraction across market competitors, and aversion to “single-sourcing” has always been among the commandments of good design.

Andy Patrizio writes for me: “… the conventional wisdom for filling out a data center centered on one idea: don’t put all your eggs in one vendor’s basket.” He follows that up, though, with evidence that the most economical results now come when “you buy a whole lot of one type of system.” It’s not that we can’t deal with heterogeneity; it’s just more expensive than it’s worth.

In part, this is a symptom of the success of computing. An increasing share of the total cost of ownership (TCO) of computing systems goes to human involvement: installation, diagnosis, repair, and so on. The easiest way to reduce the human component of TCO is to standardize, as Peter ffoulkes of TheInfoPro illustrates with the example of running an airline fleet: “… only train people once …” when they only have to fly one kind of airplane.
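The airline analogy can be made concrete with a back-of-the-envelope sketch. The numbers below are entirely made up for illustration, as is the `human_tco` helper; the point is only the shape of the arithmetic: per-platform training costs multiply with every distinct system type a team must support, while salaries stay fixed.

```python
# Illustrative only: rough annual "human" TCO for an operations team,
# assuming (hypothetically) that each admin needs separate training for
# each distinct platform type in the datacenter.

def human_tco(platform_types, admins, training_per_type=5_000,
              salary_share=40_000):
    """Salaries plus per-platform training, for every admin on
    every system type they must support. All figures invented."""
    training = admins * platform_types * training_per_type
    salaries = admins * salary_share
    return salaries + training

# Four vendors' worth of gear vs. one standardized platform,
# for the same ten-person operations team:
heterogeneous = human_tco(platform_types=4, admins=10)
standardized = human_tco(platform_types=1, admins=10)
print(heterogeneous - standardized)  # prints 150000
```

The savings here come purely from the training term; real heterogeneity costs (spare parts, divergent runbooks, longer diagnoses) would widen the gap further.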

Virtualization also plays a key role in the advice to standardize: it is how contemporary system administrators keep effective utilization high while scheduling heterogeneous loads, some of them hundreds of times more demanding than others, on uniform hardware.

A final aspect of contemporary datacenter design that Patrizio doesn’t mention, but that further buttresses his argument, is that components are becoming commodities. Buying a standardized server, patch panel, or security appliance sidesteps single-sourcing, or at least its worst hazards. That’s part of the attraction of the Open Compute movement Facebook promotes: reliance on Open Compute parts benefits not only from widespread knowledge about their use and review of their design, but also from at least the potential of low-margin pricing.

While there’s still a lot to learn about how to manage datacenters efficiently, the collective vote of the world’s 500 most powerful high-performance computing (HPC) systems, as ffoulkes summarizes it, makes the case that it’s time for standardization.
