Smart DevOps practitioners, like you, know that they need to keep learning and improving throughout their careers. Datacenter operations change rapidly, and there’s a lot to learn. Commercialization of blade server architecture is barely a decade old, for instance, with the PICMG 2.16 Packet Switching Backplane specification adopted eleven years ago this month.
Learning better techniques isn’t enough, though. Several recent reports underline that we also need to adjust our goals from time to time. Improving efficiency is necessary but not sufficient, because the best datacenters figure out new strategies that jump beyond mere gains in technical efficiency.
Leading tech companies throughout Africa and South Asia, for example, aren’t waiting to perfect their power supplies, peering arrangements, or complex page layouts. Instead, they’re running simple Web sites on cloud services based in North America or Europe, good enough to be seen on the low-end feature phones that most of their end users carry. As several network engineers have explained to me anonymously, their decision-makers waste no time worrying about NoSQL sharding theories or latency esoterica. They set up a new business in a few minutes or hours on Amazon Web Services, 4Shared, or another public cloud, and focus on very rapid growth in their chosen market.
In a slightly different way, plenty of more traditional-looking high-tech companies have given up entirely on server or network engineering: they focus exclusively on their own application, implemented within a commodity-priced public cloud.
Still closer to datacenter operations as we’ve known them, Belinda Yung-Rubke acknowledges the importance of expert troubleshooting and deep protocol analysis in application performance management (APM). Her advice, though, is to save those high-caliber weapons for cases that truly require them. Network-based APM solutions can solve many routine problems, giving the specialists a chance to concentrate on the thorniest challenges and thus make the best use of their expertise.
Jeffrey Kaplan also writes about a trade-off. He advocates for cloud-based management tools with “powerful new analytics”. Kaplan knows that “there is always a risk that the service provider might use … metadata for … proprietary purposes.” Accepting that risk, though, opens the door to the “greater insight” available from well-designed benchmark data that would be prohibitively expensive to obtain any other way. For Mark Williams, the big gain of cloud-based software-as-a-service (SaaS) server monitoring is its light weight and out-of-the-box effectiveness: with SaaS server monitoring, you can have answers about server state in less time than it takes to read this posting.
The marketplace around datacenter operations has become exceedingly rich: if you face a problem or need, it’s likely that someone else has already packaged a solution you can buy as SaaS. Identify where your true long-term goals and strengths are, and concentrate on those; for everything else, be prepared to relax a bit, buy an answer from someone outside, and make the most of your finite time. Remember that a big part of “making the most” is not just solving what’s in front of you, but re-thinking how your datacenter works to better align it with the larger goals of your organization. In upcoming installments of the Application Monitor, we’ll look at several specific clever uses of cloud resources to make the most of datacenters.