One of an engineer’s hardest jobs is to do what matters. It’s attractive, seductive even, to hunt for keys where the light is better, but winners keep in mind that the goal is to reach the keys, even (especially) when that lies outside the comfort zone.
This idea applies throughout DevOps, more than most practitioners realize. Here are a few examples:
Part of the evidence that doing what matters is difficult is that so many figures of speech try to capture aspects of it. We collectively talk about “bottlenecks”, “weak links”, “closing the barn door after the horse has left”, “leverage points”, the “Theory of Constraints”, and so on. In principle, these concepts are obvious.
It’s apparent, though, that many professionals have trouble applying them correctly. Mike Cuppett is right in his “Customer Experience–Don’t Forget the NOW” to emphasize that “… investing to bring all components up to 99% delivery success improves the customer experience more than investing to get the back-end components up to 99.999%”. He correctly promotes “… smart investments … focused on business impact – improved customer experience – rather than meeting an old-fashioned IT metric”, because it is so easy to slip into habits of focusing on familiar technical realms. However undeniable the arithmetic of performance optimization is, humans seem to have a hard time remembering that super-insulating does little to keep a house with broken windows warm.
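Cuppett’s arithmetic is worth making concrete. In a serial request path, every component must succeed, so end-to-end success is the product of the per-component rates. The sketch below uses made-up rates for a hypothetical three-component path (the figures are illustrative, not from Cuppett’s article):

```python
import math

# Hypothetical per-component success rates for a serial request
# path: front end -> API tier -> back end.
baseline = [0.90, 0.97, 0.99]

def end_to_end(rates):
    """Success rate of a serial chain: every component must succeed."""
    return math.prod(rates)

# Option A: bring *all* components up to at least 99% delivery success.
broad_fix = [max(r, 0.99) for r in baseline]

# Option B: push only the already-strong back end to 99.999%.
backend_fix = [0.90, 0.97, 0.99999]

print(f"baseline:          {end_to_end(baseline):.4f}")
print(f"all >= 99%:        {end_to_end(broad_fix):.4f}")
print(f"back end 5-nines:  {end_to_end(backend_fix):.4f}")
```

With these numbers, fixing the weak links lifts end-to-end success from roughly 86% to 97%, while five-nines on the back end barely moves it: the extra nines are invisible behind the broken windows.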
Gabriel Lowy makes related points in his “Holistic Unified User Experience Assurance”: as pleasant as it is to optimize “traditional technology domain silos, such as server, network, application, operating system or security …”, they determine overall end-user experience (EUE) only partially, at best. Rapidly-changing traffic (consider audio and video loads) and topology (cloud) put a premium on measuring what is actually happening, as opposed to assuming performance is much like it was in the past.
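One way silo metrics mislead: each silo’s dashboard can look healthy while end users suffer, because a request traverses many silos in series and tail effects compound. A minimal back-of-envelope sketch, with an illustrative (hypothetical) per-silo figure:

```python
# Suppose each of five serial components is "fast 99% of the time",
# and slowness in each component is independent of the others.
# A request is only fast end-to-end if *every* leg is fast, so the
# fraction of users who hit at least one slow leg compounds.
silos = 5
fast_fraction = 0.99  # hypothetical per-silo dashboard number

p_all_fast = fast_fraction ** silos
p_slow_experience = 1 - p_all_fast

print(f"requests hitting at least one slow component: {p_slow_experience:.1%}")
```

Every silo reports 99% fast, yet nearly 5% of users experience a slow request; only measuring the actual end-user experience reveals this.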
The few publicly-available measurements reinforce this: performance is not only changing, it’s clearly degrading, and in sometimes-surprising ways. Web performance specialist Tammy Everts makes this clear in her latest Radware report. Remember that this is no reason for despair; it simply means that wisely-chosen low-cost improvements yield great gains.
A final related instance of doing what matters appears in Nick Hardiman’s “Network emulation for the cloud”. As mentioned just above, the construction and responsibilities of our networks are both changing radically; old rules-of-thumb are poor guides to today’s winning ways. Even though methodical network testing, including modelling and emulation, challenges administrators comfortable with existing practices, it’s the right thing to do, and pays off far better than “… guesswork that must be rooted out.”
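To give a flavor of what emulation buys over guesswork, here is a deliberately simple latency model, not Hardiman’s method: it estimates expected request latency once a service sits behind a link with added delay and packet loss, assuming a lost request is retried once after a fixed timeout. (Real emulators, such as Linux’s tc/netem, shape live traffic rather than computing expectations; every number below is hypothetical.)

```python
def expected_latency_ms(base_ms, added_delay_ms, loss_rate, retry_timeout_ms):
    """Expected request latency under emulated network conditions.

    Model: a request either succeeds at (base + added delay), or is
    lost with probability loss_rate, waits out a retry timeout, and
    then succeeds on the second attempt.
    """
    ok_latency = base_ms + added_delay_ms
    retry_latency = retry_timeout_ms + ok_latency
    return (1 - loss_rate) * ok_latency + loss_rate * retry_latency

# Hypothetical scenario: a 50 ms service moved behind a WAN link
# with 100 ms of extra delay and 2% packet loss, 1 s retry timeout.
print(expected_latency_ms(50, 100, 0.02, 1000))  # roughly 170 ms
```

Even this toy model shows why measurement beats assumption: the 2% loss rate, which a per-component dashboard might wave away, adds as much to the mean as an extra 20 ms of delay, and emulation exposes that before the move to the cloud does.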
What parts of application performance management (APM) are hard for you? What distracts you from the measurement and optimization you know will give you the biggest “bang”?