Application Performance Management (APM) remains in its infancy. There’s no simple formula for applying it usefully; expect that you’ll need to study both APM techniques and the specifics of your own organization’s needs to fit the former to the latter.
That’s part of what I read in Larry Dragich’s recent “Real-Time Monitoring Metrics – The Magical Mundane”. Certainly we’ve learned a lot in the past few years about metrics for real user monitoring (RUM) and end-user experience (EUE). The difficulty remains, though, that applications can vary in so many dimensions, including:
With so many variables in play, it’s impractical to “buy” APM off the shelf and expect it to yield a simple, correct answer. Precisely because of that variability, though, it’s crucial to take full advantage of APM and related best practices; measurement of the sort APM provides is the only hope for effective control of application delivery.
That’s also where your own knowledge and experience enter as essential components. Expect to customize any APM solution so that it provides meaningful results for your situation. Dragich rightly emphasizes the importance of “baseline comparisons” in managing and monitoring APM outputs.
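To make the idea of a baseline comparison concrete, here’s a minimal sketch in Python. The metric, window, and numbers are hypothetical, not from Dragich’s article: it summarizes a window of past response-time samples and flags a new sample that strays too far from that norm.

```python
from statistics import mean, stdev

def baseline_stats(history):
    """Summarize a window of past response-time samples (in ms)."""
    return mean(history), stdev(history)

def deviates(sample, history, threshold=3.0):
    """Flag a sample more than `threshold` standard deviations
    from the historical baseline."""
    mu, sigma = baseline_stats(history)
    return abs(sample - mu) > threshold * sigma

# Hypothetical response times (ms) from a quiet week.
history = [210, 195, 205, 220, 198, 202, 215, 208]
deviates(212, history)   # within normal variation
deviates(450, history)   # well outside the baseline
```

A three-sigma cutoff is just a placeholder; in practice the threshold is exactly the sort of thing you tune to your own applications.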
I don’t understand everything Dragich has written. Much of that comes from my own stumbling over figurative language (if an application executes in “oscillating winds”, how does it “outline high and low watermarks”?). I know, though, that he’s on target in promoting basic de-trending techniques such as comparison of metrics over daily, weekly, and averaged-weekly cycles. For your own APM, you’ll want to put together at least rudimentary comparisons across all the bulleted items above, much like the temporal cycles Dragich highlights.
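One rudimentary way to de-trend along the temporal cycles Dragich highlights is to average each sample against its hour-of-week slot, so that daily and weekly rhythms drop out of the comparison. A sketch, with hypothetical data (the slot scheme and sample values are mine, not Dragich’s):

```python
from collections import defaultdict

def weekly_baseline(samples):
    """Build an averaged-weekly baseline from (hour_of_week, value)
    pairs, where hour_of_week runs 0..167 (Monday 00:00 = 0)."""
    slots = defaultdict(list)
    for hour, value in samples:
        slots[hour].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in slots.items()}

def detrended(hour, value, baseline):
    """Deviation of a new sample from its slot's averaged-weekly norm."""
    return value - baseline[hour]

# Hypothetical samples: slot 9 (Monday morning) runs hot,
# slot 33 (Tuesday overnight) runs quiet.
samples = [(9, 100), (9, 110), (33, 40), (33, 60)]
baseline = weekly_baseline(samples)
detrended(9, 120, baseline)   # compares against the Monday-morning norm
```

The point is that a raw value of 120 only looks alarming relative to its own slot’s history; compared against an all-hours average it might seem unremarkable, which is exactly the distortion de-trending removes.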
Keep the goal in mind. Dragich has it just right: “… there are things you may not want to think about all the time, but you have to think about them long enough to …” know what patterns matter for your applications. Take advantage of APM to detect those important patterns.