In today’s increasingly complex IT world, change is the only constant. To succeed in such an environment, IT must take a proactive approach to performance management. How do you stay proactive in your approach? One strategy is to employ tools that automatically adapt and configure themselves to your environment. Otherwise, you may be left with monitoring blind spots.
Let me give you an example of how this can happen. Many monitoring tools have no automatic configuration or self-maintaining features. Consequently, these tools may not be maintained properly as time goes by, and slowly but surely they become much less effective. The effect is even more troublesome if your environment features constant application changes, modernizations, and migrations: the tool simply won’t keep up.
In the past, a typical APM purchase went as follows: A CIO would identify the need for a tool due to slow performance, customer complaints, and the like. They would approach several APM vendors and outline their goals. The CIO decides on a tool and works on an integration plan. Depending on the specifics, they might not get it up and running in a timely fashion; the client and integration team need to figure out the alerts, SLAs, and so on. Finally, after a couple of months, the tool is in place and monitoring the critical applications. The Operations team meets its goals and the CIO gets their bonus. Mission accomplished. But what happens after?
Most of the time, particularly after the initial problems have been solved, the new tool just stays in place monitoring the initial applications. It doesn’t get used as much since the initial performance problems have already been identified and addressed. The monitoring tool starts to fall out of step with the environment and applications, and gaps in coverage appear. New applications come into the environment, such as mobile, which may not be supported. New technologies like ESBs are added into the mix. More and more complexity enters the environment, and your monitoring tool becomes out of date. If you are not proactive, you won’t get visibility into components, entry points, and transactions. You end up in the dangerous position of not knowing what you don’t know. This becomes a very difficult conversation to have with the CIO and the heads of the business units. They would probably say something along the lines of: “We spent all this money, why isn’t this working?”
So what is the key to avoiding this problem? Key criteria for an APM solution should be auto discovery of application dependencies and automatic configuration. With auto discovery of observed tiers, you gain visibility into interdependencies you may not have known existed. At the very least, your unknown unknowns become known unknowns, on a tier-by-tier level.
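To make the idea concrete, here is a minimal sketch of how tier auto discovery can work in principle: instrumentation reports each observed caller-to-callee hop, and the dependency map grows as traffic flows, so a new tier shows up without any configuration change. All names here (`DiscoveryAgent`, the tier names) are hypothetical illustrations, not the API of any particular APM product.

```python
from collections import defaultdict


class DiscoveryAgent:
    """Hypothetical sketch: build a tier dependency map from observed calls,
    instead of requiring every tier to be configured up front."""

    def __init__(self):
        # tier name -> set of downstream tiers it was seen calling
        self.dependencies = defaultdict(set)

    def observe_call(self, caller, callee):
        # Invoked by instrumentation whenever one tier calls another.
        # A tier never seen before is added automatically -- no config edit.
        self.dependencies[caller].add(callee)

    def known_tiers(self):
        tiers = set(self.dependencies)
        for downstream in self.dependencies.values():
            tiers |= downstream
        return tiers


# Simulated traffic: the agent learns the topology as calls are observed.
agent = DiscoveryAgent()
agent.observe_call("web-frontend", "order-service")
agent.observe_call("order-service", "orders-db")
# A new mobile gateway appears later and is discovered automatically.
agent.observe_call("mobile-gateway", "order-service")

print(sorted(agent.known_tiers()))
# → ['mobile-gateway', 'order-service', 'orders-db', 'web-frontend']
```

The point of the sketch is the design choice: the source of truth is observed traffic, not a hand-maintained configuration file, which is what keeps the monitoring picture current as the environment changes.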
To our readers, have you ever had problems keeping your APM tool up to date with changes to your environment? If so, how did you remedy the situation?