Why IT Operations is Like an Action TV Series

Nir Livni, Product Management Director at Correlsense, writes about IT operations, citing W. Edwards Deming who said “In God we trust; all others must bring data.”

I like watching the series “24,” though I can’t really explain why. Every time they nearly get the bad guys, there’s some twist in the plot and they have to start all over again. I’m sure you are familiar with the following classic scene: a Counter Terrorism Unit (CTU) chopper is following a suspect driving a black van. The van enters a tunnel, but it doesn’t come out. Instead, a number of different vehicles leave the tunnel at the same time, and the suspect is probably in one of them. By the time the team figures out that the black van has been left empty in the tunnel, they have already lost the suspect! They shout “We have lost visual!!!” and are back to looking for the bad guys all over again. Then they call Jack Bauer.

IT operations is just like the CTU. The CTU is responsible for making sure that life goes on without any unpleasant surprises, and IT operations needs to do the same in its own space: make sure that the business keeps on running and that business transactions are executed properly and on time.

When something is about to go wrong, the CTU and IT operations are expected to prevent it before it affects anyone. So they set up the war room, call everyone in, and start doing their detective work to find the needle in the haystack. If they don’t find it and something goes wrong, then the results are significant: either people get hurt (in the CTU’s case), or business is impacted.

IT Operations: The War Chest

So which tools could IT operations use to find out that there is a problem, identify the root cause of it, and resolve the issue?

For example, IT operations could use HTTP network appliances that see every HTTP transaction and measure its response time. These network appliances are just like the CTU’s choppers: they can watch the traffic from the outside, but they do not have adequate visibility into the data center. They can indicate that something is wrong with a transaction’s response time, but they cannot show why it is high, and they cannot provide the visibility needed for resolution.

IT operations also uses event correlation and analysis (ECA) tools. ECA tools are like CSI detectives (yes… that’s another series I watch…): they rely on other tools to collect information for them, just like the CSI detective who collects evidence from a crime scene. ECA tools are only as effective as the products that feed them data. The issue with ECA tools is that, just like at a crime scene, the thief does not usually leave his ID behind, so all you are left with is clues, and no accurate data to work with.

Additional tools that IT operations relies on are:

  • Dashboards that monitor server resource consumption.
  • J2EE/.Net tools that are capable of performing drill down diagnostics in application and database layers.
  • Synthetic transaction tools.
  • Real User Measurement (RUM) tools.

With all of these monitoring tools, IT operations still finds itself in a situation where all lights are green while users are complaining about bad response times. In spite of all of the investment in monitoring tools, the infrastructure that IT operations is accountable for is still unpredictable. Why?

A Simple Example

Perhaps it’s best to take a look at this classic example: one of our customers had a problem with a wire-transfer transaction. Responsibility for the problem kept bouncing back and forth between the Operations team and the Applications team, each pointing fingers at the other. “All lights are green,” said Operations. “We tested the application and it works just fine,” said Applications. Simply put, no existing monitoring tool could pinpoint the problem.

So what was the problem? The answer is simple: it turns out that, by design, wire transfers of over $100,000 were querying the mainframe nearly 100 times, while other transfers would query it only a few times. Same end-user, same application, same transaction, but a single parameter sent the transaction down a whole different path, and that path made the difference between a 3-second and a 2-minute response time.
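To make the example concrete, here is a minimal sketch of that kind of amount-dependent branching. Only the $100,000 threshold and the rough 3-second versus 2-minute outcome come from the example; the function names, query counts, and per-query latency are illustrative assumptions.

```python
# Hypothetical sketch of the wire-transfer behavior described above.
# Only the $100,000 threshold and the rough response times come from
# the example; names, counts, and latency are illustrative assumptions.

LARGE_TRANSFER_THRESHOLD = 100_000
PER_QUERY_SECONDS = 1.2  # assumed average mainframe round-trip time

def query_count(amount):
    """How many mainframe queries this transfer triggers, by design."""
    if amount > LARGE_TRANSFER_THRESHOLD:
        return 100  # large transfers take a heavier validation path
    return 3        # ordinary transfers query only a few times

def response_time_seconds(amount):
    # Each individual query is fast, so infrastructure dashboards stay
    # green; only the *number* of queries differs between the two paths.
    return query_count(amount) * PER_QUERY_SECONDS
```

A $5,000 transfer finishes in a few seconds, while a $250,000 transfer takes roughly two minutes, even though every single mainframe query, and every monitored component, looks perfectly healthy on its own.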

What Exactly Are You Monitoring?

Now the question remains: why can’t existing monitoring tools identify the problem? The reason is simple. Traditional monitoring tools monitor the infrastructure, not the transactions. In a complex heterogeneous infrastructure, there are many tools for monitoring each and every component, but no single spinal cord that can show how transactions behave across components. None of the tools can deterministically correlate a single request coming into a server with all of the associated requests going out of that server, and keep doing so throughout the transaction path. It is just like the chopper that could not figure out which of the vehicles coming out of the tunnel contained the suspect who went in.
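The missing “spinal cord” is essentially deterministic correlation: tag each incoming request with an identifier and propagate that identifier to every outgoing request, so the full path can be reassembled later. A minimal sketch, assuming dict-based requests and a made-up header name; this is not any specific product’s API.

```python
# Minimal correlation-ID propagation sketch. The header name and the
# in-memory trace log are illustrative assumptions.
import uuid

TRACE_LOG = []  # in a real system this would ship to a central collector

def handle_incoming(request, hop):
    # Reuse the caller's ID if present; otherwise this hop starts the trace.
    cid = request.setdefault("X-Correlation-ID", str(uuid.uuid4()))
    TRACE_LOG.append((cid, hop))
    return cid

def call_downstream(cid, hop):
    # Every outgoing request carries the same ID as the incoming one.
    downstream_request = {"X-Correlation-ID": cid}
    return handle_incoming(downstream_request, hop)

# One user click fanning out across tiers:
cid = handle_incoming({}, "web")
call_downstream(cid, "app")
call_downstream(cid, "db")

# All hops share one ID, so the path is reconstructed, not guessed:
path = [hop for c, hop in TRACE_LOG if c == cid]
```

Unlike the chopper watching the tunnel, nothing here is inferred from timing or proximity; the vehicle carries its own plate through every hop.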

This situation raises some strategic questions about your monitoring approach. How effective is a monitoring framework without business context? Are you supposed to just make sure the servers are up and the applications are responding, or is your end goal to make sure that business transactions are being executed as intended and on time?

“In God We Trust; All Others Must Bring Data”

Applications are tricky, transactions are tricky, and they become even trickier in a complex heterogeneous infrastructure composed of multiple platforms, operating systems, application nodes, tiers, and databases, where components communicate over a mix of protocols, back and forth, for every single click of a button by the end-user.

Only by tracing each and every transaction activation throughout its entire path—100 percent of the time for all transactions and across all components—will you be able to systematically collect the granular information required in order to get business-contextualized visibility into your data center. This kind of visibility is a key factor in identifying problems effectively when—or even before—they arise.

W. Edwards Deming said, “In God we trust; all others must bring data.” I think he was absolutely right. IT operations can use choppers, or CSI crime-lab detectives, or Jack Bauers. They all have their roles, but when it comes to fast and effective problem identification, as well as many other IT-related decision-making processes (that’s a whole different article…), real, accurate data is required: no partial data, no assumptions.

Transaction management provides you with that data, and by doing so, it provides the IT organization with visibility and predictability. After all, wouldn’t it be great if you could go to sleep at night knowing that your infrastructure is reliable? That is, unless you want to play the role of the CTU Director…