
February 9, 2012


The True Constants of Cloud-Based Application Performance Management

We are seeing more and more awareness of application performance management these days in less mature IT organizations that until recently were focused solely on infrastructure performance. New sites dedicated to helping those less familiar with the art are being launched; Application Performance Engineering Hub is a good example of one, covering “the latest news and opinions about application performance engineering.”

The newest wave of best practices and methodologies for application performance management is, of course, around cloud computing.

Application performance management in the cloud can be a frightening prospect. After all, how is one supposed to manage things like end user experience monitoring, analytics gathering, and transaction monitoring when everything about cloud management seems to scream change?

At first blush, it would seem like trying to build a house on quicksand. Agile development, on-demand servers, rapid deployment of virtual servers… you want to try to build an application performance management scheme on that kind of infrastructure? Good luck with that.

But it’s not as crazy as one might think, and people are getting it done all the time. So how do they do it?

Instead of focusing on all the things that do change within a cloud environment, application performance management vendors should really focus on the things that don’t change: the constants that must be measured no matter what. Break it down that way, and the problem becomes a lot clearer.

What doesn’t change are business transactions and users’ expectations of performance. Those are the constants in any APM metric that will always hold true. If you keep transactions and user expectations as your anchor in monitoring a cloud environment, then monitoring all of the rest becomes a whole lot easier.

That may sound good, but how does that actually work?

One way would be to track every step of a transaction, and maintain the user and business context along the way. That’s a brute-force approach, though, that requires a huge amount of planning and configuration, since–as has been thoroughly pointed out by now–cloud environments can be exceedingly complex. Still, it can be done, and with the right tools, it can be done pretty well.
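
As a rough illustration of the brute-force style, here is a minimal Java sketch, assuming a simple HTTP setup: a correlation ID minted at the edge rides along on every downstream call so that each tier can log its work against the same user and business context. The header names, endpoint, and log format are illustrative assumptions, not the mechanics of any particular APM product.

```java
// A minimal sketch of brute-force transaction tracking: a correlation ID,
// created at the edge, is attached to every downstream call so each tier
// can log its work against the same user and business context.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.UUID;

public class CorrelatedCall {
    public static void main(String[] args) throws Exception {
        String correlationId = UUID.randomUUID().toString();
        String user = "jsmith";            // business context carried along
        String transaction = "checkout";

        log(correlationId, user, transaction, "edge: request received");

        // Hypothetical downstream service; replace with a real endpoint.
        URL url = new URL("http://localhost:8080/inventory");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("X-Correlation-Id", correlationId);
        conn.setRequestProperty("X-User", user);
        conn.setRequestProperty("X-Transaction", transaction);

        log(correlationId, user, transaction,
            "edge: inventory call returned HTTP " + conn.getResponseCode());
    }

    static void log(String id, String user, String tx, String msg) {
        System.out.printf("[%s] user=%s tx=%s %s%n", id, user, tx, msg);
    }
}
```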

Another way is to take a higher-level approach and model real-time transaction behavior. Look at what’s supposed to happen with a user’s transaction, see how reality differs from that expectation, and use the delta to start figuring out where the delays are coming from in the application.
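
To make the delta idea concrete, here is a minimal sketch that compares each tier’s observed latency against what a transaction model expects and flags the largest deviation as the place to start looking. The baseline and observed numbers are invented for illustration.

```java
// A minimal sketch of the modeling approach: compare each tier's observed
// latency with what the transaction model expects, and flag the largest
// delta as the place to start looking.
import java.util.LinkedHashMap;
import java.util.Map;

public class TransactionDelta {
    public static void main(String[] args) {
        Map<String, Long> expectedMs = new LinkedHashMap<>();
        expectedMs.put("web server", 20L);
        expectedMs.put("app server", 120L);
        expectedMs.put("database", 60L);

        Map<String, Long> observedMs = new LinkedHashMap<>();
        observedMs.put("web server", 25L);
        observedMs.put("app server", 140L);
        observedMs.put("database", 480L);   // well outside the model

        String worstTier = null;
        long worstDelta = 0;
        for (String tier : expectedMs.keySet()) {
            long delta = observedMs.get(tier) - expectedMs.get(tier);
            System.out.printf("%-12s expected=%4dms observed=%4dms delta=%+dms%n",
                    tier, expectedMs.get(tier), observedMs.get(tier), delta);
            if (delta > worstDelta) { worstDelta = delta; worstTier = tier; }
        }
        System.out.println("Largest deviation from the model: " + worstTier);
    }
}
```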

Focusing on the application in application performance management is, ultimately, the best approach for cloud-based application performance management, because at the end of the day, the user’s interaction with the application and the transaction metrics will be the best determinant of what’s right and what’s wrong with the app.

January 16, 2012


Business Transaction Management – the Next Generation of Business Service Management

Why a New Generation? What’s Wrong with the Old One?

Traditional systems management tools focused on monitoring the health of individual components. Tools like IBM Tivoli, BMC Patrol, CA Unicenter, and HP OpenView initially focused on management of servers, services, and resources. In those days, the equation was relatively simple – 100% CPU utilization = bad, 10% CPU utilization = good.

However, the increasing complexity of applications introduced numerous new enterprise application components including databases, connection pools, Web servers, application servers, load balancing routers, and middleware. The business service management (BSM) industry followed shortly after, and began offering tools for database management, network traffic monitoring, application metrics mining, and analyzing Web server access logs.

Each of these business service management tools “speaks” a different language – database management tools speak in “SQL statements,” network traffic tools use “packets,” while systems monitors report in “CPU and disk usage.”

What Happens When the Application Crashes or Hangs? What Do You Do if a Single Transaction Suffers Slow Response Times?

In comes the war room. To cope with the proliferation of information sources, enterprises developed the notion of the war room. Whenever slow response times or poor performance of critical applications is detected, the relevant personnel are grouped together in a room for brainstorming and joint monitoring. This involves a large number of professionals, since a single transaction may flow through several infrastructure components. For example, a financial transaction will trigger an HTTP request to an Apache Web server installed on top of Red Hat Enterprise Linux, which in turn calls a WebSphere application server on a Windows machine, flowing through an MQSeries queue and eventually querying an Oracle database.

Members of the war room typically include Java and J2EE performance experts, Microsoft Windows system managers, Unix (Linux, Solaris, HP-UX, etc.) system managers, database administrators, network sysadmins, and proxy specialists, just to name a few. This is a lengthy process that can take thousands of man hours to complete.

The New Paradigm – Business Transaction Monitoring

The new generation of systems monitoring and management tools, widely referred to as Business Transaction Management (or BTM), offers a new approach. Instead of monitoring SQL statements, TCP/IP packets, and CPU utilization, transaction management tools view everything from an application perspective. In the world of transaction management, an application is considered a collection of transactions and events, each triggering actions on the infrastructure. The goal is to track every transaction end-to-end and correlate it with the information collected from the infrastructure. Such an end-to-end view makes it possible to quickly isolate and troubleshoot the root cause of performance problems and to start tuning proactively. This application-centric information base enables a group of professionals working together to speak the same language and focus on facts rather than guesswork.
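
As a rough sketch of what that correlation looks like, the example below stitches infrastructure events that share a transaction ID into one end-to-end view and reports the costliest segment. The event data and field names are invented for illustration, not drawn from any particular BTM product.

```java
// A minimal sketch of the BTM idea: infrastructure events that share a
// transaction ID are stitched into one end-to-end view, so the slowest
// segment is a fact, not a guess.
import java.util.*;

public class BtmCorrelation {
    record Event(String txId, String tier, String detail, long durationMs) {}

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event("tx-42", "web server", "GET /trade",        15),
            new Event("tx-42", "app server", "TradeService.exec", 90),
            new Event("tx-42", "MQ",         "TRADE.QUEUE put",    5),
            new Event("tx-42", "database",   "INSERT INTO trades", 620));

        // Group events by transaction, then report the costliest segment.
        Map<String, List<Event>> byTx = new HashMap<>();
        for (Event e : events)
            byTx.computeIfAbsent(e.txId(), k -> new ArrayList<>()).add(e);

        for (var entry : byTx.entrySet()) {
            Event slowest = entry.getValue().stream()
                    .max(Comparator.comparingLong(Event::durationMs)).orElseThrow();
            System.out.printf("%s: %d segments, slowest is %s (%s, %dms)%n",
                    entry.getKey(), entry.getValue().size(),
                    slowest.tier(), slowest.detail(), slowest.durationMs());
        }
    }
}
```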

According to IDC (Business Transaction Management – Another Step in the Evolution of IT Management), BTM will likely become a core offering of established IT system management vendors, since it can contribute to almost every aspect of IT management – ranging from performance management, SLA management, and capacity planning, to change and configuration management (CMDB).


October 19, 2011


Transaction Management – Linking Business and IT

Transaction management is the IT process that links the business process with the IT infrastructure. By tracing transactions, you know where problems exist.

Most businesses have IT–a complex infrastructure of hardware components that interact in order to provide a service or application to users. The users are either paying customers or employees and associates who, in turn, provide services or products to paying customers. Those users are the lifeblood of the business—the source of revenue that enables it to thrive.

Transaction management solutions provide the link between all of the processes and interactions that run throughout your IT and the business’ source of income—the individual transaction activations.

How does transaction management do this? The concept is elegantly simple, yet yields complete visibility into the entire system. Every piece of data that enters or leaves each server, along with the resource consumption of every process on every server, is linked to a transaction activation. In this manner, not only is IT able to connect the different “dots” within a data center, but it can connect to the most important dot of them all: the user.

Any problem that arises within the IT infrastructure will ultimately be identified by the user (unless it is a false alarm). With the power to “connect the dots” and see every parameter and statement within its business context (the transaction activation), problem resolution becomes much less difficult.

The following video gives a clear view of the importance of transaction management and its ability to link business and IT.

September 27, 2011


End-to-End Monitoring of Transactions

End-to-end monitoring solutions should monitor all of the infrastructure’s components, yet some solutions only monitor one part of the bigger picture.

The term “end-to-end” is heavily overused in transaction monitoring: end-to-end Website monitoring, end-to-end server performance monitoring, end-to-end database monitoring, end-to-end network performance monitoring. The list can go on and on when, in fact, each of these provides monitoring for only one small part of the bigger picture, as illustrated below.

Figure: what “end-to-end” means to different solutions

Why True End-to-End Transaction Monitoring Is Different from Monitoring Server Performance

Monitoring server performance traditionally refers to ensuring that CPU utilization and memory consumption have not reached maximum levels. However, this does not guarantee true application availability. Traditional end-to-end monitoring tools provide a dashboard that shows the health of each individual component within the data center. But what if someone forgot to reconnect a network cable after maintenance, or a load balancer was configured incorrectly and is not executing a proper round robin? An alert will indicate that something is wrong with the servers that are still connected when, in fact, the problem is that traffic is simply not being distributed equally. A network performance monitoring solution could be implemented, which would help by monitoring network usage, but then it is no longer a single end-to-end solution.

What if a runaway process on one server is “bombing” a second server with requests? The server monitoring solution will blame the server being bombarded, rather than the server running the runaway process, which is the actual source of the problem.
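
A minimal sketch of why the transaction-level view catches this while a per-server health gauge does not: counting requests per source immediately exposes the runaway sender. The request log below is invented for illustration.

```java
// A minimal sketch of spotting a runaway requester: a CPU gauge on the
// victim server blames the victim, but counting requests per source
// points straight at the culprit.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RequestSkew {
    public static void main(String[] args) {
        // (source, target) pairs as a monitoring agent might record them
        List<String[]> requests = List.of(
            new String[]{"app-1", "db-1"}, new String[]{"app-2", "db-1"},
            new String[]{"app-3", "db-1"}, new String[]{"app-3", "db-1"},
            new String[]{"app-3", "db-1"}, new String[]{"app-3", "db-1"},
            new String[]{"app-3", "db-1"}, new String[]{"app-3", "db-1"});

        Map<String, Integer> bySource = new HashMap<>();
        for (String[] r : requests) bySource.merge(r[0], 1, Integer::sum);

        double mean = (double) requests.size() / bySource.size();
        bySource.forEach((source, count) -> {
            String flag = count > 2 * mean ? "  <-- possible runaway process" : "";
            System.out.printf("%s sent %d requests%s%n", source, count, flag);
        });
    }
}
```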

Real User Monitoring: An End to End Solution?

If you only take the user’s tier into account, real user monitoring could be considered an end-to-end solution for that tier alone. With real user monitoring, you can see both ends of a transaction—hence the claim to provide end-to-end. But what about everything in the middle? What action is to be taken when a problem is detected by the end-user experience tool? What if the root of the problem resides within the database or the mainframe? Shouldn’t an end-to-end solution be able to take care of everything, including triage within the data center? Website performance monitoring is important for understanding the quality of service that your customers are receiving, and there is a lot of value in that, but an end-to-end solution should also help you find the cause of a problem rather than simply reporting its existence.

End to End Transaction Monitoring Tools Deliver

What do the following components have in common: the user’s desktop, a firewall, a proxy, a load balancer, a Web server, an application server, a message broker, a database and a mainframe? The answer is transactions. The only way to provide a true end-to-end solution is by monitoring every transaction from the moment any user clicks any button, and continuing all the way through each tier. Only business transaction management (BTM) solutions can promise that.

Performance monitoring tools that do not show end-user performance are not focusing on what is most important to the business. Website monitoring tools that send synthetic transactions to the site and check response times do not show what users are really experiencing. Database monitoring is important, but without the business context of that problematic SQL transaction, resolution can be a shot in the dark.

Transaction Monitoring With BTM

End-to-end solutions must be able to monitor all of the infrastructure’s components. Transaction monitoring solutions do that automatically, since they monitor the object that ties all of those different components together—the transactions. BTM solutions enable a drill-down that begins from each transaction type running on the system and ends with the smallest event that composes a single transaction instance. In this manner, not only are you assured that everything is running smoothly, but when things start to go wrong you can perform immediate triage and resolution.

SharePath—The Most Comprehensive Transaction Monitoring Solution

SharePath by Correlsense is the only single solution on the market today that can provide true end-to-end transaction monitoring. From the moment a user clicks any button in a monitored desktop or Web-based application, SharePath monitors the transaction through the entire infrastructure, through proxies, Web servers, load balancers, application servers, message brokers, databases and mainframes. Learn more.

August 3, 2011


Reducing Risk During a Data Center Migration

Although data center migration projects are multifaceted—from construction to cooling to provisioning software and storage—the primary objective is to facilitate the delivery of IT services to end users in a cost-effective manner.

If applications are not performing up to par after a data center migration, it doesn’t really matter how well the cooling, power, and servers are running. That is why it is imperative to invest in the right monitoring tools as part of the data center migration.

The Importance of Benchmarking before a Data Center Migration

Consider a scenario where a business-critical application is moved from an old data center to a new one as part of a data center migration. Users start complaining that the application is not running the same as it did before the data center migration. As the CIO or data center director, you may feel that this is impossible. After all, the new data center has more powerful servers, more capacity and more network bandwidth!
Tip #1: In order to avoid this scenario, it is important to benchmark application performance from the end user perspective both before and after the data center migration.
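
Here is a minimal sketch of what such a before-and-after benchmark might look like: the same transactions are sampled in both data centers and their medians and 95th percentiles are compared like for like. The sample latencies are invented; in practice they would come from real user monitoring of the same transactions in both environments.

```java
// A minimal sketch of Tip #1: capture end-user response times before the
// migration, capture them again after, and compare like with like.
import java.util.Arrays;

public class MigrationBenchmark {
    public static void main(String[] args) {
        double[] beforeMs = {210, 230, 250, 240, 900, 220, 260, 245, 235, 255};
        double[] afterMs  = {310, 340, 360, 900, 355, 330, 345, 365, 350, 2400};

        report("before migration", beforeMs);
        report("after migration", afterMs);
    }

    static void report(String label, double[] samples) {
        double[] s = samples.clone();
        Arrays.sort(s);
        double median = s[s.length / 2];
        double p95 = s[(int) Math.ceil(0.95 * s.length) - 1];
        System.out.printf("%-17s median=%.0fms p95=%.0fms%n", label, median, p95);
    }
}
```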


Consider another scenario: one of the applications that needs to move as part of the data center migration has a large number of dependencies, which increases its complexity. It calls external Web services, the mainframe, and third-party applications. How can you be certain what all of those dependencies are, so you can ensure they are available in the new data center after the migration?

All of the application’s dependencies must be automatically detected prior to the data center migration in order to rule out human error.
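
As a rough sketch of automated dependency detection, the example below derives a dependency checklist from calls actually observed in production rather than from anyone’s memory. The application and dependency names are invented for illustration.

```java
// A minimal sketch of automatic dependency detection: derive the list of
// dependencies from calls actually observed in production, not from what
// application owners remember.
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.TreeSet;

public class DependencyMap {
    public static void main(String[] args) {
        // (application, dependency it was observed calling)
        List<String[]> observedCalls = List.of(
            new String[]{"billing", "oracle-db"},
            new String[]{"billing", "tax-web-service"},
            new String[]{"billing", "mainframe-CICS"},
            new String[]{"billing", "oracle-db"},
            new String[]{"portal",  "billing"});

        Map<String, TreeSet<String>> deps = new TreeMap<>();
        for (String[] call : observedCalls)
            deps.computeIfAbsent(call[0], k -> new TreeSet<>()).add(call[1]);

        // Everything in this checklist must exist in the new data center.
        deps.forEach((app, targets) ->
            System.out.println(app + " depends on " + targets));
    }
}
```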

Mitigating Risk after a Data Center Migration

Once the new location has been established and wired, the servers have been put into place, and all software has been installed, you are ready to start funneling traffic to your new masterpiece. What can you do if—despite all of your meticulous planning—traffic starts hitting your new application and performance is unacceptable?

Millions of dollars have been invested in the data center migration, project milestones have been met, and the applications are up, but if users are not happy with performance, that is all that matters.

Tip #2: Make sure that you don’t make any assumptions about the state of the data center after the data center migration. The only way to know for certain there are no problems once the applications have been moved is to automatically detect every individual transaction from each and every user and follow its path throughout the data center. By accounting for all transactions through all hops, you will always know when or if service is disrupted. Most importantly, you will know exactly what the problem is so you can fix it instead of rolling back to the old data center.

Data Center Migration and SharePath

Data center relocations are projects that bring significant risk to the business. This risk can be mitigated with a solid plan that focuses on understanding the current business transaction workload and data center configuration, as well as the future data center configuration and associated business transaction workloads.

  • SharePath by Correlsense mitigates the technical and financial risks associated with complex projects like data center migrations.
  • SharePath quickly identifies the behavior of business transactions and their subordinate technical transactions. This transparency greatly improves the relationship between IT staff and the business personnel charged with implementing high-risk, multimillion-dollar projects.

Click here to watch a video.

July 12, 2011


Transaction Monitoring – The Four Approaches

This article discusses transaction monitoring and the four approaches to it: network appliances, Java/.NET deep dive, end-user measurements, and agent-based monitoring.

You know that your enterprise is in dire need of a transaction monitoring solution. Transaction latencies are sky-high, some transactions do not even make it through, mysterious bottlenecks pop up during peak use, and users call in with complaints. Implementing a transaction monitoring solution is unavoidable.

The problem is, when you look at all of the transaction monitoring solutions out there they all look the same. Every vendor seems to make the same claims: end-to-end transaction monitoring, full visibility into transactions, link IT to business processes, low overhead, perform root cause analyses, find application performance problems before they impact your clients, and so on.

So, how can you make sense of all the claims and find the transaction monitoring solution that is right for you? This article reviews the various transaction monitoring approaches and presents the advantages and disadvantages of each.

Transaction Monitoring: Four Solution Types

  • Network Appliances: Includes all solutions that collect data by non-intrusively listening to network traffic.
  • Deep Dive Profilers: J2EE/.NET code instrumentation. Includes all solutions that use bytecode instrumentation (or Java/.NET hooks) in order to collect thorough code-level metrics.
  • End-User Monitoring: This approach utilizes end-user latencies, either by connecting an appliance to the network at the Web server or by installing agents on the client side.
  • Agent-Based Transaction Tracing: Solutions where agents are installed on all relevant tiers and all transactions are traced everywhere.

1. Transaction Monitoring with Network Appliances

  • What they do: perform transaction monitoring by network sniffing in order to identify application events (or transaction segments), and then reconstruct the entire transaction by fuzzy matching (see the sketch after this list).
  • How they do it: network appliance solutions usually connect to a port mirror in order to collect the traffic, and then try to reconstruct the entire transaction. Information needs to be collected directly from every node that is of interest.
  • Main advantage: transaction monitoring by listening to the network means that no overhead is added, and the appliance is simple to install.
  • Main drawback: transaction monitoring has to be done by an algorithmic approach, since these appliances cannot collect data directly from the servers, which leads to inaccurate transaction metrics and topology.
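
To illustrate that drawback, here is a minimal sketch of timestamp-based fuzzy matching (the general technique in play, not any vendor's actual algorithm). With no shared transaction ID, segments seen on different links are paired by time proximity alone, and concurrent traffic immediately makes the pairing ambiguous.

```java
// A minimal sketch of timestamp-based fuzzy matching: with no shared
// transaction ID, segments are paired by time proximity, so overlapping
// traffic produces ambiguous (and potentially wrong) matches.
import java.util.List;

public class FuzzyStitch {
    record Segment(String link, long startMs) {}

    public static void main(String[] args) {
        List<Segment> webHits = List.of(
            new Segment("web", 1000), new Segment("web", 1005));
        List<Segment> dbCalls = List.of(
            new Segment("db", 1012), new Segment("db", 1014));

        long windowMs = 20; // assume the DB call follows the web hit closely
        for (Segment w : webHits)
            for (Segment d : dbCalls)
                if (d.startMs() >= w.startMs() && d.startMs() - w.startMs() <= windowMs)
                    System.out.printf("web@%d <-> db@%d (ambiguous if windows overlap)%n",
                            w.startMs(), d.startMs());
    }
}
```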

Correlsense SharePath vs. Network Appliance Solutions

SharePath collects data from within the servers, enabling resource consumption and all of the actual parameters of each transaction segment to be captured without fuzzy matching, which can be inaccurate. SharePath is also more flexible and non-invasive: it simply records what goes in and out of the server, and it can dive in to collect metrics even from encrypted data packets.

2. Transaction Monitoring with Deep Dive into the Java/.NET Code

  • What they do: these transaction monitoring tools provide deep diagnostics into Java applications, down to the code level. They are used by J2EE/.NET experts in order to locate problems before deployment.
  • How they do it: transaction monitoring is done by bytecode instrumentation (or Java hooks) that retrieves data from the nodes running J2EE/.NET applications. This works by utilizing the class-loading mechanism of the runtime (the JVM for J2EE or the CLR for .NET) in order to intercept specific class or method calls within the application (see the sketch after this list).
  • Main advantage: provides a lot of rich information for developers. This type of transaction monitoring brings up the specific line of code that is the cause of the problem.
  • Main drawback: transaction monitoring cannot be done for all of the transactions running on the system (up to 10% for short periods of time), implementations are lengthy and invasive, and the person ultimately responsible for application performance may be overwhelmed by, or unsure how to utilize, the massive amount of information retrieved.
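
For the curious, here is a minimal sketch of the JVM hook these tools build on: a java.lang.instrument agent registers a ClassFileTransformer via the class-loading mechanism. A real profiler rewrites bytecode at this point; this sketch only logs which classes it could have instrumented. The package filter is an illustrative assumption, and running it requires packaging the class in a jar with a Premain-Class manifest entry and starting the JVM with -javaagent.

```java
// A minimal sketch of the bytecode-instrumentation entry point: a
// java.lang.instrument agent sees every class as it is loaded. Real
// products rewrite the bytecode here; this sketch only logs.
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class DeepDiveAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain domain, byte[] classBytes) {
                // Hypothetical package filter; a real agent would be configurable.
                if (className != null && className.startsWith("com/example/"))
                    System.out.println("could instrument: " + className);
                return null; // null = leave the bytecode unchanged
            }
        });
    }
}
```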

SharePath vs. Deep Dive

Transaction monitoring by deep dive cannot load test at full capacity during development or monitor all transactions during production the way SharePath can with its very low overhead. SharePath can be used alongside one of the many J2EE and .NET profilers available on the market today (some free) in order to aid application development before deployment. Because of its ease of use and high-level view, SharePath can be used by IT operations and infrastructure teams to trace transactions across all tiers, providing a full topology of the transaction flow. Conversely, deep dive solutions provide metrics only at the application server and have limited horizontal visibility. Lastly, SharePath’s architecture is environment-independent and can be deployed on any server, not just the .NET or J2EE server. The product’s technology is based on process wrapping and is therefore non-invasive, which enables fast and clean implementation.

3. Transaction Monitoring with the Help of End-User Measurements

  • What they do: this transaction monitoring approach is based on managing the application by monitoring end-user response times and then performing customer analytics and system heuristics from the Web server outward.
  • How they do it: there are two general strategies for implementing transaction monitoring with this approach. The first is installing an agent on the user’s computer, either on the desktop or in the browser with the help of a JavaScript snippet. The second is installing a network appliance at the Web server. Some solutions have servers around the world that test performance from different regions.
  • Main advantage: provides valuable metrics about what your customers are actually experiencing. Transaction monitoring with this approach puts customers first.
  • Main drawback: transaction monitoring stops at the Web server. The few solutions that let you peek beyond that can provide only very limited metrics. You will know that there is a problem, but you will have no idea where to find it.

SharePath vs. End-User Monitoring

SharePath can be extended with an end-user monitoring solution in order to cover all bases. The product finds the specific location of problems rather than only alerting about them, providing important insight and saving time. Real end-user monitoring products do not provide much information in the development phase; with SharePath, however, you can test your application pre-deployment.

4. Agent-based Transaction Monitoring

  • What they do: software agents are deployed along the application path, across the different tiers, to provide a unified view of the entire application. What characterizes these solutions is that the full flow of every single transaction running on the system is recorded in real time, at thousands of transactions a second. This approach is just as valuable to IT management as it is to the application development team.
  • How they do it: agents installed at each tier send the collected data to a central repository for processing. Agents may be installed with the help of JVM/CLR instrumentation at the application server (one technical execution approach), or they may be installed as kernel modules (drivers), shared objects (Unix), or DLL files (Windows) at the operating system level. Agents may also be installed at databases, MQ middleware servers, and legacy mainframe computers. Every event or transaction segment is recorded along with all of its real parameters and then accurately reconstructed.
  • Main advantage: transaction monitoring is done all the time, for every single transaction. This is the only true end-to-end solution that includes the middle tiers and all of the important data to be collected from the servers, in a way that does not weigh down the system.
  • Main disadvantage: by themselves, these transaction monitoring solutions cannot get deep into the code or see what is happening at the browser level.

SharePath vs. Other BTM Solutions

SharePath’s deployment is faster and not as labor-intensive as that of other BTM solutions, which need to perform code instrumentation; SharePath instead installs drivers, DLL files, shared objects, etc. SharePath can be easily used by anyone, and because it works at the operating system level (wrapping processes as opposed to instrumenting bytecode), it is environment-independent. Residing at the OS level also means that SharePath can monitor resource consumption beyond CPU (I/O, network, and anything else it is customized to collect). View videos.

June 9, 2011


Transaction Monitoring: Traditional vs. BTM

Transaction monitoring includes both traditional and new approaches. Before making a purchase, read this article to learn the differences and make the right choice.

Even experienced IT professionals can confuse traditional monitoring tools and the newer generation of business transaction monitoring (BTM) tools. In today’s market, competing messages sound very similar, making it difficult to differentiate between the old and the new. This article highlights the differences in the traditional and the newer generation of transaction monitoring tools.

Traditional Monitoring

Traditional tools monitor the performance of each component individually and display all of these metrics on a “single pane of glass.” End-to-end performance monitoring, in this view, means that you can see the performance of every component in one centralized console: the resource consumption of the servers, the threads that the application is running, the throughput of the network components, and the calls to the database, each displayed in its own section.

When traditional tools monitor transactions, they pick up various segments of transactions throughout the data center without stitching them together into one full transaction flow.

For example, the database monitor picks up all of the SQL statements it sees and displays them on the central dashboard along with their response times, while the real user monitor picks up all of the requests sent to the data center and displays them on the same dashboard along with their response times. If an application slowdown occurs and all monitors (including the application server monitor, not mentioned above) show erratic response times for various transactions, the real user measurements show only that the user is experiencing a problem, not where the problem lies within the data center. The silo-specific tools, in turn, lack the context of the CICS program names, SQL statements, and Web service calls that are showing erratic performance. The result is that the IT professional is stuck with a glut of confusing, disparate information on the dashboard.

How to Identify Traditional Monitoring Tools

  • They are typically sold as product suites. Vendors develop or acquire separate server monitoring tools, network monitoring tools, application performance management tools and real user measurement tools, and then offer them bundled as an end-to-end package.
  • These tools tend to be pricey and are difficult to implement, not to mention their limited visibility due to the lack of correlation between tiers. On the upside, they can provide more thorough metrics within the application server, which is why the new generation seeks to complement the traditional tools as opposed to replacing them.

The New Generation of BTM Tools

The new BTM tools connect every process within the data center to a click of a user at the desktop.

End-to-end means that the user request and the related activity within the proxy, Web server, app server, database server, MQ and mainframe are all connected as a single transaction instance. The resource consumption at each component can still be seen, but at the granularity of a single transaction segment.

For example, if service levels are starting to degrade, the new generation of tools not only picks up the performance degradation that the user is experiencing, but also immediately shows what is causing that specific degradation down the line.

Learn more about how SharePath—the new generation of monitoring—can help your enterprise thrive in managing today’s complex applications.

May 16, 2011


Transaction Monitoring Software: Jargon Proliferation

This article on the jargon proliferation within the transaction monitoring discipline reviews common phrases used today to describe how to link business and IT.

There are several ways to trace a transaction. Although different approaches to transaction tracing may yield different outcomes, no matter how you do it, you should be able to trace a transaction completely, from end to end.

This can be confusing. Why should monitoring for transaction performance mean something different with every approach? The reason is that the rather nascent transaction monitoring software discipline is still in the “early adopter” phase, not unlike Blu-ray vs. HD DVD.

Monitoring for transaction performance is a clear next step for many enterprises, but when it actually comes to putting a transaction monitoring system in place, it is very hard for many IT professionals to understand the value proposition.

Transaction Monitor? Transaction Trace?
Several buzzwords get tossed around these days describing transaction monitoring software. Interestingly, there is often no link whatsoever between the buzzword and the actual technology; vendors that use the same words to describe their products can have completely different offerings at the end of the day, as we reviewed in this article, “Transaction Monitoring – the Four Approaches.”

It is also interesting that the words “monitor” and “trace” are synonyms in English, and in fact, “transaction monitor” and “transaction trace” are used pretty much interchangeably in the industry. However, when you really think about it, they can mean different things. The only place where transaction tracing and transaction monitoring seem to have been defined side by side is Doug McClure’s blog. Intuitively, you could define transaction tracing as capturing the topology of the transaction (across the different tiers of the infrastructure), and transaction monitoring as capturing all of the metrics of that transaction (i.e., events invoked and resource consumption).
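
A minimal sketch of that intuitive split, with invented values: the trace is the ordered path a transaction took across tiers, while the monitoring data is the set of metrics gathered at each hop.

```java
// A minimal sketch of the tracing-vs-monitoring distinction: a trace
// captures the path across tiers; monitoring captures the metrics.
import java.util.List;
import java.util.Map;

public class TraceVsMonitor {
    // Tracing: the topology -- which tiers the transaction touched, in order.
    record Trace(String txId, List<String> path) {}

    // Monitoring: the metrics -- what happened at each hop.
    record Metrics(String txId, Map<String, Long> latencyMsByTier) {}

    public static void main(String[] args) {
        Trace trace = new Trace("tx-42",
                List.of("browser", "web server", "app server", "database"));
        Metrics metrics = new Metrics("tx-42",
                Map.of("web server", 15L, "app server", 90L, "database", 620L));

        System.out.println("trace (topology): " + trace.path());
        System.out.println("monitoring (metrics): " + metrics.latencyMsByTier());
    }
}
```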

Why Monitor Transactions?

Monitoring and tracing transactions is the way to link business and IT.

The “business” is the user initiating transaction activations—clicking on the “add to shopping cart” button or performing a stock trade, for example. The majority of interactions between the user and the UI (be it a Web page or desktop application) create transactions. The business depends on these transactions, and since the business has spent, in most cases, millions of dollars to allow users to perform these transactions, not only should they work, but they should do so within a time that the user is willing to wait. Time is money, after all.

The “IT” is everything the infrastructure does in executing a transaction. A good example is that annoying SQL statement that has just caused the application to hang because it was activated 100,000 times. Without the link, the database administrator is going to have a tough time figuring out how to keep the problem from occurring again. With the link, that single statement is now tied to a specific transaction and user. All the way from the back end, the DBA can understand the business impact of the statements, because she knows exactly what the intent of the user was and how long it took to complete the transaction from end to end.
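
One common technique for creating that link (a general illustration, not necessarily how any particular product does it) is to tag each statement with its business context in a SQL comment, so the statement a database tool captures already names its transaction and user:

```java
// A minimal sketch of context-tagged SQL: a harmless comment carrying the
// transaction, user, and intent survives into the database's statement
// log, giving the DBA the business context of the statement.
public class TaggedSql {
    public static void main(String[] args) {
        String txId = "tx-42";
        String user = "jsmith";
        String intent = "stock-trade";

        String tagged = String.format(
            "/* tx=%s user=%s intent=%s */ INSERT INTO trades(symbol, qty) VALUES(?, ?)",
            txId, user, intent);

        // This is the statement a database monitoring tool would capture.
        System.out.println(tagged);
    }
}
```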

Transaction Monitoring with SharePath
SharePath links every transaction to all of the events that the transaction invoked within the infrastructure, giving you the power to link your business with your IT and to proactively ensure that each customer is satisfied.

April 20, 2011


Doctor IT

Is your IT system experiencing problems?

IT systems today are extremely complex and it can be hard to isolate the causes of performance problems and bottlenecks. When users complain about slow response time, or when servers crash, hang or need to be rebooted or re-installed, Correlsense’s DoctorIT service can find and root out the source of the problem.

Does your IT system suffer from any of the following?

  • Slow applications
  • Bottlenecks
  • Frequent crashes
  • User complaints about response time
  • Need to restart or reboot often
  • Poor performance
  • Servers get stuck or “hang”
  • Have to reinstall
  • 100% CPU usage

DoctorIT can help
Correlsense’s DoctorIT can solve hard-to-detect performance and bottleneck problems in your critical applications. Guaranteed.

Correlsense’s proprietary tools and technology, combined with a clear methodology developed through extensive in-the-field experience, allow DoctorIT to isolate the bottlenecks that slow your applications and to reduce your system’s response time.

Serve your customers better
DoctorIT solutions will bring your IT system to its full potential, allowing you to provide the excellent service that you want to be known for.

DoctorIT will streamline your IT system, releasing bottlenecks, improving poor performance and reducing overload. Your system will work more smoothly, maintenance will be easier and customers will be more satisfied.

Correlsense technology at work
Correlsense tools monitor cross-tier transactions from end to end. They penetrate the maze of information to find the relevant bits, building a view of the entire system and each of its services and applications as a whole, and providing an executive, actionable assessment for optimizing performance.

Your IT system servers will no longer hang, no longer need to be rebooted, restarted or re-installed, and will no longer tie up 100% CPU. Slow applications will speed up, poor performance will disappear, and most importantly, user complaints will be a thing of the past.

Contact us to learn more about our problem resolution expert service.