
November 21, 2011


Why IT Operations is Like an Action TV Series

Nir Livni, Product Management Director at Correlsense, writes about IT operations, citing W. Edwards Deming who said “In God we trust; all others must bring data.”

I like watching the series “24,” though I can’t really explain why. Every time they nearly get the bad guys, there’s some sort of twist in the plot and they need to start all over again. For example, I’m sure that you are familiar with the following classic scene: The Counter Terrorism Unit (CTU) chopper is following a suspect that is driving a black van. The suspect’s van enters a tunnel, but the van doesn’t leave the tunnel. Instead, a number of different vehicles leave the tunnel at the same time, and the suspect is probably in one of them. By the time they figure out that the black van has been left empty in the tunnel they have already lost the suspect! They shout “We have lost visual!!!” and are back to looking for the bad guys all over again—then they call Jack Bauer.

IT operations is just like the CTU. The CTU is responsible for making sure that life goes on without any unpleasant surprises; IT operations must do the same in its own space, making sure that the business keeps running and that business transactions are executed properly and on time.

When something is about to go wrong, the CTU and IT operations are expected to prevent it before it affects anyone. So they set up the war room, call everyone in, and start doing their detective work to find the needle in the haystack. If they don’t find it and something goes wrong, then the results are significant: either people get hurt (in the CTU’s case), or business is impacted.

IT Operations: The War Chest

So which tools could IT operations use to find out that there is a problem, identify the root cause of it, and resolve the issue?

For example, IT operations could use HTTP network appliances that help see every HTTP transaction and measure its response time. These network appliances are just like the CTU’s choppers—they do not have adequate visibility into the data center. They can indicate that something is wrong with the response time of a transaction, but they cannot show why the response time of the transaction is high and they cannot provide the visibility needed for resolution.

IT operations also uses event correlation and analysis (ECA) tools. ECA tools are like CSI detectives (yes… that’s another one I watch…), and rely on other tools to collect information for them, just like the CSI detective who collects evidence from a crime scene. ECA tools are just as effective as the products they rely on to provide them with the data. The issue with ECA tools is that, just like in a crime scene, the thief does not usually leave his ID behind, so all you are left with is just clues, and no accurate data to work with.

Additional tools that IT operations relies on are:

  • Dashboards that monitor server resource consumption.
  • J2EE/.NET tools that are capable of performing drill-down diagnostics in the application and database layers.
  • Synthetic transaction tools.
  • Real User Measurement (RUM) tools.

With all of these monitoring tools, IT operations still finds itself in situations where all lights are green while users are complaining about bad response times. In spite of all of the investment in monitoring tools, the infrastructure that IT operations is accountable for is still unpredictable. Why?

A Simple Example

Perhaps it’s best to take a look at this classic example: one of our customers had a problem with a wire-transfer transaction. The liability for the problem kept on going back and forth between the Operations team and the Applications team, who were pointing fingers at each other as to who was responsible for the issue. “All lights are green,” said Operations. “We tested the application and it works just fine,” said Applications. Simply put, no existing monitoring tool could point out the problem.

So what was the problem? The answer is simple: it appears that, by design, wire-transfers for over $100,000 were querying the mainframe nearly 100 times, while other transfers would query it only a few times. Same end-user, same application, same transaction, but just a single parameter made the transaction take a whole different path, which made the difference between a 3-second and a 2-minute response time.
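A hypothetical sketch of the scenario above, with invented thresholds and query counts, just to show how a single parameter can flip a transaction onto a very different path:

```python
def wire_transfer(amount):
    """Illustrative only: one request parameter changes the backend path."""
    if amount > 100_000:
        # Large transfers trigger extra checks by design, each one
        # hitting the mainframe with its own query.
        queries = ["verify_funds"] + [f"extra_check_{i}" for i in range(99)]
    else:
        queries = ["verify_funds", "post_transfer"]
    # Each mainframe round trip costs time; the count drives response time.
    return len(queries)

assert wire_transfer(50_000) == 2      # fast path: a few queries
assert wire_transfer(250_000) == 100   # slow path: ~100 queries
```

Same end-user, same application, same transaction; only the number of backend round trips differs, and no component-level dashboard would flag it.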

What Exactly Are You Monitoring?

Now the question remains, why can’t existing monitoring tools identify the problem? The reason is simple. Traditional monitoring tools monitor the infrastructure and not the transactions. In a complex heterogeneous infrastructure, there are many tools for monitoring each and every component, but no single spinal cord that is able to show how transactions behave across components. None of the tools are able to deterministically correlate a single request coming into a server with all of the associated requests going out of a server and keep on doing so throughout the transaction path. Just like the chopper that could not figure out which of the vehicles coming out of the tunnel contained the suspect who came into the tunnel in the first place.
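A minimal sketch of what deterministic correlation requires: every outbound request carries the ID of the inbound request that caused it, so segments can be stitched into one path. The header name and classes here are hypothetical, not SharePath's actual mechanism:

```python
import uuid

class Tracer:
    """Sketch of deterministic transaction correlation across tiers."""
    def __init__(self):
        self.segments = []  # (txn_id, tier, operation)

    def inbound(self, headers, tier, operation):
        # Reuse the caller's transaction ID, or mint one at the edge.
        txn_id = headers.get("X-Txn-Id") or str(uuid.uuid4())
        self.segments.append((txn_id, tier, operation))
        return txn_id

    def outbound(self, txn_id):
        # Propagate the ID so the next tier records the same transaction.
        return {"X-Txn-Id": txn_id}

    def path(self, txn_id):
        return [(tier, op) for tid, tier, op in self.segments if tid == txn_id]

tracer = Tracer()
tid = tracer.inbound({}, "web", "GET /transfer")      # edge request
tracer.inbound(tracer.outbound(tid), "app", "execute_transfer")
tracer.inbound(tracer.outbound(tid), "db", "SELECT ...")
assert tracer.path(tid) == [("web", "GET /transfer"),
                            ("app", "execute_transfer"),
                            ("db", "SELECT ...")]
```

With the ID propagated, the "van in the tunnel" problem disappears: each vehicle leaving the tunnel is labeled with the one that entered.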

This situation raises some strategic questions regarding your monitoring approach. How effective is a monitoring framework without that business context? Are you supposed just to make sure the servers are up and running and applications are responding, or is your end goal to make sure that the business transactions are being executed as intended and on time?

“In God We Trust; All Others Must Bring Data”

Applications are tricky, transactions are tricky, and they become even trickier in a complex heterogeneous infrastructure composed of multiple platforms, operating systems, application nodes, tiers, and databases, where every single click of a button by the end-user triggers back-and-forth communication between components over different protocols.

Only by tracing each and every transaction activation throughout its entire path—100 percent of the time for all transactions and across all components—will you be able to systematically collect the granular information required in order to get business-contextualized visibility into your data center. This kind of visibility is a key factor in identifying problems effectively when—or even before—they arise.

W. Edwards Deming said, “In God we trust; all others must bring data.” I think he was absolutely right. IT operations can use choppers, or CSI crime-lab detectives, or Jack Bauers. They all have their roles, but when it comes to fast and effective problem identification, as well as many other IT-related decision-making processes (that’s a whole different article…), real, accurate data is required—no partial data, no assumptions.

Transaction management provides you with that data, and by doing so, it provides the IT organization with visibility and predictability. After all, wouldn’t it be great if you could go to sleep at night knowing that your infrastructure is reliable? That is, unless you want to play the role of the CTU Director…

October 19, 2011


Transaction Management – Linking Business and IT

Transaction management is the IT process that links the business process with the IT infrastructure. By tracing transactions, you know where problems exist.

Most businesses have IT: a complex infrastructure of hardware components that interact to provide services or applications to users. The users are either paying customers or employees and associates who, in turn, provide services or products to paying customers. Those users are the lifeblood of business—the source of revenue that enables the business to thrive.

Transaction management solutions provide the link between all of the processes and interactions that run throughout your IT and the business’ source of income—the individual transaction activations.

How does transaction management do this? The concept is elegantly simple, yet yields complete visibility into the entire system. Every piece of data that enters or leaves each server, along with the resource consumption of every process in every server, is linked to a transaction activation. In this manner, not only is IT able to connect the different “dots” within a data center, but IT can connect to the most important dot of them all, the user.

Any real problem that arises within the IT infrastructure will eventually be identified by the user (unless it is a false alarm). With the power to “connect the dots” and see every parameter and statement within its business context (the transaction activation), problem resolution becomes much less difficult.


September 23, 2011


Why is it always on a Friday? (Part 1)

The weekend is tomorrow; you’ve been head-down all week, nose to the grindstone, making sure that your people and your apps are doing well, everything is running smoothly, and your end-users are experiencing great service. Now is the time to go home, make sure your wife is OK, and to “monitor” your family. Suddenly, on Friday afternoon, you get the call – something is wrong… and it’s not one of the “easy ones” (they never are on Friday): you’re calling a war room. REALLY!?!?

It’s called the War Room for good reason – that’s just what it is. You’ve been attacked, and you need to fire back with everything you’ve got. The problem could be any one of 100 different issues – latencies are cropping up in your transactions, end users are starting to get timed out and are growing more and more frustrated. The pressure is on to solve the problem immediately. What’s driving you crazy is that all the screens are showing green! Nothing is indicating a problem anywhere along the pipe, so there’s no one to blame. DBAs, network, system admins, app owners – they’re all happy with their dashboards and more than happy to show you. And that’s great for them, but with end-user transactions timing out, what’s to stop your customer from going straight to the competition?

In fact, end user experience monitoring would show that your users are just plain fed up. You and your teams need to figure out what, exactly, is going wrong.

IT can be a needy bunch. They work hard, and they’re stuck behind the scenes where they get very little recognition (thank God I at least have this blog to complain!). If things go well, end users never even think of them. In fact, the only time they get any attention at all is when things start to fall apart – and that’s just the way it should be.

So like children, your IT team is suddenly all over you. Each person is trying to prove that on their end, everything is fine. It’s green across the board. But you know that somewhere in between those green lights, there is a pile-up.

What are you going to do in this situation?

How do you save the operation from days of latency and lost customers?

July 12, 2011


Transaction Monitoring -The Four Approaches

This article discusses transaction monitoring and the four approaches: network appliances, Java/.NET deep dive, end-user measurements, and agent-based monitoring.

You know that your enterprise is in dire need of a transaction monitoring solution. Transaction latencies are sky-high, some transactions do not even make it through, mysterious bottlenecks pop up during peak use, and users call in with complaints. Implementing a transaction monitoring solution is unavoidable.

The problem is, when you look at all of the transaction monitoring solutions out there they all look the same. Every vendor seems to make the same claims: end-to-end transaction monitoring, full visibility into transactions, link IT to business processes, low overhead, perform root cause analyses, find application performance problems before they impact your clients, and so on.

So, how can you make sense of all the claims and find the transaction monitoring solution that is right for you? This article reviews the various transaction monitoring approaches and presents the advantages and disadvantages of each.

Transaction Monitoring: Four Solution Types

  • Network Appliances: Includes all solutions that collect data by non-intrusively listening to network traffic.
  • Deep Dive Profilers: J2EE/.NET code instrumentation. Includes all solutions that use bytecode instrumentation (or Java/.NET hooks) in order to collect thorough code-level metrics.
  • End-User Monitoring: This approach measures end-user latencies either by connecting an appliance to the network at the Web server or by installing agents on the client side.
  • Agent-based Transaction Tracing: Solutions where agents are installed on all relevant tiers and all transactions are traced everywhere.

1. Transaction Monitoring with Network Appliances

  • What they do: perform transaction monitoring by network sniffing in order to identify application events (or transaction segments), and then reconstruct the entire transaction by fuzzy matching.
  • How they do it: network appliance solutions usually connect to a port mirror in order to collect the traffic, and then try to reconstruct the entire transaction. Information needs to be collected directly from every node that is of interest.
  • Main advantage: transaction monitoring by connecting to the network means that no overhead is added and it is simple to install.
  • Main drawback: transaction monitoring has to be done algorithmically, since these tools cannot collect data directly from the servers, which leads to inaccurate transaction metrics and topology.
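Why fuzzy matching can go wrong is easy to sketch: when two front-end requests overlap in time, a sniffer attributing backend calls by timestamp proximity can only guess which request caused which call. The timestamps and matching rule below are invented for illustration:

```python
# Hypothetical packet timestamps seen on a port mirror (seconds).
front_end = [("req_A", 0.00), ("req_B", 0.05)]
back_end  = [("sql_1", 0.06), ("sql_2", 0.07)]

def fuzzy_match(front, back, window=0.5):
    """Attribute each backend call to the closest preceding front-end
    request within a time window -- a guess, not a measurement."""
    matches = {}
    for b_name, b_t in back:
        candidates = [(b_t - f_t, f_name) for f_name, f_t in front
                      if 0 <= b_t - f_t <= window]
        if candidates:
            matches[b_name] = min(candidates)[1]
    return matches

# Both SQL calls land closest to req_B, even if req_A actually caused one
# of them -- the sniffer has no way to tell.
assert fuzzy_match(front_end, back_end) == {"sql_1": "req_B", "sql_2": "req_B"}
```

Under load, with hundreds of concurrent requests, this kind of attribution error compounds into the inaccurate metrics and topology described above.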

Correlsense SharePath vs. Network Appliance Solutions

SharePath can provide data from within the servers, enabling resource consumption and all of the actual parameters of each transaction segment to be collected without fuzzy matching, which can be inaccurate. SharePath is much more flexible and non-invasive: it simply records what goes in and out of the server, and it can even dive in to collect metrics from encrypted data packets.

2. Transaction Monitoring with Deep Dive into the Java/.NET Code

  • What they do: these transaction monitoring tools provide deep diagnostics into Java applications to the code level. They are used by J2EE/.NET experts in order to locate problems before deployment.
  • How they do it: transaction monitoring is done by bytecode instrumentation (or Java hooks) that retrieve data from the nodes that are running J2EE/.NET applications. This is done by utilizing the class loading mechanism of the interpreter (JVM for J2EE or CLR for .NET) that, in order to intercept specific classes or methods, calls within the application.
  • Main advantage: provides a lot of rich information for developers. This type of transaction monitoring brings up the specific line of code that is the cause of the problem.
  • Main drawback: transaction monitoring cannot be done for all of the transactions running on the system (up to 10% for short periods of time), implementations are lengthy and invasive, and the person ultimately responsible for application performance may be overwhelmed by the massive amount of information retrieved, or unsure how to utilize it.
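The interception idea behind bytecode instrumentation can be sketched in Python. The real mechanism hooks the JVM/CLR class loader to rewrite selected classes and methods; this decorator analogy only shows the wrap-and-measure pattern:

```python
import functools, time

def instrument(fn):
    """Sketch of method interception: wrap a function so every call
    records its latency -- analogous to what bytecode instrumentation
    does to selected classes/methods at class-load time."""
    metrics = []
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            metrics.append(time.perf_counter() - start)
    wrapper.metrics = metrics
    return wrapper

@instrument
def handle_request(n):
    return sum(range(n))  # stand-in for real application work

handle_request(1000)
handle_request(2000)
assert len(handle_request.metrics) == 2       # one latency sample per call
assert all(t >= 0 for t in handle_request.metrics)
```

The per-call bookkeeping is also why this approach cannot run against 100% of production traffic: instrumenting every method of every class adds overhead the sampled approach avoids only by skipping transactions.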

SharePath vs. Deep Dive

Transaction monitoring by deep dive cannot load test at full capacity during development or monitor all transactions during production like SharePath can with its very low overhead. SharePath can be used with one of the many J2EE and .NET profilers that are available on the market today (some are for free) in order to aid application development before deployment. Because of its ease of use and high-level view, SharePath can be used by IT operations and infrastructure teams to trace transactions at all tiers, providing a full topology of the transaction flow. Conversely, deep dive solutions provide metrics only at the application server, and they have limited horizontal visibility. Lastly, SharePath’s architecture is environment-independent and can be deployed on any server, not just the .NET or J2EE server. The product’s technology is based on process wrapping and, therefore, is non-invasive, which enables fast and clean implementation.

3. Transaction Monitoring with the Help of End-User Measurements

  • What they do: this transaction monitoring approach is based on managing the application by monitoring the end-user response times and then performing customer analytics and system heuristics from the Web server outward.
  • How they do it: there are two general strategies to implement transaction monitoring with this approach. The first is by installing an agent on the user’s computer, either on the desktop or in the browser with the help of JavaScript. The second is by installing a network appliance on the Web server. Some solutions have servers around the world that test performance from different regions.
  • Main advantage: provides valuable metrics about what your customers are experiencing. Transaction monitoring with this approach puts customers first.
  • Main drawback: transaction monitoring stops at the Web server. The few solutions that let you peek beyond that can only provide very limited metrics. You will know that there is a problem, but you will have no idea where to find it.

SharePath vs. End-User Monitoring

SharePath can be extended with an end-user monitoring solution in order to cover all bases. The product finds the specific location of problems versus only alerting about them, providing important insight and saving time. Real end-user monitoring products do not provide much information in the development phase. However, with SharePath, you can test your application pre-deployment.

4. Agent-based Transaction Monitoring

  • What they do: software agents are deployed along the application path across the different tiers so that a unified view of the entire application is provided. What characterizes these solutions is that the full flow of every single one of the transactions running on the system is recorded in real time, at thousands of transactions a second. This solution is just as valuable to IT management as it is for the application development team.
  • How they do it: agents installed at each tier send collected data to a central repository for processing. Agents may be installed with the help of JVM/CLR instrumentation at the application server (one technical execution approach), or they may be installed as kernel modules (drivers), shared objects (Unix), or DLL files (Windows) at the operating system level. Agents may also be installed at databases, MQ middleware servers, and legacy mainframe computers. Every event or transaction segment is recorded along with all of its real parameters and then accurately reconstructed.
  • Main advantage: transaction monitoring is done all the time for every single transaction. This is the only true end-to-end solution that includes the middle and all of the important data that is to be collected from the servers in a way that does not weigh down the system.
  • Main disadvantage: by themselves, these transaction monitoring solutions cannot get deep into the code or see what is happening at the browser level.

SharePath vs. Other BTM Solutions

SharePath’s deployment is faster and less labor intensive than other BTM solutions that need to perform code instrumentation; SharePath instead installs drivers, DLL files, shared objects, etc. SharePath can be easily used by anyone. It works at the operating system level (wrapping processes as opposed to bytecode instrumentation) and is therefore environment independent. Residing at the OS level means that SharePath can monitor resource consumption beyond CPU (IO, network, and it can be customized to collect anything).

March 14, 2011


Correlsense Unveils SharePath RUM Express, Enterprise and Cloud Editions to Manage Web-based Application Performance

Correlsense, a provider of IT Reliability™ solutions, today announced the availability of SharePath RUM Express, the only free, purely software-based solution for measuring the end-user experience of cloud-based and on-premise applications. In addition, Correlsense also launched the SharePath RUM Enterprise and Cloud editions, which offer expanded capabilities at an affordable price guaranteed to be 85 percent less expensive than comparable real user monitoring solutions available today.

Customer-facing applications, such as online banking and e-commerce, are used by businesses to drive revenue and ensure business continuity. Real user monitoring (RUM) lets businesses isolate problems and pinpoint bottlenecks that exist in the data center, network or applications. With this information easily accessible, enterprises can proactively identify and fix problems before customers are affected, reducing the number of service complaints and solidifying customer loyalty.

“We operate a complex, multi-site Web application, and our customers’ experiences are very important to us,” said Matty Roter, director of IT and operations at SuperDerivatives, Inc., the derivatives benchmark and leading multi-asset front office system provider. “We chose SharePath RUM because it lets us monitor and analyze performance without installing any components on the client side. We installed SharePath on Web servers in our New York and London data centers. Using SharePath RUM, we are able to identify application and networking challenges and take proactive action to correct them, ensuring optimal performance for our customers at all times.”

IT teams can use SharePath RUM to monitor service levels in public cloud environments such as Amazon Elastic Compute Cloud (EC2), as well as for privately hosted applications. The solution is preconfigured for easy, rapid deployment. Within minutes, businesses gain multi-level visibility into the user experience, including the time it takes a transaction to complete from a click at the desktop through the IT infrastructure and back.

Other valuable metrics SharePath RUM Express offers include the speed at which the data center processes user requests, the network delays between the end user and the data center, and the time it takes a browser to completely render a Web page. SharePath RUM Express measures every end-user interaction, unlike some solutions that rely upon synthetic transactions and consequently deliver a less reliable picture of true user experience.
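The three layers of timing can be sketched from per-request timestamps. The field names and numbers below are illustrative, not SharePath's actual data model:

```python
def rum_breakdown(t):
    """Split an end-user response time into the three layers mentioned
    above, from hypothetical timestamps captured at the Web server and
    in the browser (all times in milliseconds)."""
    server_time  = t["response_sent"] - t["request_received"]    # data center processing
    network_time = (t["response_in_browser"] - t["click"]) - server_time
    render_time  = t["page_rendered"] - t["response_in_browser"]
    return {"server": server_time, "network": network_time, "render": render_time}

sample = {"click": 0, "request_received": 40, "response_sent": 240,
          "response_in_browser": 290, "page_rendered": 690}
assert rum_breakdown(sample) == {"server": 200, "network": 90, "render": 400}
```

Measuring every real interaction this way, rather than replaying synthetic transactions on a schedule, is what keeps the picture faithful to what users actually experience.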

The free version of SharePath RUM Express, along with supporting resources, is available for download. This perpetual license supports two Web servers and one application. The SharePath RUM Enterprise Edition offers additional scalability for multiple Web servers and applications at a significantly reduced cost when compared to any other solution on the market.

“IT operations are struggling to manage application performance in agile environments that experience frequent and extensive change,” said Oren Elias, CEO of Correlsense. “With SharePath RUM, enterprises gain visibility into what the real user experience is and the improvements that are needed, leading to greater IT reliability and better customer experiences with online applications.”

About Correlsense
Correlsense SharePath provides a breakthrough in IT Reliability™ by enabling for the first time both a bird’s-eye and detailed view of how business transactions perform across the four dimensions of end-users, applications, infrastructure and business processes. While other service management and performance management applications focus on identifying problems at individual components (servers, databases, etc.), SharePath automatically detects and traces each entire transaction path, from a click in the browser through all its hops across data center tiers. With the ability to record and correlate individual transaction activations across both physical and virtual components, IT gains full visibility of the transaction metrics required to ensure IT Reliability™ for packaged and homegrown applications. The rich data from SharePath is used by major enterprises to rapidly pinpoint and solve problems and to gain unprecedented insights for their IT service management initiatives such as ITIL.

November 22, 2010


Transaction-Based Capacity Planning

Managing day-to-day IT operations is like piloting a large freighter; some days, the trip can be smooth sailing. Other days are fraught with stress. Why is our e-commerce site slowing down at 11:00 pm every Thursday? How will I support the roll-out of a new service without adding more hardware? Having the analytics needed to accurately isolate problems and make informed business decisions is important. This paper discusses the Correlsense and Metron approach to expertly capturing and correlating transaction-level data, and how this information can be used by enterprises to determine whether they have sufficient capacity at both the service and component levels to meet the needs of the business.

Transaction-Level Data Drives Business Planning
Every IT shop has a dashboard of information that shows what’s happening at any given time. For most IT organizations, this dashboard is assembled from an arsenal of tools, each able to handle only a specific task, or tools that conduct only random samplings, which, in turn, do not give a complete snapshot of what’s really happening.

Understanding how service levels are affected by infrastructure and application components requires studying how transactions are performing. A transaction includes everything that happens in the data center from the moment the user clicks a button until they get a response. Transactions are what drive business. Transactions are what the user experiences, transactions are what are traversing through the various infrastructure, network, and application components. Simply put, transactions are the common denominator that links the business and all of the various elements of IT.

Capacity Planning Challenges
The following comments are typical of the challenges facing capacity managers today:

“We can monitor and understand the performance of our estate at the component level, but we are struggling to determine the performance of our applications and services and how they are affected by the components.”

Capacity planning is an important step in forecasting how IT can support the growing needs of the business. One of the challenges that many capacity planning prospects and customers have is simply knowing which servers and components serve which applications to begin with. Without this knowledge it is difficult to go through a proper capacity planning process.

Many solutions focus on monitoring the individual IT components and creating usage models to predict their utilization alongside business growth. In simple application architectures this will suffice, but in complex and dynamic architectures, where applications communicate with each other, it is difficult to predict the true impact of business growth for a specific department or product line, because the complexity makes it hard to tell which infrastructure items (servers and their logical software components) are serving which business applications.

The main challenges of capacity planning, with respect to infrastructure planning and the optimization of resources, are:

  • What is the expected business growth next year, and how will it affect our infrastructure?
  • Which infrastructure items (servers and logical components) are serving which business application or business transaction?
  • Which application infrastructure can be consolidated without degrading service levels?
  • How will the relationship between the underlying components affect both the services and, in a wider context, the business that uses them?
  • How do the requirements of constantly changing environments impact the monitoring requirements?

These challenges are very hard to address when:

  • The IT infrastructure is complex, dynamic and based on a multi-tier structure
  • Multiple applications continuously interact with each other across multi-platform infrastructures

“We’ve consolidated and virtualized our servers and now they’re running at much higher utilization, but now our service performance targets don’t seem to be as relevant.”

How to Jointly Address the Challenges
The transaction is the missing link that correlates across all of these components. By tracing transactions, Correlsense SharePath provides the contextual visibility required for capacity planning in complex environments, resulting in better and more accurate plans. By feeding transaction contextual data into Metron’s Athene modeling software, this data becomes the basis for addressing all of the capacity questions faced by an organization.

About Correlsense SharePath
Correlsense offers software that provides enterprises with an IT Reliability™ platform to keep business applications working properly.

The company’s flagship product, SharePath, provides unprecedented visibility into how transactions perform across the enterprise’s infrastructure, both from a bird’s-eye view and down to the very deep transaction details, which helps to rapidly pinpoint and resolve performance problems. Using patent-pending transaction path detection technology, SharePath traces every discrete transaction from the click of an end-user through each hop in the data center, while maintaining its context, 24×7, with negligible overhead and 100% accuracy, as it is purpose-built for a production environment.

By being able to record and correlate every transaction activation across both physical and virtual components, IT gains full visibility and transaction contextual metrics, which is required to ensure IT Reliability for both packaged and homegrown applications.

The rich data from SharePath is used by enterprises to rapidly pinpoint and solve problems and to gain unprecedented insights that can help to:

  • Reduce time to isolate and resolve performance issues, eliminating finger-pointing and the need for “war rooms” whenever a performance issue arises
  • Reduce the risks in rolling out new services, and reduce the length of these rollouts
  • Understand how configuration changes impact application performance and service levels
  • Optimize applications and their use of infrastructure resources to allow a better user experience, and enable infrastructure consolidation
  • Improve the capacity planning process

About Metron Athene

Athene Structure

Athene, the most scalable product in its class, provides enterprise ITIL-aligned Capacity Management, automatic performance analysis and reporting for physical and virtual environments. Athene enables the capacity manager to quickly identify what systems to focus on first, where the potential capacity ‘pinch points’ will occur and what to do about them.

CPU Trend

With the widest and most flexible range of automated data capture mechanisms, businesses around the globe use Athene to:

  • Better understand how the underlying infrastructure components are performing
  • Analyze performance trends to ensure IT infrastructure continues to meet the requirements of the business
  • Accurately monitor the current service levels relative to the environment and predict how these may change based on real-life business scenarios.
  • Manage infrastructure costs by modeling hardware and workload scenarios, preventing over-expenditure on hardware/software and assuring optimal levels of capacity
  • Diagnose the true cause of system performance problems
  • Reduce the skills and manpower required to actively manage performance and capacity
  • Provide a single pane of glass to view enterprise capacity and performance management
  • Reduce virtual and physical server sprawl

Correlsense and Metron
The primary goals of capacity planning are to plan infrastructure resources based on expected future demand; to maintain quality of service while minimizing ‘surprises’ (such as performance degradations and outages) and the costs associated with correcting them; and to optimize resource utilization to enable consolidation and reduce costs.

SharePath sees how transactions perform across the infrastructure, and can therefore create an accurate, dynamically auto-detected topology map. This real-time topology map provides the missing link between the business applications and transactions being executed in the data center and the IT infrastructure they rely upon. In addition, SharePath knows, for each transaction type (e.g., “login”, “send_money”, “buy_stock”…), which tiers are utilized (e.g., Web, application, database, ESB, Web services, etc.), what workload that transaction type places on each tier, at what volumes, and on behalf of which department.
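As a purely illustrative sketch (hypothetical names and numbers, not SharePath's actual data model or API), the per-transaction-type view described above can be thought of as a mapping from each transaction type to the tiers it touches and the work it consumes on each:

```python
from collections import defaultdict

# Hypothetical per-transaction-type profiles: each hop records the tier
# touched and the average processing time consumed there (milliseconds),
# alongside the daily volume for that transaction type.
profiles = {
    "login":      {"hops": [("web", 12), ("app", 35), ("database", 8)], "volume": 5000},
    "send_money": {"hops": [("web", 10), ("app", 60), ("esb", 25), ("database", 40)], "volume": 800},
    "buy_stock":  {"hops": [("web", 9),  ("app", 45), ("web_services", 30)], "volume": 1200},
}

def workload_per_tier(profiles):
    """Total milliseconds of work each tier absorbs across all transaction types."""
    load = defaultdict(float)
    for profile in profiles.values():
        for tier, avg_ms in profile["hops"]:
            load[tier] += avg_ms * profile["volume"]
    return dict(load)

print(workload_per_tier(profiles))
```

Aggregating per-hop costs by tier is what lets a transaction-level view answer infrastructure-level questions, such as which tier carries the most work.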


Athene is designed to capture data from the widest range of infrastructure components possible. Capacity and performance information is stored in a central database, analyzed, and provides the core output for the Capacity Management process. Athene monitors current performance, analyzes recent behavior, reviews past trends and predicts future service levels, with advice and exception reporting on alarms and alerts.

By combining the core strengths of the two products, SharePath can feed Athene the necessary data so that, for example, given 30% expected growth in the Sales department, Athene knows exactly which servers and components will be affected, which transactions are activated, and what the volumes are.
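A minimal sketch of the kind of projection this enables, with made-up departments, volumes, and per-tier costs (not either product's actual API): scale one department's transaction volumes by the expected growth and recompute the load each tier would carry.

```python
from collections import defaultdict

# Hypothetical inputs: per transaction type, the owning department,
# current daily volume, and average work (ms) consumed on each tier.
transactions = [
    {"type": "create_quote", "department": "Sales", "volume": 4000,
     "cost_ms": {"web": 10, "app": 50, "database": 20}},
    {"type": "pay_invoice", "department": "Finance", "volume": 1500,
     "cost_ms": {"web": 8, "app": 30, "database": 45}},
]

def projected_tier_load(transactions, department, growth):
    """Per-tier load after scaling one department's volumes by `growth`."""
    load = defaultdict(float)
    for tx in transactions:
        factor = 1 + growth if tx["department"] == department else 1.0
        for tier, ms in tx["cost_ms"].items():
            load[tier] += tx["volume"] * factor * ms
    return dict(load)

# 30% expected growth in the Sales department:
print(projected_tier_load(transactions, "Sales", 0.30))
```

Because the growth factor is applied per department rather than across the board, the projection shows which tiers actually feel the growth and which are barely affected.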

transaction-based capacity planning

Metron can then use this data to build more accurate “transaction-based capacity planning” workload models, which better fit the complex and dynamic architectures of today’s IT infrastructure. The outcome is a more accurate prediction of:

  • Which infrastructure components should be strengthened or provisioned (or deprovisioned)
  • How application consolidation impacts business, and which departments and transactions are affected

The benefits to the customer are:

  • Reduced risk of capacity planning mistakes/wrong assumptions for complex and dynamic architectures
  • Improved service delivery
  • Cost savings from aligning resources with business demand and growth

The combination of Correlsense SharePath and Metron Athene provides a complete capacity management solution that allows organizations to understand the performance of both the core systems and the critical services. By combining business forecast information, IT organizations can predict whether they have sufficient capacity to meet the needs of the business at both the component and service levels. Using these complementary products, enterprises can move to the next level of virtualization performance assurance by optimizing the performance of the virtual infrastructure while maintaining the required business service levels.

In summary, Correlsense SharePath and Metron Athene together provide insight into the performance of enterprise data center environments that was never available before, along with a sound basis from which to build a comprehensive, customer-focused capacity management process.

© 2010 Metron
Metron, Metron-Athene and the Metron logo as well as Athene and other names of products referred to herein are trade marks or registered trade marks of Metron Technology Limited. Other products and company names mentioned herein may be trade marks of the respective owners. Any rights not expressly granted herein are reserved.

September 7, 2010


Application Performance Management Solutions – Quick Tips

Choosing a tool for application performance management requires careful thought. This article reviews three key considerations for APM solutions.

Any organization running business-critical, time-sensitive applications needs some sort of performance management system. When researching what type of performance management tool to invest in, things can get confusing. Surfing the web yields a sea of empty marketing messages, as the term “application performance management” is vastly overused. Below are a few considerations to keep in mind when choosing an application performance management (APM) solution.

1. An application performance management offering must include an end-user monitoring capability
How else can you know how your application is really performing?
End-user monitoring tools provide a wonderful snapshot of how applications are performing. When it comes to Web application management, there is a long list of network appliances that serve the purpose—giving you a plug-and-play solution with no need to install anything on the user’s desktop. But what happens when the end-user monitoring tool you are using shows that latencies have gone wild? You need an application performance management solution that can connect the latencies users are seeing with the problem in the data center that is causing them. Of course, you don’t want your investment to eat up your entire IT budget, so purchasing more than one performance management tool is simply out of the question. Network appliances alone cannot make that full connection, and therefore cannot enable true APM.

2. An application performance management tool should aid with data center management
Why settle for an application performance management tool that sees only the application server?
True application performance management cannot be done with run-of-the-mill server management software. Rather, true performance management solutions can perform data center management. The need for application performance monitoring tools to provide data center management comes from the complex, distributed, and interdependent nature of today’s applications. Application performance monitoring solutions must take the entire data center into consideration if they are to triage a problem properly. Many data center management tools rely on information collected by server management software installed on the various tiers. The problem is that you end up with a pile of resource consumption metrics that do not correlate with what the end user is actually experiencing. If you are not monitoring what the end user sees, users could be experiencing major problems with the application, and you won’t know about it unless they contact customer service. The end result is that there are many cases where server management software shows availability is fine, yet from the end user’s point of view, transactions are taking far too long to process.
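The gap described above can be made concrete with a small hypothetical check (invented numbers, no vendor API): the server reports itself available, yet the 95th-percentile transaction time blows past an assumed service-level target.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

server_available = True          # what server management software reports
response_times_ms = [180, 210, 250, 2600, 240, 3100, 220, 2900, 200, 230]
sla_ms = 1000                    # assumed service-level target for p95

p95 = percentile(response_times_ms, 95)
if server_available and p95 > sla_ms:
    print(f"Server is 'up', but p95 response time is {p95} ms (target: {sla_ms} ms)")
```

Resource and availability metrics say everything is fine here; only the transaction-level response times reveal that a sizable fraction of users are suffering.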

3. Application Performance Management – Monitoring Network Performance
How good is your application without the network that it is sitting on?
Monitoring network performance also plays a role in application performance management. Imagine that your end-user monitoring tool shows that latencies are too high, while the latencies you measure within your application server are only a small percentage of the total latency your users are seeing. Only by monitoring network performance can your application performance management tool cover all of your bases.
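A back-of-the-envelope version of that breakdown, with assumed measurement points and invented numbers: subtract the time measured inside the data center from the end-to-end latency users see, and what remains approximates the network’s share.

```python
# Hypothetical measurements, not any tool's real output.
end_user_latency_ms = 1400     # measured by end-user monitoring
server_side_ms = 220           # measured inside the application tiers

# Whatever the application tiers cannot account for is, to a first
# approximation, time spent on the network between user and data center.
network_ms = end_user_latency_ms - server_side_ms
network_share = network_ms / end_user_latency_ms

print(f"Network accounts for ~{network_share:.0%} of what users experience")
```

When the unexplained remainder dominates, as in this example, tuning the application servers alone cannot fix what users are experiencing.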

Transaction Management Provides the Single Solution
Monitoring every transaction from the end user, through the network, and into the data center is the only way for an application performance management solution to cover all of the bases listed above. This discipline is known as transaction management. Read more about Correlsense SharePath for transaction management, and view our videos.

July 13, 2010


Customer, Partner and Product Momentum Propels Correlsense to Leadership Position in Transaction Management

Correlsense, a provider of IT reliability solutions, today announced that its recent customer and partner acquisitions, as well as the accolades it has received from industry groups, are propelling the company into a leadership position in the transaction management market.

Correlsense is experiencing significant growth within the enterprise market, specifically with banking, insurance and online retail customers. Businesses are using Correlsense SharePath transaction path detection software to manage every transaction from the end user through each hop in the data center. Recent achievements include:

  • Correlsense was chosen for the Red Herring North America 100 award, which identifies the most promising new companies of 2010. Past members of the prestigious list have included Facebook, Google and numerous other game-changing businesses that have positively affected the way people live and work.
  • SharePath received recognition from Network Products Guide for the 2010 Best Products and Services Award. SharePath won the Business Transaction Management category. This respected annual award honors products and services that represent the rapidly changing needs and interests of the end users of technology worldwide.
  • The company signed strategic partnership agreements with OnX Enterprise Solutions, a Toronto-based solution provider of mission-critical computing environments, and London-based Fusion Business Solutions, an ITIL and service management systems integrator. These two new partners are facilitating Correlsense’s growth into Canada and Europe, respectively. Both regions represent previously underserved markets, which are home to countless enterprises eager to gain greater control over the business performance of their IT operations.
  • Correlsense’s SharePath Real User Measurements Free Edition is experiencing a strong uptick in the number of sign-ups from enterprises. Offered as a fully functional, free, one-year subscription, the solution provides a real-time view of application performance from the end-user perspective, including service level agreements, availability and response times.

“This is a great time not only for Correlsense, but also for any enterprises interested in driving IT improvements,” said Oren Elias, CEO of Correlsense. “We’re leading the way toward faster problem resolution and more consistent IT functionality, and the market has been extremely responsive to that offering.”

About Correlsense
Correlsense SharePath provides a breakthrough in IT Reliability™ by enabling for the first time both a bird’s-eye and a detailed view of how business transactions perform across the four dimensions (4D) of end-users, applications, infrastructure and business processes. While other service management and performance management applications focus on identifying problems at individual components (servers, databases, etc.), SharePath automatically detects and traces each entire transaction path, from a click in the browser through all its hops across data center tiers. With the ability to record and correlate individual transaction activations across both physical and virtual components, IT gains full visibility of the transaction metrics required to ensure IT Reliability™ for packaged and homegrown applications. The rich data from SharePath is used by major enterprises to rapidly pinpoint and solve problems and to gain unprecedented insights for their IT Service Management initiatives such as ITIL. For more information, visit the Correlsense website.

May 10, 2010


First International Bank of Israel Achieves IT Reliability with Business Transaction Management Solution from Correlsense

Correlsense, a provider of business transaction management solutions, today announced that the First International Bank of Israel (FIBI) is improving customer service and IT reliability using Correlsense’s SharePath solution. By automatically monitoring all business transactions in real time, SharePath lets FIBI identify problems, pinpoint bottlenecks and ensure a high level of service from its IT systems.

FIBI is using Correlsense’s SharePath to manage, analyze and audit its IT systems’ performance and management initiatives to help increase revenues, as well as improve employee and customer satisfaction levels. SharePath automatically and rapidly generates detailed, multi-tier models of every step an application takes within the FIBI IT infrastructure, making it easy for the company to capture important application data without impacting production performance.

“SharePath lets us provide a higher quality of service for FIBI’s customers while allowing us to maximize the utilization of the bank’s IT resources,” said Amnon Beck, FIBI’s CIO. “With SharePath, we can monitor, triage and resolve problems that occur as a result of a whole spectrum of reasons in the bank’s IT systems, Internet and intranet applications.”

Built on the premise of business transaction management, SharePath monitors 100 percent of applications in real time, offering companies a complete view of IT systems’ data for deep diagnostics, change management, problem resolution and more. While traditional monitoring tools provide only partial visibility into the activity of IT components, SharePath provides both a bird’s-eye and a detailed view of system behavior by monitoring complete transactions. In this way, financial institutions like FIBI can manage their banking systems to help ensure that stock orders and other critical transactions complete in the least possible time, while pinpointing and neutralizing bottlenecks.

“Modern IT systems present a great challenge to many organizations dealing with growing user volumes, as FIBI experienced. By implementing SharePath, this institution improves customer satisfaction by enabling the identification of problems before they cause damage,” said Lanir Shacham, Correlsense CTO. “SharePath enables full control while mapping out all of the IT systems and the activities that flow through them in real time to provide IT reliability.”



March 2, 2010


IT Systems Eventually Add Up to Enormous Pains

I’m in pain. My upper back is killing me. I’m not sure what it is – whether it’s my recent athletic activities (marathon/triathlon); my surfing wetsuit, which has become friendlier and friendlier with the icy cold water; or just my working in front of a screen all day. I don’t know. What I do know is that the pain started five years ago and kept gradually increasing until a month ago, when I strained my back and couldn’t move for a couple of days. Last week, it happened again. I made a small movement I’ve made all my life and suddenly I could hardly breathe. Unbelievable.

All the different medicine folks I’ve seen say, in principle, the same thing – something is wrong with my body’s stability, so there is pressure on my muscles and that’s why my back hurts. I don’t think you need a doctor for that. I’ve tried different techniques – massages, shiatsu, tui na, acupuncture, stability modification exercises. Now I have a new physiotherapist – this guy is built of muscle, and the pain he inflicts on me when I visit him and he moves my spine is probably the most wrenching experience I have encountered so far – and he believes that I need to strengthen the muscles of my upper back. I’m like, ”Dude? I run, bike, swim and surf. What else can I do?” It turns out that none of my athletic activities actually works the muscles of my upper back. Who’d have figured? He believes that once I build up my upper back muscles, my pain will slowly go away. So I’m in the middle of this program. Wish me luck.

What did I learn from this? Small problems that keep popping up will eventually turn into a BIG problem that lingers. I see it in IT shops all the time – an error appeared a few times in the past, and nobody knew why. Then suddenly, the application gets stuck in bursts of excessive resource usage that eventually choke the entire server. There’s a small increase in volume that one day reaches the tipping point. You get the picture. And we all apply our daily/weekly/monthly maintenance activities, but we maintain what is familiar. The new problems are waiting around the corner, and they are not the ones we know how to prevent. So the secret to a healthy IT system is to pay attention to the small annoying things that happen and ask yourself: could this be a warning sign of a bigger problem to come? If everything is fine just now, enjoy it. But don’t neglect your responsibilities. Make sure you have end-to-end visibility into the infrastructure so you can get to the bottom of every malfunction, even small glitches in the log. Test yourself again and again in terms of load and capacity. Be alert all the time, because while you may have strong legs that can run for hours, you, too, may have the upper back of an old man.

This time around, for my own health, I’m applying a method I don’t usually recommend to my customers: I’m adding more capacity. I’m not generally a strong believer in throwing more horsepower at a problem, but adding more resources may sometimes be the right solution. Let’s hope that muscle guy is right, because I know a few 70-year-olds who are in better shape than I am right now.