March 10, 2014

The First Key to Oracle Forms Performance: Track All Requests Through Every Hop

(First in a Five-Part Blog Series)

Enterprise APM involves many programming languages, databases, infrastructure components, and applications. Add multiple endpoints to that – browsers, rich clients, and terminals – and performance management becomes a serious challenge. For commercial applications like Oracle Forms and eBusiness Suite, the technology stack may be more manageable, but optimizing performance is still hard.

One of the first tools network administrators learn is often traceroute. Hop by hop, it analyzes latencies, a quick (if primitive) way to identify trouble spots in your network. It worked pretty well in the old days, when all of the resources I was trying to access were in the same place.

In today’s distributed networking environment, troubleshooting performance issues requires a lot more than a couple of traceroutes – even if all the required resources are sitting on the “same” computer, thanks to the magic of virtualization. Application performance management (APM) tools have arisen in response to the increasing complexity of our network infrastructure. But even APM solutions have a tough time with many of today’s applications, and Oracle Forms is one of the environments most APM systems struggle with.

Oracle Forms’ multi-tiered architecture, combined with today’s modern network infrastructure, presents multiple challenges to anyone tasked with optimizing and troubleshooting its performance. In this series of blog posts, we’ll investigate the five keys to performance success in Oracle Forms.

Today, we start with the importance of tracking all requests through every hop. This is something we first learned with the traceroute, but it quickly got terribly complicated as middleware, distributed architectures, and virtualization took hold, and as applications evolved beyond the bounds of a single server and the good old client-server days.

There are often many servers for each tier in a typical Oracle Forms implementation. Changes are frequent, fields are numerous, and upgrades and system tweaks happen often. In an ideal world, IT help desk staff can track a single session from the client through the web server and the app server to the database and back. Successful performance monitoring and management requires the ability to track activity through all of these tiers and servers.

In the world of Oracle Forms, you also need to make sure you don’t limit yourself to Java and .Net. If you want a complete picture, you have to be able to track and meter a single end user’s activity across the entire stack and all technologies, including Apache, OC4J, Forms Runtime, and Oracle Database. While point monitoring and troubleshooting solutions might be able to diagnose some basic problems, even a patchwork collection of them won’t be able to perform true root-cause analysis for most performance issues facing modern enterprises.

In our next post, we’ll take a look at the importance of the user experience, and the frustrations that face both the user who dares to report performance issues and the help desk support representative tasked with solving the user’s problems.

(Get all five blog posts all at once: Download our white paper describing all five keys to Oracle Forms performance success)

January 29, 2014

The Transaction Tracing Opportunity

Corporate computing environments can be complex, involving many different programming languages, applications, middleware components, and endpoints. Many composite applications integrate new software, legacy code, and packaged solutions. Enterprise application performance monitoring works by solving the transaction tracing problem: finding a way to identify application latency by tracking a transaction from a mobile application or browser (or a Windows desktop), across the application server and messaging system, to one or more databases, and then back to the user. (And that is just a simple example.)

This is called the “holistic” approach to performance monitoring.  What matters to the end user is not the seek time on the disk, the latency across the network, or the SQL statement running out of control.  The user just wants to know why the application is running sloooooow.

A holistic application performance monitoring tool knows the architecture of the entire IT infrastructure, so it knows which components are involved in each type of transaction. The monitoring tool presents a dashboard to the IT operations staff so they can monitor end-to-end performance. If the application is operating within service levels, everything is green. If not, the dashboard turns red. Either way, the tool lets the analyst click on components in the topology and drill down to see the response time of each piece.

The performance monitoring tool presents two possibilities to the analyst:  (1) the dashboard is red or (2) the dashboard is green.

Dashboard is Red

If the help desk starts getting calls that the system is slow, the support person consults the application monitoring dashboard. If the dashboard shows red, the application is operating outside norms or established thresholds. With the end-to-end performance monitoring tool, the analyst selects the application marked red and then clicks to show the tiers or components that make up the transaction. The tool breaks the application down into each tier (LDAP, shared services, web server, proxy, messaging, and application server), showing the response time in each. When the analyst clicks on any of the components, the performance monitoring tool shows the response time for each transaction, marking in red those components that are operating outside norms.

Dashboard is Green

What do you do when users call and the dashboard is green? Green means the application is operating within the agreed service level, say, 99%. But that number is only an average; it does not include all events.

If the user is experiencing latency and the dashboard is green, the analyst cannot infuriate the customer by telling them that everything is green and the problem cannot be reproduced. Instead, the analyst needs to drill down into the application to get transaction-level details. If the top-level dashboard shows green, drill down into the application the user is using, then into each component, until you get transaction-level details. Sort these by response time. There the analyst sees that a particular transaction is taking too long. Click on the transaction to see the details of, for example, the Java methods or the LDAP calls. It could be that this user is looking up something or entering data that is not so frequently used. The analyst could discover, for example, that the LDAP group lookup (&(cn=something)(objectclass=group)) is taking too long. This could indicate a corruption in the LDAP directory for that particular group, or a coding issue.
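To make that last diagnostic step concrete, here is a minimal Java sketch of timing that kind of group lookup with JNDI. The host, base DN, and filter value are hypothetical stand-ins; the point is that measuring the call directly confirms what the drill-down shows:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;

    public class LdapLookupTimer {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // hypothetical directory server
            DirContext ctx = new InitialDirContext(env);

            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

            // The same style of group lookup described above
            String filter = "(&(cn=something)(objectclass=group))";

            long start = System.nanoTime();
            ctx.search("dc=example,dc=com", filter, controls); // hypothetical base DN
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.println("LDAP group lookup took " + elapsedMs + " ms");
            ctx.close();
        }
    }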

This is the basic approach to using the performance monitoring tool to solve the transaction tracing problem.

February 21, 2012

Traveling Transaction Modeling

Over the past few years, traveling has become a major part of my life. I travel a lot for business, so I’m on the road a good chunk of the time. One of the most interesting (and daunting) things about getting from point A to point B is planning the route. This is most difficult when I have a meeting in New York City, because there are four possible (and reasonable) ways to get there. I could fly, go by rail, drive or take a bus.

Right away, the bus is a non-starter. It takes too long, the station is far from my home, and to be honest, it’s just not comfortable. Even though it’s the cheapest option, the quality of the seats multiplied by the length of the ride comes out to… well, you know how it is.

Flying should be the best bet. The flight takes only about 30 minutes. But with all the security at the terminal these days, the time it takes to get to the airport and the cost of the NYC taxi to wherever I’m going, it’s too much.

The train is fine, with good scheduling most of the time, and it does get you downtown at no extra cost, but if you want the best cabin, you have to pay for it. It’s expensive and depending on which train you ride, it can take nearly five hours to get to where you’re going.

Driving at least gives you some independence (and great music!), but it’s tiring and with gas prices where they are, it isn’t exactly a bargain. Tack on the cost (both financial and mental) of parking in New York City, and driving suddenly looks like a pretty bad plan.

All of these options have one problem in common – the scheduling is never accurate. You can’t be sure your flight will leave on time. Your bus could break down, you could hit traffic, or your train could be delayed for some reason. You always take the risk that the meeting will start without you, or not start at all.

What if we could track the data on that? Aggregate it based on EVERY ride, 24/7, all year long – how many minutes did we lose because of a flat tire, a mix-up on the runway, or a traffic jam? How do buses perform in January? If we had that information, we could filter it down and make the best decision on how to get there. Not just from an average, not just in general, but from real data. Can we have a database that tracks every individual ride, on every option, every day, and then aggregates it into a clear picture?

How often is the train delayed on Sunday? Can I get a comparison between today and last year? How many times has this bus line had to stop for a flat tire in March? Does it happen more in winter or in summer?

Now apply this to an application transaction. One little click of a mouse generates thousands of options, rather than just four. There are network devices, hardware (web tier, DD or – God forbid – a mainframe), you name it. Luckily, we do have a way to monitor them all. And the information we get is based on real data, not on an average.

Now who says that IT shows less progress than the travel industry? For now, I’ll make my traveling decisions based on two dimensions only: price and time.

May 16, 2011

Transaction Monitoring Software: Jargon Proliferation

This article on the jargon proliferation within the transaction monitoring discipline reviews common phrases used today to describe how to link business and IT.

There are several ways to trace a transaction. Although different approaches to transaction tracing may yield different outcomes, no matter how you do it, you should be able to trace a transaction completely, from end-to-end.

This can be confusing. Why should monitoring for transaction performance mean something different with every approach? The reason is that the rather nascent transaction monitoring software discipline is still in the “early adopter” phase, not unlike Blu-ray vs. HD DVD.

Monitoring for transaction performance is a clear next step for many enterprises, but when it actually comes to putting a transaction monitoring system in place, it is very hard for many IT professionals to understand the value proposition.

Transaction Monitor? Transaction Trace?
Several buzzwords get tossed around these days describing transaction monitoring software. Interestingly, there is often no link whatsoever between the buzzword and the actual technology; vendors that use the same words to describe their products can have completely different offerings at the end of the day, as we reviewed in this article, “Transaction Monitoring – the Four Approaches.”

It is also interesting that the words “monitor” and “trace” are synonyms in English, and in fact “transaction monitor” and “transaction trace” seem to be used pretty much interchangeably in the industry. However, when you really think about it, they can mean different things. The only place where transaction tracing and transaction monitoring seem to have been defined side by side is at Doug Mcclure’s blog. Intuitively, you could define transaction tracing as capturing the topology of the transaction (over the different tiers of the infrastructure), while transaction monitoring includes all of the metrics of that transaction (i.e., events invoked and resource consumption).

Why Monitor Transactions?

Monitoring and tracing transactions is the way to link business and IT.

The “business” is the user initiating transaction activations—clicking on the “add to shopping cart” button or performing a stock trade, for example. The majority of interactions between the user and the UI (be it a Web page or desktop application) create transactions. The business depends on these transactions, and since the business has spent, in most cases, millions of dollars to allow users to perform these transactions, not only should they work, but they should do so within a time that the user is willing to wait. Time is money, after all.

The “IT” is everything the infrastructure does in executing a transaction. A good example is that annoying SQL statement that has just caused the application to hang because it was activated 100,000 times. Without the link, the database administrator is going to have a tough time figuring out how to keep the problem from occurring again. With the link, that single statement is now tied to a specific transaction and user. All the way from the back end, the DBA can understand the business impact of the statements, because she knows exactly what the intent of the user was and how long it took to complete the transaction from end to end.
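As a manual illustration of that link (not how SharePath itself establishes it), a long-standing DBA trick is to tag each statement with its business-transaction context so that database-side monitoring can tie the statement back to the user. A hypothetical JDBC sketch, assuming the driver passes comments through to the server:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TaggedQuery {
        // Prefix the statement with a comment carrying the transaction context, e.g.
        // runTagged(conn, "checkout-7f3a", "SELECT * FROM cart_items WHERE cart_id = 42")
        static ResultSet runTagged(Connection conn, String txnId, String sql) throws SQLException {
            String tagged = "/* txn=" + txnId + " */ " + sql;
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(tagged);
        }
    }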

Transaction Monitoring with SharePath
SharePath links every transaction to all of the events that the transaction invoked within the infrastructure, giving you the power to link your business with your IT and to proactively ensure that each customer is satisfied.

April 15, 2011

Transaction Management – The Importance of Understanding the Transaction Behavior Model

One of my chores at home as a husband is doing the food shopping (I also take out the trash, play with the kids, and even cook Saturday evening dinner – truly husband of the year!). Food shopping should actually be very easy to do. The decision maker (a.k.a. wife) hands me the list, and I need to deliver. However, sometimes the instructions I get are not accurate enough and I need to use my judgment (while in the field) and decide which product I should buy.

For example, last week I had to buy olive oil. Since I didn’t know exactly what kind to buy, and since there are tons of them, I found myself going back and forth across the shelves, trying to figure out which one to choose.

Marketers would pay a fortune to understand how to influence my decision-making process when I shop. One of the interesting ways to do so is by observing shoppers’ behavior while they shop, analyzing the data, and drawing conclusions that improve their ability to influence. Using this method, they try to analyze how people behave when they buy and what eventually makes them reach a decision. For example, to determine the best location on the shelf, they would look at how many people pass by, how many stopped, how long they stayed in front of the shelf, and whether they made a purchase, bought the competition, or didn’t buy at all.

Understanding the behavior of “objects” in order to learn how best to impact or change them is a concept that works well in almost any discipline. For example, I recently heard about an application of thermodynamics in which, to understand how to build effective emergency exits, human behavior (many people trying to rush out of a closed space) was modeled on the motion of particles and their microscopic behavior (random movements of many elements).

This concept makes a lot of sense. How can you impact something, improve something, or make a wise decision if you do not have a thorough understanding of the behavior of the element you are handling, right?

The same concept can be applied to Application Performance Management (APM). A transaction management solution that offers cross-tier transaction behavior analysis and can detect changes in transaction and application behavior is key to being able to:

  • Understand which elements impact end-user experience during disruptions
  • Effectively isolate performance problems
  • Assure service levels for the business users
  • Identify and mitigate bottlenecks and by doing so, improve overall end-user experience

Transaction management has many facets. Since the market does not have a unanimous definition of what transaction management is, many vendors can claim to have this capability; however, when diving into these solutions, you see that their capabilities differ significantly. When choosing a transaction management solution, enterprises are encouraged to pay attention to whether the solution can automatically generate transaction and application behavior models. This can be achieved by a two-step process:

1. Tracking every individual transaction (shopper) across all elements (shelves), metering how much time the transaction (shopper) spends on each element (shelf) and how many times it invokes each element (how many times the shopper comes back to each shelf).

2. Analyzing this mass of contextual data to automatically generate the transaction behavior model. For the observed period of time, we can see how much time the transactions spent processing in each element, how many times each element was invoked, and what the difference is between transactions that met their SLA (customer bought the product) and those that exceeded it (customer bought the competition). A minimal sketch of this kind of aggregation follows below.
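As a toy illustration of step two (illustrative names only, not SharePath internals), assume each hop record carries a transaction ID, the element it touched, and the time spent there; aggregating per element already yields a crude behavior model:

    import java.util.LongSummaryStatistics;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class BehaviorModel {
        record Hop(String txnId, String element, long millis) {} // requires Java 16+ for records

        private final Map<String, LongSummaryStatistics> byElement = new ConcurrentHashMap<>();

        void record(Hop hop) {
            byElement.computeIfAbsent(hop.element(), e -> new LongSummaryStatistics())
                     .accept(hop.millis());
        }

        void report() {
            byElement.forEach((element, stats) ->
                System.out.printf("%-8s invocations=%d avg=%.1fms max=%dms%n",
                    element, stats.getCount(), stats.getAverage(), stats.getMax()));
        }

        public static void main(String[] args) {
            BehaviorModel model = new BehaviorModel();
            model.record(new Hop("t1", "web", 12));
            model.record(new Hop("t1", "app", 48));
            model.record(new Hop("t1", "db", 230)); // the slow "shelf"
            model.record(new Hop("t2", "db", 15));
            model.report();
        }
    }

A real model would also keep per-transaction paths and SLA outcomes, so that slow and fast transactions can be compared element by element.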

The result is a transaction management solution that provides the most valuable information any performance analyst is looking for as soon as a disruption is noted: which element is causing the degradation, which change (among all the other changes that happened during the last day or week) actually caused the disruption, and what can and should be done to improve overall end-user experience.

When looking for a transaction management solution, make sure that it is capable of automatically building the transaction behavior model. This is a key capability that will turn your Application Performance Management from one big mess into an almost completely scientific decision-making process.

January 4, 2011

SharePath Features

SharePath is an innovative Business Transaction Management (BTM) platform based on patent-pending technology that automates the monitoring and management of transactions across all tiers of an application. SharePath’s unique technology offers the following features and benefits:

  • SharePath tracks transactions from the moment any user clicks any button on the desktop through the entire data center, providing complete transparency into your application.
  • SharePath monitors the SLAs of all transactions that are running through your system 24×7.
  • SharePath drastically cuts Mean Time to Resolution. If a specific SLA is not met, SharePath uncovers which application node is responsible for the latency and enables you to drill down to the cause of the service disruption.
  • SharePath is used for impact analysis whenever the application or infrastructure changes in development or production.
  • SharePath enables Business and IT to resolve issues through a common language – business transactions.

Advantages of SharePath

SharePath is typically installed, set up, and producing value in just a couple of weeks, at a fraction of the cost of the traditional application management products it complements. Highlights of its breakthrough technology include:

  • Full coverage – See the path any transaction takes, start to finish
  • Environment Independent – Unlike other vendors, SharePath supports all environments and application components
  • Non-Invasive – There is no need for development or integration, and no need even for code instrumentation
  • Resource Independent – Innovative technology allows monitoring of anything measurable
  • Auto-discovery – SharePath auto-discovers and learns your application without the need for manual profiling
  • Easy Deployment – The installation process has been carefully designed to be fast and easy. SharePath does not even require restarting the managed application

Answers to Pressing Business Challenges — Our Customers’ Drivers

IT Operations:
“I wish I knew what each transaction was doing and how it was performing, but I cannot afford the overhead.”

Applications:
“Another day wasted in a war room for a problem that was not mine…”

Customer/User:
“I don’t really care that all your systems are green, I still cannot use the online application because it’s too slow!”

CIOs:
“I need to talk to the business about technology in the context of business transactions and business processes, not systems and infrastructure.”

January 3, 2011

Java Bytecode Instrumentation Limitations

A real transaction management product needs to follow the transaction through different types of application-related components such as proxies, Web servers, app servers (Java and non-Java), message brokers, queues, databases, and so forth. In order to do that, you need visibility into different types of transaction-related data, some of which exists only in the actual payload of each request. Java, since it is interpreted, hides parts of the actual code implementation from the Java layer. The Java Virtual Machine (JVM) itself is written in native code (C/C++), so there are operating-system-specific pieces that are not accessible from the Java layer, and thus not accessible to bytecode instrumentation (BCI). Also, different Java packages utilize native code for reasons of performance or code reuse. This native code (libraries loaded into the JVM) is not accessible by BCI, since it is not written in Java.

For example: let’s say that we need some data that is part of the handshake between the Java application server and Oracle. Using BCI, you can only get access to data stored and managed by the Oracle JDBC driver at the Java level. If some of the implementation is part of a .dll/.so file the JDBC driver is using, we will not be able to access it by using BCI.
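A tiny sketch of where that wall sits. BCI can wrap the Java-side declaration of a native method, but not the native body behind it; the library and method names here are made up for illustration:

    public class NativeBoundary {
        static { System.loadLibrary("vendorio"); } // hypothetical native library (.dll/.so)

        // BCI can instrument callers of this method, and its entry/exit,
        // but nothing inside it: the implementation lives in native code.
        public static native byte[] handshake(byte[] request);
    }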

Another example: if we want to use features of TCP/IP packets for tracing a transaction between two servers, the actual structure of the packets is not accessible from the Java layer, since it is handled by operating system libraries that the JVM itself utilizes.

Read my previous blog posts that introduce bytecode instrumentation and discuss how bytecode instrumentation affects transaction tracing.

Overall, Java bytecode instrumentation is a useful concept that makes the lives of Java developers much better by giving potential visibility into every single class and method call. But the deeper the visibility, the higher the overhead. A transaction management solution for a production environment cannot afford overhead of that magnitude, so the trace has to be limited by design. A limited trace has to be tailored to every Java application, which makes implementation in real-life scenarios a much more costly task. Moreover, there are pieces of data that can be crucial if you want to trace transactions across more than just one Java hop, and they are available only at a lower layer than the Java code, thus not accessible by BCI, limiting the ability to trace transactions in the real world.

December 27, 2010

Java Bytecode Instrumentation and Transaction Tracing

Imagine you want to debug your code, or better yet, profile your code during run time. Bytecode instrumentation (BCI) (see my last post) is a perfect solution: using BCI, an external tool can add code at the beginning and end of every method in every class, allowing you to measure performance and gather method-related data (variable info and such). However, these performance metrics have to be stored somewhere, and sent somewhere as well, so the developer can actually look at the output. Applying this kind of procedure to too many method calls will eventually cause the code to execute very slowly, since for every method called you need to gather and send the performance data. This slowness is called “overhead.” In other words, bytecode instrumentation adds overhead by definition.
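To picture what gets injected, here is a hypothetical source-level rendering of a method after instrumentation. Real tools inject equivalent bytecode rather than source, and the class and Profiler names are invented for illustration:

    public class OrderService {
        public void placeOrder(String sku) {
            long start = System.nanoTime();               // injected prologue
            try {
                // ... original method body ...
            } finally {
                long elapsed = System.nanoTime() - start; // injected epilogue
                Profiler.report("OrderService.placeOrder", elapsed);
            }
        }
    }

    class Profiler {
        static void report(String method, long nanos) {
            // Gathering and shipping this for every single call is the "overhead."
            System.out.printf("%s took %d ns%n", method, nanos);
        }
    }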

In order to create a “transaction trace” for a transaction arriving at a Java Virtual Machine (JVM), one can use BCI to intercept all the methods that serve HTTP calls and create a unique key (session-based or not). This key can then be passed (different techniques can do this) to every method called by the “parent” method. Each method’s metrics are reported with this key to a repository, thus creating a potential “method trace” of the specific incoming HTTP request. This basic concept is what the different Java-oriented products for transaction management use.
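One of those key-passing techniques, workable as long as the request stays on a single thread, is a ThreadLocal. A minimal sketch with invented names:

    import java.util.UUID;

    public class TraceContext {
        private static final ThreadLocal<String> KEY = new ThreadLocal<>();

        // Injected at the method serving the incoming HTTP request
        static void onRequestStart() { KEY.set(UUID.randomUUID().toString()); }
        static void onRequestEnd()   { KEY.remove(); }

        // Injected at every instrumented "child" method
        static void reportMethod(String method, long nanos) {
            System.out.printf("trace=%s %s %d ns%n", KEY.get(), method, nanos);
        }
    }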

In order to improve on the above and better fit a production environment, you’ll want to reduce the full method trace to just the incoming/outgoing calls, and maybe a few in between. This cannot be accomplished generically: if you are not passing the “trace key” between every method, you may have a problem following the trace uniquely. The solution is to create bytecode which is specific to the Java application you are trying to trace.

Java application servers such as WebSphere or WebLogic (Oracle today) are, at the end of the day, just another Java application executed by a JVM. This is exactly why a custom transaction tracing solution has to be created for every version of a Java application server: classes and methods change their names and internal structure. Of course, the same goes for a standalone JVM executing a Java application of some sort.

The bottom line is that you can either use a generic trace that intercepts all the classes and method calls, which rules out use in a production environment, or use a more tailored implementation for a specific application version and try to create a partial trace that reduces the overhead. Of course, keep in mind that for every new type of application, you will need to create a new version of your BCI as well.

Next post: More bytecode instrumentation limitations

December 20, 2010

Java Bytecode Instrumentation: An Introduction

This post is not a usual one, since I simply want to address a technical question: what is Java bytecode instrumentation (BCI)? I will also explain what can and can’t be done with BCI regarding the problem of transaction tracing. It’s just that I’ve been asked about it again and again, and there is real confusion out there in the market. Vendors that ONLY do BCI (e.g., CA Wily, dynaTrace, AppDynamics, etc.) claim to be transaction management solutions, although there are limitations to what they can do in Java environments, and they have zero visibility into non-Java topologies.

BCI is a technique for adding bytecode to a Java class during “run time.” It’s not really during run time, but rather during “load” time of the Java class. I’ll explain: Java, for those who are not familiar, is compiled to an intermediate bytecode. You write Java code (a *.java file), compile it (creating a *.class file, which is written in bytecode), and when you execute it, an interpreter (the JVM, launched via java.exe) is responsible for actually executing the commands written in bytecode format within the *.class file. As with any interpreter, since we are not dealing with real object code, one can manipulate the actual code written in the executed file.

For example, let’s say you want to add functionality to Perl/PHP/JSP/ASP code—that’s easy. You could simply open the file in a text editor, change the code, and the next time it is executed it will behave differently. You could easily write a program that changes the code back and forth as you wish as a result of some user-interface activity.

With bytecode it’s the same concept, only a bit trickier. Try to open bytecode in a text editor—not something you want to work with…but still possible ☺. Anyhow, the way to manipulate the actual bytecode is by intervening in the class loading flow and changing code on the fly. Every JVM (Java Virtual Machine) first loads the class files (sometimes only when really required, but that doesn’t change the following description) into its memory space, parsing the bytecode and making it available for execution. The main() function, as it calls different classes, is actually accessing code which was prepared by the JVM’s class loaders. There is a class loader hierarchy, and there is the issue of the classpath, but all of that is out of the scope of this post…

So the basic concept of bytecode instrumentation is to add lines of bytecode before and after specific method calls within a class, and this can be done by intervening with the class loader. Back in the good old days, with JDK <1.5, you needed to really mess with the class loader code to do that. From JDK 1.5 and above, Java introduced the Java agent interface, which allows writing Java code that is run as part of class loading, thus allowing the manipulation of the bytecode within every specific class and making the whole process pretty straightforward to implement. Hence the zillion different products for Java profiling and “transaction management” for Java applications.
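For the curious, here is a minimal Java agent sketch using that interface. premain registers a transformer that sees every class’s bytes as they are loaded; real products rewrite the bytes (with libraries such as ASM or Javassist), while this sketch just observes and leaves them unchanged:

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    public class TracingAgent {
        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    System.out.println("Loading class: " + className);
                    return null; // null tells the JVM to keep the original bytecode
                }
            });
        }
    }

    // Run with: java -javaagent:tracing-agent.jar MyApp
    // (the agent jar's manifest must declare Premain-Class: TracingAgent)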

Next up: What does bytecode instrumentation have to do with transaction tracing?

September 13, 2010

New Webinars on Transaction Management and Capacity Planning

Last month we announced a partnership with Metron to provide advanced solutions for transaction-based capacity planning. To showcase our combined offering, we’re holding two live Webinars in October.

We’ll tackle common problems such as:

  • What is the best approach for predicting the true impact of business growth on a specific department or product line?
  • Which application infrastructure can be consolidated without degrading service levels?
  • How will the relationship between the underlying components affect both the services and, in a wider context, the business that uses them?

By tracing transactions, Correlsense SharePath provides the contextual visibility required for capacity planning in complex environments, resulting in better and more accurate plans. Fed into Metron’s Athene modeling software, this transaction-level contextual data becomes the basis for addressing all of the capacity questions an organization faces. We hope you’ll join us!