Our last post focused on user experience, and how important it is to know the right questions to ask when performance issues hit. This is no easy task, as the user experience is impacted by so many factors, from storage speed to database query efficiency to network issues, just to name a few.
In this post, we’re going to focus on how all those factors work together, and how quickly “as-implemented” can diverge from “as-designed.”
Nobody sets out to build a slow enterprise application. Sometimes the delta between vision and reality grows slowly; sometimes the two never matched in the first place. For example, it's pretty common to find test and pre-production systems running undetected in production environments, quietly sapping resources from the production servers. Nobody plans to have resources drained by ghost processes, but it happens!
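As a rough illustration of how such ghost instances can be spotted, here is a minimal Python sketch that flags processes whose names match common non-production naming conventions. The process names and the patterns are hypothetical assumptions; real instance names, and the right patterns to match them, vary from shop to shop.

```python
import re

# Hypothetical process names, as might appear in a `ps` listing on a
# production host; illustrative data only.
PROCESSES = [
    "oracle_prod_pmon",
    "frmweb_prod",
    "frmweb_test",         # a test instance running on the production host
    "oracle_preprod_pmon", # a pre-production instance on the same host
]

# Assumed naming conventions for non-production environments.
NON_PROD_PATTERN = re.compile(r"(test|preprod|dev|uat)", re.IGNORECASE)

def find_ghost_processes(processes):
    """Return processes whose names suggest a non-production instance."""
    return [p for p in processes if NON_PROD_PATTERN.search(p)]

print(find_ghost_processes(PROCESSES))
# ['frmweb_test', 'oracle_preprod_pmon']
```

Even a crude check like this, run periodically, can surface resource-sapping instances long before they show up as a mystery in a performance investigation.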
And that’s a simple example of the kind of interference that can plague Oracle Forms implementations from the very start. Often a problem has many possible contributing factors, and untangling the complex knot of interactions is a difficult but necessary step. What portion of the performance degradation you’re witnessing is due to, say, network issues versus database-specific problems? Saying “both” goes against the fundamental principles of root cause analysis: one of the two factors is likely the primary cause, and fixing it may well make the other problem disappear.
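To make that apportioning concrete, here is a hedged Python sketch that splits an end-to-end response time into rough database and network shares, given measurements you would gather separately (for example, server-side elapsed time from a SQL trace and round-trip latency from ping). The function, the numbers and the breakdown are all illustrative assumptions, not a Forms-specific formula.

```python
def apportion_latency(total_response_ms, db_elapsed_ms, network_rtt_ms, round_trips):
    """Split an end-to-end response time into rough database and network
    shares. Inputs are independent measurements; the split is an estimate."""
    network_ms = network_rtt_ms * round_trips
    other_ms = total_response_ms - db_elapsed_ms - network_ms
    return {
        "database": db_elapsed_ms,
        "network": network_ms,
        "other": max(other_ms, 0),  # app/web server, browser rendering, etc.
    }

# Hypothetical measurements: a 900 ms screen load, 150 ms of server-side
# SQL, and a 25 ms round trip repeated 20 times by a chatty screen.
print(apportion_latency(900, 150, 25, 20))
# {'database': 150, 'network': 500, 'other': 250}
```

In this made-up scenario the network share dwarfs the database share, which would point the investigation at latency and round-trip count first, even though the slow screen "feels" like a database problem.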
In fact, for any given Oracle Forms performance issue, at least seven subsystems are potentially involved. At one end, the client web browser may be at fault; at the other, the database itself; in the middle, the web and application servers. Or none of these may be the problem, and the fault could lie in the network links connecting the database to the application server, the application server to the web server, and the web server to the client.
One very successful tactic for isolating the root cause of a performance issue is to find out what changed by building a timeline of the events leading up to it. For diagnostic purposes, this timeline is often short — a week or two — but the root cause can reach back much further than that.
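One simple way to assemble such a timeline is to merge change records from your ticketing system, deployment logs and OS logs, then sort them by date and look back from the incident. The records and dates below are hypothetical; this is only a sketch of the bookkeeping, not a real integration with any of those systems.

```python
# Hypothetical change records pulled from several sources (ticketing,
# deployment logs, OS logs), as (ISO date, subsystem, description) tuples.
changes = [
    ("2016-03-01", "app-server", "Applied Forms patch"),
    ("2016-03-10", "database",   "Gathered fresh optimizer statistics"),
    ("2016-02-12", "network",    "Replaced core switch in data center"),
    ("2016-03-14", "web-server", "Enabled HTTPS on the load balancer"),
]

def build_timeline(records, since):
    """Return change events on or after `since`, oldest first.
    ISO-format date strings sort correctly as plain strings."""
    return sorted(r for r in records if r[0] >= since)

# Looking back two weeks from a hypothetical mid-March incident:
for event in build_timeline(changes, "2016-03-01"):
    print(event)
```

Note that the February switch replacement falls outside the two-week window — exactly the kind of older change the paragraph above warns you not to rule out.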
As time passes, the gap between what the “as-designed” system was supposed to do and what the “as-implemented” system is actually doing widens, and the knot of potential interactions grows more tangled. This drift in system knowledge happens easily, and it is typically driven by the numerous, frequent changes — both documented and undocumented — made to components, systems and servers. As new servers come online and others go offline, keeping track of all the changes becomes difficult.
In the case of our ghost server processes, and in many others, automatic detection of performance-related processes can show you all of your components, how they interact, and which ones are integral, in real time. That can spare you a lengthy, manual and often fruitless search of the system when something goes wrong.
In our next post, we’ll talk about how performance analytics can help you eliminate bottlenecks and manage rollouts and migrations without creating new problems elsewhere in the system or for end users.
(Get all five blog posts all at once: Download our white paper describing all five keys to Oracle Forms performance success)