
March 24, 2014


The Fifth Key to Oracle Forms Performance: Keep Your Stakeholders Aware

(Part Five of a Five-Part Blog Series)

In Part Four we discussed how performance analytics can help you both troubleshoot and plan your rollouts and migrations. In this last part, I want to tackle the monitoring and alerting you’ll need to have in place when your best laid plans and preparation inevitably fall short of reality. As always, I’ll focus on Oracle Forms.

As we mentioned earlier, auto-detection of potential and actual problems is required to have a true monitoring solution — if there are no bells and lights going off when a process is barreling (or about to barrel) out of control, your monitoring system isn’t worth the storage it’s taking up.
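
To make the idea concrete, here is a minimal sketch of threshold-based auto-detection. The thresholds, data shape and function name are illustrative assumptions, not the behavior of any particular monitoring product:

```python
# Hypothetical sketch: flag response-time samples that are barreling
# (or about to barrel) out of control. Thresholds are invented.

def check_alerts(samples, warn_ms=2000, crit_ms=5000):
    """Return (timestamp, level, value) alerts for (timestamp, response_ms) samples."""
    alerts = []
    for ts, ms in samples:
        if ms >= crit_ms:
            alerts.append((ts, "CRITICAL", ms))
        elif ms >= warn_ms:
            alerts.append((ts, "WARNING", ms))
    return alerts

samples = [(1, 800), (2, 2400), (3, 5600)]
print(check_alerts(samples))  # [(2, 'WARNING', 2400), (3, 'CRITICAL', 5600)]
```

Real products layer baselining and trend detection on top of this, but the principle is the same: no alert, no monitoring.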

The best monitoring scenario would let IT support staff and Help Desk professionals see historical performance issues, real-time issues and potential future threats. It would provide reliable diagnostics prior to upgrade and migration executions, and it would be able to track a single session from the client through to the web server, the application server and the database.

Good performance monitoring and management tools must be able to track the activity of the client through multiple tiers and servers to be effective. They must provide a sufficient level of context to permit the Help Desk professional to ask the right questions.

There are a number of products available today that monitor Forms applications, but they are point solutions and thus miss much of the activity you need to monitor. If you are unaware of problems until they happen, you can’t prevent them or solve them as quickly.

Furthermore, when problems do happen, stakeholders — whether they’re upstream or downstream — can often feel that the system is unreliable. Depending on the severity of the issue and the difficulties involved in troubleshooting it, the competency of the poor IT or Help Desk staffer could even be called into question.

However unwarranted the accusation might be, ultimately, IT is accountable for the upkeep and performance of your Oracle Forms installation — just as the complaining user is responsible for executing the business functions that they rely on Oracle Forms to perform.

The most effective way to turn these dual accountabilities from an adversarial position to a team effort is to keep stakeholders informed so that everyone affected knows what is happening and why. Being able to provide insights into service levels is crucial, whether via dashboards, email reports or other notification methods.

Well, I hope you found this tour of the Five Fundamental Keys to Oracle Forms Performance useful. We’re offering a full white paper that combines all five of these fundamentals in a single document that explores the topic even further. Visit www.correlsense.com/five-keys-for-performance-management-of-oracle-forms-and-e-business-suite/ to get your own copy.

Good luck out there!

March 19, 2014


The Fourth Key to Oracle Forms Performance: Use Performance Analytics

(Part Four of a Five-Part Blog Series)

In our last post we explored how quickly “as-implemented” can diverge from “as-designed,” and what this can mean when it comes to troubleshooting Oracle Forms performance issues. Today we’ll talk about how performance analytics can help you eliminate bottlenecks and manage rollouts and migrations without creating problems somewhere else in the system or for the end users — the kinds of interaction problems we discussed previously.

Advanced analytics is often described as a technique for understanding data through a combination of both descriptive and predictive statistics. In fact, it’s often used interchangeably with the term “predictive analytics” because of the power it has to reveal trends and potential problems before they become visible to the proverbial unaided eye. We’ll talk more about predictive analytics in a few moments.

Advanced analytics has advanced to the point where colleges like North Carolina State University are offering degrees in the subject. While it is certainly complex enough to warrant post-graduate study, there are some fundamental concepts of advanced analytics that we can discuss here.

The first concept is often called “Big Data” and involves the collection, storage and processing of massive data sets. In the networking world where we live, the best way to evoke the Big Data concept is to think about trying to keep track of the “Internet of Everything”: the rapidly approaching world in which nearly everything has an IP address. IPv6 came about because the pool of available IPv4 addresses was being rapidly depleted, and it can accommodate as many as 3.4×10^38 addresses, or 340 undecillion devices if you’re looking for the right word for a number that big. And that number (or even a tiny fraction of it) is just one of several dimensions that you need to track, store and process. The others include time, traffic data and more. Got the picture yet? That’s Big Data.
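
The arithmetic behind that figure is easy to verify: IPv6 addresses are 128 bits wide, so the address space is 2^128. A quick check, in Python purely for illustration:

```python
# IPv6 addresses are 128 bits, so the address space is 2**128.
total = 2 ** 128
print(total)            # 340282366920938463463374607431768211456
print(f"{total:.1e}")   # 3.4e+38, i.e. about 3.4 x 10^38 (340 undecillion)
```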

The second concept is simulation. Once you have enough data points to effectively recreate a time-ordered series of events, you can attempt to simulate what happened in the past — to essentially re-create an incident. Simulation, especially when effectively visualized, can be an extremely powerful tool for anyone tasked with answering “what went wrong.” But it’s of little help in answering “what could go wrong” without some assistance from the last two concepts we need to discuss here.

Our third concept is predictive analytics. Predictive analytics allows us to extend our simulations into the future to answer “what if” questions. But, as we explored in an earlier post, it can be very difficult to know what the right questions are. You need a framework for identifying (and assigning some probability to) the most likely alternative scenarios. That requires our final concept.

The final concept is optimization, a complex mathematical process that, to quote Wikipedia, “consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function.”
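
A toy sketch of that definition, with an invented objective function and a small allowed set of inputs, might look like this:

```python
# Minimal illustration of the quoted definition: systematically choose
# input values from an allowed set, compute the function, keep the best.
def minimize(f, allowed):
    best = min(allowed, key=f)
    return best, f(best)

# Example: minimize f(x) = (x - 3)^2 over the integers -10..10.
x, fx = minimize(lambda x: (x - 3) ** 2, range(-10, 11))
print(x, fx)  # 3 0
```

Real optimization work uses far more sophisticated methods than exhaustive search, but the shape of the problem is the same.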

Put these four concepts together and you get a sense of the real power of advanced analytics. Without advanced analytics, it is extremely difficult to isolate problems, and even more difficult to prevent them from happening at all. With advanced analytics you can see how changes affect your system and users before upgrades and migrations, and view the impact afterwards as well. In this way, you can eliminate bottlenecks and manage rollouts and migrations without creating problems somewhere else in the system or for the end users. Preempting problems can be infinitely more effective than solving them after the fact.

So how do you harness advanced analytics in Oracle Forms troubleshooting? You can buy your own Big Data appliance, hire an analyst and train them on Oracle Forms, or look for an application performance management (APM) solution that can tackle these Big Data challenges, however small your network may be.

In our next and last post, we’ll tackle the monitoring and alerting you’ll need to have in place when your best laid plans and preparation inevitably fall short of reality.

(Get all five blog posts all at once: Download our white paper describing all five keys to Oracle Forms performance success)

March 17, 2014


The Third Key to Oracle Forms Performance: Understanding How It All Comes Together

(Part Three of a Five-Part Blog Series)

Our last post focused on user experience, and how important it is to know the right questions to ask when performance issues hit. This is no easy task, as the user experience is impacted by so many factors, from storage speed to database query efficiency to network issues, just to name a few.

In this post, we’re going to focus on how all those factors work together, and how quickly “as-implemented” can diverge from “as-designed.”

Nobody sets out to have a slow enterprise application. Sometimes the delta between vision and reality grows slowly; sometimes the two never meet in the first place. For example, it’s pretty common to find test and pre-production systems in use in production environments — running undetected and sapping resources from the production servers. Nobody plans to have resources sapped by ghost processes, but it happens!

And that’s a simple example of the kind of interference that can plague Oracle Forms implementations from the very start. Often, there are many possible contributing factors to a problem, and untangling the complex knot of interactions can be a difficult but necessary step. What portion of the performance degradation that you’re witnessing is due to, say, network issues versus database-specific problems? Saying that it’s both goes against the fundamental principles of root cause analysis. One of these two factors is likely the primary cause; fix it and the other problem may well disappear.

In fact, for any given Oracle Forms performance issue, at least seven subsystems are potentially involved. At one end, the client web browser may be at fault. At the other end, the database itself. In the middle, the web and application servers. Or, none of these could be the problem, and the systems connecting the database to the application server, the application server to the web server, and the web server to the client could be the issue.

One very successful tactic to isolate the root cause of a performance issue is to find out what changed by creating a timeline of events leading up to the performance issue. For diagnostic purposes, this timeline is often short — a week or two — but the problems can often come from much further back than just a couple of weeks.
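
As a hedged sketch of that tactic, the code below filters a change log down to the window leading up to an incident; the event format and field names are invented for illustration:

```python
# Hypothetical sketch: given a change log, list everything that changed
# in the window leading up to an incident.
from datetime import datetime, timedelta

def changes_before(events, incident_time, window_days=14):
    """Return events that fall within window_days before the incident."""
    start = incident_time - timedelta(days=window_days)
    return [e for e in events if start <= e["when"] <= incident_time]

events = [
    {"when": datetime(2014, 3, 1), "what": "DB index rebuilt"},
    {"when": datetime(2014, 3, 15), "what": "app server patched"},
]
incident = datetime(2014, 3, 17)
print(changes_before(events, incident))  # only the app server patch
```

In practice the hard part is having the change log at all, which is exactly where undocumented changes bite you.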

As time progresses, the gap between what the theoretical map of the system “as-designed” was supposed to do and what the “as-implemented” system is currently doing increases, and the complicated knot of potential interactions grows. This drift in system knowledge can happen easily, and is typically due to the numerous and frequent changes — both documented and undocumented — that are made to components, systems and servers. As new servers come online, and others go offline, it can be difficult to keep track of all the changes.

In the case of our ghost server processes, and in many other cases, the automatic detection of performance-related processes can show all of your components, how they interact, and which ones are integral in real-time. This can mitigate the need for a lengthy, manual and often fruitless search of the system when something goes wrong.

In our next post, we’ll talk about how performance analytics can optimize bottlenecks and manage rollouts and migrations without creating problems somewhere else in the system or for the end users.

(Get all five blog posts all at once: Download our white paper describing all five keys to Oracle Forms performance success)

March 12, 2014


The Second Key to Oracle Forms Performance: Manage the User Experience with Meaningful Transaction Names

(Part Two of a Five-Part Blog Series)

Enterprise APM involves monitoring applications across multiple technology stacks. In our last post, we talked about how important it is to go deep: to understand what’s going on at every hop, layer and stack involved in the performance equation for your Oracle Forms installation. In this post, we’re going to go shallow. We’re going to focus on the end-user experience.

So you have Oracle Forms. It’s working well most of the time, but suddenly, the call comes into your Help Desk: there’s a performance issue. Maybe it’s a form that doesn’t load. Maybe it’s a form that doesn’t finish loading. How does your Help Desk Tier 1 Support Rep handle the trouble ticket? Of course, they start by asking questions — hopefully the right ones. And there are definitely both right and wrong questions!

The wrong question to ask is one couched in technical jargon. There’s a big difference between asking, for example, “was the system slow at start-up, querying or loading?” and asking “when you entered the information on page three and hit submit, did you get a blank screen?” But getting to the second question — and understanding the implications of the response to it — is more difficult than it seems.

To ask the first question, you just have to understand the technical details of the system. But answering that first question requires that same depth of understanding — one that most of your Oracle Forms business users (the ones calling in the trouble ticket) don’t have. It’s unrealistic to expect the average end user to be able to respond accurately to that question.

So you need to get to that second question. To get there, you have to understand the end-user experience at the business level. You can guess at the right question, or you can refer to your Application Performance Management (APM) system for some insights. But are those insights technical in nature, or business-oriented? If your APM system delivers information in business language rather than in codes or high-tech jargon, then IT staff know what to ask users and how to ask it.

This means having transaction-level insights — including the name of the Forms window, the action performed, and item label used by the end user — at the fingertips of your support staff. Having all of this information in front of the support tech, along with the response time information, will improve the communication with users who are experiencing problems.
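
To picture what transaction-level insight might look like, here is a hypothetical record shape; the field names are illustrative assumptions, not the schema of SharePath or any other APM product:

```python
# Hypothetical shape of a business-level transaction record.
from dataclasses import dataclass

@dataclass
class FormsTransaction:
    window_name: str   # name of the Forms window the user had open
    action: str        # what the user did (e.g. "Submit")
    item_label: str    # label of the item/field the user interacted with
    response_ms: int   # observed response time in milliseconds

t = FormsTransaction("Order Entry", "Submit", "Customer ID", 4200)
print(f"{t.action} on '{t.window_name}' ({t.item_label}): {t.response_ms} ms")
```

A record like this lets the support tech say “when you hit Submit on the Order Entry window, it took about four seconds” instead of asking about tiers and queries.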

IT requires specific answers that actually aid the repair or upgrade process.  To get to those specific answers, you need to ask the right questions — ones that the user can understand and answer accurately. How well does your APM system fare in helping your team ask the right questions and quickly address problems? How many support staff members do you have to involve in order to get your answers? I encourage you to ask yourself the right questions, not just your end users!

In our next post, we’ll talk about “application drift” — how far away from reality is your most current mental map of your system?

(Get all five blog posts all at once: Download our white paper describing all five keys to Oracle Forms performance success)

March 10, 2014


The First Key to Oracle Forms Performance: Track All Requests Through Every Hop

(First in a Five-Part Blog Series)

Enterprise APM involves many programming languages, databases, infrastructure components, and applications. Add multiple end-points to that – browsers, rich clients, and terminals – and performance management can be an issue. For commercial applications like Oracle Forms and eBusiness Suite, the technology stack may be more manageable but optimizing performance is still a challenge.

One of the first tools that network administrators learn is often the traceroute. Hop-by-hop, it analyzes latencies and is a quick — if primitive — way to identify trouble spots in your network. It worked pretty well in the old days, when all of the resources I was trying to access were in the same place.

In today’s distributed networking environment, troubleshooting performance issues requires a lot more than a couple traceroutes — even if all the required resources are sitting on the “same” computer, thanks to the magic of virtualization. Application performance management (APM) tools have arisen in response to the increasing complexity of our network infrastructure. But even APM solutions have a tough time with many of today’s applications. Oracle Forms is one of those environments that most APM systems struggle with.

Oracle Forms’ multi-tiered architecture, combined with today’s modern network infrastructure, presents multiple challenges to anyone tasked with optimizing and troubleshooting its performance. In this series of blog posts, we’ll investigate the five keys to performance success in Oracle Forms.

Today, we start with the importance of tracking all requests through every hop. This is something we first learned with the traceroute, but which quickly got terribly complicated as middleware, distributed architectures and virtualization took hold, and as applications evolved beyond the bounds of a single server and the good old client-server days.

There are often many servers for each tier in a typical Oracle Forms implementation. Changes are frequent, fields are numerous, and upgrades and system tweaks happen often. In an ideal world, IT help desk staff can track a single session from the client through the web server, the app server, and back to the database. Successful performance monitoring and management requires the ability to track activity through all of these tiers and servers.
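
One common way to achieve that kind of tracking is to stitch hops together with a shared correlation ID. The sketch below is purely illustrative; the log format and tier names are invented:

```python
# Hypothetical sketch: stitch one session across tiers via a shared request ID.
logs = [
    {"tier": "web", "req_id": "abc", "ms": 120},
    {"tier": "app", "req_id": "abc", "ms": 340},
    {"tier": "db",  "req_id": "abc", "ms": 2900},
    {"tier": "web", "req_id": "xyz", "ms": 80},
]

def trace(req_id, entries):
    """Return all hops for one request, plus total time across tiers."""
    hops = [e for e in entries if e["req_id"] == req_id]
    return hops, sum(e["ms"] for e in hops)

hops, total = trace("abc", logs)
print(total)  # 3360 -- and the database hop clearly dominates
```

The point of a real tracking solution is that this correlation happens automatically, across every tier, without anyone hand-grepping logs.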

In the world of Oracle Forms, you also need to make sure you don’t limit yourself to Java and .Net. If you want a complete picture, you have to be able to track and meter single end-user activity across the entire stack and all technologies, including Apache, OC4J, Forms Runtime, and Oracle Database. While point monitoring and troubleshooting solutions might be able to diagnose some basic problems, even a patchwork collective of them won’t be able to perform true root cause analysis for most performance issues that face modern enterprises.

In our next post, we’ll take a look at the importance of the user experience, and the frustrations that face both the user who dares to report performance issues and the help desk support representative tasked with solving the user’s problems.

(Get all five blog posts all at once: Download our white paper describing all five keys to Oracle Forms performance success)

July 10, 2012


A Mixed Crowd at Velocity 2012

Recently I attended Velocity 2012, a web performance and operations conference in California where industry leaders and professionals meet and share ideas.  I was able to roam the exhibit halls and speak with a lot of the vendors at their booths (and gather an impressive collection of free t-shirts in the process!).

Many of the regular suspects in our space were there, but I was surprised to see that the only large vendor from the application performance monitoring space was Compuware. Aside from this lack of large APM vendors, other attendees included content delivery network vendors, vendors from the synthetic monitoring space, cloud testing companies (including two from Sweden — it must be something in the water over there) and a few on the database monitoring and scalability side. I also noticed booths for companies like Facebook, Demonware and Apple, who were there for recruiting purposes. Salesforce.com’s performance engineering team had a booth too.

The audience seemed to be a heterogeneous crowd including IT ops and development professionals as well as people from Fortune 500 and Fortune 1000 companies.  It would be interesting to find out the true demographics of enterprise vs. developer folks – there seemed to be more of an enterprise focus than I expected.

I talked to a lot of vendors, and one of the questions I asked almost all of them was, “How is the show?” The answers were mixed: some said there were a lot of small and medium developer-focused businesses, while others said there were lots of enterprise companies. Some vendors said the audience was not sophisticated enough, looking for very basic monitoring tools, while others said the audience was too sophisticated. It is never surprising to hear mixed reports from exhibitors on the true make-up of a trade show crowd!

I noticed a large range of vendors too, from those presenting simple pings about whether a site is up or down to vendors presenting complex APM tools.

A final takeaway from the event was the increasing trend of big hardware companies acquiring smaller companies and moving into the software space.  It was interesting to see how some of the companies there were evolving their businesses, thinking 50 or 100 years ahead.

Overall, I left Velocity with interesting insight into the web operations environment and some promising networking opportunities.

May 15, 2012


Cool Vendors Grow Up

I was excited to see the recent news that Gartner analysts Jonah Kowall and Will Cappelli will be writing regular “cool vendor notes.” They offer great insight into the emerging players in our industry. This also marks the moment when companies begin to grow up:

“These research notes are fun to write and cover interesting innovative smaller technology companies to keep an eye on. Looking at the past cool vendors we have highlighted there are some phenomenal examples of companies which started small, were pointed out and grew up and were purchased or went public.”

Correlsense was awarded “cool vendor” status in 2009 for our pioneering tracing technology with non-Java applications, and it’s amazing how much has happened in 3 years.

We were included as a “Visionary” in Gartner’s APM Magic Quadrant last September, a feat that did not go unnoticed by Jonah:

@Correlsense sure shows you can go from cool vendor to magic quadrant participant in a couple years of features and focus.

We recently launched version 2.5 of our flagship product SharePath. This milestone included great new features, most importantly code level visibility. We added many new customers in various industries, recruited new employees, and gained more venture capital funding. This is just a sampling of all the growing up we’ve done.

I’d like to thank Jonah and Will for the shout out and we look forward to reading their future notes!

To see the full post on Jonah’s blog, click here:
http://blogs.gartner.com/jonah-kowall/2012/04/21/great-news-for-application-performance-monitoring-apm-startups-cool-vendors-in-apm.

May 7, 2012


Correlsense Announces New CEO Ken Marshall

New Leader Brings Track Record of Growing Technology and Services Firms

FRAMINGHAM, MA | May 7, 2012 – Correlsense, a leading provider of application performance management software, announced that Ken Marshall is joining the company as CEO. Marshall brings his deep experience in leading sales, marketing, and services teams as well as driving rapid growth in emerging technology and services firms. Correlsense’s co-founder and former CEO Oren Elias will remain with the company as EVP of Strategy and Business Development. He will focus on strategic sales and partnership opportunities.

Marshall brings over 25 years of senior management experience in technology product and services companies with extensive expertise in business planning and development, operational management, and sales and marketing best practices. He currently serves as Chairman for the Extraprise Group, a provider of integrated CRM solutions, which he founded in April 1997. Previously, he was Chairman and CEO of Carbonflow, a provider of SaaS applications for the emissions reduction market, President and COO of Giga Information Group, an information technology advisory company, and President and CEO of Object Design, the leader in database products based on object technologies. Object Design was ranked #1 in the Inc. 500 during his tenure. He started his career at Oracle where he was a Group Vice President and founded their consulting services business. He currently sits on the board of Actuate Corp. and Streambase Systems.

“Ken’s proven track record in leading fast-growing technology companies is a great asset to Correlsense,” said Bruce Golden, General Partner at Accel Partners and Correlsense Director. “He has the skills to help Correlsense enhance its market leadership and capitalize on the tremendous growth opportunities in the application performance management software market.”

“I am very excited to join Correlsense and work closely with the founders and existing management team to maximize the overall market opportunity,” said Ken Marshall. “The team has built a compelling product, innovative business model, and a growing base of enthusiastic customers. These strengths put Correlsense in a position to excel in the fast-growing application performance management market.”

About Correlsense:
Correlsense is a leading provider of application performance management solutions. The company’s flagship product, SharePath, enables IT organizations to be more agile when introducing new business services in physical, virtual or hybrid environments. Leveraging a unique approach that combines both transaction tracking and automatic application behavior modeling, SharePath provides IT organizations with actionable knowledge on how to avoid disruptions, maintain high service levels and improve end-user experiences. The company was recently recognized as a “Visionary” in Gartner’s “Magic Quadrant.” Correlsense was founded in 2005 and is privately held. For more information please visit www.correlsense.com.

Media Contact:
Frank Days
Correlsense
+1 508 318 6488 x211
frank.days@correlsense.com

February 7, 2012


3 Big Trends on My Mind

It’s safe to say we’re in the midst of big trends emerging in the IT industry. Technologies continue to develop, particularly mobile and cloud. The overall economic health of the US and Europe remains precarious, forcing organizations to continue trimming fat and improving efficiencies. IT professionals are learning to do more with less and to reevaluate how their organizations are structured (think DevOps). Here is how I envision these trends developing going forward:

1) Cloud Computing is here to stay. But what form will it take?

With the success of public cloud providers such as Amazon and Salesforce, I think it is safe to say cloud adoption will continue to increase in the coming months and years. It is interesting to note that large telecom firms (such as Verizon) will be gearing up their offerings. Microsoft and HP are also planning to expand their capabilities. These three firms are developing their services under an IaaS (infrastructure as a service) model, which I believe will be the dominant trend. It seems like more and more companies every day are saying to themselves: “Why do I want to run my own data center?” With these large-scale IaaS offerings, smaller companies can outsource these operations and save themselves some headaches. Now, the real question is: “Is this secure?” Will firms trust these software behemoths with their sensitive data and mission-critical applications? It will be interesting to see how this develops.

With regard to the private cloud, there are several interesting options out there for dealing with virtualization management, metering and chargeback systems, automated configuration, identity management, self-service provisioning, application management, and more. To be blunt, I don’t think these offerings have matured enough to bring the vision of the “consumerization of IT” into full effect. Many times, these solutions can be more trouble than they’re worth, especially for companies with scarce resources.

2) Mobile technologies will drive the Enterprise

2011 could be coined the year of mobility: think of the explosion of iPhones, tablets, Android devices, mobile applications, etc. We also can’t forget the tragedy of RIM and their service fiasco. The mobile trend will continue in the upcoming years, but with an added caveat: it will be increasingly important in driving the enterprise. The BYOD (bring your own device) movement has already caused a monumental shift in several organizations, and will only grow. Managing these mobile devices and applications will be a crucial goal for CIOs in the future.

3) With Cloud and Mobile Growing, APM will play a bigger role

With the continued importance of cloud and mobile technologies, application performance management will play an even bigger role in IT departments than it does now. The overwhelming majority of mobile users expect their applications to have similar performance levels to those in traditional settings. Demonstrating SLA compliance in cloud environments will be crucial to vendors and users alike.
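
Demonstrating SLA compliance ultimately comes down to simple arithmetic over measured response times. A minimal sketch, assuming an illustrative 3-second target and made-up sample data:

```python
# Hedged sketch: compute the share of requests that met an SLA target.
# The 3-second target and the sample data are illustrative assumptions.
def sla_compliance(response_times_ms, target_ms=3000):
    """Return the percentage of requests at or under the target."""
    ok = sum(1 for t in response_times_ms if t <= target_ms)
    return 100.0 * ok / len(response_times_ms)

print(sla_compliance([800, 1200, 3500, 2900, 6000]))  # 60.0
```

Real SLAs add percentile targets and measurement windows, but the core question is always the same: what fraction of transactions met the commitment?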

I am excited to see how these trends will continue to develop over time!

November 18, 2011


Is BSM Dead?

An event occurred recently which marked the end of an era: BSMdigest, an online publication covering our space, relaunched as APMdigest. What exactly does this mean? To me, it proves that the term BSM was never well suited to describing our industry. The term should never have been applied in the first place. In my view, BSM is dead for a multitude of reasons.

The crucial problem that caused the demise of BSM is that the actual products never lived up to the hype. The industry began overusing the term almost immediately after its inception. BSM became a fancy marketing term for several incredibly varied products. Furthermore, if you look at most of the BSM tools available versus the marketing terminology used to describe them, there is an enormous disconnect. BSM is a catch-all phrase for anything and everything, yet some BSM products are simple remedy tools. None of the BSM offerings could offer you a full view of your IT from a transactional perspective.

Here is an example which helps illustrate these problems. I recently took a look at a BSM company’s website to view their offerings and product descriptions. They had 36 products all grouped under BSM! This three-letter company put more emphasis on utilizing the term than describing what the products actually do.

The industry has been trending away from BSM for a little while now. The launch of APMdigest seems to be the exclamation point. Going forward there are still many interesting points to consider though. What exactly is the right term for our industry? Is it APM? Something else?

Do you think BSM is dead? What do you see as the correct term for our industry?