Business applications continually become more complex as we find ever more elegant ways to describe the logic behind business processes and to automate the delivery of that logic.
This seems obvious when you see, for example, manual steps in business being replaced by web-based applications. Where we used to hand-sign credit card slips with carbon-paper inserts, ripping the copies apart so that one went to the customer and the others were passed up the chain to finance departments and then through the banking system, this process is now entirely digital. Information is collected from the customer via a signal from their mobile phone, transmitted to the electronic point of sale (EPOS) device, processed through a local series of applications, and sent over encrypted connections to a datacenter or the cloud for further processing through thousands of applications across many businesses. Millions of similar transactions happen every second.
The old model of monitoring just the performance and availability of each physical computer and application now makes much less sense, because problems are more often the result of subtle interactions between thousands of sub-systems, each handling millions of processing steps. Knowing that a point of sale device is not responding to a request is one thing; dealing with a customer's payment for goods taking over ten seconds to complete when it should take two is much more involved.
The old "war-games" style control-center view of monitoring, with people sitting in an auditorium looking at statistics on massive screens, is now less valuable to the business. Personally, I still love to visit control centers and see the big screens and flashing red lights indicating events. It's very cool, just not very effective at solving modern issues.
The issue is this: as business applications have become more complex, it has become less likely that a performance issue relates to a single sub-system, and when an event takes place it can take the shared expertise of hundreds of people from many teams to solve it. The old model of getting everyone together to review what is happening is becoming impractical. I've been in war-room meetings where everyone explains how their specific systems are working within the agreed parameters; it can take hundreds of man-hours to identify that the combined interaction of a dozen systems led to the issue, then days or weeks to make the changes needed to solve it, and further months to implement changes that stop this type of event from happening again.
The issue, quite simply, is that the complexity of the application environments has grown faster than the technology deployed to manage it. This means problems now take longer to solve than they used to. You can either accept this as the new normal or you can look for ways to solve it.
I think there is a way of solving this issue.
Today’s monitoring paradigm is based on two basic principles:
1. Monitor system parameters
2. Write and maintain logic that describes how these system parameters impact business.
This model has an issue – the issue is scale.
Monitoring everything isn't the issue; using custom logic (often called scripts) to describe the impact on business is. Businesses today have millions of lines of custom scripts that must be updated and tested with every change to the system environment. If these code bases are not maintained, you cannot effectively monitor your environment.
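To illustrate why this approach scales so badly, here is a minimal sketch (hypothetical, not any particular product's scripting language) of the traditional scripted rule: each system parameter and threshold is hard-coded, so every environment change forces another round of edits and testing across the script library.

```python
# Hypothetical scripted-monitoring rules: each entry hard-codes one system
# parameter and a hand-tuned threshold for today's environment. Any change
# to the environment means revising and re-testing this table.
RULES = {
    "pos_terminal.response_ms": 2000,    # EPOS device should answer within 2 s
    "payment_gateway.queue_depth": 500,  # backlog limit on the gateway queue
    "db.cpu_percent": 85,                # database host CPU ceiling
}

def evaluate(metrics: dict) -> list:
    """Return an alert for every metric that breaches its scripted threshold."""
    alerts = []
    for name, limit in RULES.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name}={value} exceeds limit {limit}")
    return alerts

print(evaluate({"pos_terminal.response_ms": 10500, "db.cpu_percent": 60}))
# -> ['pos_terminal.response_ms=10500 exceeds limit 2000']
```

Note what the rule cannot tell you: that the slow terminal response, the gateway backlog, and the database load might together be one business problem. Each threshold describes a sub-system, not the transaction the customer experiences.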
Building and maintaining these code bases is one of the most expensive tasks IT departments, application developers and entire businesses perform. Choosing not to maintain these code bases leads to increased risk and lower performance.
These code bases make it much more difficult for businesses to improve, because every change demands updates and deep testing.
What if you could change this?
What if instead of building and maintaining massive libraries of custom code, you could automatically understand how your business is performing?
The funny thing is, the technology to do just this already exists inside your business; it is just not being used in this way. Every complex application stack uses systems, known as middleware, to interconnect every subsystem with every other subsystem. Each middleware system manages the flow of messages between subsystems, and these messages contain all the information required to understand every single interconnected step within every transaction.
If you could read every message, you could abstract from the technology a complete view of your business.
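As a minimal sketch of that idea (assuming, purely for illustration, that each middleware message carries a correlation ID shared by all steps of one business transaction, a step name, and a timestamp), the messages can be stitched together to recover a transaction's end-to-end timeline without any system-specific scripting:

```python
from collections import defaultdict

# Hypothetical middleware messages. Real middleware formats differ; the
# point is only that each message identifies its transaction and step.
messages = [
    {"txn": "order-42", "step": "pos_request",     "ts": 0.0},
    {"txn": "order-42", "step": "gateway_forward", "ts": 0.35},
    {"txn": "order-42", "step": "bank_authorize",  "ts": 9.8},
    {"txn": "order-42", "step": "pos_confirm",     "ts": 10.4},
]

def transaction_view(msgs):
    """Group messages by transaction and compute each one's ordered steps
    and total end-to-end duration in seconds."""
    by_txn = defaultdict(list)
    for m in msgs:
        by_txn[m["txn"]].append(m)
    view = {}
    for txn, steps in by_txn.items():
        steps.sort(key=lambda m: m["ts"])
        view[txn] = {
            "steps": [s["step"] for s in steps],
            "duration_s": steps[-1]["ts"] - steps[0]["ts"],
        }
    return view

view = transaction_view(messages)
# The payment took over ten seconds end to end, and the timeline shows the
# delay sits between gateway_forward and bank_authorize, without having
# instrumented any individual subsystem.
print(view["order-42"])
```

This is the abstracted business view: the unit of observation is the transaction the customer experiences, and the per-subsystem detail remains available underneath it for drill-down.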
And here's where it gets interesting: if you use this abstracted view of the business as a guide, you can drill down into every associated sub-system to see how its performance is affecting the business.
Now you can see what the user sees, at both a quantitative and a qualitative level. You can spot the early indicators of subtle changes that will impact the user experience in the future. You can be proactive and predictive, taking action to avoid potential issues before they happen. And it doesn't matter how complex your application environment becomes, because you can continually scale your perspective to keep ahead, zooming out or in to see what is important at any given time. All of it can be done on a single pane of glass (SPOG), without the need for war-games style control rooms (with all their associated coolness and ego-boosting glory).
Only one company offers this solution – www.nastel.com
Complexity Fatigue Solved!