It’s no surprise that as businesses grow and take on more orders, their transaction processing demands grow in tandem. What may be less obvious is that monitoring the applications that manage those transactions, and ensuring their smooth operation, becomes more difficult as well.
These applications communicate with each other using what’s known as middleware, and what might be surprising is that firms need to pay as much attention to keeping their middleware running smoothly as they do to the transaction processing itself.
Middleware is computer software that interconnects applications. The software consists of a set of services that allows multiple processes running on one or more machines to interact. Essentially, it connects two or more applications that need to exchange data.
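In its simplest form, the data exchange that middleware provides looks like message passing through a broker: applications post messages to a shared channel rather than calling each other directly. The sketch below is purely illustrative (the `Broker` class and topic names are invented for this example, not any particular product):

```python
import queue

# A toy message broker: the simplest form of middleware, letting two
# applications exchange data without knowing about each other directly.
class Broker:
    def __init__(self):
        self.topics = {}  # topic name -> queue of pending messages

    def publish(self, topic, message):
        # Application A drops a message onto a named topic.
        self.topics.setdefault(topic, queue.Queue()).put(message)

    def consume(self, topic):
        # Application B picks up the next message, or None if empty.
        q = self.topics.get(topic)
        return q.get_nowait() if q and not q.empty() else None

# One application posts an order; another picks it up later.
broker = Broker()
broker.publish("orders", {"id": 1001, "amount": 250.00})
order = broker.consume("orders")
```

Real middleware adds persistence, delivery guarantees, and cross-machine transport on top of this basic pattern, but the decoupling idea is the same.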
Key to success in handling this business growth is the ability to ensure that the ever-growing transaction load is processed rapidly, thus avoiding customer attrition or regulatory penalties.
This, in turn, means an ongoing effort to reduce latency and improve performance.
Low-latency middleware monitoring is particularly difficult: the tolerances are tight, and the risk of degrading performance through the act of measuring is high. In addition, the resources necessary to handle the load in a global environment are not uniform. Demand may increase in the U.S., decrease in northern Europe and increase in Asia Pacific, for example, and then suddenly shift again.
Scaling up the hardware in every location is not merely cost ineffective; it is cost prohibitive. The solution is elasticity: capacity grows when the load increases and shrinks when it is no longer needed. Using today’s cloud-based infrastructure, a shared pool of resources can provision the necessary computer processing power and middleware throughput on demand, then de-provision it so other locations can use it.
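The core of such an elasticity scheme is a simple scaling rule. The sketch below shows one way it could work; the node counts, throughput figures, and high/low-water thresholds are illustrative assumptions, not figures from any real deployment:

```python
import math

# A minimal elasticity rule: provision nodes when utilization crosses a
# high-water mark, de-provision them when capacity sits idle, so the
# shared pool can serve whichever region is busy.
def scale_decision(current_nodes, load_tps, capacity_per_node_tps,
                   high_water=0.8, low_water=0.3, min_nodes=1):
    utilization = load_tps / (current_nodes * capacity_per_node_tps)
    if utilization > high_water:
        # Grow the pool until utilization falls back under the high-water mark.
        return math.ceil(load_tps / (capacity_per_node_tps * high_water))
    if utilization < low_water and current_nodes > min_nodes:
        # Shrink, returning idle nodes to the shared pool for other regions.
        return max(min_nodes, math.ceil(load_tps / (capacity_per_node_tps * low_water)))
    return current_nodes

# Demand spikes in one region: grow from 1 node to 3.
print(scale_decision(1, 900, 500))   # 3
# Demand falls off overnight: release capacity back to the pool.
print(scale_decision(3, 100, 500))   # 1
```

Production autoscalers add damping (cooldown periods, hysteresis) so brief spikes don’t cause constant provisioning churn, but the high-water/low-water decision is the heart of it.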
Consider a financial institution that manages large-scale funds transfers via a cloud architecture. To achieve the lowest latency, this firm augmented their existing middleware software with network-based middleware appliances. However, they had a number of issues to overcome in order to effectively deliver their service to the enterprise: business growth, additional regulation, a requirement for consolidation, and mobility of applications.
The company had been using several different monitoring tools for their middleware estate that were not integrated into one central system. This setup made it difficult for IT to come to any conclusions about application availability and performance, because they saw only a fragmented view of the enterprise. Integrated middleware monitoring allowed the company to better manage its low-latency processes, turning the unknown into a competitive advantage.
The company brought in a new monitoring solution to help address these issues. To handle the “good problem” of business growth, they utilized an active data grid to transparently share resources in their private cloud. Instead of installing “fat clients” each time users needed access, the company provided web consoles.
Regulatory pressures were handled by implementing a single security model and a logging process across all middleware. Consolidation was handled via the new monitoring system. It was able to subsume the information feeds from the existing tooling and eventually could be used to replace them. This provided a single point of control for all middleware, resulting in reduced costs for management and resolution of problems.
The requirement to support mobility was handled by the elasticity of the middleware solution delivered by the new appliances. Likewise, the new monitoring solution was able to scale elastically to handle the changing loads.
For middleware monitoring to provide real-world business benefits, it needs to be proactive, identifying problems before users are affected and business processes are disrupted. In fact, it should provide a closed-loop methodology for managing known problems and preventing their recurrence. The company treated this as a cycle of continuous monitoring improvement, and learned that it was one of the most effective ways to improve productivity and reduce the cost of problem management.
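A closed-loop check of this kind can be sketched in a few lines. Everything here is an assumption for illustration (the metric, threshold, and remediation registry are invented): the point is that a known problem, once resolved, triggers its recorded remediation automatically the next time instead of a fresh investigation.

```python
# Sketch of a closed-loop monitoring check. The remediation registry
# turns one-off fixes into automated responses on recurrence.
known_problems = {}  # problem signature -> remediation learned last time

def check_latency(samples_ms, threshold_ms=10.0):
    # Watch the 95th-percentile latency so drift is caught before
    # most users feel it, not just the average case.
    p95 = sorted(samples_ms)[int(0.95 * len(samples_ms))]
    if p95 > threshold_ms:
        signature = "latency_above_threshold"
        # First occurrence: open an incident and diagnose. Recurrence:
        # apply the remediation recorded when it was first resolved.
        action = known_problems.get(signature, "open incident, diagnose root cause")
        known_problems[signature] = "apply recorded fix automatically"
        return "alert", action
    return "ok", None
```

A real deployment would key the registry on richer signatures (host, queue, error code) and integrate with incident tooling, but the loop itself, detect, remediate, record, prevent, is the same.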
Performance Saves Cash
Fast performance with minimal latency and maximum reliability is increasingly touted as a competitive advantage for firms that manage funds transfers and other financial processes.
Firms like this one that embrace global middleware monitoring can tout their ability to offer minimal latency and maximum reliability while maintaining the rapidly rising flow of data across multiple interrelated applications.
The funds transfer company in our brief study used monitoring to better compete for the biggest and most demanding customers and to juggle the dynamic changes in load that a private cloud infrastructure makes possible. The company was also expanding its presence to additional markets, and knew that the current business growth was not a temporary phenomenon. They could not risk rising rates of error and failure as the volume of transactions increased.
They knew that performance not only saves cash but also makes money. The business with the least latency in its financial process wins. They would provide the highest levels of service to their customers, and thus retain them. Moreover, higher levels of service, and the freed-up resources to create new ones, would attract still more business. Their efforts in comprehensively managing their global middleware cloud have proven successful.