How to improve your Application Performance Monitoring (APM)
There are several ways to improve your Application Performance Monitoring (APM).
We all know that when you add lanes to a highway, the improvement in journey time is short-lived: more vehicles start to use the new road, creating more traffic and more chances for accidents, which results in slower journey times.
The same seems to happen with technology. Moore’s law is the observation that the number of transistors in a dense integrated circuit doubles about every two years, and this drives performance gains that are quickly consumed by operating systems and software. Every year, processing power increases dramatically, allowing more complex ideas to be expressed digitally, which in turn creates more complexity and more chances of something breaking.
In the 1980s, an algorithm was developed that allowed very weak signals to be used to transmit and store data accurately. This algorithm, called Partial Response, Maximum Likelihood (PRML), allowed massive increases in throughput and storage, was a driving force in digital wireless and wired communication, and was possibly the most important technological improvement of its decade. Every time your spouse complains that you are not listening and you respond with “I heard exactly what you said,” that is the biological equivalent of PRML in action. But I digress…
Today’s IT environments are so complex that it is practically impossible for a single person to understand them all. It often takes hundreds of people just to run these systems, with thousands more developing them.
The technology of even a decade ago is no longer practical for monitoring modern systems, and yet the cost of changing these systems has proven too great, so almost every enterprise is still using technology with its roots in systems delivered in the 1960s.
Think about it: Unix and C were first delivered in the early 1970s (Linux followed in 1991), while many of the ideas behind the mainframe date from the 1950s and 60s. Hypervisors and virtual machines: the 1960s! ARPANET dates to the late 1960s, TCP/IP and the 7-layer OSI model to the 1970s and 80s, and many of the issues around security, micro-payments, and performance were identified very early on, yet are still issues today.
Where this leaves us is with massive complexity: millions of metrics must be observed, monitored, and analyzed continually, and from this data information can be derived that can then be used to make decisions. But with so much to monitor and analyze, decision-making becomes bogged down in trying to see what’s important. It doesn’t matter how pretty the graphs and gauges are; if there are too many to monitor, it will always be too easy to miss critical information. Two areas of technology are critical to simplifying how you monitor complex environments: business abstraction and machine learning.
The first is business abstraction, where you automatically stitch together, from the underlying data, an understanding of how technology impacts the business. Instead of looking at all the parameters of every IT sub-system, you see the impact IT has on users. This is often referred to as transaction monitoring, transaction tracking, or business flows. The result is that you can identify the subtleties that signal the early stages of performance issues, before they become critical, and take steps to resolve them.
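The transaction-stitching idea above can be sketched in a few lines. This is a minimal illustration, not any product’s actual implementation: the event fields (`corr_id`, `ts`, `hop`), the sample data, and the latency threshold are all assumptions made for the example. Real middleware events would carry far richer context.

```python
# Illustrative sketch: stitch per-hop middleware events into end-to-end
# transactions by correlation ID, then flag slow business flows.
# Field names and the 0.25s threshold are hypothetical.
from collections import defaultdict

def stitch_transactions(events):
    """Group raw events by correlation ID, ordered by timestamp."""
    flows = defaultdict(list)
    for ev in events:
        flows[ev["corr_id"]].append(ev)
    for hops in flows.values():
        hops.sort(key=lambda ev: ev["ts"])
    return dict(flows)

def end_to_end_latency(hops):
    """Elapsed time from the first hop to the last hop of one transaction."""
    return hops[-1]["ts"] - hops[0]["ts"]

events = [
    {"corr_id": "tx1", "ts": 0.00, "hop": "web"},
    {"corr_id": "tx1", "ts": 0.04, "hop": "queue"},
    {"corr_id": "tx2", "ts": 0.01, "hop": "web"},
    {"corr_id": "tx1", "ts": 0.31, "hop": "db"},
    {"corr_id": "tx2", "ts": 0.05, "hop": "db"},
]

flows = stitch_transactions(events)
slow = [cid for cid, hops in flows.items() if end_to_end_latency(hops) > 0.25]
print(slow)  # ['tx1'] — only tx1 exceeds the 0.25s threshold
```

The point of the abstraction is that an operator reasons about `tx1` as one user-facing transaction rather than about three unrelated events in three sub-systems.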
Without abstraction, you are left having to describe every integration point and consider every scenario that could possibly ever happen. In IT terms, this means writing and maintaining millions of lines of script (code) to describe how things fit together. That is too cumbersome and too expensive for modern systems: any monitoring system that relies on custom coding will impose long-term costs and limit your ability to grow. Abstraction is fundamental to all management.
The second is machine learning: using historical data to predict future events. Knowing the probability of an event happening allows you to focus and prioritize. Machine learning (ML) algorithms are often referred to by the marketing term “artificial intelligence” (AI), but ML is not AI; it is just math, albeit complex, state-of-the-art math, that allows smart people to be predictive. The trick is to deliver ML technology that business and technology people can use to be predictive without needing PhDs in data science to build code. Real-world, real-time ML systems create new ways for people to understand information. Putting business abstraction together with machine learning within the framework of enterprise monitoring delivers a new way to control the availability and performance of complex environments.
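To make the “historical data as a baseline” idea concrete, here is a deliberately simple sketch: score each new metric reading against a trailing window of its own history and flag large deviations. The window size, the 3-sigma rule, and the sample latency series are illustrative assumptions; production anomaly-detection models are far more sophisticated.

```python
# Illustrative sketch: flag metric readings that deviate sharply from a
# rolling baseline built from recent history. Window and threshold are
# hypothetical tuning choices, not any product's algorithm.
from statistics import mean, stdev

def anomalies(series, window=5, sigmas=3.0):
    """Return indices whose value deviates > sigmas from the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sd = mean(hist), stdev(hist)
        if sd > 0 and abs(series[i] - mu) > sigmas * sd:
            flagged.append(i)
    return flagged

# Steady ~20 ms response times, then a sudden 95 ms spike.
latency_ms = [20, 21, 19, 22, 20, 21, 20, 95, 21, 20]
print(anomalies(latency_ms))  # [7] — the spike, found without a hand-coded rule
```

The appeal for operations teams is that nobody had to script “alert if latency > 90 ms”: the baseline is learned from the data itself, which is the essence of being predictive rather than reactive.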
Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, delivering middleware management, monitoring, tracking, and analytics that detect anomalies, accelerate decisions, answer business-centric questions, and provide actionable guidance for decision-makers, enabling customers to constantly innovate. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, blockchain, IoT, DataPower, MFT, and many more.
The Nastel i2M Platform provides:
- Secure self-service configuration management with auditing for governance & compliance
- Message management for Application Development, Test, & Support
- Real-time performance monitoring, alerting, and remediation
- Business transaction tracking and IT message tracing
- AIOps and APM
- Automation for CI/CD DevOps
- Analytics for root cause analysis & Management Information (MI)
- Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics