How To Drive More Effective Automation With A New Approach To AIOps
The world’s leading innovators are increasingly relying on cloud-native technologies and multicloud environments to improve their digital agility and accelerate transformation. However, this increases complexity, which often leaves IT and business teams struggling to manage dynamic technology stacks.
AIOps, the merging of AI and IT operations, is often heralded as a solution, but in many cases it addresses only a fraction of the problem: triaging alerts and reducing the flood of data to something more manageable. To unlock its full potential, organizations should pursue more sophisticated AIOps use cases that extend beyond IT operations, such as AI for automating DevOps or, better still, BizDevSecOps.
There’s huge potential for such AIOps to accelerate digital innovation by enabling business, development and operations teams to automate modern cloud environments and drive self-healing applications. This is a much more comprehensive approach to taming the data explosion, which demands a new form of AIOps. To support this, AI needs to learn instantly from broad, well-structured, high-quality data, rather than simply analyzing historical patterns in unstructured datasets. Organizations must therefore rethink the way they capture, process and operationalize digital services data.
AIOps Is What It Eats
It’s widely acknowledged that AI is only as smart as the data its users feed it. Accordingly, organizations focus on improving observability across their technology environments to ensure they’re capturing data from all available sources to enrich the insights their AI produces. However, the use of cloud-native technologies and multicloud environments has led to an explosion of complexity that makes it difficult to capture the petabytes of digital services data produced each day in a standardized and holistic way. Business, operations and development teams use multiple monitoring tools to capture different sources of data. As a result, data collection and delivery formats are inconsistent, making it difficult to retain the structure and context that AI needs to drive effective automation.
Adding further complexity, applications and microservices are often developed in different ways to make them observable. In the old world, with just a handful of monolithic applications to support, developers manually added lines of code to provide the observability needed to identify and debug problems. In today’s cloud-native world, with transactions hopping across tens or even hundreds of microservices and applications built from a mix of open-source libraries and custom code, developers capture logs, metrics and traces using a variety of frameworks. While this telemetry is primarily intended for debugging, it adds to the data explosion, especially when code is pushed into production. Manual approaches simply can’t keep up and also increase the risk that applications could be either under- or over-instrumented – capturing too little or too much data. As a result, digital services data becomes more inconsistent and fragmented, further increasing the barriers to AIOps automation.
OpenTelemetry: Eyes Bigger Than Its Appetite?
These issues contribute to the growing interest in OpenTelemetry, which has become the emerging standard for pre-instrumenting open-source software libraries, cloud services and custom code with observable telemetry. OpenTelemetry aims to consolidate various open-source tracing formats to capture observability data from any source. This provides standardization that ensures cloud-native applications are observable, right from the beginning of the development process.
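To picture what standardized, built-in instrumentation buys you, the toy sketch below mimics auto-instrumentation with a plain-Python decorator so that every call emits a span record with a uniform shape. This is purely illustrative, not the OpenTelemetry API; OpenTelemetry provides the same pattern (tracers, spans, exporters) as a cross-language standard, and the `SPANS` list and `traced` decorator here are invented names.

```python
import functools
import time

# A toy span store; OpenTelemetry's real spans carry far more
# (trace/span IDs, status, events, resource attributes).
SPANS = []

def traced(name):
    """Record name, duration and outcome for each call -- a stand-in
    for what auto-instrumentation injects without hand-written code."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            error = None
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                error = type(exc).__name__
                raise
            finally:
                SPANS.append({
                    "name": name,
                    "duration_s": time.time() - start,
                    "error": error,
                })
        return inner
    return wrap

@traced("checkout")
def checkout(cart):
    return sum(cart)

checkout([10, 20])
print(SPANS[0]["name"])  # every call yields a uniformly structured record
```

Because every span has the same schema regardless of which function emitted it, downstream tooling can consume the data without per-service format translation, which is the standardization argument in miniature.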
However, while OpenTelemetry broadens the scope of the environments organizations can observe, it also further fuels the data explosion. It doesn’t matter how much data organizations have; it’s what their teams can do with it that counts. Organizations must therefore focus on how they operationalize OpenTelemetry data alongside all their other sources to derive actionable insights for AIOps automation. To do so, they should identify how to harness the rich context and meaning behind all the data they capture, regardless of its source. It’s critical that this process is automatic, so it doesn’t create more work for already stretched teams, who simply don’t have the time to manually correlate, clean and process mountains of data.
Feeding AIOps With Rich Context
Organizations should begin with the understanding that the three traditional pillars of observability — metrics, logs and traces — aren’t enough by themselves. As organizations extend observability to more BizDevOps use cases beyond debugging (such as application self-healing), they also need to provide more context about their environments to enable greater precision. This should include data that provides insight into real user sessions, dependencies and code-level visibility into applications and microservices. All these data sources must be available in real time, from a single source, to give an AIOps solution the full context it needs to make the right decisions, at the right time.
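One hypothetical way to picture “context beyond the three pillars” is a single event record that carries the metric, log or trace payload alongside user-session and dependency fields. The sketch below uses a plain Python dataclass; the `TelemetryEvent` schema and its field names are illustrative assumptions, not any product’s actual data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TelemetryEvent:
    source: str                       # "metric", "log" or "trace"
    service: str                      # which microservice emitted it
    payload: dict                     # the raw pillar data
    user_session: Optional[str] = None            # real-user context
    depends_on: list = field(default_factory=list)  # topology context

# The same incident seen through two pillars, tied together by context:
events = [
    TelemetryEvent("metric", "cart", {"cpu": 0.97}, depends_on=["db"]),
    TelemetryEvent("log", "cart", {"msg": "timeout calling db"},
                   user_session="sess-42", depends_on=["db"]),
]

# With shared context fields, grouping all signals for one service
# (and seeing which dependency they implicate) is a simple filter.
cart_signals = [e for e in events if e.service == "cart"]
print(len(cart_signals))  # prints 2
```

The point is not the dataclass itself but that session and dependency context travel with every record, so an AIOps engine never has to reconstruct that linkage after the fact.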
To enable this, organizations should consolidate all their observability data — both open-source and commercial — into a single data model. This enables better standardization in data structure and consistency, making it easier to operationalize. It’s also important to automate data context capture in order to enable data model updates in real time. Manual attempts to draw data from different observability and monitoring solutions take too much time and effort, so AIOps insights will always be out of date. Finally, organizations should use an AIOps engine that can discover, document and update causal dependencies, rather than simply reaching a conclusion from data correlation. This ensures reliable and unbiased insights to drive precise automation.
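A minimal sketch of why causal dependencies beat bare correlation: given a dependency map (hand-written here as the hypothetical `DEPENDS_ON` dict, which a real AIOps engine would discover and update automatically), walking from an alerting service down through its unhealthy dependencies yields a root-cause candidate rather than a list of co-occurring alerts.

```python
# Illustrative only: each service mapped to the services it depends on.
DEPENDS_ON = {
    "frontend": ["checkout", "search"],
    "checkout": ["payments", "db"],
    "payments": ["db"],
    "search": [],
    "db": [],
}

# Correlation alone would flag all four of these as equally suspicious.
UNHEALTHY = {"frontend", "checkout", "payments", "db"}

def root_causes(service, deps, unhealthy):
    """Follow unhealthy dependencies downward; a service with no
    unhealthy dependency of its own is a causal candidate."""
    bad_deps = [d for d in deps.get(service, []) if d in unhealthy]
    if not bad_deps:
        return {service}
    causes = set()
    for d in bad_deps:
        causes |= root_causes(d, deps, unhealthy)
    return causes

print(root_causes("frontend", DEPENDS_ON, UNHEALTHY))  # prints {'db'}
```

Four services are alerting, but the dependency walk singles out the database: the one place where automated remediation should actually be aimed.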
A Hunger For Success
With a single data model and causal AI engine underpinning their solution, organizations can feed AIOps with rich context and real-time insights, so they can drive precise multicloud automation and develop self-healing applications. Instead of merely taming the data explosion, AIOps can eliminate manual workloads and empower teams to focus their efforts on tasks that drive greater business value, such as optimizing services and creating new digital experiences.
This shift will see traditional AIOps evolving into a more sophisticated approach that extends the value of observability across the entire software delivery lifecycle, helping to accelerate business transformation and put organizations on a firmer footing to thrive in our increasingly digital future.
This article originally appeared on forbes.com.
Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, providing middleware management, monitoring, tracking and analytics that detect anomalies, accelerate decisions, answer business-centric questions and deliver actionable guidance for decision-makers. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT and many more.
The Nastel i2M Platform provides:
- Secure self-service configuration management with auditing for governance & compliance
- Message management for Application Development, Test, & Support
- Real-time performance monitoring, alerting, and remediation
- Business transaction tracking and IT message tracing
- AIOps and APM
- Automation for CI/CD DevOps
- Analytics for root cause analysis & Management Information (MI)
- Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics