Successful DevOps practices generate large amounts of data, and that data can be put to work: streamlining workflows and orchestration, monitoring systems in production, and diagnosing faults and other issues.
The problem: Too much data. Server logs alone can consume several hundred megabytes a week, and if the team also runs a monitoring tool, it can generate megabytes or even gigabytes more in a short period of time.
And too much data has a predictable result: Teams don’t look directly at the data; instead, they set thresholds at which a particular level of activity is deemed problematic. In other words, even mature DevOps teams are looking for exceptions rather than diving deeply into the data they’ve collected.
That shouldn’t be a surprise. Even with modern analytic tools, you have to know what you’re looking for before you can start to make sense of the data. Here are 10 ways to put that data to better use:
1. Stop looking at thresholds and start analyzing your data
2. Look for trends rather than faults
3. Analyze and correlate across data sets when appropriate
4. Look at your development metrics in a new way
5. Provide a historical context for data
6. Get to the root cause
7. Correlate across different monitoring tools
8. Determine the efficiency of orchestration
9. Predict a fault at a defined point in time
10. Help to optimize a specific metric or goal
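The first two items above, trading fixed thresholds for trend analysis, can be sketched in a few lines. This is a minimal illustration using synthetic latency numbers and hypothetical function names, not any monitoring tool’s actual API:

```python
# Hypothetical illustration: a fixed-threshold alert vs. a simple trend
# check on a series of response-time samples (synthetic data).

def threshold_alert(samples, limit):
    """Fire only when a sample has already crossed the limit."""
    return any(s > limit for s in samples)

def trend_alert(samples, min_slope):
    """Fire when a least-squares fit shows a sustained upward trend."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var > min_slope

# Response times creeping upward but still well below a 500 ms threshold.
latency_ms = [210, 225, 248, 262, 281, 305, 330, 356]

print(threshold_alert(latency_ms, 500))  # False: the threshold sees nothing yet
print(trend_alert(latency_ms, 5.0))      # True: the upward trend is already clear
```

The point of the sketch: a threshold only reports a problem after it has happened, while even a crude trend fit surfaces the degradation while there is still time to act.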
This article originally appeared on techbeacon.com, where the full version is available.
Nastel provides multiple machine learning methodologies that learn and improve their analysis over time without requiring hand-written rules. These methodologies include:
- Anomaly Detection
- Bayesian Conditional Probability
- Graph Analysis
- Root Cause Analysis
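As a rough illustration of how one of these methodologies, Bayesian conditional probability, applies to monitoring data, the sketch below uses Bayes’ theorem to estimate how likely a fault is given that an alert has fired. The probabilities are invented for illustration and are not drawn from Nastel’s products:

```python
# Hypothetical sketch: Bayes' theorem applied to a monitoring alert.
# P(fault | alert) = P(alert | fault) * P(fault) / P(alert)

def posterior(prior, likelihood, evidence):
    """Return the posterior probability via Bayes' theorem."""
    return likelihood * prior / evidence

# Assumed figures: faults occur in 2% of intervals; when a fault occurs,
# the alert fires 90% of the time; overall, the alert fires in 10% of
# intervals (faults plus false alarms).
p_fault = 0.02
p_alert_given_fault = 0.90
p_alert = 0.10

p_fault_given_alert = posterior(p_fault, p_alert_given_fault, p_alert)
print(round(p_fault_given_alert, 2))  # 0.18
```

Even with these generous assumptions, only about 18% of alerts correspond to real faults, which is exactly why correlating evidence across data sets, rather than reacting to single alerts, pays off.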