Open-box machine learning for AIOps solves 'black-box' mysteries

To borrow a concept from a legendary car commercial, today’s artificial intelligence is “not your father’s” AI. Any technology that does not advance becomes irrelevant, and AI, really a collection of technologies, is more powerful and better suited to many more commercial applications today than it was 10 years ago. And it continues to develop.

One of the primary reasons businesses flock to AIOps, and specifically to machine learning (ML), is that it works in a human-like manner, learning from experience as it processes hundreds of thousands or even millions of examples of whatever it has been tasked with recognizing. The classic example is how Google trained AI to identify cats in YouTube videos without providing explicit rules for it to follow.
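
To make that learn-from-examples idea concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The synthetic data stands in for real image features; the point is that the model derives its own decision rule from labeled examples rather than from hand-written rules.

```python
# A toy supervised-learning example: the model infers its own decision
# rule from labeled examples instead of being given explicit rules.
# Synthetic features stand in for real image data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 10,000 labeled examples, each described by 20 numeric features
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "learning from experience"
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```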

As the use of AI has expanded into more use cases, though, the inability of humans to understand precisely how AI makes decisions has become problematic. If a car company doesn’t know how an AI operates an autonomous vehicle, how can the executives who are accountable for serious accidents prevent them? This lack of transparency is known today as AI’s “black box” problem.

‘Black Box’ Hampers AI’s Expansion

“At the moment, some machine learning models that underlie AI applications qualify as ‘black boxes,’” according to “What it means to open AI’s black box,” an article by two AI experts at the consulting firm PwC. “That is, humans can’t always understand exactly how a given machine learning algorithm makes decisions … To reach the point where AI helps people work better and smarter, business leaders must take steps to help people understand how AI learns.”
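
As one concrete illustration of how practitioners pry open the box (a generic technique, not the specific method PwC or DARPA describes), the sketch below trains an opaque ensemble model and then uses permutation importance to estimate which inputs actually drive its decisions.

```python
# A generic "open the box" technique: permutation importance estimates
# how much each input feature drives a trained model's predictions by
# shuffling that feature and measuring the drop in accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2_000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)  # opaque ensemble

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```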

Unlocking the black box is so vital for mission-critical uses of AI, such as next-generation weapons, that it has become a priority for the Defense Advanced Research Projects Agency (DARPA). The federal agency that pioneered AI launched the Explainable AI (XAI) program to address the issue in 2016.

“Continued advances promise to produce autonomous systems that will perceive, learn, decide and act on their own. However, the effectiveness of these systems is limited by the machine’s current inability to explain their decisions and actions to human users,” writes David Gunning, program manager for DARPA’s Information Innovation Office.

This article originally appeared on ciodive.com. To read the full article, click here.

Nastel Technologies uses machine learning to detect anomalies, behavior, and sentiment; accelerate decisions; satisfy customers; and innovate continuously. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel’s AutoPilot® for Analytics fuses:

  • Advanced predictive anomaly detection, Bayesian classification, and other machine learning algorithms (a generic sketch of anomaly detection follows this list)
  • Raw information handling and analytics speed
  • End-to-end business transaction tracking that spans technologies, tiers, and organizations
  • Intuitive, easy-to-use data visualizations and dashboards
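
For readers who want a feel for what anomaly detection on operational data looks like in code, here is a minimal, generic sketch using scikit-learn’s IsolationForest on synthetic latency metrics. It illustrates the technique in general, not Nastel’s AutoPilot implementation.

```python
# A generic anomaly-detection sketch on synthetic latency metrics.
# IsolationForest is one common choice; not Nastel's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
latency_ms = rng.normal(loc=50, scale=5, size=(1_000, 1))  # normal traffic
latency_ms[::100] += 200                                   # injected spikes

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(latency_ms)  # -1 flags an anomaly
print(f"Flagged {int((labels == -1).sum())} anomalous samples of {len(labels)}")
```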

If you would like to learn more, click here.