Machine Learning Tips for Companies That Don’t Want to Upset or Annoy Their Employees: Eye on A.I.

Although many companies talk about artificial intelligence, it’s likely that the majority of their employees aren’t actually using machine-learning technologies in the workplace.

One big reason is that while executives may be excited about A.I., employees may feel threatened or even insulted that managers would force them to use tools they fear will one day replace them.

As FedEx senior data scientist Clayton Clouse said during an A.I. conference in San Francisco last week, “We shouldn’t expect that people will jump up and down and be excited when we say, ‘Hey, we’re going to be augmenting your job with A.I.’”

Citing a survey about A.I. from McKinsey, Clouse said that while the majority of companies polled by the consulting firm said they were implementing A.I. either in their business or through pilot projects, “only 6% reported that their employees were actually using the systems the way they should be used.”

The employees, it turns out, are skeptical about A.I., especially machine-learning tools intended to automate decision-making in some way, Clouse said. If workers don’t trust the A.I. tools to do as good a job as they do, they simply aren’t going to use them, he explained.

To get employees to trust A.I. tools, Clouse said that companies must carefully debut their A.I. projects in multiple stages and communicate to workers just how the products are intended to help. During an A.I. product’s testing phase, or beta test, managers should choose employees who are excited about using the tools, as opposed to randomly selecting people who may resent having to attend more meetings than necessary.

Companies also can’t simply rely on their “data nerds” to help test the A.I. products, Clouse said. They need a handful of “general users” who can “speak the same language” as the rest of their colleagues who lack technical pedigrees. Once the beta testing phase is over, companies should hold small workshops so that workers understand how the tools work—and their limitations.

It should be noted that machine-learning tools often make their predictions based on so-called “confidence scores.” In a machine-learning-powered cybersecurity tool, for example, such a score expresses how likely the tool believes it is that an anomaly in a corporate network is a legitimate threat worth investigating.

Managers need to tell workers about the confidence-score settings of their machine-learning tools, so that employees don’t get caught off guard, Clouse explained. For instance, a member of a corporate cybersecurity team may be less likely to get annoyed with a machine-learning-powered tool that’s set to flag as many anomalies as possible as security threats if they realize that management agreed on that setting.
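To make Clouse’s point concrete, here is a minimal, hypothetical sketch of how such a confidence-score setting might work; the Anomaly fields, the flag_threats function, and the threshold values are invented for this illustration and are not taken from any specific product.

```python
# Hypothetical sketch: how a confidence-score threshold might decide which
# anomalies a machine-learning security tool surfaces to analysts.
# All names and values are illustrative, not from any particular product.

from dataclasses import dataclass
from typing import List


@dataclass
class Anomaly:
    description: str
    confidence: float  # model's estimate (0.0 to 1.0) that this is a real threat


def flag_threats(anomalies: List[Anomaly], threshold: float = 0.5) -> List[Anomaly]:
    """Return only the anomalies whose confidence meets the agreed-upon threshold.

    A low threshold flags as many anomalies as possible (more alerts for the
    security team); a high threshold flags only high-confidence alerts.
    """
    return [a for a in anomalies if a.confidence >= threshold]


if __name__ == "__main__":
    alerts = [
        Anomaly("unusual login location", 0.35),
        Anomaly("large off-hours data transfer", 0.92),
        Anomaly("new device on the network", 0.15),
    ]
    # With an aggressive setting, most anomalies get flagged for investigation.
    for alert in flag_threats(alerts, threshold=0.2):
        print(f"Investigate: {alert.description} (confidence {alert.confidence:.2f})")
```

In Clouse’s example, an analyst wading through a long list of low-confidence alerts is less likely to blame the tool if they know that an aggressive threshold like this was a setting management deliberately agreed on.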

Ultimately, the goal for management is to introduce an A.I. tool into the workplace that employees will actually want to use. Companies should not automatically assume that workers will want those A.I. products, and they need to think hard about how they debut them, or risk the consequences.

This article originally appeared on fortune.com. To read the full article and see the images, click here.

Nastel Technologies uses machine learning to detect anomalies, behavior, and sentiment; accelerate decisions; satisfy customers; and innovate continuously. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel’s AutoPilot® for Analytics fuses:

  • Advanced predictive anomaly detection, Bayesian classification, and other machine learning algorithms (see the sketch after this list)
  • Raw information handling and analytics speed
  • End-to-end business transaction tracking that spans technologies, tiers, and organizations
  • Intuitive, easy-to-use data visualizations and dashboards
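
As a rough, hypothetical illustration of the first item above, the sketch below uses a naive Bayes classifier from scikit-learn to score made-up operational metrics as normal or anomalous; it is a generic example of Bayesian classification and does not reflect how AutoPilot® for Analytics is actually implemented.

```python
# Hypothetical sketch of Bayesian classification applied to anomaly detection.
# The features, data, and labels are made up for illustration only.

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy training data: [response_time_ms, messages_per_sec] with labels
# 0 = normal behavior, 1 = anomalous behavior.
X_train = np.array([
    [120, 500], [110, 480], [130, 520], [125, 510],   # normal
    [900, 50],  [850, 40],  [950, 60],  [1000, 30],   # anomalous
])
y_train = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = GaussianNB()
model.fit(X_train, y_train)

# Score new observations; predict_proba returns a confidence for each class.
X_new = np.array([[115, 495], [920, 45]])
for features, proba in zip(X_new, model.predict_proba(X_new)):
    print(f"features={features.tolist()}  anomaly confidence={proba[1]:.2f}")
```

A real system would learn from streaming operational data and far richer features; the point here is only that a classifier of this kind attaches a confidence score to each observation, which ties back to the threshold settings discussed earlier in the article.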

If you would like to learn more, click here.