As innovation in artificial intelligence (AI) accelerates, more unexpectedly harmful uses, like the widely reported OpenAI and Amazon cases, keep emerging. Each one is a good reminder to consider the ethics and implications of artificial intelligence and machine learning (ML).
First, it’s important to note that artificial intelligence does not, on its own, have intentions (good or bad) or the will to misbehave. Even the most sentient-seeming AI lacks intention or motivation; it is simply acting on the data it is given. When ethical questions arise from AI, the responsibility ultimately lies with the humans who build it, and it is their job to work within an ethically sound framework.
With that premise in mind, how can we, as humans, create capable and effective artificial intelligence that works ethically?
1. Identify Potential Pitfalls
An important part of developing AI within an ethical framework is fully understanding the implications and potential pitfalls of a given technology. Take the example of OpenAI, which develops AI for natural language processing, a seemingly innocuous use case. The company made international headlines when its model proved capable of generating text “deep fakes”: wholly fabricated stories that read like genuine news. Ultimately, OpenAI chose to release only a reduced version of its model.
I recently faced a similar question at my own company, which uses AI to optimize pricing. A recent study suggested that AI-based pricing algorithms might collude and artificially raise prices at the expense of consumers. I believe it is a core responsibility of AI creators to anticipate the ways their technology might be misused, identify the pitfalls, and design around them. In my company’s case, we decided on a hybrid system that gives us more control.
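The article doesn’t describe the hybrid system’s internals, but one common pattern for keeping control over an ML pricer is to let the model propose a price and then clamp the proposal with hand-set business rules. The sketch below is purely illustrative; the function name, thresholds, and rules are assumptions, not Nastel’s actual implementation.

```python
# Hypothetical sketch of a "hybrid" pricing guardrail: an ML model proposes
# a price, but hard-coded business rules bound how far it can actually move.
# Names and thresholds are illustrative only.

def guarded_price(model_price: float, current_price: float,
                  max_increase_pct: float = 0.05,
                  floor: float = 1.0) -> float:
    """Clamp an ML-suggested price to at most a 5% increase per update
    and never below a fixed floor, limiting runaway pricing spirals."""
    ceiling = current_price * (1 + max_increase_pct)
    return max(floor, min(model_price, ceiling))

# The model wants 12.00, but the rule caps the jump from 10.00 at 10.50.
print(guarded_price(model_price=12.0, current_price=10.0))  # 10.5
# A modest decrease passes through unchanged.
print(guarded_price(model_price=9.5, current_price=10.0))   # 9.5
```

The design point is that the rule layer, not the model, has the final say, so a collusive feedback loop between competing pricing algorithms cannot push prices up faster than a human-approved rate.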
2. Understand Human Biases
As I stated earlier, artificial intelligence is unaware of its “unethical” behavior because, in fact, it has no awareness at all. It simply learns from, and acts on, the dataset it is given. Inevitably, artificial intelligence will take on the biases of whichever datasets it trains on. To keep its behavior within your ethics, you have to give it explicit rules and constraints to follow.
For instance, if you train on data from lenders that have historically denied loans to minorities, your AI will inevitably be biased against minority applicants. Had lenders approved more of those loans, the AI might have learned something different; without that chance, it cannot transcend the human decisions that shaped its training set. Unlike human biases, however, AI biases can be measured directly and are significantly easier to correct.
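The claim that AI biases can be measured directly can be made concrete with a minimal audit: before training on historical lending data, compare approval rates across groups. The records below are made up for illustration, and a real audit would use fuller fairness metrics than a single rate gap.

```python
# Illustrative bias check: compute the approval rate per group in
# historical lending data before it is used to train a model.
# The records are fabricated example data.

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Return {group: fraction of applications approved}."""
    totals, approved = {}, {}
    for r in rows:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if r["approved"] else 0)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(records)
print(rates)  # group A approved ~67% of the time, group B ~33%: a gap worth investigating
```

A model trained on this data would simply reproduce the gap; surfacing it up front is what makes the bias identifiable, and therefore correctable, in a way human bias rarely is.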
This article originally appeared on forbes.com.
Nastel Technologies uses machine learning to detect anomalies, behavior, and sentiment; accelerate decisions; satisfy customers; and innovate continuously. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel’s AutoPilot® for Analytics fuses:
- Advanced predictive anomaly detection, Bayesian classification, and other machine learning algorithms
- High-speed handling and analysis of raw information
- End-to-end business transaction tracking that spans technologies, tiers, and organizations
- Intuitive, easy-to-use data visualizations and dashboards