Artificial or Human Intelligence? Companies Faking AI
Artificial intelligence is a hot topic across the board, from enterprises looking to implement AI systems to technology companies looking to provide AI-based solutions. However, the technical and data-related complexities of AI sometimes make it hard for technology companies to deliver on their promises. Rather than scaling back their AI ambitions, some companies are responding to these challenges by quietly using humans to do the tasks their AI systems are supposed to handle. This practice of humans pretending to be machines that were supposed to do the work of humans is called "pseudo-AI", or more bluntly, just faking it.
Faking it Till You Make It with AI
Most notably, a number of companies have claimed to use artificial intelligence to automate parts of their services, such as transcription, appointment scheduling, and other personal assistant work, but have in reality been outsourcing this work to humans through labor marketplaces such as Amazon Mechanical Turk. Whether artificial intelligence handles only part of the solution or none of it at all, these companies have not been truthful in claiming that computers perform these services.
CNBC published an article critical of Sophia, the AI robot from Hanson Robotics. When CNBC approached the company with a list of questions it wanted to ask, Hanson Robotics responded with a list of very specific questions to pose to Sophia, and even supplied the answers. In a follow-up video, CNBC questioned whether the robot is genuine research into artificial intelligence or just a PR stunt. Even the head of Hanson Robotics has gone on record saying that most media encounters are scripted.
However, Hanson Robotics is only one of the more notable pseudo-AI encounters to make the news. The popular calendar scheduling services X.AI and Clara Labs were both found to be using humans, rather than purely artificial intelligence, to schedule appointments and calendar items. Reporting on these services quotes human workers who wished the AI system actually worked as promised, because the work was such boring drudgery. While these companies were the unfortunate ones to attract unwanted media attention, there are no doubt many others using humans as a stopgap where their AI systems fall short.
What’s wrong with Pseudo-AI?
It isn't unheard of for companies in the tech industry to fake some or all of their services, especially when they are starting out. While this may work in some areas of tech, does it work for AI? Some would say no.
The entire premise of AI is that it can accomplish feats that previously only humans could. Faking AI capabilities therefore undermines the very essence of what AI promises to do. AI is still an emerging field, and new artificial intelligence technology is being developed every day. Emerging companies are pitching technology solutions on the premise that they can accomplish technically challenging tasks. But rather than delivering on these promises, these companies are simply performing technology-enabled outsourcing to humans, often at very low wages.
Incidents like this have the potential to create serious problems for the tech industry. One of the biggest is the risk of another AI winter. AI winters, periods of declining interest and funding in AI, have previously been triggered by substantial overpromising and underdelivering of AI capabilities. If people regularly see products being faked, they will be less willing to invest in AI technology, and widespread use of pseudo-AI to cover up AI deficiencies could lead to disenchantment with AI as a whole and contribute to a broad pullback.
Another big issue with Pseudo-AI approaches is the potential for breaches of privacy and confidentiality. A computer that processes information in isolation can safeguard data to various extents, but putting random humans in the loop is a recipe for potential data privacy breaches.
For example, AI solutions that process information in regulated industries such as healthcare, finance, or government can be compromised by humans who are not authorized to view confidential or private information. For instance, the Health Insurance Portability and Accountability Act (HIPAA) was enacted in the United States to help ensure patient privacy. If a company lies about using computers to schedule appointments that contain private patient data, it isn't clear whether it can uphold the standards HIPAA sets for patient information privacy and security. This could land companies employing pseudo-AI approaches in a lot of hot water.
Beyond these issues, we also need to consider the basic ethics of faking artificial intelligence. If a company doesn't disclose its use of humans, there is a major ethical issue at hand: is it okay to lie to your customers, and to the public in general?
Is Pseudo-AI the Exception in the AI Industry or an Increasing Reality?
While some companies are implementing pseudo-AI approaches, plenty of others really are using artificial intelligence across a range of implementation patterns. Many of them recognize and admit that advances are still needed before their technology is perfected. Indeed, there is a growing number of real-world use cases and case studies from actual implementations of machine learning across an increasing range of applications.
If you are evaluating third-party solutions that claim AI capabilities, maintain an optimistic but skeptical perspective. One part of your due diligence should be determining whether the vendor has humans in the loop doing work the AI system is supposed to do. Ask the solution provider whether a human ever views the information, even just to validate it. When signing contracts, you can also have the vendor attest that the service is run solely by computers.
One question many people arrive at when they learn about faked AI services is: have we hit a wall in artificial intelligence development? At this point, we don't completely know. But some companies, such as Facebook, have failed at AI-based programs and instead leveraged humans masquerading as machines, which shows that even the big names run into problems developing artificial intelligence.
This article originally appeared on forbes.com.
Nastel Technologies uses machine learning to detect anomalies, behavior, and sentiment; accelerate decisions; satisfy customers; and innovate continuously. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel’s AutoPilot® for Analytics fuses:
- Advanced predictive anomaly detection, Bayesian classification, and other machine learning algorithms
- Raw information handling and analytics speed
- End-to-end business transaction tracking that spans technologies, tiers, and organizations
- Intuitive, easy-to-use data visualizations and dashboards