Human Borgs: How Artificial Intelligence Can Kill Creativity And Make Us Dumber
For decades, scientists and tech visionaries have envisioned a day when computers become so powerful that they surpass human intelligence. There is no shortage of science fiction stories and movies about robot uprisings. We are still very far from that scary scenario, but at the same time artificial intelligence (AI) is no longer sci-fi. AI applications abound in business today, and AI is even being used in creative professions.
New behavioral experiments by Alok Gupta of the University of Minnesota and Andreas Fügener, Jörn Grahl, and Wolfgang Ketter of the University of Cologne in Germany offer a cautionary tale for current AI applications. The research, published in late 2021, uncovers the risks and consequences of over-reliance on AI in business and creative decisions, along with potential solutions. To illustrate the novelty of their findings, let me use scenarios from media and entertainment, where creativity and innovation are critical and where losing unique human knowledge could have negative consequences.
Gupta and his colleagues studied how humans and AI collaborate and complement each other to make decisions. They developed experiments with a simple image classification task (identifying the breed of a dog) to see whether and how AI-supported decision making improved task performance. Their first major finding is that humans are not very good at knowing when they should delegate decisions to AI. As a consequence, they can end up relying on an AI tool even when it recommends the wrong path. When making such mistakes, a team of humans that uses an AI tool can perform worse than a team that doesn’t (even if an individual team member benefits from AI assistance).
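This team-level effect can be illustrated with a toy simulation. The accuracy numbers below are hypothetical and the setup is a deliberate simplification, not the researchers' actual experimental design: each "human" answers a classification question correctly with some probability, an "AI" is a bit more accurate than any individual, and a team aggregates by majority vote. If every team member simply copies the AI, the team's diversity collapses and the majority vote is just the AI's answer; an independent, diverse team can do better.

```python
import random

random.seed(42)

HUMAN_ACC = 0.70   # hypothetical accuracy of one team member
AI_ACC = 0.75      # hypothetical AI accuracy (better than any individual)
TEAM_SIZE = 5
TRIALS = 10_000

def correct(p):
    """One classification attempt that succeeds with probability p."""
    return random.random() < p

def independent_team():
    # Each member decides independently; the team answers by majority vote.
    votes = sum(correct(HUMAN_ACC) for _ in range(TEAM_SIZE))
    return votes > TEAM_SIZE // 2

def ai_following_team():
    # Every member copies the AI, so the majority vote is just the AI's answer.
    return correct(AI_ACC)

ind = sum(independent_team() for _ in range(TRIALS)) / TRIALS
ai = sum(ai_following_team() for _ in range(TRIALS)) / TRIALS
print(f"independent team: {ind:.3f}, AI-following team: {ai:.3f}")
```

Under these assumed numbers, each individual would indeed do better by following the AI (0.75 vs. 0.70), yet the independent team's majority vote ends up more accurate than the AI-following team, roughly mirroring the paper's point that uniform deference to AI can hurt collective performance even when it helps each individual.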
In creative professions, this human flaw can have important implications. AI is being used by content creators and marketers to produce media and entertainment content, and to make important creative decisions. For example, AI is being used to create news stories, to produce personalized ads, and to make film production and green-lighting decisions.
Over-reliance on AI for these content-creation decisions may hurt the bottom line, as consumers start to defect because of the potential monotony of the content. Gupta and his colleagues showed experimentally that a viable way to avoid suboptimal AI-based decisions is to educate humans about the limitations of AI, so that they can decide when to ask for AI assistance and when to make decisions for themselves. Gupta states, "Using AI for tasks it can do more efficiently is not bad, but over-reliance on AI advice can lead to bad decisions that just get reinforced over time."
This new research also shows that there can be long-term consequences of over-reliance on AI, which could lead to a sci-fi doomsday scenario, not so much because computers become more intelligent, but because humans become 'dumber' by losing their unique knowledge.
Collective intelligence emerges in humans and society when diverse minds with access to different data sources come together to solve problems, a phenomenon also known as the wisdom of crowds. Gupta and his colleagues show that over-reliance on AI can decrease this diversity of thinking, leading to suboptimal collective performance. Gupta adds: "Essentially humans start mimicking AI and stop taxing their own brains, therefore they all act smart similarly like borgs."
A good example is over-reliance by social media platforms on AI engines to power news feeds. If the AI algorithm converges to certain types of personalized content for a group of individuals, it can create an echo chamber within this group. Group members, in turn, can become content with a consistent, self-indulging, AI-filtered message, which is reinforced by peers in the social circle. Oh well, isn't this already happening in some circles?
Those who rely too much on news from social media platforms, which in turn rely too much on AI tools, can slowly become borgs, subject to the echo chambers of AI-enabled news feeds where diversity of thought is gradually lost. As different groups separate in their collective thinking, they cannot appreciate different perspectives, and at one extreme, they live in alternative realities.
The overuse of AI can turn humans into borgs in the long run. For media and entertainment firms, it can start with suboptimal content-creation decisions that then have adverse social outcomes. The solution, according to this new research, doesn't seem to be that difficult, and you can already be a part of it: share this fresh cautionary tale that AI has limitations and that overusing it can kill creativity and diversity of thought.
This article originally appeared on forbes.com; to read the full article, click here.
Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure by delivering Middleware Management, Monitoring, Tracking, and Analytics that detect anomalies, accelerate decisions, and enable customers to constantly innovate, answer business-centric questions, and provide actionable guidance for decision-makers. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT, DataPower, MFT, and many more.
The Nastel i2M Platform provides:
- Secure self-service configuration management with auditing for governance & compliance
- Message management for Application Development, Test, & Support
- Real-time performance monitoring, alerting, and remediation
- Business transaction tracking and IT message tracing
- AIOps and APM
- Automation for CI/CD DevOps
- Analytics for root cause analysis & Management Information (MI)
- Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics