Why Do IoT Efforts So Often Fail? It’s Complicated (In More Ways Than One)
I met recently with the CEO of a large company who had quite a story to tell. His company had launched about 300 Internet of Things (IoT) projects. Each time, he said, they failed.
Unfortunately, his company’s experience is not unusual; it speaks to the widespread challenges enterprises face in getting IoT projects to succeed. Post-mortems cite everything from poor planning to bad execution, and all of that plays a part. In my experience, however, the primary reason IoT projects fail almost always ties back to technology.
It’s almost a cliché to note that every business is becoming a digital business. Yet digital transformation is hard, especially when it comes to IoT. We’re talking about highly complicated systems that are tough to build and deploy.
IoT Course Corrections
When it comes to technology, complexity is the ever-present enemy — even more so with IoT. As IoT systems scale up and sensors get deployed by the hundreds or thousands, they collect a vast amount of data.
Consider a smart city example, where “smart road systems” will be endowed with intelligence and sensing capabilities. The idea is that the system will collect real-time data on driving conditions and congestion, recognize an accident in real time, and notify first responders.
Response time is crucial in such a situation. The system has to have the ability to monitor a vast amount of data and filter it to detect anomalies — in this case, the accident itself. And it has to recognize it in real time — as in, instantly.
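The filtering step described above can be sketched as a streaming anomaly detector. The rolling-mean check below is a hypothetical stand-in for whatever model a production system would actually run; the point is that each reading is judged as it arrives, not after a round trip to a database:

```python
from collections import deque

def make_anomaly_detector(window=10, threshold=4.0):
    """Toy streaming detector: flag readings that deviate sharply
    from the rolling mean of the last `window` values."""
    history = deque(maxlen=window)

    def is_anomaly(value):
        if len(history) < window:
            history.append(value)
            return False  # not enough data yet to judge
        mean = sum(history) / len(history)
        # mean absolute deviation, guarded against a zero divisor
        mad = sum(abs(v - mean) for v in history) / len(history) or 1e-9
        anomalous = abs(value - mean) > threshold * mad
        history.append(value)
        return anomalous

    return is_anomaly

# Steady traffic speeds, then a sudden drop (e.g., an accident ahead)
detector = make_anomaly_detector(window=10, threshold=4.0)
readings = [60, 61, 59, 60, 62, 60, 58, 61, 60, 59, 60, 61, 12]
flags = [detector(r) for r in readings]
```

Only the final reading, the abrupt drop to 12, is flagged; the routine readings generate no events at all, which is exactly the data reduction an edge node needs.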
Here’s where too many IoT systems fail: the default approach has been to send everything to a database in the cloud. But given the mass of data being generated (IDC projects 79.4 zettabytes of IoT data by 2025), that approach is unlikely to hold up as the volume of events keeps accelerating.
If the “smart” road system can’t analyze the data in real time, it’s not so smart after all. To make good on the promise of IoT and create a workable, scalable architecture, we need to analyze events where they actually occur. That means distributing logic to multiple edge nodes to foster real-time responses.
Let’s say you have a security application in which images need to be analyzed immediately. Instead of sending all of that information to the cloud, where reviewing such a large volume of data takes time, an edge computer can analyze images as they arrive, discard the run-of-the-mill data, and instantly send alerts when it detects something of interest.
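That edge-side loop can be sketched in a few lines. Everything here is hypothetical, including the `score_image` stand-in for an on-device vision model; the structure is what matters: score locally, drop routine frames, forward only alerts.

```python
def score_image(image):
    # Placeholder for a real on-device vision model. Here each "image"
    # is a dict carrying a precomputed interest score.
    return image["motion_score"]

def process_frame(image, alert_threshold=0.8):
    """Return an alert for interesting frames; drop the rest at the edge."""
    score = score_image(image)
    if score >= alert_threshold:
        return {"alert": True, "camera": image["camera"], "score": score}
    return None  # run-of-the-mill frame: nothing is sent to the cloud

frames = [
    {"camera": "gate-1", "motion_score": 0.05},
    {"camera": "gate-1", "motion_score": 0.92},
    {"camera": "gate-2", "motion_score": 0.10},
]
alerts = [a for a in (process_frame(f) for f in frames) if a]
```

Of three incoming frames, only one crosses the threshold and leaves the edge node; the other two are discarded on the spot.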
Organizations deploying IoT also face other technical challenges.
• You need to deploy programming languages equipped with higher levels of abstraction — visual programming techniques come to mind — that make things easier to develop and manage. Otherwise, the number of different technologies needed for a project (event brokering, streaming analytics, IoT device management, and business process management tools, to name a few) will be far beyond the skill of many programmers.
• Once you build systems, they need to evolve quickly as more sensors and analytics get added to keep pace with growth. The challenge is exacerbated by the fact that IoT systems, like other real-time systems, need to be dynamic: you can’t bring them down to make changes. This requires a continuous deployment environment that can be updated and expanded without causing any downtime. Furthermore, the parts of the system should be modular and loosely coupled so you can work on a subset of them without affecting the others.
• Getting these systems to work with so many different components is not trivial. Distributed systems are just inherently hard to build. IoT vendors need to develop soup-to-nuts solutions that customers can deploy without wrestling with the underlying complexity of the application infrastructure. That means making it easy to distribute logic and application components to the edge without worrying about hardware or network communication.
• Lastly, you shouldn’t build IoT systems that delegate everything to computers; that approach is not going to work, particularly when problems crop up. You cannot program for every possible outcome because you cannot predict everything as systems get more complex. And since IoT systems work in real time, humans need to be ready to step in when unknown, unexpected or ambiguous phenomena occur.
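The modularity and zero-downtime requirements in the list above can be illustrated with a minimal publish/subscribe dispatcher. This is a sketch of the pattern, not any specific product’s API; registering handlers under a name stands in for hot deployment:

```python
class EventBus:
    """Minimal pub/sub bus: producers and consumers stay loosely coupled,
    and handlers can be added or replaced at runtime without stopping
    the event flow."""

    def __init__(self):
        self._handlers = {}

    def subscribe(self, topic, name, handler):
        # Registering under a name lets a new handler version replace
        # an old one in place, with no downtime for the bus.
        self._handlers.setdefault(topic, {})[name] = handler

    def publish(self, topic, event):
        return [h(event) for h in self._handlers.get(topic, {}).values()]

bus = EventBus()
bus.subscribe("sensor.temp", "alerting", lambda e: f"v1 saw {e}")
out_v1 = bus.publish("sensor.temp", 41)

# "Deploy" an updated handler without taking the bus down:
bus.subscribe("sensor.temp", "alerting", lambda e: f"v2 saw {e}")
out_v2 = bus.publish("sensor.temp", 41)
```

Because producers only know topics, not consumers, swapping the alerting handler changes nothing for the sensors still publishing — the loose coupling the bullet list calls for.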
Getting Real About Real Time
Beyond dealing with the technical nuts and bolts, you should embrace a philosophical shift and accelerate the transition to a real-time, event-driven architecture that makes it possible to analyze large quantities of real-time data across a distributed environment.
Covid-19 drove home the urgency of making this happen sooner rather than later. As companies deploy a variety of IoT-based measures to ensure safety as employees return to work, these solutions will rely upon instant data feedback that only comes with edge computing.
To get started with edge computing, companies should begin by assessing how they can benefit the most from real-time monitoring — e.g., monitoring for Covid-19 symptoms to safeguard the workplace, or monitoring manufacturing systems in order to reduce downtime and maintenance costs. Prioritize the biggest areas of opportunity and have a discussion with your technology partners about how to move compute power near sensors to eliminate latency.
We’re still accustomed to putting everything in a cloud database. I’m not against databases — they are appropriate when you’re storing data for later analysis. But storing to the database should be the last thing that a data-driven application does, not the first.
In contrast, operational data should be acted upon in real time. That’s a different programming style from what we practice nowadays. It’s bad enough when databases aren’t keeping up, but the situation could get worse in a few years when the number of sensors has grown exponentially. What do we do then?
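The “store last, not first” principle can be sketched as a pipeline that acts on each event in memory before anything is written. The event shapes and the dispatch rule below are hypothetical illustrations:

```python
def handle_event(event, store):
    """Act on operational data in real time; persistence happens last."""
    # 1. React immediately -- this is the time-critical step.
    action = "dispatch_responders" if event["type"] == "accident" else None

    # 2. Only after acting, append the event for later offline analysis.
    store.append({**event, "action_taken": action})
    return action

store = []
a1 = handle_event({"type": "congestion", "road": "I-95"}, store)
a2 = handle_event({"type": "accident", "road": "I-95"}, store)
```

The database write still happens, but it sits at the end of the pipeline, so response time never waits on storage throughput.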
This article originally appeared on forbes.com.
Nastel Technologies helps companies achieve flawless delivery of digital services powered by middleware. Nastel delivers Middleware Management, Monitoring, Tracking and Analytics to detect anomalies, accelerate decisions, and enable customers to constantly innovate. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel’s Navigator X fuses:
- Advanced predictive anomaly detection, Bayesian Classification and other machine learning algorithms
- Raw information handling and analytics speed
- End-to-end business transaction tracking that spans technologies, tiers, and organizations
- Intuitive, easy-to-use data visualizations and dashboards