With A Few Glitches, Cloud Computing Should Cope With COVID-19
The COVID-19 (Coronavirus) outbreak has become a global pandemic, with cities in lockdown and national governments placed in crisis mode. As workers in many industries face the challenge of working from home, how will our IT frameworks adapt to the ‘new shape of data flows’ being created? Reports already suggest that Microsoft Teams has suffered “messaging-related functionality problems” in Europe as a result of the increased workload on the online collaboration application’s backend, so how will the cloud backbone hold up under increased pressure going forward?
Our always-on mobile-centric increasingly cloud-native existence has created a world where access to data services is fundamentally important to keeping business moving. While panic buying in supermarkets continues (at the time of writing) in many world cities, we need to consider whether a commensurate level of ‘panic provisioning’ has been going on inside the cloud datacenters that provide us with our central data infrastructure.
If there is a new focus on cloud datacenter provisioning, it must be driven in two core streams: one technical… and one human.
At the technical level, datacenter provisioning means preparing the ‘server real estate’ base to make sure we have enough processing power, enough memory and storage, enough connectivity (connection gateways wide enough to cope with the input/output of data) and enough ancillary services, such as access to big data analytics engines, to cope with the demand generated by users and, increasingly today, by intelligent machines.
Sometimes this means moving data around to clear the way for anticipated data spikes, sometimes this means putting some data and applications in locations where they can be more efficiently and cost effectively delivered… and, ultimately, sometimes this means purchasing new server units to build a bigger datacenter.
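To make the provisioning arithmetic concrete, here is a minimal sketch of the kind of headroom calculation that sits behind a ‘buy more servers’ decision. All the figures (requests per second, per-server capacity, 30% headroom) are invented for the example, not drawn from any real provider:

```python
import math

def servers_needed(peak_rps: int, capacity_per_server: int,
                   headroom_pct: int = 30) -> int:
    """Servers required to carry peak_rps with headroom_pct spare capacity.

    The load is scaled up by the headroom percentage using integer
    arithmetic, then divided by per-server capacity and rounded up.
    """
    load_units = peak_rps * (100 + headroom_pct)
    return math.ceil(load_units / (capacity_per_server * 100))

# A hypothetical cluster: 40,000 req/s on a normal day, doubling
# when a workforce suddenly goes remote.
normal = servers_needed(40_000, capacity_per_server=500)  # 104 servers
spike = servers_needed(80_000, capacity_per_server=500)   # 208 servers
extra_to_provision = spike - normal                       # 104 more servers
```

The point of the sketch is simply that a doubled data spike, plus the headroom operators keep for safety, translates directly into physical server units that someone has to rack, which is why provisioning has a human stream as well as a technical one.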
At the human level, datacenter provisioning involves planning procedures to make sure the people who work at the facility can still do their jobs effectively. Some datacenter specialists, including Interxion, have unveiled ‘sleeping pods’ and other types of living accommodation in their facilities, but with this outbreak predicted to last months, how workable that technique is over the medium to long term is questionable.
If people who can work remotely start to do so en masse, the net effect on the world’s IT network is arguably comparatively minimal. After all, we are simply displacing data traffic from office locations to homes and elsewhere. The European functionality outages experienced in Microsoft Teams were quickly fixed; they were more likely due to data load bottlenecks than to any deeper architectural flaw in the application itself.
Undeniably, though, more connectivity will be required so that more data can be exchanged. This means Internet Service Providers (ISPs) may have to deal with a few provisioning headaches of their own. Other connectivity and collaboration applications (such as WhatsApp, Skype, Slack, Zoom, WebEx etc.) may also take a pounding, but their core data streams are shouldered by the cloud, not by the app itself, so the same argument should hold water. It is a data spike, but a different shape of spike happening in a different place.
This does not mean that everything automatically still works as it should: more data throughput from more users in more locations, across more applications connecting to more databases, is generally agreed to be a greater security risk. You can’t just turn the cloud up to volume 11 without risking a little distortion, noise vibration and possibly some temporary deafness.
On the cloud capacity COVID-19 question, Amazon Web Services (AWS) recently put out a statement saying that it is confident it can meet customer demand for capacity in response to COVID-19. Despite this, other reports note that Amazon’s Prime infrastructure (on its supply chain and delivery side, if not in other areas) is not without extra strain as a result of the virus outbreak.
Eric Troyer is CMO of Australia-headquartered Network-as-a-Service (NaaS) provider Megaport. Troyer notes that many datacenter operators do in fact employ technicians and site operations personnel with military service backgrounds.
“They do this precisely because of the rigid operational discipline these people learnt while in service. Anecdotally, during the H1N1 outbreak, several datacenter operators maintained a round-the-clock presence of dedicated personnel ‘living’ in key locations. This cut down on the amount of outside vectors that could come into the facility. Throughout that period, hygiene within those locations was the priority and many measures were taken to ensure the team were kept healthy and limited their ability to spread pathogens,” said Troyer.
Troyer agrees that the use of collaboration tools like Slack and Teams will certainly put residential broadband providers to the test as more businesses require their employees to work from home. Additionally, he suggests that the Coronavirus pandemic is currently a big driver for cloud infrastructure and platform technologies (IaaS and PaaS) that support business-specific applications running within public cloud environments that staff will access from home. Megaport reports having had ‘many conversations’ with customers in the last few weeks on how to architect and scale out capacity to meet those demands.
UK managing director for Interxion Andrew Fray has said that his firm is preparing to move through the ‘phases’ of COVID-19 mitigation. This means the movement of admin staff to work remotely and making provision for keeping facilities fully operational under crisis conditions. He notes that datacenter operators are following local government guidelines in each jurisdiction, while, wherever possible, giving multi-country customers a consistent view of their procedures.
Even the cloudiest clouds
“While many customers have the ability to manage their workloads remotely, datacenters are nevertheless physical entities and even the ‘cloudiest clouds’ require servers to be rebooted and cables patched, by a human being. So it is worth recalling why hybrid cloud computing is so compelling; it represents the ability and choice to increase and decrease (‘spin up’ and ‘spin down’) data processing and storage in a flexible on-demand manner. As the crisis deepens, the challenge to the industry is whether this flexibility is delivered when and where it is needed,” said Fray.
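Fray’s ‘spin up’ and ‘spin down’ flexibility can be sketched as a toy autoscaling rule: watch utilization, add capacity when it runs hot, release it when demand falls back. The thresholds and workloads below are invented for illustration and do not represent any vendor’s actual autoscaler:

```python
import math

def autoscale(instances: int, load: float,
              high: float = 0.75, low: float = 0.40) -> int:
    """Return a new instance count for a given total load.

    `load` is expressed in instance-units of work. If average
    utilization climbs above `high`, spin up enough instances to
    bring it back under that ceiling; if it drops below `low`,
    spin down, but keep enough instances that the shrunken pool
    still sits under the `high` ceiling.
    """
    util = load / instances
    if util > high:                              # overloaded: spin up
        return math.ceil(load / high)
    if util < low:                               # idle: spin down
        return max(1, math.ceil(load / high))
    return instances                             # within band: no change

# A remote-work surge: 10 instances, load jumps from 5 to 12 units.
surged = autoscale(10, 12.0)    # scales up to 16 instances
# Demand subsides overnight: load falls to 3 units.
relaxed = autoscale(surged, 3.0)  # scales back down to 4 instances
```

The design choice worth noting is that both the scale-up and scale-down paths target the same ceiling, so the system never shrinks itself straight back into an overloaded state, which is exactly the ‘flexibility delivered when and where it is needed’ that Fray describes.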
As any country attempts to move towards becoming a remote working nation, Fray advises that organizations may still have to re-evaluate how they’ve designed their network and applications.
“As many more thousands or even millions of remote workers try to connect from unfamiliar locations, it is inevitable that there will be some communication pinch points. When your entire workforce is geographically remote, the network architecture and cloud architecture need to be able to cope with a diverse workload [by using Software-Defined Networking (SDN) technologies] so that you are not hitting the same entry point and negatively impacting performance,” added Interxion’s Fray.
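One simple way to avoid everyone ‘hitting the same entry point’ is to deterministically spread users across several regional gateways, so load distributes evenly and each user keeps landing on the same one. The sketch below illustrates the idea with a hash-based assignment; the gateway names are hypothetical and not taken from any real deployment:

```python
import hashlib

# Hypothetical regional entry points for a distributed workforce.
GATEWAYS = ["gw-eu-west", "gw-eu-north", "gw-us-east", "gw-ap-south"]

def pick_gateway(user_id: str, gateways=GATEWAYS) -> str:
    """Map a user to one gateway, deterministically.

    Hashing the user ID means the assignment is stable across
    sessions (the same worker always gets the same entry point)
    while the workforce as a whole spreads across all gateways.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return gateways[int(digest, 16) % len(gateways)]
```

In a real SDN deployment this decision would live in the network controller rather than client code, and would typically also weigh geography and current gateway load, but the principle is the same: no single entry point carries the whole remote workforce.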
As the peak of the Coronavirus pandemic approaches, will we be able to fall back on all the advances in autonomous computing and Artificial Intelligence (AI) that populated so many headlines prior to the outbreak? Back in September 2019 we noted that Oracle was building layers of IT autonomy into its database to reduce human error. Can’t the systems just get on with it by themselves now and allow us to stay at home, self-isolate and drink lots of fluids?
“The beauty of today’s fully-managed cloud database is that it can be deployed and managed from anywhere with very little intervention needed by the end user. Happily, this is all by design i.e. some fully-managed cloud databases are fault-tolerant, auto-updating, self-healing, elastic-scaling and provide automated proactive management, which benefits end-users immensely as it frees them from the operational chore and costs of running and maintaining their database infrastructure,” said Jeff Morris, VP product and solutions marketing at open source document-oriented database company Couchbase.
Can some good come from bad?
Although this discussion is clearly meant to concentrate on the data backbone impact of Coronavirus, there is an argument that it could be part of the drive that takes us towards a more cloud-first, always-virtualized world of computing. There may arguably be some good in that push: cloud evangelists would argue that we need to ‘let go’ and regard the keyboard as nothing more than a conduit to the deeper IT services that lie within the cloud itself.
“I believe any time you have this type of scenario, be it a pandemic, 9/11, or a massive natural disaster, business priorities take on a new focus. One will certainly be about business continuity as people focus on enabling remote work from anywhere. This experience will be a stronger accelerant to a cloud-first world and a SaaS-first world that will put further pressure on the traditional datacenter world [as it] becomes part of companies’ architectural postures,” said Patrick Harr, CEO of Panzura, a specialist in collaborative file and data management.
This article originally appeared on forbes.com.