The Internet of Things (IoT) is only going to get bigger. Nobody yet knows quite when or where the world of smart cities and intelligent refrigerators will reach critical mass, probably because we are still on the steep part of the exponential growth curve that describes development in this space. But given the momentum in IoT-rich computing topologies, now is an opportune time to consider how we will run, support and underpin the edge computing that happens on these devices. Why is this so?
Let’s start with a simple clarification that still befuddles many. The IoT is made up of the devices that exist in remote (and some not-so-remote) locations around our planet: sensors, gyroscopes, cameras, gauges and all manner of connected electronics create the IoT. The computing processes that happen on those devices are what give us edge computing. Many people still use the two terms interchangeably, and that is not always helpful.
A swinging pendulum
With the move to put more computing out in edge environments, there is a concerted migration away from enterprises relying on a single centralised cloud hyperscaler for all their computing needs. Anay Nawathe, principal consultant at research and advisory firm ISG, has called this drift a ‘swinging pendulum’: enterprises now operate at a more varied scale, in more locations and – crucially – in a position where they need an expanded variety of cloud services from more than one Cloud Services Provider (CSP).
Again we need to ask a straightforward question here: why? Organisations naturally need services from a wider range of datacentres depending on where their edge estate is actually located. Factors like latency, service differentiation and local data compliance regulations all come into play, as does plain convenience. But running multiple cloud instances to serve edge requirements means multiple infrastructures, multiple configurations, multiple workloads and – typically – multiple systems management headaches.
Having been direct from the start of this discussion, let’s continue in that vein: the most prudent, productive and efficient way to achieve mixed-cloud management for edge scenarios is to adopt a hyperconverged infrastructure (HCI) platform. Because HCI brings together a ‘family’ of technologies – server computing, storage and networking – in a single software-defined platform, its suitability for today’s increasingly complex and resource-intensive edge environments cannot be overstated.
Elements of edge
Before we come back to the underpinning infrastructure, as we always must, let’s think about the way edge computing is diversifying and differentiating its workloads. Some edge deployments will be essentially autonomous, with very limited connection to the datacentre; others will connect more frequently to the mothership and to other devices. And although a degree of edge computing will remain quite rudimentary, an increasing proportion now handles business-critical workloads at the edge.
Extending this thought down to the data layer itself, we know that some instances of edge computing will need to process and handle sensitive or regulated data. With the IT management burden of governance, compliance and security now extending outside the walls of the company headquarters, edge computing demands a more holistic and hybrid approach to digital security. These factors are a key validation point for what we can call a universal cloud operating model: an approach designed to help manage applications and data across environments, from multiple public cloud instances to on-premises servers, hosted datacentres and edge endpoints.
Single control plane
Edge computing across the IoT is now a first-class citizen in an enterprise organisation’s total application and data services fabric, so we need to treat it that way, ranking it alongside all other workflows and data streams.
If, for example, a supermarket, fashion store or garage operates across multiple locations (and most do), it might be running edge computing workloads to support security cameras, a footfall measurement system or Point of Sale devices. Having a single control plane capable of managing all those different deployment points becomes essential.
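To make the single-control-plane idea concrete, here is a minimal, purely illustrative Python sketch. The names `ControlPlane` and `EdgeSite` are invented for this example and do not refer to any real vendor API; the point is simply that one management operation fans out a desired state to every registered location, instead of each site being configured through its own console.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: EdgeSite and ControlPlane are invented
# names for this sketch, not a real product's API.

@dataclass
class EdgeSite:
    """One edge location, e.g. a store running cameras and POS devices."""
    name: str
    workloads: list = field(default_factory=list)
    config_version: str = "unset"

class ControlPlane:
    """A single management point that pushes desired state to every site."""
    def __init__(self):
        self.sites = []

    def register(self, site: EdgeSite):
        self.sites.append(site)

    def apply(self, config_version: str, workloads: list):
        # One operation updates every registered location, replacing
        # per-site consoles and ad hoc, island-by-island configuration.
        for site in self.sites:
            site.config_version = config_version
            site.workloads = list(workloads)

cp = ControlPlane()
for name in ["store-01", "store-02", "garage-03"]:
    cp.register(EdgeSite(name))

# Roll out one configuration to all locations in a single call.
cp.apply("v2.1", ["cctv-analytics", "footfall", "pos"])
print(all(s.config_version == "v2.1" for s in cp.sites))  # True
```

The design choice the sketch highlights is that desired state lives in one place and is declared once; the alternative, managing each site individually, is exactly the "islands of infrastructure" problem described below.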
If those same organisations instead take a piecemeal approach to edge management – unsystematic, partial measures accumulated over time – the resulting IT system typically features islands of infrastructure. Across these islands, servers, storage repositories, networking connectivity and functions such as backup are each provided by different service providers. Ultimately, any organisation working at this level will need to consolidate infrastructure and application management onto a single platform. By eliminating multiple management consoles through an HCI-based approach, these businesses can achieve simplicity, flexibility and rapid scalability.
The view from Nutanix
If I may present some views that stem from our own platform developments in this space: we have spent a decade and a half working to advance the case for hyperconverged infrastructure. This year has also seen us partner with Cisco to integrate with its SaaS-managed compute and networking infrastructure (Unified Computing System) and its Cisco Intersight cloud operations platform.
With customers now running Nutanix Cloud Platform on Cisco rack and blade servers, they benefit from a fully integrated and validated solution that is sold, built, managed and supported holistically for a seamless end-to-end experience. This key partnership sits directly in line with use cases that span the most hybrid of hybrid cloud estates – on-premises, public cloud and edge computing – and it does so by providing infrastructure engine power at an unprecedented level.
With the prospect of Artificial Intelligence (AI) now further expanding and augmenting the deployment surface for edge computing, as we said at the start, this space is only set to grow. We know that a hyperconverged infrastructure (HCI) approach has the breadth to provide a universal cloud operating model across every conceivable computing resource, which means we will be able to push the edge out as far as we can dream.
Sammy Zoghlami, SVP for EMEA, Nutanix