AI on the Edge and in the Cloud: three things you need to know

The world has seen a rapid upsurge of innovation in smart devices at the edge of the network. From smartphones and sensors to drones, security cameras and wearable technologies, taking intelligence to the edge is creating a market that is anticipated to be worth well over $3 billion by 2025.

But it is not a market valuation driven purely by the devices. Rather, it reflects the value of the opportunity created by a new generation of advanced applications that bring fresh awareness of, and immediacy to, events on the network.

The lower latency and higher connection speeds offered by 4G mobile broadband – and, soon, more reliably and ubiquitously by 5G – together with greater power and memory in a smaller footprint, enable applications to number-crunch on independent devices closer to the data source.

With intelligence delivered on the spot, in real time, AI at the edge allows mission-critical and time-sensitive decisions to be made faster, more securely and with greater specificity. For instance, AI-powered medical technologies can give at-the-scene paramedics immediate diagnostic data, while self-driving vehicles can be equipped to make split-second safety decisions.

Yet there are many scenarios where decisions still demand heavy computational lifting, or where intelligence doesn’t need to be delivered in real time. In these cases, AI-driven apps can comfortably remain in the cloud, taking advantage of its greater processing power and specialised hardware. For instance, hospital scans, machine analytics and drone inspections can happily accommodate data transfer lags to and from the cloud.

Whether considering deployment of AI at the edge or in the cloud – or perhaps a mix of both – there are three things you need to know:


1. ‘Trained’ versus ‘inference’ decisions in AI apps, and what this means for users

A key underlying factor to consider when developing and deploying AI applications is whether the machine learning (ML) element relies on training or on inference. Training builds the algorithm: the model makes predictions on a feed of training data, compares them with known outcomes, and adjusts itself in a continual feedback loop that gradually reduces the error value as the algorithm becomes more accurately trained.

Generally, this requires heavy computational lifting on a cloud-based training system capable of continually revising and updating the algorithm at the same time as it is receiving data and delivering results. Whilst, for the user, this can mean a time lag in data transfer and processing, the advantages are ever more accurate results and more powerful processing of larger data sources.
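To make that feedback loop concrete, here is a minimal sketch in plain Python and NumPy, using an invented toy dataset. The model predicts, measures its error against known outcomes, and adjusts its single weight so the error value shrinks with each pass – exactly the loop described above.

```python
import numpy as np

# Toy training data: inputs x with known outcomes y (roughly y = 3x).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = 3.0 * x + rng.normal(0.0, 0.05, 200)

w = 0.0    # the single model parameter being trained
lr = 0.5   # learning rate: how far each correction moves the weight

# The continual feedback loop: predict, measure the error, adjust, repeat.
for epoch in range(100):
    predictions = w * x
    errors = predictions - y
    loss = np.mean(errors ** 2)           # the error value being driven down
    gradient = 2.0 * np.mean(errors * x)  # direction that reduces the loss
    w -= lr * gradient                    # modify behaviour based on feedback

print(f"trained weight: {w:.3f}, final loss: {loss:.5f}")
```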

In contrast, the inference approach applies an already-trained algorithm to make predictions on fresh data as the device receives it. There is no training loop and no refinement; each response is produced afresh using the same fixed algorithm. This is why the inference system that executes the algorithm and returns decisions can comfortably be located at the edge.

For users, this means that data is processed on the device rather than sent to the cloud. The device does not need a connection to the network in order to operate, which is useful for remote, off-site workers and devices on the move. Without network latency, it also delivers a near-instant response, whilst reducing the security risks associated with transferring data across networks.
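As an illustration of that split, here is a sketch of on-device inference using the TensorFlow Lite runtime, one common choice for edge hardware. The model file name and the zeroed sample input are placeholders: a real deployment would ship a model trained in the cloud and feed it live sensor readings.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight edge runtime

# Load a model that was trained in the cloud and shipped to the device.
# "model.tflite" is a placeholder for whatever the deployment pipeline produces.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fresh data is processed locally: no network round trip, no training loop.
sample = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print("on-device prediction:", prediction)
```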


2. Interoperability adds performance and value

Any IT infrastructure without a platform for interoperability will end up with a series of disconnected data silos, incapable of sharing intelligence to create value across different parts of the business. For AI devices on the edge, this issue is magnified.

IoT devices and AI applications capture an enormous amount of data by the second. They are also often geographically dispersed and equipped to process part or all of the intelligence at source. But deriving value from that data relies on it being made available and accessible across other business operations in the infrastructure, where it can reveal trends and empower better decision making.

If the data capture, processing and delivery of results is all happening on the edge device, there is a danger that the data will not be shared across the wider IT infrastructure. Interoperability creates links between applications and services across the business such that, for example, medical data from wearable devices is crunched at the edge to provide personalised diagnostics to the wearer, but also processed in the cloud for longer-term insight on user groups and input to business analysis.
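A hedged sketch of that wearable example, assuming an MQTT broker as the device-to-cloud channel (the broker address, topic and device ID are all placeholders): raw readings stay on the device, an alert is raised locally for the wearer, and only a compact summary is published for cloud-side aggregation across user groups.

```python
import json
import statistics
import paho.mqtt.publish as publish  # one common device-to-cloud messaging option

# Raw samples stay on the device; only a compact summary is shared upstream.
heart_rate_samples = [72, 75, 71, 88, 90, 74]  # illustrative wearable readings
summary = {
    "device_id": "wearable-001",  # hypothetical identifier
    "mean_bpm": statistics.mean(heart_rate_samples),
    "max_bpm": max(heart_rate_samples),
}

# Immediate, local decision for the wearer: the edge half of the split.
if summary["max_bpm"] > 85:
    print("local alert: elevated heart rate")

# Publish the summary so cloud analytics can work across the wider business.
# Broker hostname and topic are placeholders for a real deployment.
publish.single(
    "devices/wearable-001/summary",
    json.dumps(summary),
    hostname="broker.example.com",
)
```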


3. An IT infrastructure to support hybrid edge and cloud-based AI

There are situations where AI needs to be wholly based in the cloud or at the edge, such as in manufacturing robotics installations (cloud) or consumer gadgets such as smart speakers and home or wearable devices (edge). However, in the majority of applications, the IT infrastructure will need to support a hybrid of both edge and cloud-based AI technologies.

The key consideration is: what will the impact of a delayed or interrupted response be on the end user?

This requires an understanding of which intelligence is time-critical at the edge and which is not. With this knowledge, it is possible to build an IT infrastructure that balances demands and capabilities to suit. Edge devices can be selected with the right processing power, interoperability and capabilities to perform the necessary number crunching at source, while cloud functionality can be matched to support ongoing training of the algorithm and further analytical operations.
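A minimal sketch of that balancing act, with hypothetical stand-ins for the edge model and the cloud queue: each reading is routed by how quickly a response is needed and by whether the cloud is reachable at all.

```python
LATENCY_BUDGET_MS = 50  # illustrative threshold for "time-critical"

def run_edge_model(reading):
    # Stand-in for an on-device model: returns an instant, local decision.
    return {"decision": "stop" if reading > 0.8 else "continue", "where": "edge"}

def enqueue_for_cloud(reading):
    # Stand-in for queueing the reading for heavier cloud processing.
    return {"decision": "pending", "where": "cloud"}

def handle_reading(reading, latency_budget_ms, cloud_reachable):
    """Route a reading to edge or cloud inference based on how quickly
    a response is needed and whether the network is currently available."""
    if latency_budget_ms <= LATENCY_BUDGET_MS or not cloud_reachable:
        # Time-critical, or offline: decide on the device immediately.
        return run_edge_model(reading)
    # Non-time-critical: defer to the cloud's larger model and analytics.
    return enqueue_for_cloud(reading)

print(handle_reading(0.9, latency_budget_ms=10, cloud_reachable=True))   # edge
print(handle_reading(0.9, latency_budget_ms=500, cloud_reachable=True))  # cloud
```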


Striking a balance

As businesses and their customers increasingly embrace highly connected technologies in business and at home, it is inevitable that AI operations will increasingly move to the edge. Intelligent device footprints keep shrinking, while their capabilities grow.

However, whilst it may be tempting to embrace AI-charged edge computing to the max, it is critical to ensure that important data is not locked into the end-point device.

There needs to be connectivity for data sharing across the core business ecosystem to ensure that data continues to empower business decision making and add value to an organisation’s future direction.
