Cloud computing and the Internet of Things (IoT) – two of the biggest technologies of the last few years, and two trends on a collision course set to completely transform the world around us.
Cloud platforms and services are already well established, both from a consumer and a business perspective, having impacted many aspects of our daily lives. New cloud innovations are constantly emerging, but it’s when we add IoT into the mix that things start to get really interesting.
The IoT industry is set to experience explosive growth over the coming years. Analyst firm IDC predicts that worldwide IoT spending will reach $745 billion in 2019, an increase of around 15 per cent on the $646 billion spent in 2018.
This growth will drive a transition to a world of ambient technology, a future in which unseen enablers exist all around us – central to how we communicate and function day to day, and accessible to anyone. It will also bring exponential increases in the amount of data being created, analysed and stored, and so cloud service providers will have a key role to play in engineering this future.
However, is cloud alone the best answer to the scalability and management challenges that the transition to the world of ambient technology will undoubtedly bring? And if not, could Edge-AI prove to be the missing piece of the puzzle?
A step too far for cloud?
While it’s true that IoT presents a huge range of possibilities for businesses in all industries, it also creates some significant technical issues that will have to be overcome. These primarily revolve around coping with ever-increasing volumes of data as IoT adoption expands.
One of the great benefits of cloud platforms is that they enable users to manage and work with more data than they ever could before – from the biggest enterprises right down to the smallest start-ups. The issue is that although the centralised cloud model is sufficient for today’s IoT ecosystems, the infrastructure will quickly become overwhelmed when billions of devices are involved.
The challenge of managing and controlling potentially billions of smart devices from the cloud, combined with ongoing concerns surrounding the security and privacy of the data being transmitted, suggests that a wholly cloud-based model may not scale as our interactions with AI devices become more pervasive and intricate.
As well as requiring huge investment, the complexity of these networks and the sheer volume of data involved will increase the risk of the user experience being degraded by network connectivity or latency issues. This will be particularly true in high-density areas such as cities, which will become inundated with ‘smart’ sensors and connected devices.
However, there is a solution. The shift to Edge-AI offers the ability to reduce the amount of data transmitted, to provide higher-level context to the user experience, and to keep smart devices working on the occasions when the internet is unavailable.
Moving to the edge
Edge computing refers to the practice of decentralising data processing by moving it closer to the source of the data – in this case the connected device. While artificial intelligence and machine learning computation is often performed at large scale in datacentres, the latest processing devices are enabling a trend towards embedding AI/ML capability into IoT devices at the edge of the network. AI on the edge can respond quickly, without waiting for a response from the cloud. There’s no need for an expensive data upload, or for costly compute cycles in the cloud, if the inference can be done locally. Some applications also benefit from reduced concerns about privacy.
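As a rough illustration, the sketch below shows what local inference might look like on a Linux-class edge device, using TensorFlow Lite as one possible runtime; the model file and input handling are hypothetical placeholders, not a description of any particular product.

```python
# A minimal sketch of on-device inference with TensorFlow Lite.
# The model file name and input handling are hypothetical placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="keyword_model.tflite")  # hypothetical model
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify_locally(sample: np.ndarray) -> np.ndarray:
    """Run inference on the device itself, with no cloud round trip."""
    interpreter.set_tensor(input_details[0]["index"],
                           sample.astype(np.float32)[np.newaxis, :])
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])[0]
```

Because the model runs where the data is generated, response time is bounded by the device itself rather than by the network.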
When it comes to preserving data security and reducing the quantity of data being transmitted in the world of IoT, Edge-AI devices offer a viable solution. They also remove the latency issues associated with cloud-based control. Although AI in the cloud is thought of as a single huge collective intelligence, AI at the edge could be compared to a hive mind of many smaller local brains, working together in self-organising and self-sufficient ways.
Essentially, intelligent insights generated from the data will be available in real time on the device itself. Edge-AI also enables user experience designers to personalise the interaction with remote AI entities through the fusion of sensor data, and allows devices to retain full functionality without the support of a connection, meaning service disruptions will be reduced.
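To make the data-reduction point concrete, here is a minimal, self-contained sketch of an edge loop that keeps raw sensor data on the device and only transmits high-level events once they cross a confidence threshold; all of the helper names and values are illustrative stand-ins, not a real device API.

```python
# An illustrative sketch of edge-side filtering: raw sensor data stays on the
# device and only high-level events are sent upstream. All names and values
# here are stand-ins, not a real device API.
import random
import time

CONFIDENCE_THRESHOLD = 0.9  # illustrative value

def read_sensor() -> list[float]:
    """Stand-in for a real sensor driver."""
    return [random.random() for _ in range(16)]

def classify_locally(sample: list[float]) -> tuple[str, float]:
    """Stand-in for on-device inference (see the sketch above)."""
    return ("anomaly", max(sample))

def publish_event(event: dict) -> None:
    """Stand-in for an MQTT/HTTP publish to a cloud backend."""
    print("event sent:", event)

def edge_loop(iterations: int = 10) -> None:
    for _ in range(iterations):
        sample = read_sensor()                  # raw data never leaves the device
        label, confidence = classify_locally(sample)
        if confidence >= CONFIDENCE_THRESHOLD:  # only insights are transmitted
            try:
                publish_event({"label": label, "confidence": confidence})
            except ConnectionError:
                pass                            # keep functioning offline
        time.sleep(0.1)

if __name__ == "__main__":
    edge_loop()
```

In this pattern the cloud only ever sees a trickle of compact events rather than a continuous stream of raw readings, and the loop keeps running even when the connection drops.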
Ultimately, the issue we will face over the coming years is that the cloud is unlikely to be able to keep up with the growing speed of the IoT industry or the diverse needs of IoT applications. Total cannibalisation of the cloud may not be the ultimate endpoint, but Edge-AI will enable a rebalancing of compute utilisation from the core to the edge, where algorithms will learn from their environment to make locally optimised decisions in real time.
The future of IoT and the transition to the world of ambient technology will likely depend on decentralising networks in this way, giving industries and businesses the tools to capitalise on the data being collected, rather than being overwhelmed by it.
Before being named CEO of XMOS in 2016, Mark was its Chief Operating Officer, playing a key role in the strategic shift from premium audio to embedded voice interface solutions, bringing a major growth opportunity to the business. During his 12-year tenure at XMOS, Mark also held the post of Vice President Engineering, responsible for the company’s worldwide silicon, applications and software development activities.