The multi-cloud paves the way for AI and ML

The multi-cloud is happening now

The rise in off-premises cloud services is transforming how businesses serve their customers. New mobile apps, websites (e.g., Facebook, YouTube, Twitter, LinkedIn) and services (e.g., Amazon, Skype, Uber) are reshaping how decisions are made: people who need to act in real time can now gain insight from non-traditional data sources and infuse it into their business processes.

The following three major trends have come together, making it possible for enterprises of all sizes to apply analytic techniques to business processes:

  1. More data is available from an expanded ecosystem of diverse sources, including connected devices and social media. Tracking location, monitoring individual behaviour and following news feeds allow for more accurate prediction of individual needs that vary from moment to moment.
  2. Software vendors and cloud service providers (CSPs) are moving to package analytic technology in an easy-to-consume format, allowing enterprises to apply the technology without requiring highly skilled data scientists.
  3. Off-premises cloud services have become more affordable, including infrastructure-as-a-service (IaaS) offerings like Amazon Web Services (AWS), containers-as-a-service (CaaS) products like Google Container Engine, and platform-as-a-service (PaaS) offerings like IBM Cloud. Enterprises now have access to the large-scale compute environments needed for the required high-volume computation.

The need to store, process, analyse and make decisions about rapidly rising amounts of data has led enterprises to adopt many off-premises cloud offerings that can automate and improve business processes. IT organisations must now evolve alongside this new deployment architecture, allowing for greater agility and adoption of new technologies.

Enterprises are not only diversifying the location of their cloud services, from on-premises to off-premises; they are also increasing the number of cloud service providers (CSPs) used to manage their IT needs. North American enterprise respondents to the recent IHS Markit Cloud Service Strategies survey reported they expect to use an average of 13 different CSPs for SaaS and 14 for IT infrastructure (IaaS and PaaS) by 2020. In effect, enterprises are creating their “cloud of clouds,” or multi-cloud. A great deal of variety exists between providers’ offerings, with specialised players meeting particular needs.

Get ready for AI and ML: cloud first, then on-premises

CSPs offer a variety of artificial intelligence (AI) and machine learning (ML) solutions from the cloud. By deploying a combination of CPUs and co-processors, memory, storage and networking equipment within their infrastructures, they give customers an abundance of applications to choose from. Users are taking advantage of these advanced applications, delivering services for use cases like facial, video and voice recognition for smart cities. Another example is online retail, which uses augmented and virtual reality (AR/VR), image detection and other innovative technologies. Many major CSPs have introduced new and updated AI- and ML-based products in the past year, including the following:

  • Google Cloud: Vision API 1.1 recognises millions of items from Google’s Knowledge Graph and extracts text from scans of legal contracts, research papers, books and other text-heavy documents (see the Vision API sketch after this list). Google Cloud Video Intelligence, built on TensorFlow, identifies entities and when they appear in the frame, providing information about objects so that developers can search and discover video content. For more information on this topic, read “Google Cloud Next 2017: Making Machine Learning Ubiquitous,” from IHS Markit.
  • AWS: Amazon Transcribe converts speech to text and adds time stamps noting when certain words are spoken. Amazon Comprehend analyses text from documents or social media posts, using deep learning to identify entities (e.g., people, places, organisations), the written language, the sentiment expressed and key phrases (see the Comprehend sketch after this list). Rekognition Video can track people, detect activities, recognise objects and faces, and select content in videos. For more detailed information, read “AWS re:Invent 2017: Amazon’s Alexa Becomes Business Savvy.”
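
The bullets above refer to hosted APIs, so the following is only a minimal sketch, assuming the publicly available google-cloud-vision Python client library and configured credentials; the file name is hypothetical. It illustrates the kind of call the Vision API bullet describes: extracting labels and dense text from a scanned document.

    # Minimal sketch: calling the Google Cloud Vision API from Python.
    # Assumes the google-cloud-vision client library is installed and
    # application-default credentials are configured; the image path is
    # a hypothetical example, not a file from the article.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("contract-scan.png", "rb") as f:  # hypothetical scanned contract
        image = vision.Image(content=f.read())

    # Label detection returns entities the service recognises in the image.
    for label in client.label_detection(image=image).label_annotations:
        print(f"{label.description}: {label.score:.2f}")

    # Document text detection extracts text from dense, text-heavy scans
    # such as legal contracts or research papers.
    response = client.document_text_detection(image=image)
    print(response.full_text_annotation.text)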

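A similar sketch for the AWS bullet, assuming boto3 and configured AWS credentials; the sample sentence is invented for illustration. It shows Amazon Comprehend detecting entities, sentiment and key phrases in a short piece of text.

    # Minimal sketch: analysing text with Amazon Comprehend via boto3.
    # Assumes boto3 is installed and AWS credentials/region are configured;
    # the sample sentence is invented for illustration.
    import boto3

    comprehend = boto3.client("comprehend")

    text = "Acme Retail in Toronto is moving its recommendation engine to three cloud providers."

    # Named entities: people, places, organisations and so on.
    entities = comprehend.detect_entities(Text=text, LanguageCode="en")
    for entity in entities["Entities"]:
        print(entity["Type"], entity["Text"], round(entity["Score"], 2))

    # Overall sentiment expressed in the text.
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    print(sentiment["Sentiment"])

    # Key phrases.
    phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
    print([p["Text"] for p in phrases["KeyPhrases"]])
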
The use of AI and ML requires that new server architectures include specialised silicon co-processors that can accelerate the parallel computations these workloads need. IHS Markit expects unit shipments of servers with general-purpose programmable parallel-compute co-processors (GPGPUs) to make up 11 percent of all servers, exceeding 1.7 million units, in 2022. In effect, the use of co-processors crosses an important 10 percent threshold, where the technology passes from early adopters to mainstream buyers. At this point, on-premises, enterprise-operated data centers are also expected to contain GPGPUs; thus, AI and ML workloads that were born in the cloud will also be executed on-premises (a pattern sketched in the code after the list below). The following factors were included in this IHS Markit forecast:

  • Generation-on-generation improvement of co-processor performance makes servers with co-processors more attractive to customers than traditional CPU-only servers.
  • Co-processor options are multiplying, giving customers choices. With long-term roadmaps now developing, more vendors are offering servers with co-processors, creating even more choice for customers.
  • Multi-tenant server software continues to add features, making it possible for customers to virtualise co-processors, increasing the utilisation of servers shipped with co-processors.
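
As a purely illustrative sketch of the point above, assuming PyTorch as one example framework (the article does not name one), the snippet below shows the common pattern of placing a workload on a co-processor when one is available, whether in a CSP data center or on an on-premises GPGPU server, and falling back to the CPU otherwise.

    # Minimal sketch: the same workload targets a GPGPU co-processor when one
    # is available (cloud GPU instance or on-premises server) and otherwise
    # falls back to the CPU. PyTorch is an assumed example framework; the
    # model and data are placeholders.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Running on: {device}")

    # Placeholder model and batch; a real workload would load its own data.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
    batch = torch.randn(32, 128, device=device)

    # The parallel part of the computation runs on the co-processor when
    # available, which is what drives demand for GPGPU-equipped servers.
    with torch.no_grad():
        print(model(batch).shape)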

The bottom line

Enterprises are looking to cloud service providers so they can adopt AI and ML in their business processes as they go digital. In response, CSPs looking to differentiate themselves have introduced many services with embedded AI and ML. These new offerings give enterprises plenty of choice, and enterprises have responded by using a variety of cloud service providers rather than relying on a single one. They are effectively building a multi-cloud, in which software workloads are placed in the specific data centers best optimised to execute them, including their own on-premises data centers.

In the next five years, the use of AI and ML will become mainstream. Enterprises will leverage compute infrastructure from a variety of cloud service providers and from their own on-premises facilities. Server architectures have already changed over the last 12 months, with buyers preferring higher-end servers with more memory and storage as data sets grow. Servers have also been shipping with additional co-processors for parallel compute to support AI and ML workloads, and that trend will continue.

Cliff is an executive director in the Cloud and Data Center Research Practice, with a focus on cloud services, data center compute and networking, and software-defined networking (SDN). He has more than 25 years of telecommunications industry experience encompassing scientific research, market analysis, corporate and product strategy, product management and marketing. A recognised thought leader, he is frequently an expert judge for industry and technology innovation awards and an invited conference speaker, and he is often quoted in technical publications, including SDx Central, Light Reading, Fierce Telecom, eWeek, Network Computing and Lightwave Online.
