Stories in the newspapers and online about Artificial Intelligence (AI) generally centre on sensational, scary and dystopian headlines. This is a problem: it doesn’t reflect the reality of the situation, and it would not do for any serious publication (such as Compare the Cloud) to throw fuel on the fire by predicting that The Singularity is coming, that AI will Take Over the World, and that Civilisation As We Know It will be left in ruins.

It’s worth reminding people that technology has been a force for good over many centuries. It also bears remembering that throughout history, despite the positive effects on people’s lives, people have always been afraid of new technologies! In 2018 we will see AI making our lives better in surprising but subtle ways, whilst silently making progress towards more exotic – but equally positive – disruptions.


Compare the Cloud spoke to Craig Hampel, Chief Scientist, Memory and Interface Division at Rambus, to hear his predictions for AI and ML in 2018. Craig predicts that “a significant and impactful” milestone for AI will occur in 2018, whereby AI will develop the ability to extract sentiment from a facial expression, picture, sound or written text “nearly as well as a human”. This is certainly one of the main directions in which AI / ML is heading at the moment, and we can expect a major focus of 2018 to be this wave of AI-derived sentiment – or ‘Artificial Sentiment’.

This kind of AI can identify – and even manipulate – sentiment, and will surely have a huge impact on industries such as advertising, music and even politics.

In 2018, AI will have the most significant impact on pervasive data sets such as pictures, video, DNA, voice and sentiment. AI’s ability to look across billions of data points, find correlations or patterns and identify traits has nowhere near reached its potential – as things stand, it is a significantly untapped source of meaning and insight.

Services like Google Photos and 23andMe are “just the tip of the information iceberg.” Craig went on to say that, “the possibilities are huge when moderate amounts of inexpensive AI are applied to these large data sets.

“The largest natural data set that we currently process the least is our immediate surroundings. The integration of AI, ML and near-human sensors using our surroundings will facilitate ubiquitous computer vision. This has the capacity to benefit different technological advancements such as self-driving, collision avoidance and detection of real-world anomalies that improve and extend human lives.”

There is so much data around us that the real challenge is determining which analysis is useful and will yield actionable, informative results. Arguably, all data is ‘used’ in the strictest sense, but much of it is quickly filtered, compressed or archived. Only a very small amount of even the most useful data collected to date is used to its full potential. Craig expanded on this by saying, “from DNA to consumer sentiment to archival photos, the amount of information that can still be extracted and analysed from these sources is immense. Yet, our ability to use this data is limited by memory and compute capacity — this is one of the challenges we will face in the upcoming year.”

we will see a new class of memory hierarchy referred to as Storage Class Memory or SCM that emerges to help bridge the capacity gap that results from slowed DRAM capacity scaling – Craig Hampel, Rambus

Since the beginning of the computing age, computer architecture has relied on hierarchy and filtering to manage data that exceeds the capacity of the existing infrastructure. These techniques will continue to be applied to the new waves of data expected in 2018 from the likes of the growing IoT infrastructure. However, this new data demands improved performance and capacity, which places pressure on the existing hierarchy – pressure intensified by the fact that the historic rate of improvement in the capacity and cost of DRAM and storage technologies has slowed. Craig added, “in order to compensate for the increased capacity demand on DRAM, we will see a new class of memory hierarchy referred to as Storage Class Memory or SCM that emerges to help bridge the capacity gap that results from slowed DRAM capacity scaling. Rambus is developing memory architectures and buffers to allow storage class memories to achieve performance close to DRAM while staying near the cost curve and capacity of storage devices.

“While it may not impact 2018, we are going to see emerging storage devices that have more archival properties in the upcoming year. These will provide an additional level of storage that is less expensive and higher capacity than disk or flash. Microsoft’s work in using DNA as an archival storage device is an example of the type of storage that could result in multiple orders of magnitude increase in the capacity and density.”

Another AI / ML trend we will see in 2018 is expanded use of data filtering near the edge. Consider MPEG video, where the only content transmitted and stored is that which changes between frames. Given the large video resolutions (and therefore large file sizes) on offer, this use of edge processing and filtering has obvious benefits. In a similar fashion, devices near the edge, such as video cameras and sensors, will begin to provide purposely filtered data to the cloud. In these cases data is used locally to determine what information to forward to the cloud for further analytics. Ultimately, only data that produces an action or result is useful. And whilst, at the moment, these techniques are applied retrospectively to existing equipment, Craig says that he expects, “in 2018, this type of filtering will become more purpose built into devices as they evolve.”
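To make the idea concrete, here is a minimal, hypothetical sketch of that kind of edge-side filtering: a camera node compares each new frame against the last one it forwarded and only sends frames that have changed beyond a threshold, echoing the “transmit only what changes” principle behind MPEG-style encoding. The function names and the flat pixel-list representation are illustrative assumptions, not any particular vendor’s API.

```python
# Hypothetical edge-filtering sketch: forward a frame to the cloud
# only when enough of it has changed since the last forwarded frame.

def changed_fraction(prev, curr):
    """Fraction of pixel values that differ between two frames."""
    return sum(1 for a, b in zip(prev, curr) if a != b) / len(curr)

def filter_at_edge(frames, threshold=0.1):
    """Yield only the frames worth forwarding to the cloud.

    The first frame is always forwarded; after that, a frame is sent
    only when more than `threshold` of its pixels have changed since
    the last forwarded frame.
    """
    last_sent = None
    for frame in frames:
        if last_sent is None or changed_fraction(last_sent, frame) > threshold:
            yield frame
            last_sent = frame

# Example: a mostly static scene, represented as flat lists of pixels.
frames = [
    [0, 0, 0, 0],   # initial frame - always forwarded
    [0, 0, 0, 0],   # identical - filtered out at the edge
    [9, 9, 0, 0],   # 50% changed - forwarded
    [9, 9, 0, 0],   # identical again - filtered out
]
forwarded = list(filter_at_edge(frames))
print(len(forwarded))  # 2 frames reach the cloud instead of 4
```

In a real deployment the comparison would of course operate on compressed or downsampled imagery rather than raw pixel lists, but the shape of the decision – compute locally, transmit selectively – is the same.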

In summary, 2018 is set to be a huge year for Artificial Intelligence and Machine Learning. As such, the sensationalist, scary and dystopian headlines are more than likely to keep increasing – in both frequency and shock value. Learning and understanding the reality behind AI and ML is the only way to see through these attention-grabbing headlines. One thing is for certain, though: 2018 will see a massive increase in the capabilities of AI, and AI will continue to become ever more a force for good in the modern world.
