Are we sometimes talking about artificial intelligence when another term would be more appropriate? Does it matter? Guy Matthews skirts a terminology minefield.
Forbes describes artificial intelligence (AI) as the most important technology of the 21st century. Yet it has a problem. In the rush to associate itself with such game-changing, mould-breaking potential, the tech sector is all too ready to stick an AI label on any solution that offers some degree of basic automation. The water, some argue, is thus muddied and the very reputation of AI compromised at a critical time in its evolution.
One problem is that there is no centralised definition of AI on which everyone agrees. By rough consensus, a system is only truly artificially intelligent if it is able to become smarter over time with no human input required beyond its creation. True AI grows more capable as a result of its own experience. It thinks for itself, in some fashion, as a human would, or ideally far more profoundly and reliably than a human would by removing emotion from the decision process.
A system that uses algorithms to replicate or mimic a human action can be said to be robotic, but not necessarily intelligent. AI should also be distinguished from augmented or predictive analytics. A system based on this kind of capability can be used to make decisions, perhaps quite important decisions, but it is not getting more intelligent over time. It is not learning and adapting. At best it is making assumptions and predictions based on the historic data it holds, able to call on that information to turn out ‘what if’ scenarios. It does not have a nuanced, human-like ability to see a bigger picture emerging, or to spot the unexpected in a sea of data and react to it: it will spot only what it has been programmed to spot.
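To make that contrast concrete, here is a minimal Python sketch under invented assumptions: the class names, the threshold and the readings are all hypothetical. The first monitor applies a rule fixed at its creation; the second revises its notion of ‘normal’ with each new observation it sees.

```python
# A minimal, hypothetical sketch of the distinction drawn above.
# All names, thresholds and readings are invented for illustration.

class RuleBasedMonitor:
    """'Robotic' automation: spots only what it was programmed to spot."""

    def __init__(self, threshold: float):
        self.threshold = threshold  # fixed at creation; never changes

    def is_anomaly(self, value: float) -> bool:
        return value > self.threshold


class LearningMonitor:
    """Adapts with experience: its notion of 'normal' shifts over time."""

    def __init__(self, learning_rate: float = 0.1):
        self.mean_estimate = None  # no notion of 'normal' until it sees data
        self.learning_rate = learning_rate

    def observe(self, value: float) -> None:
        # Revise the running estimate of 'normal' (exponential moving average).
        if self.mean_estimate is None:
            self.mean_estimate = value
        else:
            self.mean_estimate += self.learning_rate * (value - self.mean_estimate)

    def is_anomaly(self, value: float) -> bool:
        if self.mean_estimate is None:
            return False  # no experience yet, so no basis for judgement
        return value > 2 * self.mean_estimate


if __name__ == "__main__":
    fixed = RuleBasedMonitor(threshold=100.0)
    adaptive = LearningMonitor()
    for reading in [50.0, 55.0, 60.0, 130.0, 58.0, 62.0]:
        print(reading, fixed.is_anomaly(reading), adaptive.is_anomaly(reading))
        adaptive.observe(reading)
```

Both will flag an obvious outlier, but only the second does so against a baseline it has learned rather than one it was given.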
This perhaps is why it matters when people arbitrarily toss different ideas in a bucket and call the whole thing AI. It might suit a marketing agenda, or play well with shareholders, but ultimately confusion, misinformation and ignorance are the result.
“There are a lot of people who are artificially intelligent about artificial intelligence,” quips Nick McMenemy, CEO with Renewtrak, a developer of automated licensing renewal solutions based around machine learning (ML). “We’ve got a lot of armchair experts who don’t really understand what it is or what it does. Many people mistake AI for what in reality is a derivative of machine learning. But AI is infinitely more complex and involved.”
McMenemy is happy to call out those who cynically name-check AI because it is flavour of the month and resonates with the investment community: “AI has become something people want to talk about, whether it is justified or not,” he says. “I don’t want to denigrate ML, because that’s on the pathway to the nirvana of full AI. But let’s not create confusion.”
True AI, if there is such a thing as true AI, is hard to define because intelligence itself is hard to quantify, argues Kumar Mehta, co-founder of Versa Networks, a vendor of advanced software-defined networking solutions: “What we should be asking instead is for domain specific intelligence,” he says. “An example of true domain specific AI would be where a network is managed and operated without any user intervention – so-called predictive networking.”
Mehta suggests that rather than get hung up on terminology that end users ultimately don’t care about, the IT industry should follow the example of the autonomous car sector, which talks about ‘degrees of intelligence and automation’ rather than overlaying complex jargon onto developments.
Mehta remains upbeat about AI’s potential, despite issues over what it may or may not be: “We are on track for a huge transformation that will make our life easier and better,” he believes. “However, we need to be realists regarding its progress. It will be incremental and based on what today’s algorithms and the speed of compute and networking technology will allow.”
Rik Turner, principal analyst with independent consulting firm Ovum, believes it may be helpful to view AI as no more than an umbrella term for a range of technologies and approaches. “These could be deemed to include machine learning, deep learning, which is essentially ML on neural network-based steroids, and natural language processing,” he says. “These are some of the branches of AI, though by no means all of them.”
Others believe that AI needs a tighter definition than that. In essence it is the theory of how computers and machines can be made to perform human-like tasks most often associated with the cognitive abilities of the human brain, says Scot Schultz, senior director for HPC, artificial intelligence and technical computing with Mellanox, a developer of networking products. “Machine learning is the study of algorithms that build a mathematical model that a computer will use. Machine learning can be applied in a huge number of applications, including data analytics. Data analytics, however, fundamentally applies real-time and historic data to find otherwise unseen patterns and trends and solve for a future situation in industry and business. Robotic Process Automation, on the other hand, automates the operation of, or inputs to, software applications or digital control systems, and it can use ML to optimise those automated inputs.”
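Schultz’s phrase ‘algorithms that build a mathematical model’ can be illustrated in a few lines of Python. This is a sketch only, with made-up data: it fits a straight line to historic observations by least squares, then uses the fitted model to ‘solve for a future situation’.

```python
# Fitting y = a*x + b to historic observations by ordinary least squares,
# then using the fitted model to forecast. Data and names are hypothetical.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# Historic data: month number against observed demand (invented values).
months = [1.0, 2.0, 3.0, 4.0, 5.0]
demand = [102.0, 108.0, 115.0, 119.0, 127.0]

a, b = fit_line(months, demand)
print(f"model: demand = {a:.2f} * month + {b:.2f}")
print(f"forecast for month 6: {a * 6 + b:.1f}")
```

Note where this sits in Schultz’s taxonomy: the model is built once from historic data and then applied. It does not keep refining itself as new months arrive, which makes it analytics with an ML step rather than a system that grows more capable over time.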
If the man or woman on the street finds it hard to separate what constitutes real AI from dolled-up analytics, they can hardly be blamed. After all, the tech sector itself is at sixes and sevens over the matter, according to Zack Zilakakis, marketing leader at intent-based networking specialist Apstra: “AI is still ambiguous, and often misunderstood by traditional technology practitioners,” he claims. “The term ‘artificial intelligence’ is hard to define; there is no real agreement among AI researchers and specialists concerning its definition.”
By contrast, he defines machine learning as the ability to continuously collect data and convert it into the knowledge necessary to make a decision, whether that decision is taken digitally or by a human.
“Machine learning will ultimately eliminate the need for data science,” he predicts. “Many AI or ML projects fail because organisations hire data scientists as machine learning experts. The projects require a focus on academia and, more specifically, mathematics as a baseline.”
He sees the correct handling of data as no trivial matter: “Organisations will ‘drown’ in data lakes after a year or two if they do not act on data analytics,” he warns. “These data lakes grow by the minute and are a nightmare to manage. It is imperative not only to collect the data but also convert it into knowledge to make a decision.”
On the matter of sorting true AI use cases from false, he agrees that it is more than a minor matter of semantics and very much an issue of reputation: “AI’s image problem historically has been to over-promise and under-deliver,” he says. “Customers care about practical solutions to their problems, not definitions. Historically, there is always a pull-back from the over-promising when there is under-delivering.”
Zilakakis is sure that the delivery will happen, despite occasional periods of doubt and uncertainty. He sees AI as likely to evolve in multiple waves rather than in conveniently linear fashion: “The initial AI wave will address numbers and repetitive tasks,” he forecasts. “These are low complexity and can be augmented or replaced. This includes some of the work of people like accountants and equity traders. The next wave will focus on automating the middle person out of the process. This will include platforms to interpret and provide medical diagnoses, among others. Traditional blue-collar and creative jobs require hardware and will take more time to develop. What will not change is the need for creative and empathetic skills, in roles such as artist, nurse and architect, among others.”
Once we are past the need to define what is and isn’t true AI, we can start to embrace some other substantive issues, he believes: “AI will introduce its own set of issues, including an ethical element,” says Zilakakis. “Do algorithms discriminate? Humans are limited in our ability to remember, and that’s the faulty part of our brain, but computers are free of this limitation.”
Mellanox’s Schultz agrees that it is early days in the AI revolution, but points out that applications under the overall AI umbrella have already been transformative in selected areas of science and industry, healthcare and medicine in particular: “It is also becoming a huge deal in language processing and conversational AI,” he observes. “Going forward, AI-based solutions will become transformative in more and more areas including security, manufacturing, e-commerce, agriculture, transportation, research, marketing and politics.”
Perhaps the current quandary over what is and is not true AI is a mere bump in the road on a longer journey. Try as they might to hijack its good name, hyperscalers and marketeers can only do so much to hold back an idea whose time has nearly come.
Guy Matthews has been a tech writer since 1987 and is a regular contributor to titles in the ICT and financial services markets. He writes for AI Business about such issues as leveraging Big Data for machine learning and the use of artificial intelligence in pharma. He has also been a regular writer for Capacity magazine for nearly 20 years, covering topics including subsea cable capacity, Carrier Ethernet, IoT, 5G and cloud gaming. He also writes for the IBS Journal, European Communications and VanillaPlus. He supplies content for clients such as Colt Technology Services, Liquid Telecom, Mellanox, Apstra, NetFoundry and Tibco Software.