AI Mimics Human Intelligence but Not Human Culpability?

We teach AI systems to process information with the intellectual acumen of humans (perhaps even surpassing some humans), yet we do not reprimand them for their mistakes. Naturally, the blame is placed on the humans who developed the artificial intelligence technology, and perhaps as a result, AI will never be totally independent. Like a child, AI learns and makes decisions, but it has to be monitored by its creators to ensure it does not go rogue.

How do we respond when an unsupervised machine learning AI is left to its own devices and makes mistakes? For many, the obvious answer is that the developer of the technology should be held accountable, but when we allow AI to make its own decisions unmonitored, are we giving it a responsibility it does not have the means to bear? A machine must learn its lesson as a human does if we are to progress further into the widespread implementation of AI technology.

Integrating explanation systems

For AI to truly mimic human intelligence and thinking, it would have to be able to explain its actions, but this is a complicated matter. Integrating explanations into an AI system would require substantial data and training, both in its development and in its application. AI systems are designed to manage tasks in a more efficient and scientific manner than humans, so an AI translating its workings into tangible explanations accessible to humans could delay operations and slow its productivity. The key question is: how can we make AI accountable without sacrificing the quality of its operations?

The absence of accountability and second-guessing is a major difference between humans and AI, and it gives AI the edge in terms of efficiency. But in a human-centric world, an entity capable of making decisions is naturally assumed to be accountable for those decisions. For some manufacturers there is a fear surrounding explanation, as they believe the inner workings of their systems could be revealed, giving competitors access to their golden secrets. Total transparency about the design behind the AI is not necessary, though; rather, transparency about the outcomes of decisions should be prioritised. For example, an AI system should be able to explain why a driverless car crashed without giving away the coding secrets behind the system. An explanation such as 'the wall was incorrectly labelled as road' should be provided so that future mistakes can be avoided.
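To make this concrete, below is a minimal sketch in Python of what an outcome-level explanation record might look like. The PerceptionDecision class, its field names and the incident values are illustrative assumptions, not taken from any real driverless-car system; the point is simply that a system can log why it acted without exposing its underlying model.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PerceptionDecision:
    """Outcome-level record of what the system decided and why (hypothetical)."""
    detected_object: str        # what the perception model concluded it saw
    true_object: Optional[str]  # ground truth, if later established
    confidence: float           # model's confidence in its label
    action_taken: str           # what the vehicle did as a result

    def explain(self) -> str:
        """Return a human-readable explanation without exposing model internals."""
        msg = (f"The object ahead was labelled '{self.detected_object}' with "
               f"{self.confidence:.0%} confidence, so the vehicle chose to "
               f"'{self.action_taken}'.")
        if self.true_object and self.true_object != self.detected_object:
            msg += (f" It was later identified as '{self.true_object}', "
                    f"so the label was incorrect.")
        return msg


# Hypothetical incident mirroring the 'wall labelled as road' example above.
incident = PerceptionDecision(
    detected_object="road",
    true_object="wall",
    confidence=0.91,
    action_taken="continue forward",
)
print(incident.explain())
```

An outcome log of this kind says nothing about the model's architecture or training data, yet it gives investigators and regulators enough to understand what went wrong.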

The subject of AI accountability is not reserved for the speculation of the unknowing public; the matter has been discussed and deliberated on at a higher level. For example, German EU legislator Jan Albrecht believes that explanation systems are essential if the public is to accept artificial intelligence. If we do not understand why AI makes the decisions it does, or what it has the power to do, people will be far too afraid to embrace the technology. Albrecht believes there must be someone who is in control of the technology. Explanations could also help developers identify biases: an AI system might, for instance, only look at men when collating data on scientific endeavours, and this bias could be eradicated once the developer receives an explanation.
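As a rough illustration of how explaining the training data itself could surface this kind of bias, the Python sketch below counts how each gender is represented in a hypothetical set of training records. The records, field names and warning threshold are invented for illustration only.

```python
from collections import Counter

# Hypothetical training records for a system collating data on scientific
# endeavours; names, fields and values are assumptions for illustration only.
training_records = [
    {"name": "A. Turing", "gender": "male", "field": "computer science"},
    {"name": "M. Curie", "gender": "female", "field": "physics"},
    {"name": "I. Newton", "gender": "male", "field": "physics"},
    {"name": "C. Darwin", "gender": "male", "field": "biology"},
]

counts = Counter(record["gender"] for record in training_records)
total = sum(counts.values())

for gender, count in counts.items():
    print(f"{gender}: {count} records ({count / total:.0%})")

# A crude flag: warn if any group falls below a chosen share of the data.
THRESHOLD = 0.3  # assumed value, purely for illustration
for gender, count in counts.items():
    if count / total < THRESHOLD:
        print(f"Warning: '{gender}' is under-represented in the training data.")
```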

The future will reveal much more about AI and what we should expect from it. Questioning how 'human' AI is, and whether explanations should be required from it, is entirely reasonable, as we are still in the process of adapting to AI and deciphering its place within our world.

What about criminal responsibility?

Placing liability with developers makes sense while they remain involved in the development of the AI, but in the not-so-distant future, autonomous machines may have the agency and intellectual freedom of any human adult. A parent is responsible for a child until they come of legal age to care for themselves, and if we see developers as parents, their AI offspring may fly the nest in the near future.

AI is not yet at the stage where we allow it to exist independently, but as we test the technology further, our confidence may increase to the point where we let it run freely and make vital decisions for us. To some degree, with the use of unsupervised learning, there are already examples of relative AI independence, but these sit within testing centres where people can monitor the system. Unsupervised learning only gives an AI system a narrow range of focused abilities, such as identifying potential customers for companies based on data.
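For readers curious what that looks like in practice, below is a minimal sketch of unsupervised customer segmentation using k-means clustering, assuming scikit-learn is available. The customer figures are invented purely to illustrate the technique: no labels are supplied, and the algorithm simply groups similar customers together.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer features: [annual spend (thousands), visits per month].
# The values are made up purely to illustrate the technique.
customers = np.array([
    [1.2, 1], [0.8, 2], [1.5, 1],      # low-spend, infrequent
    [9.5, 8], [11.0, 10], [10.2, 9],   # high-spend, frequent
    [5.0, 4], [4.5, 5], [5.5, 4],      # mid-range
])

# Unsupervised learning: no labels are given; the model groups similar rows.
model = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = model.fit_predict(customers)

for customer, segment in zip(customers, segments):
    print(f"spend={customer[0]:.1f}k, visits={customer[1]:.0f} -> segment {segment}")
```

The narrowness is visible here: the system can group customers, but it has no notion of why the grouping matters or what should be done with it.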

The issue of liability will become paramount if AI is ever free to choose what it becomes involved in without concern for morality or risk to human life. Many will envisage RoboCop and his flavour of vigilante justice when thinking about the potential for autonomous technology going rogue, and although this may be an exaggerated image of what is to come, we do not know quite how close to the truth it could be.

Although AI technology is often used to detect criminal activity (e.g. intelligent sensors in packaging to identify illegal items), AI is yet to adopt the nuanced moral codes of humankind, and therefore an AI bot or system may commit crimes.

AI robots have been known to commit what humans would consider a crime. For instance, in 1981 a Japanese man working in a motorcycle factory was killed by an AI robot working in his vicinity. The robot incorrectly identified the man as a threat and pushed him into an active machine. It then continued working, as its perceived enemy had been eliminated.

Although this incident happened many years ago, it raises the issue of accountability and whether the developer could be seen as responsible. There appears to have been no intent to harm on the developer's part, and therefore the intent lay with the robot. How do we manage criminal activity if it is committed by a non-human entity absent of empathy? Perhaps the only answer is to train artificial intelligence more thoroughly on human behaviours and emotions. We cannot threaten to reprimand a robot with jail time, as this will mean little to it unless it is trained to feel regret or accountability for its actions.


Andrew McLean is the Studio Director at Disruptive Live, a Compare the Cloud brand. He is an experienced leader in the technology industry, with a background in delivering innovative & engaging live events. Andrew has a wealth of experience in producing engaging content, from live shows and webinars to roundtables and panel discussions. He has a passion for helping businesses understand the latest trends and technologies, and how they can be applied to drive growth and innovation.
