Invisible discrimination: How artificial intelligence risks perpetuating historical prejudices

This year has seen a wave of new research into the unintended consequences of an artificial intelligence industry dominated by middle-class white men, teaching itself from unchecked and unregulated data sources.

New research published by the AI Now Institute at New York University in April 2019 concluded that AI is in the midst of a ‘diversity crisis’ that needs urgent attention: “Recent studies found only 18% of authors at leading AI conferences are women, and more than 80% of AI professors are men. This disparity is extreme in the AI industry: Women comprise only 15% of AI research staff at Facebook and 10% at Google […] For black workers, the picture is even worse. For example, only 2.5% of Google’s workforce (in AI) is black, while Facebook and Microsoft are each at 4%.”

But the level of gender and racial representation among those working in the industry is only part of the challenge – there is a growing body of evidence that AI algorithms themselves are unhealthily influenced by the discrimination and biases of the past.

Let’s take three examples. Firstly, Amazon had to take an automated recruitment robot out of service after it was found to be favouring male CVs over female ones for technical jobs. Secondly, Google had to adjust an algorithm that was defaulting its translations to the masculine pronoun. Our third example adds racial rather than gender bias: Joy Buolamwini, a researcher at the Massachusetts Institute of Technology, found that a facial analysis tool sold by Amazon would not recognise her unless she held a white mask up to her face.

An AI algorithm will diligently go about its task of recognising patterns in data more efficiently than any human – but the old adage “garbage in, garbage out” still applies in the digital world. Or, to update it clumsily for the 21st century: “if the data set contains biases, so will the conclusions”!

Our recruitment robot likely learned from a dataset of successful CVs for top engineers: it identified patterns and used those patterns to make recommendations. But given that over the past few decades we have seen more men working in technology, studying technology in higher education, and applying for engineering positions at the likes of Amazon, we should not be surprised that the data set had this bias built in.
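To make the mechanism concrete, here is a minimal sketch using synthetic data invented purely for illustration – the features, coefficients and threshold below are all assumptions of mine, not Amazon’s actual system:

```python
# Minimal sketch of how a screening model inherits historical bias.
# The data is synthetic: past hiring decisions correlate with gender,
# so the model dutifully learns that proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Features: years of experience, plus a gender flag (1 = male).
years_experience = rng.normal(5, 2, n).clip(0)
is_male = rng.integers(0, 2, n)

# Historical "hired" label: driven by experience, but with a biased
# bump for male candidates reflecting past human decisions.
score = 0.5 * years_experience + 1.5 * is_male + rng.normal(0, 1, n)
hired = (score > 4).astype(int)

X = np.column_stack([years_experience, is_male])
model = LogisticRegression().fit(X, hired)

# For identical experience, the model now rates the male candidate
# markedly more "hireable" - garbage in, garbage out.
print("female:", model.predict_proba([[6.0, 0]])[0, 1])
print("male:  ", model.predict_proba([[6.0, 1]])[0, 1])
```

Note that simply deleting the gender column rarely solves the problem in practice: correlated proxies in a real CV – word choices, hobbies, universities – can let the model reconstruct much the same signal.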

AI translation tools will favour the male pronoun if the data sets they feed from do the same; and facial recognition technology fed millions of images of white faces will, guess what, become more adept at identifying those faces than others.
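This kind of failure is easy to miss because a single headline accuracy figure can hide it. Here is a minimal sketch of the per-group audit that exposes the gap – the model outputs and group labels are toy placeholders, not real benchmark results:

```python
# Minimal sketch of a disparity audit: measure a model's accuracy
# separately for each demographic group in a labelled test set.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} over a group-tagged test set."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Toy numbers shaped like Buolamwini's finding: a respectable overall
# score masking a much weaker result for darker-skinned faces.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]
groups = ["lighter"] * 4 + ["darker"] * 4
print(accuracy_by_group(y_true, y_pred, groups))
# {'lighter': 1.0, 'darker': 0.5} - the headline average hides the gap
```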

So – what to do about this? Firstly, of course, we need to push for a more diverse AI industry in terms of gender and race, but we must recognise that even reaching a balance more reflective of society does not go far enough. We need to take steps now to correct our data sets and/or engineer specifically to correct the biases of the past. By definition we cannot now identify every possible negative unintended consequence of unchecked AI – but we can make a very good start! It would not have taken much foresight in our recruitment bot example to see the risk and engineer for it…

Coding specific controls into AI recommendations or manually changing data sets may seem counter-intuitive or even undesirable, but I would argue that positive discrimination, when applied selectively, can be an absolute force for good – whether in the boardroom or deep in an AI algorithm. Even the random play function on your iPhone had to be specifically coded to be less random so that it would appear more random to users – I am calling for a variation on that theme: engineering into our AI correctives for the human failures of the past.
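One of the simplest such correctives is to reweight the training data so that an under-represented group is not drowned out. This sketch shows the idea – one technique among many, and it assumes group labels are available:

```python
# Minimal sketch of one corrective: weight training samples so each
# demographic group contributes equally, rather than in proportion to
# its (skewed) share of the historical data.
from collections import Counter

def balanced_weights(groups):
    """Weight each sample by n / (num_groups * group_count)."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# A data set where 90% of examples come from one group:
groups = ["male"] * 9 + ["female"] * 1
print(balanced_weights(groups))
# male samples weigh ~0.56 each; the lone female sample weighs 5.0

# Most training APIs accept such weights directly, e.g. scikit-learn:
# model.fit(X, y, sample_weight=balanced_weights(groups))
```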

As to “how” we do this – there have been calls for increased regulation and government policy in the area of artificial intelligence, but whilst that may be part of a solution, I worry about further unintended consequences: a rapidly growing industry with a global “war for talent” being stifled in regions where heavy regulation is applied. It is not in the interest of any of the companies involved in our three examples above to allow these issues to go uncorrected; I would like to think we are collectively intelligent enough to correct our simulated intelligence without regulation telling us how to do it.

So, for the time being at least, I would prefer we push in parallel for greater equality in tech companies across senior management and AI engineering, and for greater public awareness of, and dialogue about, these issues in the tech community. We all need to be cognisant of the risk that the AI algorithms blending ever further into our day-to-day lives will propagate historic biases into our future.


Belinda Finch is Chief Information Officer for Centrica Group Functions. She is passionate about dispelling the myth that there are significant barriers to entry for women in technology, and is an advocate of using exercise to increase productivity, efficiency and peace of mind. She was the only girl COBOL programmer in her company in the mid-nineties, trying to fix various millennium bug issues, so is a closet geek. She is originally from Cardiff in South Wales but currently lives near Newbury, Berkshire.
