Ethics in Artificial Intelligence

Artificial Intelligence (AI) is changing the way we live. It is transforming diagnosis and treatment throughout healthcare, improving patient outcomes. Through robotics, it is unleashing new levels of productivity and quality in manufacturing; it is also accelerating drug discovery and improving road safety. As emerging technologies like ChatGPT are adopted ever more widely by individuals, the ethical and political implications of AI adoption are becoming increasingly important.

The benefits of AI are undoubtedly numerous, but if algorithms do not adhere to ethical guidelines, is it safe to trust and use their outputs? If the results are not ethical, or if a business has no way to ascertain whether they are, where is the trust? Where is the value? And how big is the risk?

A collaborative effort to design, implement, and refine ethical AI is more effective when it adheres to a number of ethical principles, including individual rights, privacy, non-discrimination, and non-manipulation. Zizo CEO Peter Ruffley discusses the ethical issues surrounding AI, trust, and the importance of robust data sets to support bias checking, for organisations on the verge of an enormous shift in technology.

Unlocking Pandora’s Box

Calls from technology leaders for the industry to hold fire on the development of AI are too late. With the advent of ChatGPT, everyone is now playing with AI – and individual employee adoption is outpacing businesses’ ability to adapt. Today there is often no way to tell whether work was done by a person or a machine, and managers frequently have no idea whether employees are using AI. With some employees now claiming to use these tools to hold down multiple full-time jobs, because tasks such as content creation and coding can be completed in half the time, companies need to get a handle on AI policies fast.

Leaving aside the ethical issues raised by individuals potentially defrauding their employers by not dedicating their full time to the job, today’s ChatGPT output may not, in itself, pose a huge risk. Chatbot-created emails and marketing copy should still be subject to the same levels of rigour and approval as manually produced content.

But this is the tip of a very fast-expanding iceberg. These tools are developing at a phenomenal pace, creating new, unconsidered risks every day. It is possible to get a chatbot to write Excel rules, for example, but with no way to demonstrate which rules have been applied or which data has been changed, can that data be trusted? With employees tending to hide their use of AI from employers, corporations are effectively blind to this fast-evolving business risk. And this is just the start. What happens when an engineer asks ChatGPT to compile a list of safety tasks? Or a lawyer uses the tool to check case law before giving a client an opinion? The potential for disaster is unlimited.

Risk Recognition 

ChatGPT is just one side of the corporate AI story. A growing number of businesses are also embracing the power of AI and Machine Learning (ML) to automate processes in areas such as healthcare and insurance. According to research commissioned by the Department for Digital, Culture, Media and Sport (DCMS), AI adoption by UK businesses is expected to reach 22.7% of companies by 2025, with around a third of UK businesses expected to have adopted at least one AI technology by 2040.

These technologies are hugely exciting. AI and deep learning solutions have demonstrated outstanding performance across a wide range of fields, including healthcare, education, fraud prevention, and autonomous vehicles.

But – and it is a large but – can businesses trust these decisions when there is no way to understand how the AI drew its conclusions? Where are the rigorous checks for accuracy, bias, privacy and reliability? To fully realise AI’s potential, tools need to be robust, safe, resilient to attack, and, critically, they must provide some form of audit trail to demonstrate how conclusions and decisions were made.
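As a rough illustration of what such an audit trail could look like in practice, the sketch below logs every prediction alongside its inputs, model version, and data sources so that a decision can be reviewed later. It is a minimal, hypothetical Python example; the function names, record fields, and the toy “model” are assumptions made for illustration, not a description of any particular product.

```python
# Minimal sketch of a decision audit trail: every prediction is appended to a
# log with its inputs, model version and data sources so it can be reviewed
# later. All names and fields here are illustrative assumptions.
import json
import time
import uuid


def predict_with_audit(model, features, model_version, data_sources, log_path="ai_audit.log"):
    """Run a prediction and append an audit record explaining the decision."""
    prediction = model(features)  # any callable stands in for a real ML model here
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "data_sources": data_sources,  # where the input/training data came from
        "inputs": features,
        "output": prediction,
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return prediction, record["decision_id"]


def approve_loan(applicant):
    """Toy scoring rule standing in for a real model, purely for illustration."""
    return applicant["income"] > applicant["loan_amount"] / 4


decision, decision_id = predict_with_audit(
    approve_loan,
    {"income": 42000, "loan_amount": 150000},
    model_version="credit-model-0.1",
    data_sources=["internal CRM extract", "credit bureau feed"],
)
print(decision, decision_id)
```

Even a simple record like this gives a reviewer something concrete to interrogate when a decision is challenged, which is precisely the ‘show your workings’ capability discussed below.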

A Trusted Relationship Requires Proof

Without this ability to ‘show your workings’, companies face a legal and corporate social responsibility (CSR) nightmare: what happens when bias and discrimination become embedded in decision-making because algorithms operate against the organisation’s diversity, equality, and inclusion strategy?

The Cambridge Analytica scandal highlighted the urgent need for AI-related regulation, yet the power of AI has since continued its frenetic evolution without any robust regulatory or governance steps being put in place.

Rather than calling for an unachievable slowdown in AI development, data experts must now work together to mitigate the risks and enable effective, trusted use of these technologies. It is incumbent upon them to develop technology that supports the safe and ethical operational use of AI. Data governance and data quality procedures must be in place to ensure that both the data feeding AI and ML activities and their outputs are accurate and accessible.
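To make the idea of data quality procedures concrete, the following minimal Python sketch runs three basic checks over a batch of records before they reach a model: completeness, plausible value ranges, and duplicate detection. The field names, ranges, and sample data are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of pre-model data quality checks: completeness, plausible
# ranges and duplicate detection. Fields and thresholds are illustrative.
from collections import Counter


def quality_report(records, required_fields, numeric_ranges):
    """Return simple data quality metrics for a list of dict records."""
    issues = {"missing": Counter(), "out_of_range": Counter(), "duplicates": 0}
    seen = set()
    for row in records:
        key = tuple(sorted(row.items()))
        if key in seen:
            issues["duplicates"] += 1
        seen.add(key)
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues["missing"][field] += 1
        for field, (low, high) in numeric_ranges.items():
            value = row.get(field)
            if value is not None and not low <= value <= high:
                issues["out_of_range"][field] += 1
    return issues


rows = [
    {"age": 34, "income": 52000},
    {"age": 34, "income": 52000},   # exact duplicate
    {"age": 212, "income": None},   # implausible age, missing income
]
print(quality_report(rows, required_fields=["age", "income"],
                     numeric_ranges={"age": (0, 120)}))
```

Checks of this kind do not make an AI system ethical on their own, but they are the foundation on which any trustworthy output has to rest.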

Collaboration

Providing essential transparency throughout the AI production pipeline requires the development of trustworthy components that enable businesses to understand how AI reached its conclusions, what sources were used, and why. Such ‘AI checking’ technology must also be inherently usable: a simple data governance and risk-monitoring framework that can alert users to bias, discrimination, or questionable data sources, and allow the entire AI process to be reviewed when needed.
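As one hypothetical example of the kind of alert such a framework might raise, the sketch below compares positive-outcome rates across groups of a protected attribute and flags the model for review if the gap exceeds a threshold. The attribute name, threshold, and sample data are assumptions made for illustration, and a single metric like this falls well short of a full fairness audit.

```python
# Minimal sketch of a bias alert: compare approval rates across groups and
# flag the model for review if the gap is too large. Attribute, threshold
# and data are illustrative assumptions only.
from collections import defaultdict


def demographic_parity_gap(decisions, group_key="gender", outcome_key="approved"):
    """Largest difference in approval rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision in decisions:
        totals[decision[group_key]] += 1
        positives[decision[group_key]] += int(decision[outcome_key])
    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates


decisions = [
    {"gender": "F", "approved": True}, {"gender": "F", "approved": False},
    {"gender": "M", "approved": True}, {"gender": "M", "approved": True},
]
gap, rates = demographic_parity_gap(decisions)
if gap > 0.2:  # review threshold chosen for illustration
    print(f"ALERT: approval rates differ by {gap:.0%} across groups: {rates}")
```

In a real governance framework an alert like this would feed a review workflow rather than a print statement, but the principle is the same: the system surfaces the problem instead of leaving it buried in the model.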

A simple tool that bridges the gap between domain experts and AI experts will make AI systems easier for companies to understand and trust, allowing them to embrace AI with confidence.

There is also a global need for collaboration and data sharing, both within and between organisations, to expand the data available and add context and accuracy to Internet-only information. This collaboration will help counter AI-generated bias and discrimination and, combined with AI’s “explainability”, provide organisations with the tangible business value they need.

Conclusion

These changes must, of course, take place while AI continues its extraordinary pace of innovation. Even with collaboration and trust-building technology on the agenda, the next few years will not be without risk. Mismanagement of AI usage at both the employee and corporate levels could lead to large-scale corporate failures.

Organisations must therefore develop robust strategies now to manage AI adoption and usage safely, with a strong focus on CSR and corporate risk. By adopting an ethical approach to AI, some organisations will not progress as fast as others that rush headlong into AI and ML, but they will be safeguarding their stakeholders and, quite possibly, protecting their business’s future.

Peter Ruffley is the CEO at Zizo
