This year's Edelman Trust Barometer revealed the largest-ever global drop in trust across all four key institutions – government, business, media and NGOs. One sector, however, has so far weathered the anti-establishment storm: tech. Edelman found that 76 percent of the global general population continues to trust the sector.

But will this trust survive the widespread adoption of artificial intelligence? AI, after all, is known for being a black box. It can easily lead to a ‘computer says no’ scenario.

New regulations in Europe could soon bring this to the fore. The General Data Protection Regulation (GDPR) contains specific guidance on the rights of individuals when it comes to their data. Point 1 of Article 22 of the GDPR states:

“1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

The Article continues by stating that the data controller must implement suitable measures to “safeguard the data subject’s rights and freedoms and legitimate interests, or at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”

In short, consumers are entitled to clear-cut reasons for any decision that adversely affects them. For model-based decision-making, that means the model must be able to demonstrate the drivers of negative scores. This is a fairly simple process for scorecard-based credit decision models, but when you add AI to the mix it becomes more complicated.
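
A traditional scorecard makes this easy because the score is just a sum of points, so the attributes earning the fewest points fall straight out as reason codes for a decline. The sketch below is a minimal illustration of that idea; the attributes, bins and point values are hypothetical, not those of any real lender's model.

```python
# Minimal, hypothetical points-based scorecard with reason-code extraction.
# Each attribute maps to (upper bound, points) bins; fewer points = weaker attribute.
SCORECARD = {
    "utilisation": [(0.30, 55), (0.70, 35), (1.01, 10)],
    "months_on_book": [(12, 15), (48, 30), (999, 45)],
    "recent_delinquencies": [(0, 50), (2, 25), (99, 5)],
}

def score(applicant):
    """Return the total score plus the per-attribute points behind it."""
    breakdown = {}
    for attribute, bins in SCORECARD.items():
        for upper, points in bins:
            if applicant[attribute] <= upper:
                breakdown[attribute] = points
                break
    return sum(breakdown.values()), breakdown

total, breakdown = score(
    {"utilisation": 0.85, "months_on_book": 9, "recent_delinquencies": 1}
)
# Reason codes: the attributes contributing the fewest points, i.e. the main
# drivers of a low score, which can be reported back to a declined applicant.
reasons = sorted(breakdown, key=breakdown.get)[:2]
print(total, reasons)
```

An AI model offers no such additive breakdown to read off, which is why explaining its decisions takes more work.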

AI-based decision-making also carries the potential for discrimination against individuals, based on factors such as geographic location; combating such discrimination is an important part of the Digital Single Market being planned by the European Union.

To ensure continued trust in the tech sector at a time of great public scepticism, businesses that deploy AI in their decision-making processes must be accountable and transparent.

Explainable AI

That’s where Explainable AI comes in. This is a field of science that seeks to remove the black box and deliver the performance capabilities of AI while also providing an explanation as to how and why a model derives its decisions.

There are several ways to explain AI in a risk or regulatory context:

  1. Scoring algorithms that inject noise and score additional data points around the actual data record being computed, to observe which features are driving the score in that part of decision phase space. This technique is called Local Interpretable Model-agnostic Explanations (LIME), and it involves manipulating data variables in infinitesimal ways to see what moves the score the most; a minimal sketch of the idea follows this list.
  2. Models that are built to express interpretability on top of the inputs of the AI model. Examples here include And-Or Graphs (AOG), which try to associate concepts in deterministic subsets of input values, such that if the deterministic set is expressed, it can provide an evidence-based ranking of how the AI reached its decision. These are often used, and are most easily described, in making sense of images.
  3. Models that change the entire form of the AI to make the latent features exposable. This approach allows reasons to be driven into the latent (learned) features internal to the model. It requires rethinking how to design an AI model from the ground up, with a view to explaining the latent features that drive outcomes, which is entirely different from how native neural network models learn. This remains an area of research, and a production-ready version of Explainable AI like this is several years away.
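
To make the first of these approaches concrete, below is a from-scratch sketch of the LIME idea rather than any production implementation. It assumes a black-box model exposing a scikit-learn-style predict_proba and illustrative feature names: it perturbs a single record, scores the perturbations, weights them by proximity to the original, and fits a local linear surrogate whose coefficients indicate which features are moving the score in that neighbourhood.

```python
# Sketch of the LIME idea: explain one record by fitting an interpretable
# surrogate to the black-box model's behaviour in a small neighbourhood.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(black_box_model, record, feature_names,
                      n_samples=5000, scale=0.1):
    record = np.asarray(record, dtype=float)
    # 1. Inject noise: sample additional data points around the actual record.
    perturbations = record + np.random.normal(0, scale, size=(n_samples, record.size))
    # 2. Score the perturbed points with the black-box model (assumed predict_proba).
    scores = black_box_model.predict_proba(perturbations)[:, 1]
    # 3. Weight each sample by its proximity to the original record.
    distances = np.linalg.norm(perturbations - record, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))
    # 4. Fit a simple linear surrogate locally; its coefficients approximate
    #    how each feature drives the score in this part of phase space.
    surrogate = Ridge(alpha=1.0).fit(perturbations, scores, sample_weight=weights)
    return sorted(zip(feature_names, surrogate.coef_),
                  key=lambda fc: abs(fc[1]), reverse=True)
```

The top entries of the returned list, for example when called as local_explanation(model, record, ["income", "utilisation", "months_on_book"]), are candidate reasons for that individual decision.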

Ultimately, businesses need to convince their customers that they can trust AI, regardless of the failures that are likely to occur along the way. It may seem like we’re entering a world where machines do all the thinking, but we need people to be able to check the machines’ logic and get the algorithms to “show their work,” as maths teachers are so fond of saying.

The same applies to machine learning, incidentally. Machine learning gobbles up data, but that means bad data could create bad equations that would lead to bad decisions. Most machine learning tools today are not good enough at recognising limitations in the data they’re devouring. The responsible use of machine learning demands that data scientists build in explainability and apply governance processes for building and monitoring models.
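
What such monitoring looks like in practice varies, but one widely used governance check is the Population Stability Index (PSI), which flags when the data a deployed model is scoring has drifted away from the data it was built on. The sketch below is illustrative; the ten-bin layout and the 0.25 alert threshold are conventional rules of thumb rather than a standard.

```python
# Population Stability Index: compare the distribution of a feature (or score)
# at model-build time with what the model is seeing in production.
import numpy as np

def psi(expected, actual, bins=10):
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty bins before taking logs.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Simulated example: production data has shifted relative to the build sample.
drift = psi(np.random.normal(0, 1, 10_000), np.random.normal(0.8, 1, 10_000))
if drift > 0.25:  # a common rule-of-thumb threshold for significant shift
    print(f"PSI {drift:.2f}: investigate the data before trusting the model's decisions")
```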

That’s the challenge before businesses today. To deploy AI, and enjoy the benefits that come with it, they must get customers to accept the reasoning behind decisions that affect their future.

Dr Scott Zoldi is Chief Analytics Officer at FICO. He holds 35 patents related to artificial intelligence. Scott blogs at www.fico.com/blogs.

 
