Proving Reliability of Machine Learning using Explainable AI

Dene Brown

Software Engineer, SysAda Limited

Machine learning is increasingly used in systems that involve human interaction and decision making, where outcomes can affect people's health or safety. Ensuring that these systems are safe and reliable is an important area of research.

Many ML-based systems are built on black-box models. A model is referred to as a black box when it is unclear how it arrives at its outputs and which input features are important. Critical systems cannot rely on black-box models without some understanding of their decision making, and this has led to the field of Explainable AI (XAI). Explainable AI seeks to provide insight into the decision making of black-box ML models.

This presentation will introduce explainable AI and put forward the reasons why it is needed to understand black-box models. It will cover the insights explainable AI can provide into the reasoning of black-box models, such as feature importance and accumulated local effects.
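
As a flavour of the kind of insight these techniques give, the minimal sketch below (not part of the presentation) computes permutation feature importance with scikit-learn; the dataset and model are illustrative placeholders chosen only for the example.

# Minimal sketch: permutation feature importance with scikit-learn.
# The dataset and model here are illustrative placeholders, not from the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black-box" model: accurate, but its internal reasoning is opaque.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the held-out score drops; a large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in sorted(zip(X.columns, result.importances_mean),
                         key=lambda item: item[1], reverse=True)[:5]:
    print(f"{name}: {mean:.3f}")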

To demonstrate how explainable AI works in practice, the presentation will give an overview of several model-agnostic explainable AI methods, such as LIME, SHAP, and UnRAvEL. Each of these methods uses a different type of interpretable model to achieve its objective, but they share some fundamental concepts in how they do so. The presentation will also examine some of the strengths and weaknesses of these methods.
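
As an illustration of the model-agnostic approach, the sketch below shows one typical way the LIME library can be used on a tabular classifier; the dataset, model, and parameters are assumptions made purely for the example and are not taken from the presentation.

# Minimal, illustrative sketch of LIME on a tabular classifier; the dataset,
# model, and settings here are placeholders, not from the presentation.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# LIME fits a simple, interpretable (linear) model around a single prediction
# by perturbing the input and observing how the black-box output changes.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top local feature contributions for this one instance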

Finally, the presentation will look at potential routes by which explainable AI could be used to demonstrate the reliability of machine learning models, and how such models can come to be trusted.

About Dene Brown

Dene Brown has worked on critical software projects for over twenty-five years. This has taken in nuclear power station control; the Tornado and Typhoon military jets; several civil aircraft systems; numerous defence projects; air traffic control; and even a Swiss bank. This has given him a solid insight into the state of software development in the UK defence and aerospace industry.

In 2023 he completed an MSc in Artificial Intelligence at Ulster University, during which he began researching Explainable AI and its potential use in the verification and certification of reliable AI-based systems. As part of the course, he completed a project titled ‘Exploring Trade-offs in Explainable AI’. He continues to study this topic, taking in workshops and conferences, along with other approaches to developing reliable AI software.
