
Explainable AI: How do AI models provide results?

By Aditya Abeysinghe

With the increased use of 'bot' based programs at present, AI (Artificial Intelligence) has become an essential component of many business functions. AI-based software is costly and time consuming to build because of the multiple training cycles involved. With such costly additions to businesses, an important question that arises is whether the results of these AI models can be trusted. Explainable AI is a set of methods used to explain why a model's results and inner processes can be trusted, and how the model arrives at those results.

The main disadvantage of most AI models is the hidden nature of their inner behavior. Even the developers of an AI model sometimes cannot justify how it behaves under different inputs. However, analysts who interpret results from these models need to explain clearly to clients how the models produce those results under given conditions. Therefore, a proper approach to explaining how these models behave is required.

What are the benefits of explainable AI?

The main benefit of explainable AI is trust in AI models. Many customers are still reluctant to use AI models in their processes because these models often cannot be controlled manually, expert knowledge is often vendor dependent, and maintenance costs and risks need to be monitored. However, if the models and their risks can be explained to customers, then customers can trust these models enough to use them in their businesses.

With trust comes speed of development. AI models used in many large-scale companies are a team effort across several business segments. Politics between these teams, and between teams and clients, is one of the main reasons these projects become lengthy. When the models themselves can be explained to each team and client, such issues can be mitigated and the overall time taken to develop the models can be reduced.

When AI models can be explained, multiple models can be evaluated on metrics such as accuracy, performance and drift of results between them. The best model for production can then be selected, instead of relying on a model whose metrics are hard to evaluate. This improves the quality of products and minimizes issues, because each model can be assessed separately.
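
As an illustration, the sketch below compares two candidate models on a shared held-out test set. It assumes Python with scikit-learn, and the synthetic dataset and the two candidate models are hypothetical choices, made only to keep the example self-contained.

# Minimal sketch: training two candidate models and comparing their
# accuracy on the same held-out data before choosing one for production.
# The dataset and candidates are illustrative, not from the article.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {accuracy:.3f}")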


Techniques used

A common technique is to describe a model's behavior using flowcharts or other diagrams. Not all AI models can be described in this manner: large AI models with thousands of lines of code or logic cannot be explained with diagrams. Simpler models, however, can be.
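
For example, a shallow decision tree is one model whose behavior can be drawn as a branching diagram. The sketch below shows one illustrative way to do this, assuming Python with scikit-learn; the iris dataset is an assumption made only to have something concrete to fit.

# Minimal sketch: printing a small decision tree's branching rules as an
# indented, flowchart-like diagram. The dataset choice is illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Each indented line is a decision node, so a reader can trace the exact
# path that leads to any prediction.
print(export_text(tree, feature_names=list(iris.feature_names)))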

For AI models based on classification, methods such as feature selection can be used to identify the attributes that best explain the training data. Based on the importance of each feature, multiple models can be trained, and the accuracy of each can be used to find the best model.
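
The sketch below illustrates this idea with a univariate feature selection technique, again assuming Python with scikit-learn; the wine dataset and the use of ANOVA F-scores are assumptions made to keep the example concrete, not choices named in the article.

# Minimal sketch: ranking features by how strongly they separate the
# classes, which helps explain which attributes a model relies on.
from sklearn.datasets import load_wine
from sklearn.feature_selection import SelectKBest, f_classif

data = load_wine()
selector = SelectKBest(score_func=f_classif, k=5)
selector.fit(data.data, data.target)

# Sort features by their ANOVA F-scores; higher scores mean the feature
# separates the classes more strongly.
ranked = sorted(zip(data.feature_names, selector.scores_),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.1f}")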

Image Courtesy: https://www.blog.adva.com

 
