Explainable AI: How do AI models provide results?

By Aditya Abeysinghe

With the increased use of 'bot'-based programs, AI (Artificial Intelligence) has become an essential component of many business functions. AI-based software is costly and time-consuming to build because of the multiple training cycles involved. With such costly systems in place, an important question is whether the results of these AI models can be trusted. Explainable AI is the component that addresses this question: it explains how a model arrives at its results and why those results, and the model's inner processes, can be trusted.

The main disadvantage of most AI models is the hidden nature of their inner behavior. Even the developers of an AI model sometimes cannot justify how it behaves under different inputs. Analysts, however, need to explain to clients how a model produces its results under certain conditions. Therefore, a proper approach to explain how these ...
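As a minimal sketch of what such an explanation can look like in practice, the following Python example uses permutation feature importance, one common explainability technique: it shuffles one input feature at a time and measures how much the model's accuracy drops. The model, dataset, and library calls here are illustrative assumptions, not taken from the article.

# A minimal sketch of explaining a "black box" model with permutation
# feature importance. The RandomForestClassifier and the breast-cancer
# dataset are illustrative choices, not prescribed by the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an ordinary model whose inner behavior is hard to inspect directly.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy.
# A large drop means the model relies heavily on that feature, which
# gives an analyst a concrete, reportable reason for the model's results.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Report the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")

The design choice here is that the explanation is computed from the model's observed behavior alone, so the same code works for any classifier, which is exactly what an analyst needs when the model's internals cannot be inspected.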
