Artificial Intelligence (AI) systems are becoming increasingly prevalent in domains such as healthcare, autonomous vehicles, law, and finance. While these systems are transforming traditional approaches to real-world tasks, their decision-making processes remain opaque. With the widespread adoption of AI models, their transparency and interpretability have become growing concerns. The black-box nature of deep neural networks makes it difficult for users to understand the reasoning behind their decisions, which can have significant consequences in high-stakes industries. The emerging field of Explainable Artificial Intelligence (XAI) seeks to address these issues. Explainable AI comprises a set of processes and methods that allow human users to understand the predictions made by AI models, providing greater transparency throughout the decision-making process. Although current XAI techniques offer visual explanations that highlight the feature attributions of an input image, users may still find these explanations difficult to comprehend. It is therefore crucial to develop AI algorithms that are less opaque and that provide explanations readily understandable to users.
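To illustrate the kind of visual explanation referred to above, the following is a minimal sketch of gradient-based feature attribution (a vanilla saliency map), assuming PyTorch and torchvision are available; the model, weights, and input are placeholders chosen for illustration and do not represent the techniques proposed in this work.

```python
import torch
import torchvision.models as models

# Illustrative model: an untrained ResNet-18 (in practice a trained model
# and a real image would be used).
model = models.resnet18(weights=None)
model.eval()

# Placeholder input image; gradients are tracked so attributions can be
# computed with respect to the input pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Back-propagate the top-class score to the input pixels.
logits[0, top_class].backward()

# Per-pixel attribution: maximum absolute gradient across colour channels,
# yielding a heat map over the input image.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```

Heat maps of this kind are what current XAI techniques typically present to users; the difficulty addressed in this work is that such raw attribution maps are often hard for non-expert users to interpret.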
This research proposes a novel set of techniques designed to enhance the interpretability of AI models while improving human understandability. By focusing on the development of transparent AI systems, this work aims to bridge the gap between complex model predictions and user comprehension, fostering greater trust and reliability in AI applications.