Early Alzheimer's Detection and Prediction Using Explainable Artificial Intelligence (XAI) Techniques for Convolutional Neural Networks (CNNs)
Abstract
This paper focuses on Explainable Artificial Intelligence (XAI) techniques for Convolutional Neural Networks (CNNs) to enhance the interpretability and transparency of model predictions. Leveraging methods such as Grad-CAM, Saliency Maps, and Layer-wise Relevance Propagation (LRP), the work explains CNN decisions by identifying the regions of an input image that contribute most to a prediction. Grad-CAM generates heatmaps by weighting convolutional feature maps with class-specific gradients, while Saliency Maps highlight critical pixels via the gradient of the class score with respect to the input. LRP decomposes the output prediction into pixel-level relevance scores by propagating relevance backward through the network. The proposed algorithm integrates these techniques to provide complementary visual explanations, building trust and transparency. Key applications include medical diagnosis and autonomous systems, where understanding model decisions is critical in sensitive, high-stakes scenarios. The framework thereby supports robust decision-making and promotes the responsible deployment of CNN-based systems.
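To make the Grad-CAM step concrete, the sketch below computes a class-activation heatmap for a tiny illustrative CNN in PyTorch. The model, input, and layer choices are placeholders (not the paper's Alzheimer's model): a forward hook captures the feature maps of the last convolutional layer, the class score is backpropagated to obtain per-channel gradient weights, and the ReLU of the weighted sum of feature maps gives the heatmap.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny illustrative CNN (a placeholder, not the paper's model):
# one conv layer followed by pooling and a 2-class linear head.
conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
model = nn.Sequential(
    conv,
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

# Capture the conv feature maps during the forward pass and keep
# their gradient so we can read it after backpropagation.
feats = {}
def save_activation(module, inputs, output):
    output.retain_grad()
    feats["maps"] = output
conv.register_forward_hook(save_activation)

# Dummy 28x28 grayscale input standing in for a brain-scan slice.
image = torch.rand(1, 1, 28, 28)
score = model(image)[0].max()   # score of the predicted class
score.backward()                # gradients of the score w.r.t. feature maps

A = feats["maps"]                                   # (1, 8, 28, 28)
weights = A.grad.mean(dim=(2, 3), keepdim=True)     # GAP of gradients per channel
cam = torch.relu((weights * A).sum(dim=1)).squeeze(0).detach()  # (28, 28) heatmap
print(cam.shape)
```

In practice the heatmap is normalized to [0, 1], upsampled to the input resolution, and overlaid on the original scan; the same hook-and-backward pattern extends to any convolutional layer of a real network.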