EXPLAINABLE DEEP LEARNING: ENHANCING TRUST IN AI-BASED DECISION SYSTEMS

Authors

  • Mr. Pankaj B. Devre, Dr. Deepika Shekhawat

DOI:

https://doi.org/10.25215/9371832592.13

Abstract

The rapid adoption of deep learning models in critical fields such as healthcare, finance, autonomous systems, and criminal justice has greatly improved predictive accuracy and automation. However, the opacity of deep neural networks raises serious concerns about transparency, accountability, fairness, and trust. These models often operate as black boxes, offering little or no insight into their decision-making process, and are therefore not readily accepted in high-stakes applications where decisions must be explained. Explainable Deep Learning (XDL) has emerged as an important field of research aimed at making deep learning models more interpretable and explainable to humans. Explainability methods aim to reveal the reasoning behind model predictions, allowing users to assess reliability, detect bias, and verify adherence to ethical and regulatory principles. This paper analyzes how explainable deep learning can be used to increase trust in AI-based decision systems.
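To make the idea of an explainability method concrete, the sketch below shows gradient-based saliency, one common XDL technique: the gradient of the predicted class score with respect to the input indicates which input features most influenced the prediction. This is a minimal illustrative example, not the method of the paper; the PyTorch model and randomly generated input are hypothetical placeholders.

```python
# Minimal sketch of gradient-based saliency, a common explainability method.
# The model and input are hypothetical placeholders, not from the paper.
import torch
import torch.nn as nn

# Stand-in for a trained deep network (10 input features, 2 classes).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# One input example; requires_grad lets us backpropagate to the input.
x = torch.randn(1, 10, requires_grad=True)

# Forward pass, then backpropagate from the predicted class score.
scores = model(x)
predicted_class = scores.argmax(dim=1).item()
scores[0, predicted_class].backward()

# The gradient magnitude per feature serves as a saliency score:
# features with larger gradients influenced the prediction more.
saliency = x.grad.abs().squeeze()
print(saliency)
```

A saliency vector like this can be shown to a domain expert alongside the prediction, letting them judge whether the model relied on sensible features, which is one practical way explainability supports trust.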

Published

2025-12-05