Description:
© 2022 IEEE. Today, artificial intelligence-based systems, especially machine learning and deep neural network algorithms, make decisions that directly affect people's lives, from healthcare to autonomous vehicles and even the defense sector. Yet algorithms such as deep neural networks, which achieve high predictive accuracy but offer low explainability and trust, raise problems of ethics and interpretability. Explainable Artificial Intelligence (XAI) is a branch of artificial intelligence that produces high-quality, interpretable, end-user-oriented, and explainable tools, techniques, and algorithms. This study presents general information about XAI.