Recently, deep learning has greatly accelerated the development of artificial intelligence, and explainable AI has become a new topic in the field. For machine learning, and especially for deep learning, explainability is a major challenge: deep neural networks are black boxes, and AI algorithms usually cannot explain the logic behind each decision when providing a solution. Such opaque decisions are hard to trust, especially in military, medical and financial security applications. Anomaly detection refers to the problem of finding anomalous instances in a dataset. Thus, in this paper we consider using the Shapley value to explain decision tree algorithms in machine learning.
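
As a minimal sketch of the general idea (not the authors' actual method or experimental setup), Shapley-value explanations for a tree-based anomaly detector can be computed with the open-source shap package; the dataset, the choice of IsolationForest and all parameters below are illustrative assumptions.

import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 4))      # mostly "normal" points
X[:5] += 6                         # a few injected anomalies (assumed toy data)

model = IsolationForest(random_state=0).fit(X)

# TreeExplainer decomposes each anomaly score into per-feature contributions
# (Shapley values), so an analyst can see which features drove a decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

print(shap_values.shape)           # (200, 4): one contribution per feature
print(shap_values[0])              # explanation for the first sample

Features with large positive or negative contributions for a given sample indicate which attributes pushed the detector toward flagging it as anomalous, which is the kind of per-decision explanation the abstract refers to.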
Original language: English
Pages (from-to): 355-360
Journal: ПРОЦЕССЫ УПРАВЛЕНИЯ И УСТОЙЧИВОСТЬ
Volume: 7
Issue number: 1
State: Published - 2020
Externally published: Yes

    Research areas

  • anomaly detection, decision tree, explainable AI, Shapley value

ID: 78596769