Standard

Explainable AI : Using Shapley value to explain complex anomaly detection ML-based systems. / Zou, Jinying; Petrosian, Ovanes.

Machine Learning and Artificial Intelligence : Proceedings of MLIS 2020. ed. / Antonio J. Tallon-Ballesteros; Chi-Hua Chen. IOS Press, 2020. pp. 152-164 (Frontiers in Artificial Intelligence and Applications; Vol. 332).

Research output: Publications in books, reports, collections, conference proceedings › Article in conference proceedings › Peer-reviewed

Harvard

Zou, J & Petrosian, O 2020, Explainable AI: Using Shapley value to explain complex anomaly detection ML-based systems. in AJ Tallon-Ballesteros & C-H Chen (eds), Machine Learning and Artificial Intelligence : Proceedings of MLIS 2020. Frontiers in Artificial Intelligence and Applications, vol. 332, IOS Press, pp. 152-164, 2020 International Conference on Machine Learning and Intelligent Systems, MLIS 2020, Virtual, Online, Korea, Republic of, 25/10/20. https://doi.org/10.3233/FAIA200777

APA

Zou, J., & Petrosian, O. (2020). Explainable AI: Using Shapley value to explain complex anomaly detection ML-based systems. In A. J. Tallon-Ballesteros, & C-H. Chen (Eds.), Machine Learning and Artificial Intelligence : Proceedings of MLIS 2020 (pp. 152-164). (Frontiers in Artificial Intelligence and Applications; Vol. 332). IOS Press. https://doi.org/10.3233/FAIA200777

Vancouver

Zou J, Petrosian O. Explainable AI: Using Shapley value to explain complex anomaly detection ML-based systems. In Tallon-Ballesteros AJ, Chen C-H, editors, Machine Learning and Artificial Intelligence : Proceedings of MLIS 2020. IOS Press. 2020. p. 152-164. (Frontiers in Artificial Intelligence and Applications). https://doi.org/10.3233/FAIA200777

Author

Zou, Jinying ; Petrosian, Ovanes. / Explainable AI : Using Shapley value to explain complex anomaly detection ML-based systems. Machine Learning and Artificial Intelligence : Proceedings of MLIS 2020. Editor / Antonio J. Tallon-Ballesteros ; Chi-Hua Chen. IOS Press, 2020. pp. 152-164 (Frontiers in Artificial Intelligence and Applications).

BibTeX

@inproceedings{92830b3951564be4888a066600f3f1de,
title = "Explainable AI: Using Shapley value to explain complex anomaly detection ML-based systems",
abstract = "Generally, Artificial Intelligence (AI) algorithms are unable to account for the logic of each decision they take during the course of arriving at a solution. This 'black box' problem limits the usefulness of AI in military, medical, and financial security applications, among others, where the price for a mistake is great and the decision-maker must be able to monitor and understand each step along the process. In our research, we focus on the application of Explainable AI to log anomaly detection systems of different kinds. In particular, we use the Shapley value approach from cooperative game theory to explain the outcome or solution of two anomaly-detection algorithms: Decision tree and DeepLog. Both algorithms come from 'Loglizer', a machine learning-based log analysis toolkit for automated anomaly detection. The novelty of our research is that by using the Shapley value and special coding techniques we managed to evaluate or explain the contribution of both a single event and a grouped sequence of events in the log for the purposes of anomaly detection. We explain how each event and sequence of events influences the solution, or the result, of an anomaly detection system.",
keywords = "Anomaly detection, Decision tree, DeepLog, Explainable AI, Log anomaly detection, Shapley value",
author = "Jinying Zou and Ovanes Petrosian",
note = "Funding Information: The work of the second author is supported by Russian Foundation for Basic Research (RFBR) according to the research project No. 18-00-00727 (18-00-00725). Publisher Copyright: {\textcopyright} 2020 The authors and IOS Press. Copyright: Copyright 2020 Elsevier B.V., All rights reserved.; 2020 International Conference on Machine Learning and Intelligent Systems, MLIS 2020 ; Conference date: 25-10-2020 Through 28-10-2020",
year = "2020",
month = dec,
day = "2",
doi = "10.3233/FAIA200777",
language = "English",
series = "Frontiers in Artificial Intelligence and Applications",
publisher = "IOS Press",
pages = "152--164",
editor = "Tallon-Ballesteros, {Antonio J.} and Chi-Hua Chen",
booktitle = "Machine Learning and Artificial Intelligence",
address = "Netherlands",

}
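The BibTeX abstract above describes the paper's core idea: treat each log event (or event sequence) as a player in a cooperative game and use the Shapley value to attribute an anomaly-detection outcome to individual events. As a rough illustration only, the following minimal Python sketch computes exact Shapley values for a decision-tree anomaly detector over event-count features; the toy data, the mean-baseline masking, and all names here are assumptions made for the sketch, not the authors' implementation or the Loglizer toolkit's API.

# Minimal sketch (not the authors' code): exact Shapley values for a
# decision-tree anomaly detector over log event-count features.
# Assumptions: feature i counts occurrences of log event E_i in a session;
# "removing" an event from a coalition replaces its count with the training mean.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy data: 200 sessions x 4 event types; label 1 marks an anomalous session.
X = rng.integers(0, 5, size=(200, 4)).astype(float)
y = ((X[:, 0] > 2) & (X[:, 2] > 2)).astype(int)  # synthetic anomaly rule

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
baseline = X.mean(axis=0)

def coalition_value(x, subset):
    # Anomaly probability when only the events in `subset` keep their
    # observed counts; all other events are masked with the baseline.
    z = baseline.copy()
    idx = list(subset)
    z[idx] = x[idx]
    return clf.predict_proba(z.reshape(1, -1))[0, 1]

def shapley_values(x):
    # Exact Shapley formula: sums over all 2^(n-1) coalitions per event,
    # which is fine for a handful of event types.
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (coalition_value(x, S + (i,)) - coalition_value(x, S))
    return phi

x = X[0]
print("anomaly probability:", clf.predict_proba(x.reshape(1, -1))[0, 1])
print("per-event Shapley contributions:", shapley_values(x).round(3))

The exact enumeration above is exponential in the number of event types; the paper's special coding techniques for scoring grouped event sequences, and its treatment of DeepLog, are not reproduced here.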

RIS

TY - GEN

T1 - Explainable AI: Using Shapley value to explain complex anomaly detection ML-based systems

T2 - 2020 International Conference on Machine Learning and Intelligent Systems, MLIS 2020

AU - Zou, Jinying

AU - Petrosian, Ovanes

N1 - Funding Information: The work of the second author is supported by Russian Foundation for Basic Research (RFBR) according to the research project No. 18-00-00727 (18-00-00725). Publisher Copyright: © 2020 The authors and IOS Press. Copyright: Copyright 2020 Elsevier B.V., All rights reserved.

PY - 2020/12/2

Y1 - 2020/12/2

N2 - Generally, Artificial Intelligence (AI) algorithms are unable to account for the logic of each decision they take during the course of arriving at a solution. This 'black box' problem limits the usefulness of AI in military, medical, and financial security applications, among others, where the price for a mistake is great and the decision-maker must be able to monitor and understand each step along the process. In our research, we focus on the application of Explainable AI to log anomaly detection systems of different kinds. In particular, we use the Shapley value approach from cooperative game theory to explain the outcome or solution of two anomaly-detection algorithms: Decision tree and DeepLog. Both algorithms come from 'Loglizer', a machine learning-based log analysis toolkit for automated anomaly detection. The novelty of our research is that by using the Shapley value and special coding techniques we managed to evaluate or explain the contribution of both a single event and a grouped sequence of events in the log for the purposes of anomaly detection. We explain how each event and sequence of events influences the solution, or the result, of an anomaly detection system.

AB - Generally, Artificial Intelligence (AI) algorithms are unable to account for the logic of each decision they take during the course of arriving at a solution. This 'black box' problem limits the usefulness of AI in military, medical, and financial security applications, among others, where the price for a mistake is great and the decision-maker must be able to monitor and understand each step along the process. In our research, we focus on the application of Explainable AI to log anomaly detection systems of different kinds. In particular, we use the Shapley value approach from cooperative game theory to explain the outcome or solution of two anomaly-detection algorithms: Decision tree and DeepLog. Both algorithms come from 'Loglizer', a machine learning-based log analysis toolkit for automated anomaly detection. The novelty of our research is that by using the Shapley value and special coding techniques we managed to evaluate or explain the contribution of both a single event and a grouped sequence of events in the log for the purposes of anomaly detection. We explain how each event and sequence of events influences the solution, or the result, of an anomaly detection system.

KW - Anomaly detection

KW - Decision tree

KW - DeepLog

KW - Explainable AI

KW - Log anomaly detection

KW - Shapley value

UR - http://www.scopus.com/inward/record.url?scp=85098629114&partnerID=8YFLogxK

U2 - 10.3233/FAIA200777

DO - 10.3233/FAIA200777

M3 - Conference contribution

AN - SCOPUS:85098629114

T3 - Frontiers in Artificial Intelligence and Applications

SP - 152

EP - 164

BT - Machine Learning and Artificial Intelligence

A2 - Tallon-Ballesteros, Antonio J.

A2 - Chen, Chi-Hua

PB - IOS Press

Y2 - 25 October 2020 through 28 October 2020

ER -

ID: 73623720