The results of evaluating explanations of a black-box prediction model are presented. The XAI evaluation is carried out by comparing the principles and characteristics of black-box model explanations with XAI labels. In high-dimensional prediction, black-box models, represented by neural networks and ensemble models, can predict complex data sets more accurately than traditional linear regression or white-box models such as decision trees. However, their lack of explainability not only hinders developers from debugging but also causes user mistrust. In the XAI field, which is dedicated to 'opening' the black-box model, effective evaluation methods are still being developed. Within the XAI evaluation framework (MDMC) established in this paper, explanation methods for prediction can be effectively tested, and an explanation method identified as being of relatively higher quality can improve the accuracy, transparency, and reliability of prediction.
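As a rough illustration of the general idea of testing an explanation against known ground truth (the paper's MDMC framework itself is not reproduced here), the following minimal Python sketch trains a black-box ensemble model on synthetic data whose informative features are known, explains it with permutation feature importance, and checks how well the explanation recovers the truly informative features. All model, data, and metric choices below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: comparing a black-box explanation against known
# ground truth ("XAI labels"). This is NOT the MDMC framework from the
# paper; it only illustrates the principle of evaluating explanation quality.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data with a known set of informative features; the returned
# coefficients serve as ground-truth labels for the explanation.
X, y, coef = make_regression(
    n_samples=1000, n_features=10, n_informative=3, coef=True, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box ensemble model (assumed stand-in for the models in the paper).
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explanation method under test: permutation feature importance on held-out data.
explanation = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Crude quality check: do the top-ranked explained features match the
# truly informative ones? Higher overlap suggests a more faithful explanation.
k = int((coef != 0).sum())
top_explained = set(np.argsort(explanation.importances_mean)[-k:])
truly_informative = set(np.flatnonzero(coef))
print("overlap with ground truth:", len(top_explained & truly_informative) / k)
```

In this sketch, the overlap score plays the role of a simple explanation-quality measure; an evaluation framework such as the one proposed in the paper would apply more systematic criteria.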

Original language: English
Title of host publication: Proceedings of 2021 2nd International Conference on Neural Networks and Neurotechnologies, NeuroNT 2021
Editors: S. Shaposhnikov
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 13-16
Number of pages: 4
ISBN (Electronic): 9781665445344
DOIs
State: Published - 16 Jun 2021
Event: 2nd International Conference on Neural Networks and Neurotechnologies, NeuroNT 2021 - Saint Petersburg, Russian Federation
Duration: 16 Jun 2021 → …

Publication series

Name: Proceedings of 2021 2nd International Conference on Neural Networks and Neurotechnologies, NeuroNT 2021

Conference

Conference: 2nd International Conference on Neural Networks and Neurotechnologies, NeuroNT 2021
Country/Territory: Russian Federation
City: Saint Petersburg
Period: 16/06/21 → …

Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence
  • Computer Networks and Communications

Research areas

  • black-box model explanations, ensemble models, neural network, XAI evaluation

ID: 86497588