Shapley value-based explainable AI has recently attracted significant interest. However, the computational complexity of the Shapley value grows exponentially with the number of players, and the resulting cost prevents its widespread practical application. To address this challenge, various approximation methods for computing the Shapley value have been proposed in the literature, such as linear Shapley computation, sampling-based Shapley computation, and several estimation-based approaches. Among these, the sampling approach exhibits non-zero bias and variance but is general enough to be used with almost any AI algorithm; however, it suffers from unstable interpretability results and slow convergence in high-dimensional problems. To address these problems, we propose integrating a sequential Bayesian updating framework into the Shapley sampling approach. The core idea is to dynamically update sampling probabilities based on each sample's Shapley value, combined with a selection strategy. Both theoretical analysis and empirical results show that the method significantly improves convergence speed and interpretability compared to the original sampling approach.
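For context, the sampling-based Shapley computation the abstract builds on can be sketched as the standard Monte Carlo permutation estimator (Castro et al.). This is only an illustration of the baseline: the function and variable names are invented for this sketch, and the paper's sequential Bayesian updating of sampling probabilities is not reproduced here.

```python
import random


def shapley_sampling(value_fn, players, num_samples, rng=None):
    """Estimate Shapley values by averaging marginal contributions
    over randomly sampled player permutations (baseline sketch)."""
    rng = rng or random.Random(0)
    phi = {p: 0.0 for p in players}
    for _ in range(num_samples):
        perm = list(players)
        rng.shuffle(perm)  # draw a uniformly random permutation
        coalition = []
        prev = value_fn(coalition)  # v(empty set)
        for p in perm:
            coalition.append(p)
            curr = value_fn(coalition)
            phi[p] += curr - prev  # marginal contribution of p
            prev = curr
    return {p: s / num_samples for p, s in phi.items()}
```

In an additive game (each player contributes a fixed weight), every marginal contribution is constant, so the estimator recovers the exact Shapley values with any number of samples; for real models the estimate only converges as the sample count grows, which is the slow-convergence issue the abstract targets.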
Original language: English
Pages (from-to): 166414-166423
Number of pages: 10
Journal: IEEE Access
Volume: 12
DOIs
State: Published - 2024

Research areas

  • Bayesian updating, Explainable AI, Shapley value, cancer detection, efficiency calculation, game theory, high-dimensional problem, interpretability, sampling method, sequential Shapley updating
