Research output: Chapter in Book/Report/Conference proceeding › Chapter › Research › peer-review
FI-SHAP: Explanation of Time Series Forecasting and Improvement of Feature Engineering Based on Boosting Algorithm. / Zhang, Yuyi; Petrosian, Ovanes; Liu, Jing; Ma, Ruimin; Krinkin, Kirill V.
Intelligent Systems and Applications: Proceedings of the 2022 Intelligent Systems Conference (IntelliSys). Vol. 3. Springer Nature, 2022. p. 745–758 (Lecture Notes in Networks and Systems; No. 544).
TY - CHAP
T1 - FI-SHAP: Explanation of Time Series Forecasting and Improvement of Feature Engineering Based on Boosting Algorithm
AU - Zhang, Yuyi
AU - Petrosian, Ovanes
AU - Liu, Jing
AU - Ma, Ruimin
AU - Krinkin, Kirill V.
PY - 2022
Y1 - 2022
N2 - Boosting Algorithms (BA) are state-of-the-art in major competitions, especially the M4 and M5 time series forecasting competitions. However, using BA requires tedious feature engineering that is largely blind and random, which wastes considerable time. In this work, we guide the initial feature engineering operations using the explanation results of the SHAP technique, while also taking the traditional Feature Importance (FI) method into account. Previous BA explanation works have rarely focused on forecasting, so the contributions of this work are (1) to develop a BA explanation framework, “FI-SHAP”, which focuses on time series forecasting, and (2) to improve the efficiency of feature engineering. To measure explainability performance, (3) we also establish a new practical evaluation framework that attempts to remove development barriers in the field of explainable AI.
AB - Boosting Algorithms (BA) are state-of-the-art in major competitions, especially the M4 and M5 time series forecasting competitions. However, using BA requires tedious feature engineering that is largely blind and random, which wastes considerable time. In this work, we guide the initial feature engineering operations using the explanation results of the SHAP technique, while also taking the traditional Feature Importance (FI) method into account. Previous BA explanation works have rarely focused on forecasting, so the contributions of this work are (1) to develop a BA explanation framework, “FI-SHAP”, which focuses on time series forecasting, and (2) to improve the efficiency of feature engineering. To measure explainability performance, (3) we also establish a new practical evaluation framework that attempts to remove development barriers in the field of explainable AI.
U2 - 10.1007/978-3-031-16075-2_55
DO - 10.1007/978-3-031-16075-2_55
M3 - Chapter
SN - 978-3-031-16074-5
VL - 3
T3 - Lecture Notes in Networks and Systems
SP - 745
EP - 758
BT - Intelligent Systems and Applications
PB - Springer Nature
ER -