Research output: Contribution to journal › Article › Peer review
Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches*. / Zhadan, A. U.; Wu, H.; Kudin, P. S.; Zhang, Y.; Petrosian, O. L.
In: Vestnik Sankt-Peterburgskogo Universiteta, Prikladnaya Matematika, Informatika, Protsessy Upravleniya, Vol. 19, No. 3, 01.11.2023, pp. 391-402.
TY - JOUR
T1 - Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches*
AU - Zhadan, A. U.
AU - Wu, H.
AU - Kudin, P. S.
AU - Zhang, Y.
AU - Petrosian, O. L.
PY - 2023/11/1
Y1 - 2023/11/1
N2 - Optimal scheduling of a battery energy storage system plays a crucial part in a distributed energy system. As a data-driven method, deep reinforcement learning does not require knowledge of the system dynamics and can provide optimal solutions for nonlinear optimization problems. In this research, the financial cost of energy consumption is reduced by scheduling battery energy using a deep reinforcement learning (RL) method. Reinforcement learning can adapt to equipment parameter changes and noise in the data, while mixed-integer linear programming (MILP) requires highly accurate forecasts of power generation and demand and accurate equipment parameters to achieve good performance, and incurs a high computational cost for large-scale industrial applications. Based on this, it can be assumed that a deep RL based solution is capable of outperforming the classic deterministic optimization model, MILP. This study compares four state-of-the-art RL algorithms for the battery power plant control problem: PPO, A2C, SAC, TD3. According to the simulation results, TD3 shows the best results, outperforming MILP by 5 % in cost savings, while the time to solve the problem is reduced by about a factor of three.
AB - Optimal scheduling of a battery energy storage system plays a crucial part in a distributed energy system. As a data-driven method, deep reinforcement learning does not require knowledge of the system dynamics and can provide optimal solutions for nonlinear optimization problems. In this research, the financial cost of energy consumption is reduced by scheduling battery energy using a deep reinforcement learning (RL) method. Reinforcement learning can adapt to equipment parameter changes and noise in the data, while mixed-integer linear programming (MILP) requires highly accurate forecasts of power generation and demand and accurate equipment parameters to achieve good performance, and incurs a high computational cost for large-scale industrial applications. Based on this, it can be assumed that a deep RL based solution is capable of outperforming the classic deterministic optimization model, MILP. This study compares four state-of-the-art RL algorithms for the battery power plant control problem: PPO, A2C, SAC, TD3. According to the simulation results, TD3 shows the best results, outperforming MILP by 5 % in cost savings, while the time to solve the problem is reduced by about a factor of three.
KW - reinforcement learning
KW - energy management system
KW - distributed energy system
KW - numerical optimization
UR - https://www.mendeley.com/catalogue/e1059e44-53d5-3d4d-a4ed-d69d227e2d32/
U2 - 10.21638/11701/spbu10.2023.307
DO - 10.21638/11701/spbu10.2023.307
M3 - Article
VL - 19
SP - 391
EP - 402
JO - ВЕСТНИК САНКТ-ПЕТЕРБУРГСКОГО УНИВЕРСИТЕТА. ПРИКЛАДНАЯ МАТЕМАТИКА. ИНФОРМАТИКА. ПРОЦЕССЫ УПРАВЛЕНИЯ
JF - ВЕСТНИК САНКТ-ПЕТЕРБУРГСКОГО УНИВЕРСИТЕТА. ПРИКЛАДНАЯ МАТЕМАТИКА. ИНФОРМАТИКА. ПРОЦЕССЫ УПРАВЛЕНИЯ
SN - 1811-9905
IS - 3
ER -
ID: 111486361