Standard

Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches*. / Zhadan, A. U.; Wu, H.; Kudin, P. S.; Zhang, Y.; Petrosian, O. L.

In: Vestnik Sankt-Peterburgskogo Universiteta, Prikladnaya Matematika, Informatika, Protsessy Upravleniya, Vol. 19, No. 3, 01.11.2023, p. 391-402.

Research output: Contribution to journal › Article › Peer-review

Harvard

Zhadan, AU, Wu, H, Kudin, PS, Zhang, Y & Petrosian, OL 2023, 'Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches*', Vestnik Sankt-Peterburgskogo Universiteta, Prikladnaya Matematika, Informatika, Protsessy Upravleniya, vol. 19, no. 3, pp. 391-402. https://doi.org/10.21638/11701/spbu10.2023.307

Author

Zhadan, A. U. ; Wu, H. ; Kudin, P. S. ; Zhang, Y. ; Petrosian, O. L. / Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches*. In: Vestnik Sankt-Peterburgskogo Universiteta, Prikladnaya Matematika, Informatika, Protsessy Upravleniya. 2023 ; Vol. 19, No. 3. pp. 391-402.

BibTeX

@article{7e40fe9a6f53448f9f5905357727d467,
title = "Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches*",
abstract = "Optimal scheduling of a battery energy storage system plays a crucial part in a distributed energy system. As a data-driven method, deep reinforcement learning does not require knowledge of the system dynamics and can present an optimal solution for a nonlinear optimization problem. In this research, the financial cost of energy consumption is reduced by scheduling battery energy using a deep reinforcement learning (RL) method. Reinforcement learning can adapt to equipment parameter changes and noise in the data, while mixed-integer linear programming (MILP) requires highly accurate forecasts of power generation and demand and accurate equipment parameters to achieve good performance, and incurs a high computational cost for large-scale industrial applications. Based on this, it can be assumed that a deep RL based solution is capable of outperforming the classic deterministic optimization model MILP. This study compares four state-of-the-art RL algorithms for the battery power plant control problem: PPO, A2C, SAC, TD3. According to the simulation results, TD3 shows the best results, outperforming MILP by 5 % in cost savings, and the time to solve the problem is reduced by about a factor of three.",
keywords = "reinforcement learning, energy management system, distributed energy system, numerical optimization",
author = "Zhadan, {A. U.} and H. Wu and Kudin, {P. S.} and Y. Zhang and Petrosian, {O. L.}",
year = "2023",
month = nov,
day = "1",
doi = "10.21638/11701/spbu10.2023.307",
language = "English",
volume = "19",
pages = "391--402",
journal = "Vestnik Sankt-Peterburgskogo Universiteta, Prikladnaya Matematika, Informatika, Protsessy Upravleniya",
issn = "1811-9905",
publisher = "St Petersburg University Press",
number = "3",
}

RIS

TY - JOUR

T1 - Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches*

AU - Zhadan, A. U.

AU - Wu, H.

AU - Kudin, P. S.

AU - Zhang, Y.

AU - Petrosian, O. L.

PY - 2023/11/1

Y1 - 2023/11/1

N2 - Optimal scheduling of a battery energy storage system plays a crucial part in a distributed energy system. As a data-driven method, deep reinforcement learning does not require knowledge of the system dynamics and can present an optimal solution for a nonlinear optimization problem. In this research, the financial cost of energy consumption is reduced by scheduling battery energy using a deep reinforcement learning (RL) method. Reinforcement learning can adapt to equipment parameter changes and noise in the data, while mixed-integer linear programming (MILP) requires highly accurate forecasts of power generation and demand and accurate equipment parameters to achieve good performance, and incurs a high computational cost for large-scale industrial applications. Based on this, it can be assumed that a deep RL based solution is capable of outperforming the classic deterministic optimization model MILP. This study compares four state-of-the-art RL algorithms for the battery power plant control problem: PPO, A2C, SAC, TD3. According to the simulation results, TD3 shows the best results, outperforming MILP by 5 % in cost savings, and the time to solve the problem is reduced by about a factor of three.

AB - Optimal scheduling of a battery energy storage system plays a crucial part in a distributed energy system. As a data-driven method, deep reinforcement learning does not require knowledge of the system dynamics and can present an optimal solution for a nonlinear optimization problem. In this research, the financial cost of energy consumption is reduced by scheduling battery energy using a deep reinforcement learning (RL) method. Reinforcement learning can adapt to equipment parameter changes and noise in the data, while mixed-integer linear programming (MILP) requires highly accurate forecasts of power generation and demand and accurate equipment parameters to achieve good performance, and incurs a high computational cost for large-scale industrial applications. Based on this, it can be assumed that a deep RL based solution is capable of outperforming the classic deterministic optimization model MILP. This study compares four state-of-the-art RL algorithms for the battery power plant control problem: PPO, A2C, SAC, TD3. According to the simulation results, TD3 shows the best results, outperforming MILP by 5 % in cost savings, and the time to solve the problem is reduced by about a factor of three.

KW - reinforcement learning

KW - energy management system

KW - distributed energy system

KW - numerical optimization

UR - https://www.mendeley.com/catalogue/e1059e44-53d5-3d4d-a4ed-d69d227e2d32/

U2 - 10.21638/11701/spbu10.2023.307

DO - 10.21638/11701/spbu10.2023.307

M3 - Article

VL - 19

SP - 391

EP - 402

JO - Vestnik Sankt-Peterburgskogo Universiteta, Prikladnaya Matematika, Informatika, Protsessy Upravleniya

JF - Vestnik Sankt-Peterburgskogo Universiteta, Prikladnaya Matematika, Informatika, Protsessy Upravleniya

SN - 1811-9905

IS - 3

ER -

ID: 111486361