This paper is devoted to the construction and analysis of a gradient method for convex optimization based on explicit Runge–Kutta methods and stabilized by means of a Lyapunov function. This function is constructed for the system of second-order ordinary differential equations that describes the continuous-time dynamics of accelerated gradient methods. The proposed method is a modification of gradient Runge–Kutta methods for convex optimization; it relies on a Lyapunov function that does not require the exact solution of the optimization problem. A theorem on the convergence rate is proved. It is shown that the new accelerated method has the same convergence rate as the original Runge–Kutta-based gradient methods, which for certain classes of functions can be better than the rate of Nesterov’s accelerated gradient method. The theoretical results are supported by numerical experiments on test problems arising in different fields. These experiments demonstrate that, owing to its better stability, the new method requires fewer iterations and less computational time than the original Runge–Kutta methods, the gradient descent method, and Nesterov’s accelerated method; this advantage stems from the larger admissible stepsize. The better stability of the proposed method on non-convex problems is demonstrated by applying it to a minimization problem equivalent to solving a system of nonlinear algebraic equations.
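To make the idea concrete, the following is a minimal sketch, not the authors' exact scheme: the abstract specifies neither the ODE system nor the Lyapunov function, so this illustration assumes the damped dynamics \(\ddot{x} + \gamma\dot{x} + \nabla f(x) = 0\) written in first-order form and the energy-type function \(V(x,v) = f(x) + \tfrac{1}{2}\|v\|^2\), which requires no knowledge of the minimizer. An explicit midpoint (RK2) step is accepted only if \(V\) does not increase; otherwise the stepsize is halved.

```python
import numpy as np

def rk2_lyapunov_gradient(f, grad, x0, gamma=3.0, h=0.5, tol=1e-8, max_iter=10_000):
    """Illustrative sketch under the assumptions stated above: explicit midpoint
    (RK2) integration of the accelerated-gradient ODE, with the stepsize halved
    whenever the assumed Lyapunov function fails to decrease."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    # Assumed Lyapunov function; it does not use the exact solution x*.
    V = lambda x, v: f(x) + 0.5 * (v @ v)

    def rhs(x, v):
        # First-order form of  x'' + gamma * x' + grad f(x) = 0.
        return v, -gamma * v - grad(x)

    for _ in range(max_iter):
        if np.linalg.norm(grad(x)) < tol:
            break
        # One explicit midpoint (RK2) step for the pair (x, v).
        kx1, kv1 = rhs(x, v)
        kx2, kv2 = rhs(x + 0.5 * h * kx1, v + 0.5 * h * kv1)
        x_new, v_new = x + h * kx2, v + h * kv2
        if V(x_new, v_new) <= V(x, v):
            x, v = x_new, v_new      # Lyapunov test passed: accept the step.
        else:
            h *= 0.5                 # Reject the step and shrink the stepsize.
    return x

# Usage on a simple convex quadratic:
if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    print(rk2_lyapunov_gradient(f, grad, x0=[1.0, -1.0]))
```

The Lyapunov test plays the role of the stabilization described above: it lets the integrator run with a comparatively large stepsize and intervenes only when a step would violate monotone decrease of the assumed energy.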