This paper is devoted to the construction of iterative gradient methods for unconstrained convex optimization based on the explicit second-order non-standard Lagrange–Burmann Runge–Kutta method. A preconditioned gradient descent method and its modification with momentum are constructed. Convergence conditions are obtained for the case of a strongly convex quadratic function and its perturbation, together with analytical expressions for the parameters that guarantee the optimal convergence rate. The theoretical analysis is supported by numerical examples from different applications: 2D and 3D problems for the Poisson equation, minimization of an integral functional, a problem for a nonlinear integro-differential equation, regularized logistic regression on several datasets, and minimization of the sum of a quadratic function and a smoothed Huber loss function. The methods are compared with well-known optimal methods for unconstrained convex optimization (gradient descent, Polyak's heavy-ball method, Nesterov's methods). As demonstrated on the considered examples, the optimal method with momentum proposed in this paper converges faster and requires less computation time than the other methods.
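
For context, the classical baseline against which such methods are compared can be sketched as follows. This is a minimal illustration of Polyak's heavy-ball iteration on a strongly convex quadratic with the standard optimal step size and momentum parameters expressed through the extreme eigenvalues mu and L; it is not the Lagrange–Burmann-based method proposed in the paper, and the test problem (a diagonal 2x2 quadratic) is a hypothetical example chosen only for illustration.

```python
import numpy as np

def heavy_ball(grad, x0, alpha, beta, iters):
    """Polyak heavy-ball iteration: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        x_new = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_new
    return x

# Hypothetical test problem: f(x) = 0.5 x^T A x - b^T x with eigenvalues mu=1, L=10.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
mu, L = 1.0, 10.0

# Classical optimal parameters for the quadratic case (Polyak):
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

x = heavy_ball(grad, np.zeros(2), alpha, beta, 200)
x_star = np.linalg.solve(A, b)  # exact minimizer for comparison
```

With these parameters the iterates converge linearly at rate (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)), which is the benchmark rate the paper's momentum method is compared against.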