Objective. To analyze the performance of several types of neural networks in chest X-ray classification, and to identify the sources of their errors and ways to eliminate them.

Methods. A back-propagation neural network (BPNN), a competitive neural network (CpNN), and a convolutional neural network (CNN) developed by Care Mentor (Russia) were used to classify X-ray images into 12 radiological syndromes. Separate sets of digital JPEG/PNG chest X-rays sized 32 × 32 pixels, obtained from the ChestX-ray8 dataset, were used for training and benchmarking the networks. The causes of network errors were determined by an expert-analytical method.

Results. The BPNN achieved an accuracy of 81.03% in recognizing radiological phenomena with a short training time and a moderate number of iterations; the mean squared error did not exceed 0.0026. Owing to the peculiarities of its architecture and self-learning algorithm, the CpNN raised the accuracy of X-ray syndrome detection to 90.12% with minimal training time, but its error value was relatively high. The CNN showed the best accuracy in recognizing radiological changes and the lowest error value, while requiring the greatest training resources. The main sources of errors were: errors caused by the neural network architecture itself and its learning algorithm; errors associated with incorrect image labeling; and errors associated with the quality of the analyzed images.

Interpretation. The accuracy and efficiency of computerized X-ray image analysis can be improved by refining neural network learning algorithms, labeling accuracy, and image quality.
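The abstract reports two benchmark metrics for each network: classification accuracy and mean squared error. As an illustrative sketch (not code from the paper, and using a hypothetical 3-class example instead of the 12 syndromes), these metrics are typically computed as follows:

```python
def accuracy(true_labels, predicted_labels):
    """Fraction of images whose predicted syndrome matches the ground truth."""
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

def mean_squared_error(targets, outputs):
    """Average squared difference between one-hot target vectors and
    the network's output vectors, averaged over all components."""
    total = 0.0
    count = 0
    for target_vec, output_vec in zip(targets, outputs):
        for t, o in zip(target_vec, output_vec):
            total += (t - o) ** 2
            count += 1
    return total / count

# Tiny hypothetical example with 3 classes instead of 12:
y_true = [0, 2, 1, 1]
y_pred = [0, 2, 2, 1]
print(accuracy(y_true, y_pred))  # 0.75
```

In this framing, the BPNN's reported figures correspond to an accuracy of 0.8103 and a mean squared error of at most 0.0026 on the benchmark set.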