The paper addresses the problem of unauthorized use of facial images from social networks in deep learning and analyses methods of protecting such images from collection and recognition based on de-identification procedures, the most recent of which is the "Fawkes" procedure. The proposed approach rests on a comparative analysis of images subjected to the Fawkes transformation, together with the representation and description of the textural changes and structural damage it introduces into facial images. Multilevel parametric estimates of this damage are applied for its formal and numerical assessment. The reasons why faces distorted by the Fawkes procedure cannot be used in deep learning tasks are explained. It is shown theoretically and confirmed experimentally that facial images subjected to the Fawkes procedure remain well recognizable by methods outside deep learning. It is further argued that applying simple preprocessing to Fawkes-protected facial images before they enter a convolutional neural network can restore recognition with high efficiency, which dispels the myth that the Fawkes procedure provides meaningful protection of facial images.
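As a rough illustration of the "simple preprocessing" claim, the sketch below assumes that median filtering, mild Gaussian smoothing, and JPEG recompression (none of which are named in the abstract) are applied to a Fawkes-cloaked image before it is passed to a recognition network; the file names and parameter values are hypothetical.

```python
# Illustrative sketch only: the abstract does not specify which preprocessing
# steps are used, so median filtering, light Gaussian smoothing and JPEG
# recompression are assumed here as typical "simple" filters that attenuate
# Fawkes-style cloaking perturbations before the image reaches a CNN.
import cv2
import numpy as np

def preprocess_cloaked_face(image_bgr: np.ndarray, jpeg_quality: int = 75) -> np.ndarray:
    """Apply simple denoising-style preprocessing to a (possibly cloaked) face image."""
    # Median filter suppresses small, high-frequency pixel perturbations.
    smoothed = cv2.medianBlur(image_bgr, 3)
    # Light Gaussian blur further flattens residual texture noise.
    smoothed = cv2.GaussianBlur(smoothed, (3, 3), sigmaX=0.8)
    # JPEG re-encoding at moderate quality discards remaining fine detail.
    ok, encoded = cv2.imencode(".jpg", smoothed, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    if not ok:
        return smoothed
    return cv2.imdecode(encoded, cv2.IMREAD_COLOR)

if __name__ == "__main__":
    # Hypothetical input file; replace with an actual Fawkes-processed image.
    cloaked = cv2.imread("cloaked_face.jpg")
    if cloaked is not None:
        cleaned = preprocess_cloaked_face(cloaked)
        # The cleaned image would then be fed to the recognition network.
        cv2.imwrite("cleaned_face.jpg", cleaned)
```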