Standard

Research on Robust Audio-Visual Speech Recognition Algorithms. / Yang, Wenfeng; Li, Pengyi; Yang, Wei; Liu, Yuxing; He, Yulong; Petrosian, Ovanes; Davydenko, Aleksandr.

In: Mathematics, Vol. 11, No. 7, 05.04.2023, p. 1733.

Research output: Contribution to journal › Article › peer-review

Harvard

Yang, W, Li, P, Yang, W, Liu, Y, He, Y, Petrosian, O & Davydenko, A 2023, 'Research on Robust Audio-Visual Speech Recognition Algorithms', Mathematics, vol. 11, no. 7, p. 1733. https://doi.org/10.3390/math11071733

APA

Yang, W., Li, P., Yang, W., Liu, Y., He, Y., Petrosian, O., & Davydenko, A. (2023). Research on Robust Audio-Visual Speech Recognition Algorithms. Mathematics, 11(7), 1733. https://doi.org/10.3390/math11071733

Vancouver

Yang W, Li P, Yang W, Liu Y, He Y, Petrosian O, Davydenko A. Research on Robust Audio-Visual Speech Recognition Algorithms. Mathematics. 2023 Apr 5;11(7):1733. doi: 10.3390/math11071733

Author

Yang, Wenfeng; Li, Pengyi; Yang, Wei; Liu, Yuxing; He, Yulong; Petrosian, Ovanes; Davydenko, Aleksandr. / Research on Robust Audio-Visual Speech Recognition Algorithms. In: Mathematics. 2023; Vol. 11, No. 7. p. 1733.

BibTeX

@article{aa379f8800314083a46875e5bd18d284,
title = "Research on Robust Audio-Visual Speech Recognition Algorithms",
abstract = "Automatic speech recognition (ASR) that relies on audio input suffers significant degradation in noisy conditions and is particularly vulnerable to speech interference. Video recordings of speech, however, capture both visual and audio signals, providing a potent source of information for training speech models. Audiovisual speech recognition (AVSR) systems enhance the robustness of ASR by incorporating visual information from lip movements and the associated sound production in addition to the auditory input. Many audiovisual speech recognition models and systems exist for speech transcription, but most have been tested in a single experimental setting and on a limited dataset, whereas a good model should be applicable to any scenario. Our main contributions are: (i) reproducing the three best-performing audiovisual speech recognition models in the current AVSR research area on the most widely used audiovisual databases, LRS2 (Lip Reading Sentences 2) and LRS3 (Lip Reading Sentences 3), and comparing and analyzing their performance under various noise conditions; (ii) based on our experimental and research experience, analyzing the problems currently encountered in the AVSR domain, which we summarize as the feature-extraction problem and the domain-generalization problem; (iii) showing experimentally that the MoCo (momentum contrast) + word2vec (word to vector) model performs best on the LRS datasets for AVSR, with or without noise, and also produced the best results in the audio-recognition and video-recognition experiments. Our research lays a foundation for further improving the performance of AVSR models.",
keywords = "multi-model deep learning, MOCO, speech recognition, lip reading, audiovisual speech recognition, model comparison",
author = "Wenfeng Yang and Pengyi Li and Wei Yang and Yuxing Liu and Yulong He and Ovanes Petrosian and Aleksandr Davydenko",
note = "Yang, W.; Li, P.; Yang, W.; Liu, Y.; He, Y.; Petrosian, O.; Davydenko, A. Research on Robust Audio-Visual Speech Recognition Algorithms. Mathematics 2023, 11, 1733. https://doi.org/10.3390/math11071733",
year = "2023",
month = apr,
day = "5",
doi = "10.3390/math11071733",
language = "English",
volume = "11",
pages = "1733",
journal = "Mathematics",
issn = "2227-7390",
publisher = "MDPI AG",
number = "7",

}
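
For programmatic use, the BibTeX entry above can be parsed with standard tooling. Below is a minimal sketch in Python using the third-party bibtexparser package (v1 API); the package, its installation, and the abridged entry string are assumptions for illustration, not part of this record.

import bibtexparser  # assumed installed: pip install bibtexparser

# Abridged copy of the @article entry above; paste the full entry in practice.
bibtex_str = """
@article{aa379f8800314083a46875e5bd18d284,
  title   = {Research on Robust Audio-Visual Speech Recognition Algorithms},
  journal = {Mathematics},
  volume  = {11},
  number  = {7},
  pages   = {1733},
  year    = {2023},
  doi     = {10.3390/math11071733},
}
"""

db = bibtexparser.loads(bibtex_str)  # parse into a BibDatabase object
entry = db.entries[0]                # each entry is a plain dict with lowercase keys
print(entry["title"])
print(entry["doi"])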

RIS

TY - JOUR

T1 - Research on Robust Audio-Visual Speech Recognition Algorithms

AU - Yang, Wenfeng

AU - Li, Pengyi

AU - Yang, Wei

AU - Liu, Yuxing

AU - He, Yulong

AU - Petrosian, Ovanes

AU - Davydenko, Aleksandr

N1 - Yang, W.; Li, P.; Yang, W.; Liu, Y.; He, Y.; Petrosian, O.; Davydenko, A. Research on Robust Audio-Visual Speech Recognition Algorithms. Mathematics 2023, 11, 1733. https://doi.org/10.3390/math11071733

PY - 2023/4/5

Y1 - 2023/4/5

N2 - Automatic speech recognition (ASR) that relies on audio input suffers significant degradation in noisy conditions and is particularly vulnerable to speech interference. Video recordings of speech, however, capture both visual and audio signals, providing a potent source of information for training speech models. Audiovisual speech recognition (AVSR) systems enhance the robustness of ASR by incorporating visual information from lip movements and the associated sound production in addition to the auditory input. Many audiovisual speech recognition models and systems exist for speech transcription, but most have been tested in a single experimental setting and on a limited dataset, whereas a good model should be applicable to any scenario. Our main contributions are: (i) reproducing the three best-performing audiovisual speech recognition models in the current AVSR research area on the most widely used audiovisual databases, LRS2 (Lip Reading Sentences 2) and LRS3 (Lip Reading Sentences 3), and comparing and analyzing their performance under various noise conditions; (ii) based on our experimental and research experience, analyzing the problems currently encountered in the AVSR domain, which we summarize as the feature-extraction problem and the domain-generalization problem; (iii) showing experimentally that the MoCo (momentum contrast) + word2vec (word to vector) model performs best on the LRS datasets for AVSR, with or without noise, and also produced the best results in the audio-recognition and video-recognition experiments. Our research lays a foundation for further improving the performance of AVSR models.

AB - Automatic speech recognition (ASR) that relies on audio input suffers significant degradation in noisy conditions and is particularly vulnerable to speech interference. Video recordings of speech, however, capture both visual and audio signals, providing a potent source of information for training speech models. Audiovisual speech recognition (AVSR) systems enhance the robustness of ASR by incorporating visual information from lip movements and the associated sound production in addition to the auditory input. Many audiovisual speech recognition models and systems exist for speech transcription, but most have been tested in a single experimental setting and on a limited dataset, whereas a good model should be applicable to any scenario. Our main contributions are: (i) reproducing the three best-performing audiovisual speech recognition models in the current AVSR research area on the most widely used audiovisual databases, LRS2 (Lip Reading Sentences 2) and LRS3 (Lip Reading Sentences 3), and comparing and analyzing their performance under various noise conditions; (ii) based on our experimental and research experience, analyzing the problems currently encountered in the AVSR domain, which we summarize as the feature-extraction problem and the domain-generalization problem; (iii) showing experimentally that the MoCo (momentum contrast) + word2vec (word to vector) model performs best on the LRS datasets for AVSR, with or without noise, and also produced the best results in the audio-recognition and video-recognition experiments. Our research lays a foundation for further improving the performance of AVSR models.

KW - multi-model deep learning

KW - MOCO

KW - speech recognition

KW - lip reading

KW - audiovisual speech recognition

KW - model comparison

UR - https://www.mendeley.com/catalogue/9544417d-bf1e-3f55-b621-8071cee8eabe/

U2 - 10.3390/math11071733

DO - 10.3390/math11071733

M3 - Article

VL - 11

SP - 1733

JO - Mathematics

JF - Mathematics

SN - 2227-7390

IS - 7

ER -
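
Likewise, the RIS record above imports directly into reference managers (EndNote, Zotero, Mendeley) and can be read in code. Below is a minimal sketch using the third-party rispy package; the package and the file name are assumptions for illustration, and the field names follow rispy's default tag mapping (e.g. T1 -> primary_title, DO -> doi).

import rispy  # assumed installed: pip install rispy

# "record.ris" is a hypothetical file holding the RIS block above.
with open("record.ris") as f:
    entries = rispy.load(f)  # returns a list of dicts, one per record

entry = entries[0]
print(entry.get("primary_title"))  # from the T1 tag under rispy's default mapping
print(entry.get("doi"))            # from the DO tag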

ID: 104166069