The goal of this cross-linguistic study is to analyze the speech of children aged 8-12 years and to automatically classify children's emotions from speech data in the Russian and Tamil languages. A standardized approach to speech collection (spontaneous and acted speech: emotional words, phrases, and meaningless texts) was used. Two emotional child speech corpora, comprising the emotional speech of 95 Russian children and 40 Indian children, were created. The emotional speech was annotated in four categories, "joy - neutral - sadness - anger", by two speech specialists of the same nationality as the child. A set of 2505 labeled audio files of Russian children and 418 labeled audio files of Tamil children was used. The SVM classifier shows slightly better results for the Russian language, and the MLP classifier for Tamil. The inter-cultural approach (a mixed dataset of Russian and Indian speech) revealed that the recognition accuracy for all emotions remains higher. In the cross-cultural approach, the anger state is recognized better in Tamil speech samples, and the sad state in Russian speech samples. Using a 5-layer CNN model, the recognition accuracy for Russian speech was higher than for the speech of Indian children.
Original language: English
Title of host publication: Proceedings of the 24th International Congress of Acoustics
Subtitle of host publication: A15 SPEECH
Place of Publication: Gyeongju
Pages: 8-14
Volume: A15
State: Published - 2022
Event: 24th International Congress of Acoustics, 24-28 October 2022 - Gyeongju, Korea, Republic of
Duration: 24 Oct 2022 - 28 Oct 2022
https://ica2022korea.org/

Conference

Conference: 24th International Congress of Acoustics
Abbreviated title: 24th International Congress of Acoustics
Country/Territory: Korea, Republic of
City: Gyeongju
Period: 24/10/22 - 28/10/22
Internet address: https://ica2022korea.org/

    Scopus subject areas

  • Medicine (all)
  • Psychology (all)

    Research areas

  • child speech, Emotion Detection, Cross-Linguistic Study

ID: 100861799