In this article, we present the first emotional child speech corpus in Russian, called “EmoChildRu”, collected from children aged 3 to 7 years. The base corpus includes over 20,000 recordings (approx. 30 h) collected from 120 children. The audio recordings were made in three controlled settings designed to elicit different emotional states in the children: playing with a standard set of toys; repeating words after a toy parrot in a game store setting; and watching a cartoon and retelling its story. The corpus is designed for studying how emotional states are reflected in the characteristics of voice and speech, and for studying the formation of emotional states in ontogenesis. A portion of the corpus is annotated for three emotional states (comfort, discomfort, neutral). Additional data include the results of adult listeners’ analysis of the child speech, questionnaires, and annotations of gender and age in months. We also provide several baselines comparing human and machine performance on this corpus for the prediction of age, gender, and comfort state. While acoustics-based automatic systems outperform human listeners in age estimation, they do not reach human perception levels in comfort-state and gender classification. These comparative results indicate the importance of developing further linguistic models for these discrimination tasks.
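To make the baseline setup concrete, below is a minimal illustrative sketch of an acoustics-based three-way comfort-state classifier of the kind the abstract describes. It is not the authors’ actual pipeline: the feature set (utterance-level MFCC statistics), the classifier choice (an RBF-kernel SVM), and the placeholder names `wav_paths` and `labels` are all assumptions made for demonstration.

```python
# Illustrative sketch of an acoustics-based baseline for three-way
# comfort/neutral/discomfort classification. Feature set, model choice,
# and file/label variables are assumptions, not the paper's pipeline.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

LABELS = ["comfort", "neutral", "discomfort"]  # annotation scheme from the corpus

def utterance_features(wav_path, sr=16000):
    """Mean and std of MFCCs over an utterance: a simple functional-style
    acoustic representation (published baselines typically use richer
    paralinguistic feature sets)."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# wav_paths and labels are hypothetical placeholders for the corpus files
# and their comfort-state annotations:
# X = np.stack([utterance_features(p) for p in wav_paths])
# y = np.array([LABELS.index(l) for l in labels])
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# print(cross_val_score(clf, X, y, cv=5).mean())  # chance level ~1/3
```

Such a sketch also clarifies what the human-vs-machine comparison entails: the same utterances are judged by adult listeners and scored by a classifier, and the two accuracies are compared per task (age, gender, comfort state).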

Original language: English
Pages (from-to): 268-283
Number of pages: 16
Journal: Computer Speech and Language
Volume: 46
DOIs
State: Published - 1 Nov 2017

Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Human-Computer Interaction

Research areas

  • Age recognition
  • Computational paralinguistics
  • Emotional child speech
  • Emotional states
  • Gender recognition
  • Perception experiments
  • Spectrographic analysis
