Description

Topic modelling is a technique widely used today to detect the hidden topicality of text corpora, including those from social media. However, for many widespread online languages, such as Russian, topic modelling is still rarely applied. For Russian Twitter, only a handful of works exist, and they lack substantial discussion of topic interpretability. The impact of various textual properties on modelling results also remains largely unexplored. We partly cover these gaps by assessing a mid-range text corpus of a conflictual Twitter discussion in two respects. Continuing our earlier study, which applied three topic modelling algorithms (LDA, WNTM, and BTM) and assessed their quality via automated means, we here juxtapose automated assessment with human coding and link the human evaluation of topic quality to the sentiment of the topics. We show that human coding disagrees with the objective metrics on the number of interpretable topics, indicating slightly higher interpretability for the LDA algorithm, while inter-coder reliability is much higher for BTM. We discuss a range of coding issues common to all three topic models. We also find that topic interpretability by human coders is linked to the presence of negative keywords among the topic descriptors, with the strongest linkage shown by BTM.
This work has been supported in full by the Presidential grants of the Russian Federation for young Doctors of Science, grant MD-6259.2018.6.
|Period||22 Oct 2019 → 25 Oct 2019|
|Event title||International Conference on Social Networks Analysis: Management and Security|
Research output: Contributions to books, reports, collections, conference proceedings › conference paper › scientific › peer-reviewed