Topic modelling is widely used to detect the hidden topical structure of text corpora, including those from social media. However, for many widespread online languages, such as Russian, topic modelling is still rarely applied. For Russian Twitter, only a handful of works exist, and they lack substantial discussion of topic interpretability. The impact of various text properties on modelling results also remains largely unexplored. We partly close these gaps by assessing a medium-sized text corpus of a conflictual Twitter discussion in two respects. Continuing our earlier study, which applied three topic modelling algorithms (LDA, WNTM, and BTM) and assessed their quality via automated means, we here juxtapose automated assessment with human coding and link the human evaluation of topic quality to the sentiment of the topics. We show that human coding disagrees with the objective metrics on the number of interpretable topics, indicating slightly higher interpretability for the LDA algorithm, while inter-coder reliability is much higher for BTM. We discuss a range of coding issues common to all three topic models. We also find that topic interpretability for the human coders is linked to the presence of negative keywords among the topic descriptors, with the strongest linkage shown by BTM.
Original language: English
Title of host publication: 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS)
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 978-1-7281-2946-4
ISBN (Print): 978-1-7281-2947-1
Publication status: Published - Nov 2019
Event: International Conference on Social Networks Analysis, Management and Security - Granada, Spain
Duration: 22 Oct 2019 - 25 Oct 2019
Conference number: 6


Conference: 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS)
Abbreviated title: SNAMS 2019
