
Deep learning architectures based on self-attention have recently achieved state-of-the-art results in unsupervised aspect extraction and topic modeling. While models such as neural attention-based aspect extraction (ABAE) have been successfully applied to user-generated texts, they produce less coherent topics when applied to traditional data sources such as news articles and newsgroup documents. In this work, we introduce a simple sentence-filtering approach that improves the topical aspects learned from newsgroup content without modifying the basic mechanism of ABAE. We train a probabilistic classifier to distinguish between out-of-domain texts (the outer dataset) and in-domain texts (the target dataset). During data preparation, we then filter out sentences that have a low probability of being in-domain and train the neural model on the remaining sentences. We demonstrate the positive effect of sentence filtering on topic coherence by comparison with aspect extraction models trained on unfiltered texts.
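The filtering step described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes scikit-learn, a TF-IDF representation, a logistic-regression classifier, and a probability threshold of 0.5, none of which are specified in the abstract. The toy corpora and the `filter_sentences` helper are hypothetical.

```python
# Hypothetical sketch of the in-domain sentence-filtering step (not the paper's code).
# Assumes scikit-learn; feature choice, classifier, and threshold are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpora: "outer" (out-of-domain) vs. "target" (in-domain) sentences.
outer = [
    "stock prices fell sharply in early trading today",
    "the senate passed the spending bill last night",
]
target = [
    "the new graphics card renders frames much faster",
    "install the driver package from the vendor site",
]

# Fit a probabilistic classifier that separates the two datasets.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(outer + target)
y = [0] * len(outer) + [1] * len(target)  # 1 = in-domain
clf = LogisticRegression().fit(X, y)

def filter_sentences(sentences, threshold=0.5):
    """Keep only sentences whose predicted in-domain probability
    meets the threshold; the rest are dropped before ABAE training."""
    probs = clf.predict_proba(vectorizer.transform(sentences))[:, 1]
    return [s for s, p in zip(sentences, probs) if p >= threshold]

candidates = [
    "the kernel module compiled without errors",
    "election results were announced on friday",
]
kept = filter_sentences(candidates)
```

The remaining sentences in `kept` would then form the training corpus for the aspect extraction model; only the thresholding idea, not the specific classifier, is taken from the abstract.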

Original language: English
Pages (from-to): 2487-2496
Number of pages: 10
Journal: Journal of Intelligent and Fuzzy Systems
Volume: 39
Issue number: 2
State: Published - 2020

Research areas

• Aspect extraction, deep learning, out-of-domain classification, topic coherence, topic models

Scopus subject areas

• Statistics and Probability
• Engineering (all)
• Artificial Intelligence
