Accelerated High-Quality Mutual-Information Based Word Clustering

Manuel R. Ciosici, Ira Assent, Leon Derczynski

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review

Abstract

Word clustering groups words that exhibit similar properties. One popular method for this is Brown clustering, which uses short-range distributional information to construct clusters. Specifically, this is a hard hierarchical clustering with a fixed-width beam that employs bi-grams and greedily minimizes global mutual information loss. The result is word clusters that tend to outperform or complement other word representations, especially when constrained by small datasets. However, Brown clustering has high computational complexity and does not lend itself to parallel computation. This, together with the lack of efficient implementations, limits its applicability in NLP. We present efficient implementations of Brown clustering and the alternative Exchange clustering as well as a number of methods to accelerate the computation of both hierarchical and flat clusters. We show empirically that clusters obtained with the accelerated method match the performance of clusters computed using the original methods.
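As context for the abstract, a minimal sketch of the objective Brown clustering greedily preserves (this is the standard formulation of the criterion, not reproduced from this page; the symbols C, c, c', and p are our notation):

\[
\operatorname{AMI}(C) = \sum_{c \in C} \sum_{c' \in C} p(c, c') \, \log \frac{p(c, c')}{p(c)\, p(c')}
\]

Here p(c, c') denotes the corpus probability that a token in cluster c is immediately followed by a token in cluster c' (the bi-gram statistics mentioned above), and p(c) is the corresponding marginal. Each greedy merge step joins the pair of clusters whose union reduces AMI(C) the least, which is the "global mutual information loss" the abstract refers to.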
Original language: English
Title of host publication: LREC 2020 - 12th International Conference on Language Resources and Evaluation, Conference Proceedings
Number of pages: 6
Place of publication: Marseille
Publisher: European Language Resources Association
Publication date: 2020
Pages: 2491-2496
ISBN (Print): 979-10-95546-34-4
ISBN (Electronic): 9791095546344
Publication status: Published - 2020
Event: 12th Conference on Language Resources and Evaluation: LREC 2020 - Marseille, France
Duration: 11 May 2020 - 16 May 2020

Conference

Conference: 12th Conference on Language Resources and Evaluation
Country/Territory: France
City: Marseille
Period: 11/05/2020 - 16/05/2020

Keywords

  • Efficient computation
  • Word clusters
  • Word representations
