Word clustering groups words that exhibit similar properties. One popular method is Brown clustering, which uses short-range distributional information to construct clusters. Specifically, it is a hard, hierarchical clustering with a fixed-width beam that operates on bigram statistics and greedily minimizes the loss of global mutual information. The resulting word clusters tend to outperform or complement other word representations, especially on small datasets. However, Brown clustering has high computational complexity and does not lend itself to parallel computation. This, together with the lack of efficient implementations, limits its applicability in NLP. We present efficient implementations of Brown clustering and the alternative Exchange clustering, as well as a number of methods to accelerate the computation of both hierarchical and flat clusters. We show empirically that clusters obtained with the accelerated methods match the performance of clusters computed using the original methods.
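To make the greedy objective concrete, here is a minimal, naive sketch of Brown-style agglomerative clustering: every word starts in its own cluster, and the pair of clusters whose merge loses the least average mutual information (AMI) of the cluster bigram distribution is merged until k clusters remain. This is an illustrative toy (no beam, no window restriction, and none of the accelerations the paper is about), and all function names are hypothetical; its quadratic cost per merge is precisely the inefficiency the paper addresses.

```python
from collections import Counter
from itertools import combinations
from math import log

def brown_clusters(tokens, k):
    """Naive greedy Brown-style clustering (illustrative sketch only):
    repeatedly merge the cluster pair whose merge best preserves the
    average mutual information of the cluster bigram distribution."""
    words = sorted(set(tokens))
    # Each word starts in its own singleton cluster.
    clusters = {w: frozenset([w]) for w in words}
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = sum(bigrams.values())

    def ami(assign):
        # AMI = sum over cluster pairs of p(c1,c2) * log(p(c1,c2)/(p(c1)p(c2))),
        # with probabilities estimated from the word bigram counts.
        joint, left, right = Counter(), Counter(), Counter()
        for (a, b), c in bigrams.items():
            ca, cb = assign[a], assign[b]
            joint[(ca, cb)] += c
            left[ca] += c
            right[cb] += c
        return sum((c / n) * log(c * n / (left[ca] * right[cb]))
                   for (ca, cb), c in joint.items())

    while len(set(clusters.values())) > k:
        current = sorted(set(clusters.values()), key=sorted)
        best = None
        # Quadratic scan over all cluster pairs: the cost the paper accelerates.
        for c1, c2 in combinations(current, 2):
            merged = c1 | c2
            trial = {w: (merged if clusters[w] in (c1, c2) else clusters[w])
                     for w in words}
            score = ami(trial)
            if best is None or score > best[0]:
                best = (score, trial)
        clusters = best[1]

    # Group words by their final cluster.
    out = {}
    for w, c in clusters.items():
        out.setdefault(c, []).append(w)
    return sorted(sorted(v) for v in out.values())
```

On a toy corpus such as `"the cat sat the dog sat the cat ran the dog ran"`, words with near-identical bigram contexts (`cat`/`dog`, `sat`/`ran`) merge first, since merging distributionally identical clusters loses no mutual information.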
Original language
English
Title of host publication
Proceedings of The 12th Language Resources and Evaluation Conference
Number of pages
6
Place of publication
Marseille
Publisher
European Language Resources Association
Publication year
2020
Pages
2491-2496
ISBN (print)
979-10-95546-34-4
Publication status
Published - 2020
Event
12th Conference on Language Resources and Evaluation: LREC 2020 - Marseille, France
Duration: 11 May 2020 → 16 May 2020
Conference
12th Conference on Language Resources and Evaluation