Abstract
Few-shot classification is a powerful technique, but training requires substantial computing power and data. We propose an efficient method with small model sizes and little training data, using only 2-8 training instances per class. Our proposed method, AncSetFit, targets low-data scenarios by anchoring the task and label information through sentence embeddings when fine-tuning a Sentence Transformer model. It uses contrastive learning and a triplet loss to enforce that the training instances of a class lie closest to their own textual semantic label information in the embedding space, and thereby learns to embed instances of different classes more distinctly. AncSetFit obtains strong performance in data-sparse scenarios compared to existing methods across SST-5, Emotion detection, and AG News data, even with just two examples per class.
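The anchoring idea described above translates roughly into triplets of (label description, in-class instance, out-of-class instance) fed to a triplet loss. Below is a minimal sketch using the sentence-transformers library; the checkpoint name, anchor phrasings, example data, and hyperparameters are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of anchored triplet fine-tuning in the spirit of AncSetFit.
# Assumptions (not from the paper): the checkpoint, anchor phrasings,
# example data, and all hyperparameters below are illustrative choices.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("paraphrase-mpnet-base-v2")

# Textual semantic label information used as anchors (hypothetical phrasings).
anchors = {
    "positive": "This sentence expresses a positive sentiment.",
    "negative": "This sentence expresses a negative sentiment.",
}

# A few labeled instances per class (2-8 in the paper's setting).
instances = {
    "positive": ["A delightful, warm-hearted film.", "Absolutely loved it."],
    "negative": ["A tedious, joyless slog.", "I want those two hours back."],
}

# Build triplets: anchor = label text, positive = same-class instance,
# negative = an instance from a different class.
train_examples = []
for label, texts in instances.items():
    other = [t for l, ts in instances.items() if l != label for t in ts]
    for text in texts:
        for neg in other:
            train_examples.append(
                InputExample(texts=[anchors[label], text, neg])
            )

loader = DataLoader(train_examples, shuffle=True, batch_size=8)
# Triplet loss pulls each class's instances toward their label anchor and
# pushes other-class instances away in the embedding space.
triplet_loss = losses.TripletLoss(model=model)
model.fit(train_objectives=[(loader, triplet_loss)], epochs=1, warmup_steps=10)
```

At inference time, one plausible SetFit-style follow-up is to embed new texts with the fine-tuned model and either fit a lightweight classifier head on the few-shot embeddings or assign the nearest label anchor.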
Original language | English
---|---
Title of host publication | Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing |
Editors | Houda Bouamor, Juan Pino, Kalika Bali |
Number of pages | 11 |
Publisher | Association for Computational Linguistics |
Publication date | 2023 |
Pages | 11254–11264 |
Publication status | Published - 2023 |