Project Details
Description
The Massive Text Embedding Benchmark (MMTEB) is a global effort, with more than 50 contributors, to expand text embedding evaluation to all languages. The benchmark evaluates the quality of text embeddings, e.g. as used for search and retrieval.
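To illustrate the kind of task such embeddings are evaluated on, the sketch below ranks documents against a query by cosine similarity of their embedding vectors. It is a minimal toy example, not part of the benchmark itself: the vectors are made up for illustration, and a real system would obtain them from a trained embedding model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_documents(query_vec, doc_vecs):
    """Return document indices sorted from most to least similar to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

# Hypothetical 3-dimensional embeddings, for illustration only.
query = [0.9, 0.1, 0.0]
docs = [
    [0.1, 0.9, 0.0],  # points in a different direction from the query
    [0.8, 0.2, 0.1],  # close to the query direction
    [0.0, 0.0, 1.0],  # orthogonal to the query
]
print(rank_documents(query, docs))  # → [1, 0, 2]
```

Retrieval benchmarks then score such rankings against human relevance judgments, which is why embedding quality matters directly for search.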
| Short title | Massive Text Embedding Benchmark |
|---|---|
| Acronym | MMTEB |
| Status | Active |
| Effective start/end date | 01/04/2024 → … |
Keywords
- Natural Language Processing
- Information Retrieval
- Evaluation
- Artificial Intelligence