Decoder-only large language model (LLM)-based embedding models are beginning to outperform BERT- or T5-based embedding models in general-purpose text embedding tasks, including dense vector-based retrieval. The NV-Embed model significantly improves the performance of an LLM as a versatile embedding model while keeping training simple and reproducible. For the model architecture, NV-Embed introduces a latent attention layer to obtain pooled embeddings, which consistently improves retrieval and downstream task accuracy compared with mean pooling or using the last EOS token embedding from the LLM. To enhance representation learning, NV-Embed removes the causal attention mask of the LLM during contrastive training. For model training, NV-Embed uses a two-stage contrastive instruction-tuning method. In the first stage, it applies contrastive training with instructions on retrieval datasets, using in-batch negatives and curated hard-negative examples. In the second stage, it blends various non-retrieval datasets into the instruction tuning, which not only improves non-retrieval task accuracy but also boosts retrieval performance.

Combining these techniques, the NV-Embed model, trained using only publicly available data, achieved a record-high score of 69.32, ranking No. 1 on the Massive Text Embedding Benchmark (MTEB) as of May 24, 2024, across 56 tasks spanning retrieval, reranking, classification, clustering, and semantic textual similarity. Notably, NV-Embed also attains the highest score of 59.36 on the 15 retrieval tasks in MTEB (also known as the BEIR benchmark).
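To make the latent attention pooling concrete, here is a minimal PyTorch sketch (not the authors' code): the decoder's last-layer hidden states act as queries against a small trainable latent array that serves as keys/values, followed by an MLP and mean pooling over valid tokens. The class name, dimensions, and the residual MLP placement are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAttentionPooling(nn.Module):
    """Sketch of a latent-attention pooling head; sizes are illustrative."""
    def __init__(self, hidden_dim: int = 4096, num_latents: int = 512, mlp_ratio: int = 4):
        super().__init__()
        # Trainable latent array acting as keys/values for cross-attention.
        self.latents = nn.Parameter(torch.randn(num_latents, hidden_dim) * 0.02)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, mlp_ratio * hidden_dim),
            nn.GELU(),
            nn.Linear(mlp_ratio * hidden_dim, hidden_dim),
        )

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (B, L, D) last-layer token states from the LLM (queries).
        # Cross-attention: token states attend over the learned latents.
        scores = hidden_states @ self.latents.T / self.latents.shape[-1] ** 0.5  # (B, L, r)
        attn = F.softmax(scores, dim=-1)
        out = attn @ self.latents                      # (B, L, D)
        out = out + self.mlp(out)                      # residual MLP refinement
        # Mean-pool over valid (non-padding) tokens to get one vector per sequence.
        mask = attention_mask.unsqueeze(-1).float()    # (B, L, 1)
        pooled = (out * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-6)
        return F.normalize(pooled, dim=-1)             # unit-length embedding
```

And a hedged sketch of the contrastive objective with in-batch negatives plus curated hard negatives, assuming an InfoNCE-style loss on L2-normalized embeddings; the temperature value and function signature are illustrative, not taken from the paper.

```python
def contrastive_loss(q_emb, pos_emb, hard_neg_emb, temperature: float = 0.02):
    # q_emb:        (B, D) instruction+query embeddings
    # pos_emb:      (B, D) positive passage embeddings
    # hard_neg_emb: (B, N, D) curated hard-negative passage embeddings
    B = q_emb.shape[0]
    in_batch = q_emb @ pos_emb.T                               # (B, B): other rows act as in-batch negatives
    hard = torch.einsum("bd,bnd->bn", q_emb, hard_neg_emb)     # (B, N): each query's own hard negatives
    logits = torch.cat([in_batch, hard], dim=1) / temperature  # (B, B + N)
    labels = torch.arange(B, device=q_emb.device)              # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```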
In this video, I talk about the following: What is NV-Embed and how is it trained? What training data is used for NV-Embed? What are the MTEB and BEIR benchmarks? How does NV-Embed perform on them?
For more details, please see https://arxiv.org/pdf/2405.17428 and https://huggingface.co/nvidia/NV-Embe...
Lee, Chankyu, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. "NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models." arXiv preprint arXiv:2405.17428 (2024).