blog.nomic.ai

zackproser.com
Embeddings models are the secret sauce that makes RAG work. How are THEY made?
www.mixedbread.ai
The 2D Matryoshka model introduces a novel approach that lets you reduce both the number of layers and the embedding dimensions within the model. This dual reduction strategy yields a more compact model while still delivering competitive performance against leading models, such as Nomic's embedding model. Specifically, cutting the model's layers by roughly 50% retains up to 85% of its original performance, even without additional training.
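Only the dimension half of that dual reduction can be illustrated without the model checkpoint itself (layer pruning needs the weights). Here is a minimal, hedged sketch of Matryoshka-style dimension truncation: the toy vectors and sizes below are assumptions, standing in for embeddings from a trained model whose leading dimensions carry most of the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncate_embedding(vec: np.ndarray, dims: int) -> np.ndarray:
    """Keep the first `dims` components and re-normalize to unit length.

    Matryoshka-style training orders information so the leading
    dimensions carry most of the signal, making truncation cheap.
    """
    sub = vec[:dims]
    return sub / np.linalg.norm(sub)

# Toy full-size embeddings (a trained model would produce these).
a = rng.normal(size=1024)
b = a + 0.1 * rng.normal(size=1024)  # a near-duplicate of `a`

# Shrink 1024 -> 256 dimensions; similar inputs stay similar.
a_small = truncate_embedding(a, 256)
b_small = truncate_embedding(b, 256)

# Cosine similarity on unit vectors is just the dot product.
print(float(a_small @ b_small))
```

Because both truncated vectors are re-normalized, cosine similarity is a plain dot product, and near-duplicate inputs remain close even at a quarter of the original dimensionality.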
www.singlelunch.com
This is the blog version of a talk of mine on embedding methods: the main slides plus what I would say in the talk. Intended audience: anyone interested in embedding methods.
blog.vespa.ai
In this post, we reproduce the state-of-the-art baseline for retrieval-based question-answering systems within a single, scalable, production-ready application on Vespa.ai.