Explore >> Select a destination


You are here: blog.nomic.ai

www.mixedbread.ai (2.4 parsecs away)
The 2D Matryoshka model introduces a novel approach that lets you reduce both the number of layers and the embedding dimensions within the model. This dual reduction yields a more compact model that still delivers competitive performance against leading models such as Nomic's embedding model. Specifically, cutting the model's layers by roughly 50% retains up to 85% of its original performance, even without additional training.
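
The layer-and-dimension trade-off is easy to prototype. Below is a hedged sketch of 2D-Matryoshka-style truncation with Hugging Face transformers, not mixedbread's own code: pool token embeddings from an intermediate layer, then keep only a prefix of the dimensions. The checkpoint id, layer index, and target dimension are illustrative assumptions.

```python
# Sketch of 2D-Matryoshka-style truncation: pool an intermediate
# transformer layer AND keep only a prefix of the embedding dims.
# Checkpoint id, layer index, and dim below are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "mixedbread-ai/mxbai-embed-2d-large-v1"  # assumed checkpoint id
LAYER = 12   # pool from layer 12 of 24, i.e. a ~50% layer cut
DIM = 512    # keep the first 512 of 1024 embedding dimensions

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def embed(texts, layer=LAYER, dim=DIM):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch, output_hidden_states=True)
    hidden = out.hidden_states[layer]              # (batch, seq_len, 1024)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (hidden * mask).sum(1) / mask.sum(1)  # mean-pool over tokens
    return F.normalize(pooled[:, :dim], dim=-1)    # truncate dims, re-normalize

a = embed(["What is 2D Matryoshka truncation?"])
b = embed(["A scheme for cutting layers and embedding dimensions."])
print((a @ b.T).item())  # cosine similarity of the truncated embeddings
```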

qwenlm.github.io (3.5 parsecs away)
We release the Qwen3 Embedding series, a new model family of the Qwen lineup, designed specifically for text embedding, retrieval, and reranking tasks and built on the Qwen3 foundation models. Leveraging Qwen3's robust multilingual text understanding, the series achieves state-of-the-art performance across multiple text embedding and reranking benchmarks. The embedding and reranking models are open-sourced under the Apache 2.0 license.
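
For orientation, here is a hedged sketch of querying one of the released checkpoints through sentence-transformers. The checkpoint id is an assumption based on the Hugging Face release; the announcement itself does not prescribe this exact snippet.

```python
from sentence_transformers import SentenceTransformer

# Assumed checkpoint id from the Hugging Face release.
model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

queries = ["What is the capital of France?"]
docs = [
    "Paris is the capital and largest city of France.",
    "Qwen3 is a family of large language models.",
]

q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(docs, normalize_embeddings=True)

# With normalized vectors, the dot product is cosine similarity;
# the first document should score higher for this query.
print(q_emb @ d_emb.T)
```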

unstructured.io (2.0 parsecs away)
Navigate the Massive Text Embedding Benchmark (MTEB) leaderboard with confidence! Understand the difference between Bi-Encoders and Cross-Encoders, learn how text embedding models are pre-trained and benchmarked, and how to make the best choice for your specific use case.
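
The bi- vs cross-encoder distinction comes down to where query and document meet: a bi-encoder embeds each independently and compares vectors, while a cross-encoder scores the pair jointly in one forward pass. A minimal sketch with sentence-transformers; the checkpoint names are common public models chosen here for illustration:

```python
# Bi-encoder: embed query and document separately, compare vectors.
# Cross-encoder: feed the (query, document) pair through one model.
# Checkpoint names are illustrative public models, not from the article.
from sentence_transformers import SentenceTransformer, CrossEncoder

query = "how do embeddings work?"
doc = "Embedding models map text to dense vectors for similarity search."

bi = SentenceTransformer("all-MiniLM-L6-v2")
q, d = bi.encode([query, doc], normalize_embeddings=True)
print("bi-encoder cosine:", float(q @ d))  # fast, index-friendly

cross = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
print("cross-encoder score:", cross.predict([(query, doc)])[0])  # slower, more accurate
```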

www.blopig.com (19.4 parsecs away)
[AI summary] The article discusses the application of graph neural networks (GNNs) in protein property prediction, highlighting their ability to model protein structures and interactions, the integration of pre-trained protein language models like ESM, and the use of residual layers to address oversmoothing challenges.
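
Residual connections counter oversmoothing by letting each message-passing layer learn an update on top of the previous node features instead of replacing them, so deep stacks do not collapse all nodes toward one value. A generic sketch in plain PyTorch with dense adjacency message passing, not the article's actual architecture:

```python
# Generic residual message-passing layer: h' = LayerNorm(h + MLP(A @ h)).
# The residual path preserves per-node information across many layers,
# mitigating oversmoothing. Not the article's exact model.
import torch
import torch.nn as nn

class ResidualGNNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: row-normalized adjacency (n, n); h: node features (n, dim)
        messages = adj @ h                           # aggregate neighbors
        return self.norm(h + self.update(messages))  # residual update

# Toy usage: 5 residue nodes, 16-dim features, ring-graph contacts.
n, dim = 5, 16
h = torch.randn(n, dim)
adj = torch.eye(n).roll(1, 0) + torch.eye(n).roll(-1, 0)
adj = adj / adj.sum(-1, keepdim=True)  # row-normalize
layer = ResidualGNNLayer(dim)
print(layer(h, adj).shape)             # torch.Size([5, 16])
```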