lukesalamone.github.io (you are here)
blog.vespa.ai
This is the first blog post in a series of posts where we introduce using pretrained Transformer models for search and document ranking with Vespa.ai.
blog.reachsumit.com
Welcome to Sumit Kumar's Personal Blog!
www.shaped.ai
This article explores how cross-encoders, long praised for their performance in neural ranking, may in fact be reimplementing classic information retrieval logic: specifically, a semantic variant of BM25. Through mechanistic interpretability techniques, the authors uncover circuits within MiniLM that correspond to term frequency, IDF, length normalization, and final relevance scoring. The findings bridge modern transformer-based relevance modeling with foundational IR principles, offering both theoretical insight and a roadmap for building more transparent and interpretable neural retrieval systems.
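For reference, the classic BM25 function that the article says these circuits approximate combines exactly those three ingredients: term frequency with saturation, inverse document frequency, and document-length normalization. A minimal sketch (a standard textbook BM25 with the usual k1 and b parameters, not the article's MiniLM circuits, and with an invented toy corpus):

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query using classic BM25."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    score = 0.0
    for term in query_terms:
        tf = doc.count(term)                          # term frequency
        df = sum(1 for d in corpus if term in d)      # document frequency
        # Smoothed inverse document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        # TF saturation with length normalization (controlled by k1 and b)
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * norm
    return score

# Toy corpus of tokenized documents (illustrative only)
corpus = [
    ["neural", "ranking", "with", "cross", "encoders"],
    ["bm25", "is", "a", "classic", "ranking", "function"],
    ["transformers", "for", "search"],
]
print(bm25_score(["ranking", "bm25"], corpus[1], corpus))  # matching doc scores > 0
print(bm25_score(["ranking", "bm25"], corpus[2], corpus))  # no matching terms: 0.0
```

The article's claim, on this view, is that the cross-encoder learns internal analogues of `tf`, `idf`, and the length-normalization denominator over semantic rather than exact-match terms.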
github.com
Retrieval Augmented Generation (RAG) on audio data with LangChain - AssemblyAI-Community/rag-langchain-audio-data