Below are the steps I used to get Mistral's 8x7B Mixture of Experts (MoE) model running locally on my MacBook (with its Apple M2 chip and 24 GB of memory). Here's a great overview of the model for anyone interested in learning more. Short version: the Mistral "Mixtral" 8x7B 32k model, developed by Mistral AI, is...
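The post's full step-by-step instructions are truncated here, but as a rough sketch of what running Mixtral locally on an Apple Silicon machine can look like, the snippet below loads a quantized GGUF build through llama-cpp-python with Metal GPU offloading. The model path, the quantization level, and the choice of llama-cpp-python itself are assumptions for illustration, not necessarily the exact steps described in the original post.

```python
# Minimal sketch: run a quantized Mixtral 8x7B GGUF locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a GGUF file already downloaded;
# the filename below is hypothetical. On 24 GB of RAM, a low-bit quantization
# (e.g. 3-bit) is needed for the weights to fit.
from llama_cpp import Llama

llm = Llama(
    model_path="./mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf",  # hypothetical local path
    n_ctx=4096,        # context window for this session
    n_gpu_layers=-1,   # offload all layers to the M2 GPU via Metal
)

output = llm(
    "Q: Name three uses for a locally hosted LLM. A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```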