- www.danieldemmel.me: Part two of the series "Building applications using embeddings, vector search and Large Language Models".
- ollama.com: Get up and running with large language models.
- simonam.dev: All the steps required to turn an RTX 2060 into an OpenAI drop-in replacement (see the sketch after this list).
- mattmazur.com: Below are the steps I used to get Mistral 8x7B's Mixture of Experts (MoE) model running locally on my MacBook (with its Apple M2 chip and 24 GB of memory). Here's a great overview of the model for anyone interested in learning more. Short version: the Mistral "Mixtral" 8x7B 32k model, developed by Mistral AI, is...
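To make the "OpenAI drop-in replacement" idea from the links above concrete, here is a minimal sketch that points the official openai Python client at a locally running Ollama server, which exposes an OpenAI-compatible API at http://localhost:11434/v1 by default. The mixtral model tag and the prompt are illustrative assumptions, not steps taken from any of the linked posts.

```python
# Minimal sketch: reuse the OpenAI Python client against a local Ollama server.
# Assumes `ollama pull mixtral` has already been run and the Ollama server is
# listening on its default port (11434). Model tag and prompt are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local OpenAI-compatible endpoint
    api_key="ollama",  # the client requires a key, but Ollama ignores its value
)

response = client.chat.completions.create(
    model="mixtral",
    messages=[{"role": "user", "content": "Explain Mixture of Experts in one sentence."}],
)

print(response.choices[0].message.content)
```

Because only the base_url and api_key change, existing OpenAI-based code can usually be pointed at a local server like this without further modification.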