jasoneckert.github.io
mobiarch.wordpress.com
Ollama makes it very easy to run open-source LLMs locally, with decent performance even on small laptops. It is an alternative to Hugging Face for running models locally: Hugging Face libraries run on top of TensorFlow or PyTorch, whereas Ollama uses llama.cpp as its underlying runtime. This makes Ollama very easy to get...
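Because Ollama wraps llama.cpp behind a local HTTP server, querying a model needs no TensorFlow or PyTorch stack at all. A minimal sketch, assuming Ollama is running on its default port (11434) and that a model named "llama3" has already been pulled (the model name is an assumption for illustration):

```python
import json
import urllib.request

def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build a POST request for Ollama's /api/generate endpoint.

    stream=False asks the server to return the full completion
    in a single JSON object instead of a stream of chunks.
    """
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Usage (requires a running Ollama server and a pulled model):
#   with urllib.request.urlopen(build_generate_request("llama3", "Why is the sky blue?")) as resp:
#       print(json.loads(resp.read())["response"])
```

Since the runtime is llama.cpp, the same request works for any GGUF-based model Ollama serves; only the `model` field changes.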
www.jeremymorgan.com
Want to run a large language model like ChatGPT on your Ubuntu machine? Here are the full instructions.
weisser-zwerg.dev
Setting Up AI Models on Older Hardware - A Beginner's Guide to Running Local LLMs with Limited Resources
nora.codes
[AI summary] The article explains the concept of 'unsafe' in Rust, clarifying that it allows specific low-level operations while maintaining overall memory safety through the language's type system and safe abstractions.