You are here: jasoneckert.github.io

mobiarch.wordpress.com (3.6 parsecs away)
Ollama makes it easy to run open-source LLMs locally; you can expect decent performance even on small laptops. Ollama is an alternative to Hugging Face for running models locally: Hugging Face libraries run on top of TensorFlow or PyTorch, while Ollama uses llama.cpp as its underlying runtime. This makes Ollama very easy to get...

www.jeremymorgan.com (1.3 parsecs away)
Want to run a large language model like ChatGPT on your Ubuntu machine? Here are the full instructions.

weisser-zwerg.dev (3.3 parsecs away)
Setting Up AI Models on Older Hardware - A Beginner's Guide to Running Local LLMs with Limited Resources

nora.codes (20.5 parsecs away)
[AI summary] The article explains the concept of 'unsafe' in Rust, clarifying that it allows specific low-level operations while maintaining overall memory safety through the language's type system and safe abstractions.