Explore: select a destination.

You are here: www.antstack.com

mobiarch.wordpress.com (10.1 parsecs away)
Ollama makes it super easy to run open-source LLMs locally, and you can expect decent performance even on small laptops. Ollama is an alternative to Hugging Face for running models locally: Hugging Face libraries run on top of TensorFlow or PyTorch, while Ollama uses llama.cpp as its underlying runtime. This makes Ollama very easy to get... (a minimal sketch of querying a local Ollama server follows this list).

yastr.dev (7.6 parsecs away)
Retrieval-augmented generation in simple terms, my experience running one, and the pitfalls and solutions I discovered.

blog.streamlit.io (12.2 parsecs away)
A step-by-step guide using OpenAI, LangChain, and Streamlit.

seekinglavenderlane.com (46.7 parsecs away)
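
Since the first destination above is about running LLMs with Ollama, here is a minimal sketch of what "running locally" looks like in practice: a Python call against Ollama's REST API. It assumes Ollama is installed and serving on its default port 11434, and that a model has already been pulled; the model name llama3 is illustrative, not something the excerpt specifies.

```python
import requests

# Assumes an Ollama server is running locally (`ollama serve`) and that a
# model has been pulled beforehand, e.g. `ollama pull llama3`.
# The model name is illustrative; substitute any model you have pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local Ollama server and return the full reply."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    # With "stream": False the server replies with a single JSON object
    # whose "response" field contains the complete generated text.
    return response.json()["response"]

if __name__ == "__main__":
    print(ask("In one sentence, why is llama.cpp efficient on laptop CPUs?"))
```

The `ollama run` CLI talks to this same local server, which is why a model that works in the terminal is immediately reachable from scripts like the one above.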