www.antstack.com

mobiarch.wordpress.com
Ollama makes it very easy to run open-source LLMs locally, with decent performance even on small laptops. It is an alternative to Hugging Face for running models locally: Hugging Face libraries run on top of TensorFlow or PyTorch, while Ollama uses llama.cpp as its underlying runtime. This makes Ollama very easy to get...

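To make the excerpt concrete: once the Ollama server is running, a pulled model can be queried through Ollama's local HTTP API, which listens on port 11434 by default. Below is a minimal sketch in Python; the model name llama3 is an assumption of this sketch, not something the linked post specifies.

```python
import json
import urllib.request

# Ollama serves a local HTTP API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(prompt: str, model: str = "llama3") -> str:
    """Send one non-streaming generation request to the local Ollama server."""
    payload = json.dumps({
        "model": model,    # assumes the model was pulled first: `ollama pull llama3`
        "prompt": prompt,
        "stream": False,   # return a single JSON object instead of a chunk stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(generate("In one sentence, what is llama.cpp?"))
```
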
yastr.dev
Retrieval-augmented generation in simple terms, my experience running one, and the pitfalls and solutions I discovered.

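As a rough orientation for "retrieval-augmented generation in simple terms", here is a toy sketch of the retrieve-then-generate loop. Everything in it is illustrative: the three-document corpus is invented, naive word-overlap scoring stands in for a real embedding or vector search, and the local Ollama endpoint and llama3 model from the previous sketch are assumptions.

```python
import json
import urllib.request

# Toy corpus invented for illustration; a real system would index your own documents.
DOCS = [
    "Ollama runs open-source LLMs locally, using llama.cpp as its runtime.",
    "Streamlit is a Python framework for quickly building data apps.",
    "Retrieval-augmented generation grounds an LLM answer in retrieved documents.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question.
    This stands in for a real embedding/vector search."""
    q_words = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    """Retrieve context, then ask a local Ollama model to answer from it."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    payload = json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(answer("What runtime does Ollama use?"))
```
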
blog.streamlit.io
A step-by-step guide using OpenAI, LangChain, and Streamlit.

seekinglavenderlane.com