Explore: sites similar to www.4async.com (you are here)
www.anyscale.com (3.2 parsecs away)
In part 2 of this blog series, we show you how to turbocharge embeddings in LLM workflows using LangChain and Ray Data.
www.antstack.com (3.6 parsecs away)
Learn how to enhance your LangChain chatbot with AWS DynamoDB using partition and sort keys for efficient chat memory management. Follow this detailed tutorial now!
mobiarch.wordpress.com (1.5 parsecs away)
Ollama makes it super easy to run open-source LLMs locally, with decent performance even on small laptops. It is an alternative to Hugging Face for running models locally: Hugging Face libraries run on top of TensorFlow or PyTorch, while Ollama uses llama.cpp as the underlying runtime. This makes Ollama very easy to get...
shyamal.me (40.4 parsecs away)
RFT is still in its early days, but it's a powerful tool, making it surprisingly easy to create expert models for specific domains with less training data.