www.jeremymorgan.com
shekhargulati.com
Today I was reading Chapter 9, "Multimodal Large Language Models," of the Hands-On Large Language Models book and thought of applying it to a problem I face occasionally. The chapter covers the CLIP model and how you can use it to embed both text and images in the same vector space. Like most normal humans, I take...
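The key idea in the snippet above, CLIP placing text and images in one shared vector space, means cross-modal similarity reduces to a plain vector comparison. A minimal sketch of that comparison with toy vectors (in a real setup the embeddings would come from CLIP's image and text encoders; the numbers here are made up purely for illustration):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for CLIP encoder outputs. Because image and
# text vectors live in the same space, the same similarity measure works
# across modalities.
image_vec = [0.9, 0.1, 0.3]       # hypothetical image embedding
text_vec_match = [0.8, 0.2, 0.25]  # hypothetical caption that fits the image
text_vec_other = [0.1, 0.9, 0.0]   # hypothetical unrelated caption

best = max([text_vec_match, text_vec_other],
           key=lambda t: cosine_similarity(image_vec, t))
```

The matching caption's vector scores higher against the image vector, which is exactly how CLIP-style text-to-image search ranks candidates.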
mobiarch.wordpress.com
Ollama makes it super easy to run open source LLMs locally. You can expect decent performance even on small laptops. Ollama is an alternative to Hugging Face for running models locally: Hugging Face libraries run on top of TensorFlow or Torch, while Ollama uses llama.cpp as the underlying runtime. This makes Ollama very easy to get...
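Since Ollama exposes a small REST API on localhost (port 11434 by default), the "easy to run" claim from the snippet above can be sketched with nothing but the Python standard library. The model name is an assumption, and the `generate` call only succeeds with an Ollama server actually running and the model pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Minimal non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Assumes `ollama serve` is running locally and the model has been
    # pulled beforehand (e.g. `ollama pull llama3` -- model name is illustrative).
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because llama.cpp handles inference inside Ollama, there is no TensorFlow or Torch dependency on the client side at all; the caller only needs HTTP.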
pcandmore.net
My wife and I tried cooking Indian naan flatbread according to a recipe generated by Zephyr-7B-β. I wrote an article about not using ChatGPT a while ago, and I am still not interested in funneling semi-personal information into such a service, especially since there is the theoretical possibility of running an open large language model on my own hardware. And for a while, that remained a theoretical possibility for me.
bdtechtalks.com
Retrieval augmented generation (RAG) enables you to use custom documents with LLMs to improve their precision.
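The one-line definition above can be made concrete: retrieve the most relevant custom document for a question, then prepend it to the prompt so the model answers from your data. A toy sketch using word overlap as the retrieval score (a real RAG system would use vector embeddings; the documents and scoring here are illustrative only):

```python
def retrieve(question: str, documents: list[str]) -> str:
    # Pick the document sharing the most words with the question.
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, documents: list[str]) -> str:
    # Ground the LLM's answer in the retrieved context.
    context = retrieve(question, documents)
    return (f"Context: {context}\n\n"
            f"Question: {question}\n"
            f"Answer using only the context above.")

# Hypothetical "custom documents" a RAG setup might index.
docs = [
    "Ollama runs open source LLMs locally using llama.cpp.",
    "CLIP embeds text and images in the same vector space.",
]
```

The resulting prompt is what gets sent to the LLM; the retrieval step is what lets the model answer precisely about documents it was never trained on.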