You are here: yastr.dev
bdtechtalks.com (12.0 parsecs away)
Retrieval-augmented generation (RAG) enables you to use custom documents with LLMs to improve their accuracy.
mobiarch.wordpress.com (9.4 parsecs away)
Ollama makes it easy to run open-source LLMs locally, with decent performance even on small laptops. It is an alternative to Hugging Face for running models locally: Hugging Face libraries run on top of TensorFlow or PyTorch, while Ollama uses llama.cpp as its underlying runtime. This makes Ollama very easy to get...
www.vladsiv.com (11.2 parsecs away)
In this blog post, I'm sharing my experience taking the Databricks Generative AI Associate exam, from study notes to the resources that made a difference. Whether you're just starting your prep or looking for extra insights, this guide will help you find the right resources.
www.britive.com (30.0 parsecs away)
Secure agentic AI identities with Britive. Enforce Zero Standing Privileges, runtime authorization, and unified policies across cloud, SaaS, and hybrid environments.
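
The RAG blurb above can be made concrete with a toy example of the retrieval step: pick the most relevant custom document and prepend it to the prompt sent to the LLM. This is a minimal sketch with made-up documents and a naive word-overlap scorer; the function names and the scoring method are illustrative assumptions, not any particular library's API.

```python
# Toy RAG retrieval: score each document by word overlap with the
# question, then build a prompt that grounds the LLM in the best match.
# All names and documents here are hypothetical examples.

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, documents: list[str]) -> str:
    """Prepend the retrieved document as context for the LLM."""
    context = retrieve(question, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Ollama runs open source LLMs locally using llama.cpp.",
    "Britive enforces Zero Standing Privileges for cloud identities.",
]
print(build_prompt("How does Ollama run models locally?", docs))
```

A production system would replace the word-overlap scorer with embedding similarity over a vector store, but the pipeline shape, retrieve then prompt, stays the same.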