Explore >> Select a destination


You are here

blog.nunosenica.com
blog.chand1012.dev
9.2 parsecs away

It's no secret that I think Llama 2 and its derivatives are the future of AI and ML. Rather than getting bigger and smarter (I think GPT-4 is enough for 99.5% of applications), AI should instead strive to get smaller, cheaper, and faster. If you want to run Llama 2 via llama.cpp, you can check out my guide on how to do that. The problem with llama.cpp, however, is that before you can run it you have to install all the dependencies, either download a binary or clone and build the repo, and make sure your drivers are working.
www.grendelman.net
11.5 parsecs away

nathanchance.dev
10.0 parsecs away

Recently, I built a computer for school that I installed Windows 10 Pro on (link to the current specs if you are curious). I was a little bummed about leaving Chrome OS because I was going to lose my local Linux development environment; however, Windows Subsystem for Linux is a thing, and it has gotten even better with WSL 2, which runs an actual Linux kernel, so there is full Linux compatibility going forward.
listed.to
56.8 parsecs away

For context, I have VSCode installed inside a Toolbox in Fedora 40 Silverblue. Getting the Extension to work in a Fedora container: The Slint VSCode extensio...