Explore >> Select a destination


You are here

www.jeremymorgan.com
| | michael-lewis.com
6.8 parsecs away

Travel
| | Notes on using Tensorflow with GPU support in a Docker container interactively, running an IDE within the container, and running Jupyter Notebooks from the container.
| | simonwillison.net
8.5 parsecs away

Travel
| | I think it's now possible to train a large language model with similar functionality to GPT-3 for $85,000. And I think we might soon be able to run the resulting ...
| | robr.dev
10.5 parsecs away

Travel
| | Three things from this week. I tried out three different ways to use an LLM on my home PC this week. A Large Language Model (LLM) is the kind of ML model that runs inside of ChatGPT and other similar popular chatbots. Running on my home PC lets me see just what they can do and whether they're useful to me. While there have been a whole lot of different models emerging lately, there are also a few different frontends or user interfaces that can load the model and perform inferences.
| | blog.chand1012.dev
83.6 parsecs away

Travel
| In this tutorial, we will be setting up a Flask server using Gunicorn and NGINX on Ubuntu 18.04 LTS. Requirements Any system running Ubuntu 18.04 LTS with SSH enabled. An SSH client. Installing After connecting via SSH to your server as root, run the following commands to install the required programs: apt update apt upgrade -y apt install nginx python3 python3-pip python3-venv This will install Python, NGINX, and the virtual environment needed to run the app.