reticulated.net (you are here)

jalammar.github.io
Can AI image generation tools make re-imagined, higher-resolution versions of old video game graphics? Over the last few days, I used AI image generation to reproduce one of my childhood nightmares. I wrestled with Stable Diffusion, DALL-E, and Midjourney to see how these commercial AI generation tools can help retell an old visual story: the intro cinematic to an old video game (Nemesis 2 on the MSX). This post describes the process and my experience using these models/services to retell a story in higher-fidelity graphics. Meet Dr. Venom. This fine-looking gentleman is the villain in a video game. Dr. Venom appears in the intro cinematic of Nemesis 2, a 1987 video game. This image, in particular, comes at a dramatic reveal in the cinematic. Let's update ...

www.edwinwenink.xyz
[AI summary] The author explores using Stable Diffusion to create professional self-portraits without fine-tuning, experimenting with reference images and artistic styles to achieve a recognizable likeness.
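
For readers curious what "reference images without fine-tuning" can look like in practice, here is a minimal img2img sketch using the Hugging Face diffusers library. The model ID, file names, prompt, and strength value are illustrative assumptions, not the setup described in the linked post.

```python
# Minimal img2img sketch: start from a reference photo and steer it toward a
# prompt without any fine-tuning. Model ID, prompt, and strength are
# illustrative choices only.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical local reference photo, resized to the model's native resolution.
init_image = load_image("reference_portrait.jpg").resize((512, 512))

result = pipe(
    prompt="professional headshot portrait, studio lighting, sharp focus",
    image=init_image,
    strength=0.55,       # lower values stay closer to the reference likeness
    guidance_scale=7.5,  # how strongly to follow the prompt
)
result.images[0].save("self_portrait.png")
```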

davidyat.es
I ended my last post about AI image generation on the following note:

harvardnlp.github.io
[AI summary] The provided code is a comprehensive implementation of the Transformer model, including data loading, model architecture, training, and visualization. It also includes functions for decoding and visualizing attention mechanisms across different layers of the model. The code is structured to support both training and inference, with examples provided for running the model and visualizing attention patterns.
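
For context, the attention weights that such visualizations plot come from scaled dot-product attention, the core operation the rest of the Transformer is built around. Below is a minimal PyTorch sketch of that operation; it is a simplified illustration under my own naming, not the harvardnlp code itself, and the tensor shapes in the toy usage are arbitrary.

```python
# Minimal sketch of scaled dot-product attention. The returned weights are the
# quantities that attention visualizations plot per layer and head.
import math
import torch

def attention(query, key, value, mask=None):
    """Compute softmax(Q K^T / sqrt(d_k)) V; return the output and the weights."""
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Positions where mask == 0 are blocked from attending (e.g. future tokens).
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = scores.softmax(dim=-1)
    return torch.matmul(weights, value), weights

# Toy usage: a batch of 2 sequences, 5 tokens each, model dimension 8.
x = torch.randn(2, 5, 8)
out, attn = attention(x, x, x)   # self-attention: Q = K = V
print(out.shape, attn.shape)     # torch.Size([2, 5, 8]) torch.Size([2, 5, 5])
```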