vickiboykis.com
jalammar.github.io
Translations: Chinese, Vietnamese. (V2 Nov 2022: updated images for a more precise description of forward diffusion; a few more images in this version.) AI image generation is the most recent AI capability blowing people's minds (mine included). The ability to create striking visuals from text descriptions has a magical quality to it and points clearly to a shift in how humans create art. The release of Stable Diffusion is a clear milestone in this development because it made a high-performance model available to the masses (performance in terms of image quality, as well as speed and relatively low resource/memory requirements). After experimenting with AI image generation, you may start to wonder how it works. This is a gentle introduction to how Stable Diffusion works.
www.chrisritchie.org
Experimenting with prompts in Stable Diffusion.
n9o.xyz
In the last year, several machine learning models that generate images from textual descriptions have become available to the public. This has been an interesting development in the AI space. However, only recently did this technology become available for everyone to try.
cprimozic.net
I'm picking back up the work that I started last year building 3D scenes and sketches with Three.JS. At that time, it was just after AI image generators like DALL-E and Stable Diffusion were really taking off. I had success running Stable Diffusion locally and using it to generate textures for terrain, buildings, and other environments in the 3D worlds I was building. I was using Stable Diffusion v1 back then.