Explore >> Select a destination


You are here: lisyarus.github.io

www.willusher.io
3.7 parsecs away
[AI summary] A detailed explanation of implementing a parallel Marching Cubes algorithm using WebGPU, including data-parallel primitives like exclusive scan and stream compaction. Performance is compared with a Vulkan implementation on different hardware, with close results. The implementation involves several compute passes and is designed to run in the browser with near-native performance.

gpfault.net
3.9 parsecs away
[AI summary] The article discusses the implementation of particle systems in WebGL 2, covering various enhancements such as dynamic billboard rendering using instancing, sprite textures, and the integration of force fields for more complex particle behaviors. It also details the setup of vertex arrays and buffers for efficient rendering, along with the necessary shader modifications to support these features.

nelari.us
5.3 parsecs away
A devlog of my GPU pathtracer project, where I am writing a physically based pathtracer from scratch using WebGPU.

jalammar.github.io
30.5 parsecs away
Summary: The latest batch of language models can be much smaller yet achieve GPT-3-like performance by being able to query a database or search the web for information. A key indication is that building larger and larger models is not the only way to improve performance. The last few years saw the rise of Large Language Models (LLMs): machine learning models that rapidly improve how machines process and generate language. Some of the highlights since 2017 include: the original Transformer breaks previous performance records for machine translation; BERT popularizes the pre-training-then-finetuning process, as well as Transformer-based contextualized...