You are here: blog.daniemon.com

qwenlm.github.io (4.6 parsecs away)

Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct, a 480B-parameter Mixture-of-Experts model with 35B active parameters that natively supports a context length of 256K tokens, and up to 1M tokens with extrapolation methods, offering exceptional performance in both coding and agentic tasks. Qwen3-Coder-480B-A35B-Instruct sets new state-of-the-art results among open models on Agentic Coding, Agentic Browser-Use, and Agentic Tool-Use, comparable to Claude Sonnet 4.
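A minimal loading sketch under stated assumptions: the checkpoint id Qwen/Qwen3-Coder-480B-A35B-Instruct is inferred from the model name in the announcement, and the standard Hugging Face transformers chat flow is assumed; a 480B-parameter MoE model needs a multi-GPU server, so this is illustrative only, not the official usage recipe.

```python
# Illustrative sketch: loading Qwen3-Coder with Hugging Face transformers.
# The repo id below is assumed from the model name in the announcement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the model across available GPUs (requires accelerate);
# a 480B MoE model will not fit on a single consumer GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a quicksort in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```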

josephm.dev (1.9 parsecs away)
Get the OpenAI API to return a JSON object.
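A minimal sketch of one way to do this, assuming the official openai Python SDK (v1-style client) and its JSON mode via response_format={"type": "json_object"}; the model name and prompts are placeholders.

```python
# Sketch: force the OpenAI chat completions API to return valid JSON
# by enabling JSON mode with response_format={"type": "json_object"}.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    response_format={"type": "json_object"},
    messages=[
        # JSON mode requires the word "JSON" to appear somewhere in the prompt.
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": 'List three primary colors as {"colors": [...]}.'},
    ],
)

# The message content is guaranteed to be parseable JSON in JSON mode.
data = json.loads(response.choices[0].message.content)
print(data)
```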

blog.miguelgrinberg.com (2.1 parsecs away)
miguelgrinberg.com

teddykoker.com (15.5 parsecs away)
This post is the first in a series of articles about natural language processing (NLP), a subfield of machine learning concerned with the interaction between computers and human language. This article will focus on attention, a mechanism that forms the backbone of many state-of-the-art language models, including Google's BERT (Devlin et al., 2018) and OpenAI's GPT-2 (Radford et al., 2019).
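As a companion to that description, here is a minimal NumPy sketch of scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V; the shapes and data are toy values, not taken from the post.

```python
# Minimal scaled dot-product attention: each query attends over all keys,
# and the output is the attention-weighted sum of the values.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity, scaled by sqrt(d_k)
    # Numerically stable row-wise softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 queries of dimension 4
K = rng.normal(size=(5, 4))  # 5 keys
V = rng.normal(size=(5, 4))  # 5 values
print(attention(Q, K, V).shape)  # (3, 4): one output vector per query
```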