shekhargulati.com

Mistral released a new model yesterday. It is designed to excel at agentic coding tasks, meaning it can use tools. It is released under the Apache 2.0 license. It is fine-tuned from Mistral-Small-3.1, so it has a long context window of up to 128k tokens. It is a 24B-parameter model that uses the Tekken tokenizer with a 131k...
www.vellum.ai

Understand the latest benchmarks, their limitations, and how models compare.
tomasvotruba.com

Last week, I had many interesting discussions about OpenAI and GPT at [Laracon in Porto](https://laracon.eu/), especially with [Marcel Pociot](https://twitter.com/marcelpociot). I've learned more in those 2 days than from the Internet since December. That feels great, and the tips seem basic but effective. But as in any other fresh field, finding out about them takes a lot of work. I want to embrace sharing in the GPT community, so here is a cherry-picked list of failures and tricks from people **who were generous enough to share them with me**.
blog.rfox.eu

Some context about GPT and LLMs (Large Language Models), how to run them on your own computer, and some examples of how I use GPT-4.