You are here: cucumber.io
avelino.run (16.8 parsecs away)

I constantly hear people say that to contribute to an open source project you need to program very well, have a lot of knowledge, be able to handle criticism of your code, and so on. I see these statements as excuses that put the focus in the wrong place. In the last few months I haven't had as much time as I'd like to contribute code to open source projects, but that hasn't stopped me from contributing. In fact, the lack of priority (not a lack of time) has led me to contribute by reporting the problems I run into in the projects and software I use day to day, and that has been more work than writing code against a defined specification (an issue someone has already invested time in detailing).
babeljs.io (16.3 parsecs away)

For the first time, Babel is participating in Summer of Code!
snipe.net (16.9 parsecs away)

If you contribute to an open source project, you have my gratitude. It's often a thankless job, unless you're working on very high profile projects, and even then. Most people don't become rich and/or famous because of their work in open source, and you sometimes have to deal with obnoxious project maintainers who don't appreciate [...]
ezyang.github.io (54.3 parsecs away)

When you're learning to use a new framework or library, simple uses of the software can be handled by copy-pasting code from tutorials and tweaking it as necessary. But at some point it's a good idea to slog through the docs from top to bottom, to get a full picture of what is and is not possible in the software. One of the big wins of AI coding is that LLMs know so many things from their pretraining. For extremely popular frameworks that occur prominently in the pretraining set, an LLM is likely to have memorized most aspects of how to use the framework. But for things that are less common or beyond the knowledge cutoff, you will likely get a model that hallucinates. Ideally, an agentic model would know to do a web search and find the docs it needs. However, Sonnet does not currently support web search, so you have to manually feed it documentation pages as needed. Fortunately, Cursor makes this very convenient: simply dropping a URL inside a chat message will include its contents for the LLM.
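
As a rough illustration of that "feed it the docs yourself" workflow (a minimal sketch only, not Cursor's actual URL handling; the docs URL, the question, and the ask_llm call mentioned in the comments are hypothetical stand-ins for whatever page and chat API you use), one could fetch a documentation page and prepend it to the prompt:

```python
# Sketch: manually include a documentation page in an LLM prompt so the model
# can answer from the real docs instead of hallucinating. Not Cursor's
# internals; the URL and question below are placeholder examples.
import urllib.request


def fetch_page(url: str) -> str:
    """Download the raw text of a documentation page."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")


def build_prompt(doc_url: str, question: str) -> str:
    """Prepend the fetched documentation to the user's question."""
    docs = fetch_page(doc_url)
    return (
        "Answer using only the documentation below.\n\n"
        f"--- DOCUMENTATION ({doc_url}) ---\n{docs}\n--- END DOCUMENTATION ---\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    # Hypothetical example URL; substitute the page for your framework.
    prompt = build_prompt(
        "https://example.com/framework/docs/getting-started.html",
        "How do I configure the client for streaming responses?",
    )
    # ask_llm(prompt) would go here with your chat API of choice.
    print(prompt[:500])
```

Dropping a URL into a Cursor chat message does this packaging for you; the sketch just makes the idea explicit for tools that don't.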