Explore >> Select a destination


You are here: tech.michaelaltfield.net
werat.dev (8.7 parsecs away)

Benchmarks are often underestimated and don't get the same attention as tests. However, "performance is a feature," and when something is not tested it might as well be broken. If performance is not measured and tracked, regressions are inevitable. Modern tooling makes it really easy to write benchmarks. Some languages have built-in support: Rust comes with cargo bench (docs) and Go has go test -bench (docs). For C++ there is google/benchmark, which is not as streamlined as having benchmarking built into the language infrastructure, but still definitely worth the effort.
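As a rough illustration of how little ceremony the built-in tooling needs, here is a minimal Go benchmark sketch; the fib function and package name are invented for the example, and go test -bench=. picks up any function named Benchmark* on its own:

```go
package fib

import "testing"

// fib is a deliberately slow toy function used as the thing to measure.
func fib(n int) int {
	if n < 2 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

// BenchmarkFib is discovered by `go test -bench=.`; the testing package
// chooses b.N so the loop runs long enough for a stable measurement.
func BenchmarkFib(b *testing.B) {
	for i := 0; i < b.N; i++ {
		fib(20)
	}
}
```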
andre.arko.net (14.0 parsecs away)

I've been using Dependabot for a long time. Back before GitHub bought it and took away the web dashboard, there was an amazing, glorious, wonderful feature: you could check a checkbox, and Dependabot would merge the open PR as soon as your tests passed. Now that Dependabot has no web dashboard, and can't be added to a repo with one click, it has also lost the ability to automatically merge updates.
code.dblock.org (7.9 parsecs away)

The OpenSearch API specification is authored in OpenAPI and used to auto-generate OpenSearch language clients. I wanted to know how much of the API it describes vs. the actual API implemented in the default distribution of OpenSearch that includes all plugins. To do so, I exposed an iterator over REST handlers in OpenSearch core and wrote a plugin that renders a very minimal OpenAPI spec at runtime. All that was left was to compare the manually authored OpenAPI spec in opensearch-api-specification to the runtime one, added in opensearch-api-specification#179. The comparison workflow outputs the total and relative number of APIs described.
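The counting step can be pictured with a sketch like the following, which only compares top-level path keys between two OpenAPI documents; the file names are placeholders, and the actual comparison in the opensearch-api-specification workflow is more detailed than this:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// spec captures just the "paths" object of an OpenAPI document.
type spec struct {
	Paths map[string]json.RawMessage `json:"paths"`
}

// loadPaths returns the set of paths declared in an OpenAPI JSON file.
func loadPaths(file string) (map[string]bool, error) {
	data, err := os.ReadFile(file)
	if err != nil {
		return nil, err
	}
	var s spec
	if err := json.Unmarshal(data, &s); err != nil {
		return nil, err
	}
	paths := make(map[string]bool, len(s.Paths))
	for p := range s.Paths {
		paths[p] = true
	}
	return paths, nil
}

func main() {
	// Placeholder file names: the hand-written spec and the spec
	// rendered at runtime by the plugin described above.
	authored, err := loadPaths("authored-spec.json")
	if err != nil {
		panic(err)
	}
	implemented, err := loadPaths("runtime-spec.json")
	if err != nil {
		panic(err)
	}
	covered := 0
	for p := range implemented {
		if authored[p] {
			covered++
		}
	}
	fmt.Printf("%d/%d implemented APIs described (%.1f%%)\n",
		covered, len(implemented), 100*float64(covered)/float64(len(implemented)))
}
```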
localheinz.com (14.8 parsecs away)

Since GitHub introduced the automatic generation of release notes, creating releases with release notes has become easier than ever.