Explore: select a destination (you are here: joecarlsmith.com)

- www.greaterwrong.com (3.8 parsecs away): Eric Drexler (Centre for the Governance of AI, University of Oxford): "This document argues for 'open agencies' - not opaque, unitary agents - as the appropriate model for applying future AI capabilities to consequential tasks that call for combining human guidance with delegation of planning and implementation to AI systems. This prospect reframes and can help to tame a wide range of classic AI safety challenges, leveraging alignment techniques in a relatively fault-tolerant context."

- www.lesswrong.com (1.4 parsecs away): "We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren't conf..."

- longtermrisk.org (3.5 parsecs away): "Suffering risks, or s-risks, are 'risks of events that bring about suffering in cosmically significant amounts' (Althaus and Gloor 2016). This article will discuss why the reduction of s-risks could be a candidate for a top priority among altruistic causes aimed at influencing the long-term future. The number of sentient beings in the future might be astronomical, and certain cultural, evolutionary, and technological forces could cause many of these beings to have lives dominated by severe suffering. S-..."

- ig.ft.com (27.1 parsecs away): "Charts and maps show paradoxes of a pandemic that has claimed a million lives"