Explore: Select a destination

You are here: www.cold-takes.com

www.alignmentforum.org (4.0 parsecs away)
"Human feedback on diverse tasks" could lead to transformative AI, while requiring little innovation on current techniques. But it seems likely that...

optimists.ai (4.3 parsecs away)
Should we lobby governments to impose a moratorium on AI research? Since we don't enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it's clear that the benefits...

joecarlsmith.com (4.6 parsecs away)
Introduction to an essay series about paths to safe, useful superintelligence.

longtermrisk.org (25.8 parsecs away)
Suffering risks, or s-risks, are "risks of events that bring about suffering in cosmically significant amounts" (Althaus and Gloor 2016). This article will discuss why the reduction of s-risks could be a candidate for a top priority among altruistic causes aimed at influencing the long-term future. The number of sentient beings in the future might be astronomical, and certain cultural, evolutionary, and technological forces could cause many of these beings to have lives dominated by severe suffering. S-...