www.stafforini.com
magnusvinding.com

The following is a point-by-point critique of Lukas Gloor's essay Altruists Should Prioritize Artificial Intelligence. My hope is that this critique will serve to make it clear - to Lukas, myself, and others - where and why I disagree with this line of argument, and thereby hopefully also bring some relevant considerations to the table...
joecarlsmith.com

On a career move, and on AI-safety-focused people working at AI companies.
longtermrisk.org

Suffering risks, or s-risks, are "risks of events that bring about suffering in cosmically significant amounts" (Althaus and Gloor 2016). This article will discuss why the reduction of s-risks could be a candidate for a top priority among altruistic causes aimed at influencing the long-term future. The number of sentient beings in the future might be astronomical, and certain cultural, evolutionary, and technological forces could cause many of these beings to have lives dominated by severe suffering. S-...
politicalhat.com

[AI summary] The text is a compilation of various news articles, opinions, and commentary covering a wide range of topics, including politics, international relations, social issues, and cultural observations. It includes discussions of U.S. politics, particularly the Trump administration's policies and actions; international events such as Brexit and the German elections; social issues like feminism and racism; and cultural references. The tone is diverse, ranging from critical to analytical, with some sections expressing strong opinions and others providing factual reports.