joecarlsmith.com
www.lesswrong.com
On a career move, and on AI-safety-focused people working at frontier AI companies.

www.stafforini.com
[AI summary] The text discusses the challenge of evaluating the moral significance of different entities and actions, stressing the role of crucial considerations in ethical decision-making. It highlights the need for a nuanced approach to moral uncertainty, the potential impact of overlooked factors, and the balance between short-term and long-term objectives. It also touches on the complexities of utilitarianism, the role of moral theories in decision-making, and the value of ongoing analysis and deliberation in addressing ethical dilemmas.

blog.omega-prime.co.uk
I will soon be joining Anthropic, and so this is my last opportunity to write down some thoughts on the AI lab business model before I can be accused of spilling any inside information.

longtermrisk.org
Suffering risks, or s-risks, are "risks of events that bring about suffering in cosmically significant amounts" (Althaus and Gloor 2016). This article will discuss why the reduction of s-risks could be a candidate for a top priority among altruistic causes aimed at influencing the long-term future. The number of sentient beings in the future might be astronomical, and certain cultural, evolutionary, and technological forces could cause many of these beings to have lives dominated by severe suffering. S-...