joecarlsmith.com
www.alignmentforum.org
> This post is part of my AI strategy nearcasting series: trying to answer key strategic questions about transformative AI, under the assumption that...
www.lesswrong.com
> On a career move, and on AI-safety-focused people working at frontier AI companies.
longtermrisk.org
> Suffering risks, or s-risks, are "risks of events that bring about suffering in cosmically significant amounts" (Althaus and Gloor 2016). This article will discuss why the reduction of s-risks could be a candidate for a top priority among altruistic causes aimed at influencing the long-term future. The number of sentient beings in the future might be astronomical, and certain cultural, evolutionary, and technological forces could cause many of these beings to have lives dominated by severe suffering. S-...
williamgibsonblog.blogspot.com
> [AI summary] The provided text is a collection of questions and answers from a Q&A session with a writer, likely William Gibson, focusing on various aspects of his creative process, influences, and perspectives on writing and technology. The questions range from the role of technology in writing, the influence of personal experiences on characters, the structure of writing routines, and reflections on past projects. The answers reveal insights into the writer's approach to storytelling, the integration of real-world elements into fiction, and the challenges of maintaining creative integrity while engaging with contemporary issues.