www.lesswrong.com (you are here)
joecarlsmith.com: A high-level picture of how we might get from here to safe superintelligence.
qualiacomputing.com: by Mike Johnson. The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems. TL;DR version: functionalism ("consciousness is the sum-total of the functional properties of our brains")...
opentheory.net: [AI summary] The text presents a critique of the Foundational Research Institute's (FRI) approach to defining and addressing suffering and s-risks, with a focus on the philosophical and metaphysical challenges of functionalism. The author, Mike Johnson, argues that FRI's reliance on functionalism leads to intractable problems, such as the inability to provide a clear, disagreement-mediating definition of suffering. He outlines several objections to FRI's position, including the ineffability of suffering, intuition duels, convergence requirements, and the mapping of consciousness to physical systems. Johnson suggests that FRI should consider alternative frameworks, such as computational hierarchies, to address these issues. The text also references various so...
bach.ai: I have a nagging suspicion that we misinterpret quantum mechanics. I am probably wrong, but I believe that quantum computers may always be outperformed by classical algorithms.