www.alignmentforum.org
- blog.jessriedel.com: "Here's a collection of reviews of the arguments that artificial general intelligence represents an existential risk to humanity. They vary greatly in length and style. I may update this from time to time. (This is well-paired with Katja Grace's summary of counterarguments.)"
- joecarlsmith.com: "It's really important; we have a real shot; there are a lot of ways we can fail."
- www.lesswrong.com: "We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren't conf..."
- www.qri.org: "There are at least some encouraging facts that suggest it is not too late to prevent a pure replicator takeover."