www.alignmentforum.org
www.cold-takes.com
Why would we program AI that wants to harm us? Because we might not know how to do otherwise.
distill.pub
If we want to train AI to do what humans want, we need to study humans.
scottaaronson.blog
Two weeks ago, I gave a lecture setting out my current thoughts on AI safety, halfway through my year at OpenAI. I was asked to speak by UT Austin's Effective Altruist club. You can watch the lecture on YouTube here (I recommend 2x speed). The timing turned out to be weird, coming immediately after the...
www.gisagents.org
This blog is a research site focused on my interests in Geographical Information Science (GIS) and Agent-Based Modeling (ABM).