www.lesswrong.com
distill.pub
If we want to train AI to do what humans want, we need to study humans.
scottaaronson.blog
Update (Dec. 17): Some of you might enjoy a 3-hour podcast I recently did with Lawrence Krauss, which was uploaded to YouTube just yesterday. The first hour is about my life and especially childhood (!); the second hour's about quantum computing; the third hour's about computational complexity, computability, and AI safety. I'm being attacked on...
www.greaterwrong.com
The core fallacy of anthropomorphism is expecting something to be predictable by the black box of your brain, when its causal structure is so different from that of a human brain as to give you no license to expect any such thing. The Tragedy of Group Selectionism (as previously covered in the evolution sequence) was a rather extreme error by a group of early (pre-1966) biologists, including Wynne-Edwards, Allee, and Brereton among others, who believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat and exhausting the prey population.
thesustainableagency.com
Generative AI helps us with our creativity, but that help might be coming at a cost. Let's take a look at 20 statistics on generative AI's environmental impact.