www.theverge.com
www.lesswrong.com

Comment by paulfchristiano - A common view is that the timelines to risky AI are largely driven by hardware progress and deep learning progress occurring outside of OpenAI. Many people (both at OpenAI and elsewhere) believe that questions of who builds AI and how are very important relative to acceleration of AI timelines. This is related to lower estimates of alignment risk, higher estimates of the importance of geopolitical conflict, and (perhaps most importantly of all) radically lower estimates for the amount of useful alignment progress that would occur this far in advance of AI if progress were to be slowed down. Below I'll also discuss two arguments that delaying AI progress would on net reduce alignment risk which I often encountered at OpenAI. I thi...
www.aakashg.com

Check out the conversation on Apple, Spotify and YouTube. A comprehensive guide to engineering delight in AI products from Spotify's Wrapped creator and Google's Delight Team PM. Learn how Nesrine Changuel built emotional connections into products used by millions, discover the 4-step Delight Model, the 50-40-10 rule for roadmaps, and understand why ChatGPT has 800M [...]
every.to

Nathan Labenz saves time, eliminates drudgery, and offloads tasks with AI
scottaaronson.blog

I've been increasingly tempted to make this blog into a forum solely for responding to the posts at Overcoming Bias. (Possible new name: "Wallowing in Bias.") Two days ago, Robin Hanson pointed to a fascinating paper by Bousso, Harnik, Kribs, and Perez, on predicting the cosmological constant from an "entropic" version of the anthropic principle....