yoshuabengio.org

www.wired.com
Tech luminaries, renowned scientists, and Elon Musk warn of an "out-of-control race" to develop and deploy ever-more-powerful AI systems.
www.lesswrong.com
Comment by Wei Dai - I passed up an invitation to invest in Anthropic in the initial round which valued it at $1B (it's now planning a round at $170B valuation), to avoid contributing to x-risk. (I didn't want to signal that starting another AI lab was a good idea from a x-safety perspective, or that I thought Anthropic's key people were likely to be careful enough about AI safety. Anthropic had invited a number of rationalist/EA people to invest, apparently to gain such implicit endorsements.) This idea/plan seems to legitimize giving founders and early investors of AGI companies extra influence on or ownership of the universe (or just extremely high financial returns, if they were to voluntarily sell some shares to the public as envisioned here), which is ...
blog.iclr.cc
[AI summary] The article announces keynote speakers for ICLR 2025, highlighting their research on machine learning, AI safety, AGI frameworks, and open-ended innovation.
www.index.dev
Learn all about Large Language Models (LLMs) in our comprehensive guide. Understand their capabilities, applications, and impact on various industries.