joecarlsmith.com
www.greaterwrong.com
Eric Drexler, Centre for the Governance of AI, University of Oxford. This document argues for "open agencies" - not opaque, unitary agents - as the appropriate model for applying future AI capabilities to consequential tasks that call for combining human guidance with delegation of planning and implementation to AI systems. This prospect reframes and can help to tame a wide range of classic AI safety challenges, leveraging alignment techniques in a relatively fault-tolerant context.
thezvi.wordpress.com
Previously: On RSPs. Be Prepared. OpenAI introduces their preparedness framework for safety in frontier models. A summary of the biggest takeaways, which I will repeat at the end: I am very happy the preparedness framework exists at all. I am very happy it is beta and open to revision. It's very vague and needs fleshing...
yoshuabengio.org
This paper was initially published by the Aspen Strategy Group (ASG), a policy program of the Aspen Institute. It was released as part of a...
deepmind.google
This has been a year of incredible progress in the field of Artificial Intelligence (AI) research and its practical applications.