vitalik.eth.limo

yoshuabengio.org
I have been hearing many arguments from different people regarding catastrophic AI risks. I wanted to clarify these arguments, first for myself, because I would really like to be convinced that we need not worry. Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows: (1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades); (2) digital technologies have advantages over biological machines; (3) we should take even a small probability of catastrophic outcomes of superdangerous AI seriously, because of the possibly large magnitude of the impact; (4) more powerful AI systems can be catastrophically dangerous even if they do not surpass humans on every front and even if they have to go through humans to produce non-virtual actions, so long as they can manipulate or pay humans for tasks; (5) catastrophic AI outcomes are part of a spectrum of harms and risks that should be mitigated with appropriate investments and oversight in order to protect human rights and humanity, including possibly using safe AI systems to help protect us.

www.molecule.xyz
On a recent episode of The DeSci Podcast, Vincent Weisser interviewed Vitalik Buterin, co-founder of the Ethereum Foundation, to talk about the state of decentralized science today and its prospects for the future, which led to an in-depth analysis of the potential curveballs and unknowns.

dissidentvoice.org
Out of necessity, organized resistance to the Trump administration's authoritarian and hyper-violent policy agenda is growing rapidly, both domestically and internationally. Within this context, it is important for those of us who engage in individual and collective acts of resistance - based on our varying proximities to power structures - to consider what and how

jan.schnasse.org