You are here: www.newyorker.com

www.accelerating.org (12.4 parsecs away)

www.jdmoyer.com (20.1 parsecs away)
In Part I of this post I challenged the idea of Vernor Vinge's Singularity. I also promised a response from Vinge himself. While he hasn't yet responded to my email inquiry, he did write a brilliant follow-up essay in 2008, entitled "What If the Singularity Does NOT Happen?" The article includes dramatic section headings, including [...]

www.biointelligence-explosion.com (17.9 parsecs away)
Do biological minds have a future?

yoshuabengio.org (110.3 parsecs away)
I have been hearing many arguments from different people regarding catastrophic AI risks. I wanted to clarify these arguments, first for myself, because I would really like to be convinced that we need not worry. Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows: (1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades); (2) digital technologies have advantages over biological machines; (3) we should take even a small probability of catastrophic outcomes of superdangerous AI seriously, because of the possibly large magnitude of the impact; (4) more powerful AI systems can be catastrophically dangerous even if they do not surpass humans on every front and even if they have to go through humans to produce non-virtual actions, so long as they can manipulate or pay humans for tasks; (5) catastrophic AI outcomes are part of a spectrum of harms and risks that should be mitigated with appropriate investments and oversight in order to protect human rights and humanity, including possibly using safe AI systems to help protect us.