Explore >> Select a destination


You are here: thezvi.wordpress.com

www.journalofdemocracy.org (15.2 parsecs away)
AI with superhuman abilities could emerge within the next few years, and there is currently no guarantee that we will be able to control them. We must act now...

yoshuabengio.org (16.0 parsecs away)
I have been hearing many arguments from different people regarding catastrophic AI risks. I wanted to clarify these arguments, first for myself, because I would really like to be convinced that we need not worry. Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows:
(1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades);
(2) digital technologies have advantages over biological machines;
(3) we should take even a small probability of catastrophic outcomes of superdangerous AI seriously, because of the possibly large magnitude of the impact;
(4) more powerful AI systems can be catastrophically dangerous even if they do not surpass humans on every front and even if they have to go through humans to produce non-virtual actions, so long as they can manipulate or pay humans for tasks;
(5) catastrophic AI outcomes are part of a spectrum of harms and risks that should be mitigated with appropriate investments and oversight in order to protect human rights and humanity, including possibly using safe AI systems to help protect us.

originality.ai (17.0 parsecs away)
Uncover 2024's AI dominance with OpenAI Statistics. Explore trends, applications, and ChatGPT's impact on reshaping machine learning and neural networks.

github.com (48.4 parsecs away)
Contribute to nicolas-daniel/the-clocks development by creating an account on GitHub.