longtermrisk.org
unstableontology.com
A theist, minimally, believes in a higher power, and believes that acting in accordance with that higher power's will is normative. The higher power must be very capable; if not infinitely capable, it must be more capable than the combined forces of all current Earthly state powers. Suppose that a higher power exists. When and...
yoshuabengio.org
I have been hearing many arguments from different people regarding catastrophic AI risks. I wanted to clarify these arguments, first for myself, because I would really like to be convinced that we need not worry. Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows:

(1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades);

(2) digital technologies have advantages over biological machines;

(3) we should take even a small probability of catastrophic outcomes of superdangerous AI seriously, because of the possibly large magnitude of the impact;

(4) more powerful AI systems can be catastrophically dangerous even if they do not surpass humans on every front, and even if they have to go through humans to produce non-virtual actions, so long as they can manipulate or pay humans for tasks;

(5) catastrophic AI outcomes are part of a spectrum of harms and risks that should be mitigated with appropriate investments and oversight in order to protect human rights and humanity, including possibly using safe AI systems to help protect us.
www.alignmentforum.org
"Human feedback on diverse tasks" could lead to transformative AI, while requiring little innovation on current techniques. But it seems likely that...
| | | |
www.wiringthebrain.com
The reductionist perspective on biology is that it all boils down to physics eventually. That anything that is happenin...