www.lesswrong.com

www.nngroup.com
Plausible but incorrect AI responses create design challenges and user distrust. Discover evidence-based UI patterns to help users identify fabrications.

www.alignmentforum.org
On March 29th, DeepMind published a paper, "Training Compute-Optimal Large Language Models", that shows that essentially everyone -- OpenAI, DeepMind...

blog.jessriedel.com
Here's a collection of reviews of the arguments that artificial general intelligence represents an existential risk to humanity. They vary greatly in length and style. I may update this from time to time. (This is well-paired with Katja Grace's summary of counterarguments.)

qualiacomputing.com
"It seems plain and self-evident, yet it needs to be said: the isolated knowledge obtained by a group of specialists in a narrow field has in itself no value whatsoever, but only in its synthesis with all the rest of knowledge and only inasmuch as it really contributes in this synthesis toward answering the demand,...