www.danieldjohnson.com
I argue that hallucinations are a natural consequence of the language modeling objective, which focuses on simulating confident behavior even when that behavior is hard to predict, rather than predictable behaviors that take uncertainty into account. I also discuss five strategies for avoiding this mismatch.
windowsontheory.org
[Yet another "philosophizing" post, but one with some actual numbers. See also this follow-up. --Boaz] Recently there have been many debates on "artificial general intelligence" (AGI) and whether or not we are close to achieving it by scaling up our current AI systems. In this post, I'd like to make this debate a bit...
www.alignmentforum.org
"Human feedback on diverse tasks" could lead to transformative AI, while requiring little innovation on current techniques. But it seems likely that...
www.livescience.com
We've wondered for centuries whether knowledge is latent and innate or learned and grasped through experience, and a new research project is asking the same question about AI.