leogao.dev
bmk.sh: Despite the buzz around GPT-3, it is, in and of itself, not AGI. In many ways, this makes it similar to AlphaGo or Deep Blue; while approaching human ability in one domain (playing Chess/Go, or writi…
gwern.net: On GPT-3: meta-learning, scaling, implications, and deep theory. The scaling hypothesis: neural nets absorb data & compute, generalizing and becoming more Bayesian as problems get harder, manifesting new abilities even at trivial-by-global-standards scale. The deep learning revolution has begun as foretold.
gwern.net: Creative writing by OpenAI's GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors.
research.google: Posted by Chelsea Finn, Research Adviser, and Eric Jang, Senior Research Scientist, Robotics at Google. People can flexibly maneuver objects in their…