You are here: www.alignmentforum.org

scottaaronson.blog
3.9 parsecs away
Two weeks ago, I gave a lecture setting out my current thoughts on AI safety, halfway through my year at OpenAI. I was asked to speak by UT Austin's Effective Altruist club. You can watch the lecture on YouTube here (I recommend 2x speed). The timing turned out to be weird, coming immediately after the...

joecarlsmith.com
3.5 parsecs away
Introduction to an essay series about paths to safe, useful superintelligence.

www.lesswrong.com
3.9 parsecs away
Our alignment research aims to make artificial general intelligence (AGI) aligned with human values and follow human intent. We take an iterative, em...

evjang.com
23.6 parsecs away
This blog post outlines a key engineering principle I've come to believe strongly in for building general AI systems with deep learning. This principle guides my present-day research tastes and day-to-day design choices in building large-scale, general-purpose ML systems. Discoveries around Neural Scaling Laws, unsupervised pretraining on Internet-scale datasets, and other work on Foundation Models have pointed to a simple yet exciting narrative for making progress in Machine Learning: Large amounts of d...