- www.alexirpan.com (you are here)
- dynomight.net: How likely are we to hit a barrier?
- evjang.com: This blog post outlines a key engineering principle I've come to believe strongly in for building general AI systems with deep learning. This principle guides my present-day research tastes and day-to-day design choices in building large-scale, general-purpose ML systems. Discoveries around Neural Scaling Laws, unsupervised pretraining on Internet-scale datasets, and other work on Foundation Models have pointed to a simple yet exciting narrative for making progress in Machine Learning: Large amounts of d...
- windowsontheory.org: [Yet another "philosophizing" post, but one with some actual numbers. See also this follow-up. --Boaz] Recently there have been many debates on "artificial general intelligence" (AGI) and whether or not we are close to achieving it by scaling up our current AI systems. In this post, I'd like to make this debate a bit...
- aclanthology.org: [AI summary] The text provides an overview of various natural language processing (NLP) and machine learning research topics. It covers a wide range of areas, including grammatical error correction, text similarity measures, compositional distributional semantics, neural machine translation, dependency parsing, and political orientation prediction. The text also discusses the development of datasets for evaluating models, the importance of readability in reading comprehension tasks, and the use of advanced techniques such as nested attention layers and error-correcting codes to improve model performance. The key themes include the advancement of NLP models, the creation of evaluation datasets, and the exploration of new methods for text analysis and understa...