Explore >> Select a destination


You are here

irisvanrooijcogsci.com
criticalai.org
8.1 parsecs away

Kathryn Conrad [Critical AI 2.1 is a special issue, co-edited by Lauren M.E. Goodlad and Matthew Stone, collecting interdisciplinary essays and think pieces on a wide range of topics involving Large Language Models. Below, a sneak preview from the issue: Kathryn Conrad's important "Blueprint for an AI Bill of Rights for Education"] Since November 2022, when OpenAI...
elearn.ucalgary.ca
8.3 parsecs away

cccblog.org
12.4 parsecs away

The Computing Community Consortium (CCC) sponsored another track in its Blue Sky Ideas Conference Track series at the 29th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence (AAAI-15), January 25-30, 2015 in Austin, Texas. The purpose of this conference was to promote research in artificial intelligence (AI) and scientific exchange among AI researchers, practitioners, scientists, and engineers in affiliated disciplines. The goal of this track was to present ideas and visions that can stimulate the research community to pursue new directions, such as new problems, new application domains, or new methodologies. The winning papers were: Machine Teaching: an Inverse Problem to Machine Learning and an Approach Toward Optimal Education, Xiaojin Zhu (Department of Computer Sciences, [...]
lilianweng.github.io
72.4 parsecs away

[Updated on 2019-02-14: add ULMFiT and GPT-2.] [Updated on 2020-02-29: add ALBERT.] [Updated on 2020-10-25: add RoBERTa.] [Updated on 2020-12-13: add T5.] [Updated on 2020-12-30: add GPT-3.] [Updated on 2021-11-13: add XLNet, BART and ELECTRA; also updated the Summary section.] I guess they are Elmo & Bert? (Image source: here) We have seen amazing progress in NLP in 2018. Large-scale pre-trained language models like OpenAI GPT and BERT have achieved great performance on a variety of language tasks using generic model architectures. The idea is similar to how ImageNet classification pre-training helps many vision tasks (*). Even better than vision classification pre-training, this simple and powerful approach in NLP does not require labeled data for pre-training, allowing us to experiment with increased training scale, up to our very limit.
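
The excerpt above describes the pre-train-then-fine-tune recipe: pre-train a generic language model on unlabeled text, then adapt it to a labeled downstream task. As a minimal illustrative sketch (not part of the linked post), the snippet below loads a pre-trained BERT checkpoint via the Hugging Face transformers library and attaches a small classification head; the "bert-base-uncased" checkpoint, the two-label setup, and the toy batch are placeholder assumptions.

```python
# Minimal sketch of the pre-train/fine-tune idea from the excerpt above:
# reuse a generically pre-trained checkpoint (pre-training needed no labels)
# and fine-tune it on a small labeled downstream task.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "bert-base-uncased" and the two-label setup are illustrative assumptions.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Tiny toy batch standing in for a downstream dataset (e.g. sentiment labels).
batch = tokenizer(["a great movie", "a dull movie"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # a fine-tuning optimizer step would follow from here
```

In a full fine-tuning loop this forward/backward pass would be wrapped in an optimizer step over the downstream dataset, while the pre-trained weights serve as the initialization, which is the ImageNet-style transfer the excerpt draws an analogy to.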