Model Collection
2018 – Bidirectional Encoder Representations from Transformers
2018 – Improving Language Understanding by Generative Pre-Training
2019 – A Robustly Optimized BERT Pretraining Approach
2019 – Language Models are Unsupervised Multitask Learners
2019 – Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
2019 – Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
2019 – A Lite BERT for Self-supervised Learning of Language Representations
2019 – Generalized Autoregressive Pretraining for Language Understanding and Generation
2019 – CTRL: A Conditional Transformer Language Model for Controllable Generation
2019 – ERNIE: Enhanced Representation through Knowledge Integration
2020 – GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
2020 – Language Models are Few-Shot Learners
2021 – LaMDA: Language Models for Dialog Applications
2021 – PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation
2021 – mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer
2021 – CPM-2: Large-scale Cost-effective Pre-trained Language Models
2021 – Multitask Prompted Training Enables Zero-Shot Task Generalization
2021 – What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
2021 – Evaluating Large Language Models Trained on Code
2021 – ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
2021 – Jurassic-1: Technical Details and Evaluation
2021 – Finetuned Language Models Are Zero-Shot Learners
2021 – Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, a Large-Scale Generative Language Model
2021 – Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning
2021 – WebGPT: Browser-assisted Question-Answering with Human Feedback
2021 – Scaling Language Models: Methods, Analysis & Insights from Training Gopher
2021 – ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
2021 – GLaM: Efficient Scaling of Language Models with Mixture-of-Experts
2022 – Training Language Models to Follow Instructions with Human Feedback
2022 – GPT-NeoX-20B: An Open-Source Autoregressive Language Model
2022 – Competition-Level Code Generation with AlphaCode
2022 – CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis
2022 – Training Compute-Optimal Large Language Models (shows that, for a given compute budget, the best performance is achieved not by the largest models but by smaller models trained on more data; see the sketch after this list)
2022 – Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
2022 – UL2: Unifying Language Learning Paradigms
2022 – PaLM: Scaling Language Modeling with Pathways
2022 – OPT: Open Pre-trained Transformer Language Models
2022 – BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
2022 – GLM-130B: An Open Bilingual Pre-trained Model
2022 – AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model
2022 – Scaling Instruction-Finetuned Language Models
2022 – Improving Alignment of Dialogue Agents via Targeted Human Judgements
2022 – Transcending Scaling Laws with 0.1% Extra Compute
2022 – Crosslingual Generalization through Multitask Finetuning
2022 – Galactica: A Large Language Model for Science
2022 – OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization
2023 – LLaMA: Open and Efficient Foundation Language Models
2023 – GPT-4 Technical Report
2023 – PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing
2023 – BloombergGPT: A Large Language Model for Finance
2023 – Cerebras-GPT: Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster
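
The 2022 compute-optimal entry above is the one item in this collection that states a quantitative finding rather than just a title, so here is a minimal sketch of what that trade-off looks like in practice. It assumes the common approximations C ≈ 6·N·D training FLOPs and roughly 20 training tokens per parameter at the optimum; the function name `compute_optimal` and the sample budgets are illustrative choices, not values taken from the paper.

```python
# Minimal sketch of the compute-optimal trade-off referenced above (the 2022
# "Chinchilla" finding). Assumptions, not values from this collection:
#   * training compute C ≈ 6 * N * D FLOPs (N = parameters, D = training tokens)
#   * at the optimum, D ≈ 20 * N ("about 20 tokens per parameter")

def compute_optimal(c_flops: float, tokens_per_param: float = 20.0) -> tuple[float, float]:
    """Return (N, D) that roughly balance model size and data for a budget of c_flops."""
    # From C = 6 * N * D and D = tokens_per_param * N:
    #   N = sqrt(C / (6 * tokens_per_param)),  D = tokens_per_param * N
    n = (c_flops / (6.0 * tokens_per_param)) ** 0.5
    d = tokens_per_param * n
    return n, d

if __name__ == "__main__":
    for budget in (1e21, 1e22, 1e23, 1e24):
        n, d = compute_optimal(budget)
        print(f"C = {budget:.0e} FLOPs  ->  ~{n / 1e9:.1f}B params, ~{d / 1e9:.0f}B tokens")
```

Running it reproduces the usual intuition behind that entry: a 10^24-FLOP budget favors a model of roughly 90B parameters trained on about 1.8T tokens, rather than a much larger model trained on less data.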