Collections
Discover the best community collections!
Collections including paper arxiv:2211.05100

- Attention Is All You Need
  Paper • 1706.03762 • Published • 77
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 19
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 9
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 17

- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 19
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 9
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 16
- OPT: Open Pre-trained Transformer Language Models
  Paper • 2205.01068 • Published • 2

- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  Paper • 2211.05100 • Published • 32
- Contrastive Language-Image Pre-training for the Italian Language
  Paper • 2108.08688 • Published • 2
- IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation
  Paper • 2203.03759 • Published • 5
- Spanish Pre-trained BERT Model and Evaluation Data
  Paper • 2308.02976 • Published • 3

- FineWeb2: One Pipeline to Scale Them All -- Adapting Pre-Training Data Processing to Every Language
  Paper • 2506.20920 • Published • 64
- SmolVLM: Redefining small and efficient multimodal models
  Paper • 2504.05299 • Published • 197
- YourBench: Easy Custom Evaluation Sets for Everyone
  Paper • 2504.01833 • Published • 22
- SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model
  Paper • 2502.02737 • Published • 241

- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  Paper • 2211.05100 • Published • 32
- IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models
  Paper • 2308.06721 • Published • 31
- LEDITS++: Limitless Image Editing using Text-to-Image Models
  Paper • 2311.16711 • Published • 24

- Nemotron-4 15B Technical Report
  Paper • 2402.16819 • Published • 47
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 57
- RWKV: Reinventing RNNs for the Transformer Era
  Paper • 2305.13048 • Published • 19
- Reformer: The Efficient Transformer
  Paper • 2001.04451 • Published

- Mistral 7B
  Paper • 2310.06825 • Published • 51
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 25
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 19
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 17