 
Wav2Vec 2.0
A collection for the first release of Wav2Vec 2.0, a speech encoder that learns powerful representations from unlabelled audio data. Short usage sketches for the checkpoints follow the list below.
   - facebook/wav2vec2-large-960h-lv60-self • Automatic Speech Recognition • Note: The Wav2Vec 2.0 "large" model pre-trained on 53k hours of unlabelled audio from the LibriSpeech and LibriVox (LV) corpora and fine-tuned on 960 hours of labelled LibriSpeech ASR data. This is the most performant Wav2Vec 2.0 checkpoint from the initial release, achieving 1.9%/3.9% WER on the LibriSpeech test-clean/test-other subsets.
   - facebook/wav2vec2-large-960h • Automatic Speech Recognition • Note: The Wav2Vec 2.0 "large" model pre-trained and fine-tuned on 960 hours of LibriSpeech ASR data.
   - facebook/wav2vec2-base-960h • Automatic Speech Recognition • Note: The Wav2Vec 2.0 "base" model pre-trained and fine-tuned on 960 hours of LibriSpeech ASR data.
   - facebook/wav2vec2-base-100h • Automatic Speech Recognition • Note: The Wav2Vec 2.0 "base" model pre-trained on 960 hours of unlabelled LibriSpeech audio and fine-tuned on 100 hours of labelled LibriSpeech ASR data.
   - facebook/wav2vec2-large-lv60 • Note: The Wav2Vec 2.0 "large" model pre-trained on 53k hours of unlabelled audio from the LibriSpeech and LibriVox (LV) corpora.
   - facebook/wav2vec2-large • Note: The Wav2Vec 2.0 "large" model pre-trained on 960 hours of unlabelled LibriSpeech audio.
   - facebook/wav2vec2-base • Note: The Wav2Vec 2.0 "base" model pre-trained on 960 hours of unlabelled LibriSpeech audio.
 - wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations • Paper (arXiv:2006.11477) • Note: The wav2vec 2.0 paper, accepted to NeurIPS 2020.
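
The fine-tuned checkpoints above can be loaded for transcription through the Hugging Face Transformers library. The snippet below is a minimal sketch using `Wav2Vec2Processor` and `Wav2Vec2ForCTC` with greedy CTC decoding; the checkpoint choice (`facebook/wav2vec2-base-960h`) is only an example, and `sample.wav` is a placeholder path to a 16 kHz mono recording.

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "facebook/wav2vec2-base-960h"  # any fine-tuned checkpoint from the list above
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; these checkpoints expect 16 kHz mono audio.
speech, sampling_rate = sf.read("sample.wav")

inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame;
# batch_decode collapses repeats and removes blank tokens.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```

The pre-trained-only checkpoints (facebook/wav2vec2-base, facebook/wav2vec2-large, facebook/wav2vec2-large-lv60) carry no CTC head and are intended for fine-tuning or feature extraction rather than direct transcription. A sketch of extracting contextual representations from one of them, under the same placeholder-audio assumption, is given below.

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "facebook/wav2vec2-base"  # a pre-trained-only checkpoint (no CTC head)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

speech, sampling_rate = sf.read("sample.wav")  # placeholder 16 kHz mono recording
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs.input_values)

# One contextual vector per ~20 ms frame: shape (batch, n_frames, hidden_size).
features = outputs.last_hidden_state
```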
 