repo_id (string) | author (string) | model_type (string) | files_per_repo (int64) | downloads_30d (int64) | library (string) | likes (int64) | pipeline (string) | pytorch (bool) | tensorflow (bool) | jax (bool) | license (string) | languages (string) | datasets (string) | co2 (string) | prs_count (int64) | prs_open (int64) | prs_merged (int64) | prs_closed (int64) | discussions_count (int64) | discussions_open (int64) | discussions_closed (int64) | tags (string) | has_model_index (bool) | has_metadata (bool) | has_text (bool) | text_length (int64) | is_nc (bool) | readme (string) | hash (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Arsenalalex108/cburnett-helmet-concept-2
|
Arsenalalex108
| null | 26 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 1,869 | false |
### Cburnett-Helmet-Concept-2 Dreambooth model trained by Arsenalalex108 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
- Stable Diffusion 1.5
- 20 instance images
- 154 concept images
- 4000 training steps
- 600 text encoder training steps
- 1000 text encoder concept training steps
- Style training
- 512 x 512
This model is currently only good at generating headwear and still struggles with other objects.
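For reference, a minimal `diffusers` inference sketch (the prompt wording and instance token are illustrative assumptions; the repo is assumed to be a standard Stable Diffusion 1.5 checkpoint in diffusers format, as produced by fast-DreamBooth):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-trained weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "Arsenalalex108/cburnett-helmet-concept-2",
    torch_dtype=torch.float16,
).to("cuda")

# The instance token used below is an assumption; adjust it to the token used during training
image = pipe("a knight wearing a cburnett helmet, concept art").images[0]
image.save("helmet.png")
```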
Sample pictures of this concept:








|
f3795bd919d2c85e4ae7fb860d505fc4
|
sd-concepts-library/iridescent-illustration-style
|
sd-concepts-library
| null | 13 | 0 | null | 2 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,314 | false |
### Iridescent Illustration Style on Stable Diffusion
This is the `<iridescent-illustration-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
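As a minimal local alternative, the learned embedding can also be loaded into a `diffusers` pipeline directly (the base checkpoint and prompt below are illustrative assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion 1.x base checkpoint should work; this one is an illustrative choice
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the textual-inversion embedding from this concept repository
pipe.load_textual_inversion("sd-concepts-library/iridescent-illustration-style")

image = pipe("a portrait of a fox in the style of <iridescent-illustration-style>").images[0]
image.save("iridescent_fox.png")
```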
Here is the new concept you will be able to use as a `style`:








Here are images generated with this style:




|
35fdd49e6bac9e7039e4df31e073e6b2
|
jonatasgrosman/exp_w2v2t_fr_unispeech_s514
|
jonatasgrosman
|
unispeech
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fr']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fr']
| false | true | true | 469 | false |
# exp_w2v2t_fr_unispeech_s514
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
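A short transcription sketch using the HuggingSound library mentioned above (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fr_unispeech_s514")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # audio should be sampled at 16 kHz

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```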
|
d9d1e3e6653bf657dcf524eadf996d3e
|
DOOGLAK/Article_500v2_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
|
bert
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['article500v2_wikigold_split']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,559 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2086
- Precision: 0.7113
- Recall: 0.7526
- F1: 0.7314
- Accuracy: 0.9411
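A quick usage sketch with the standard `transformers` pipeline (the example sentence and aggregation strategy are illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Article_500v2_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Barack Obama visited Microsoft headquarters in Redmond."))
```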
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 185 | 0.1795 | 0.6982 | 0.7530 | 0.7245 | 0.9412 |
| No log | 2.0 | 370 | 0.2018 | 0.7218 | 0.7537 | 0.7374 | 0.9403 |
| 0.1342 | 3.0 | 555 | 0.2086 | 0.7113 | 0.7526 | 0.7314 | 0.9411 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
b566da7af02392f1bd99d3c6729b19ca
|
lasya-pidaparthi/bert-emotion
|
lasya-pidaparthi
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,455 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2994
- Precision: 0.7059
- Recall: 0.7093
- Fscore: 0.7066
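A minimal inference sketch with the `transformers` pipeline (the example text is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lasya-pidaparthi/bert-emotion")
print(classifier("I can't believe we finally won the game!"))
```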
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8638 | 1.0 | 815 | 0.6727 | 0.6987 | 0.6539 | 0.6706 |
| 0.5072 | 2.0 | 1630 | 1.0434 | 0.7090 | 0.6747 | 0.6878 |
| 0.2683 | 3.0 | 2445 | 1.2994 | 0.7059 | 0.7093 | 0.7066 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
79224067ad051ab90089a0c670dc787f
|
huxxx657/bart-base-finetuned-squad
|
huxxx657
|
bart
| 13 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,155 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-squad
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2399
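A minimal extractive-QA sketch, assuming the checkpoint works with the standard `question-answering` pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="huxxx657/bart-base-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```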
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4988 | 0.2 | 1108 | 1.2399 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
4116d10702248c4e4031956a9e190fc6
|
NSandra/distilbert-base-uncased-finetuned-ner
|
NSandra
|
distilbert
| 18 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,523 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2393
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 1 | 1.5491 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 2.0 | 2 | 1.3278 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 3 | 1.2393 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
137d9b42aadd25c06fa55ed1ebe40e52
|
tanmaylaud/wav2vec2-large-xlsr-hindi-marathi
|
tanmaylaud
|
wav2vec2
| 14 | 30 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['mr', 'hi']
|
['openslr', 'interspeech_2021_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week', 'hindi', 'marathi']
| true | true | true | 13,447 | false |
# Wav2Vec2-Large-XLSR-53-Hindi-Marathi
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Hindi and Marathi using the OpenSLR SLR64 datasets. When using this model, make sure that your speech input is sampled at 16kHz.
## Installation
```bash
pip install git+https://github.com/huggingface/transformers.git datasets librosa torch==1.7.0 torchaudio==0.7.0 jiwer
```
## Eval dataset:
```bash
wget https://www.openslr.org/resources/103/Marathi_test.zip -P data/marathi
unzip -P "K3[2?do9" data/marathi/Marathi_test.zip -d data/marathi/.
tar -xzf data/marathi/Marathi_test.tar.gz -C data/marathi/.
wget https://www.openslr.org/resources/103/Hindi_test.zip -P data/hindi
unzip -P "w9I2{3B*" data/hindi/Hindi_test.zip -d data/hindi/.
tar -xzf data/hindi/Hindi_test.tar.gz -C data/hindi/.
wget -O test.csv 'https://filebin.net/snrz6bt13usv8w2e/test_large.csv?t=ps3n99ho'
#If download does not work, paste this link in browser: https://filebin.net/snrz6bt13usv8w2e/test_large.csv
```
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Marathi text and path fields:
```python
import re
import numpy as np
import torch
import torchaudio
import librosa
from datasets import load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi')
model = Wav2Vec2ForCTC.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi').to("cuda")

# Preprocessing the datasets.
# We need to read the audio files as arrays.
# `chars_to_ignore_regex` is defined in the evaluation section below.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array[0].numpy()
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["sentence"]
    batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
    batch["sampling_rate"] = 16_000
    return batch

# test_data: a datasets.Dataset with "sentence" (transcript) and "path" (audio file) columns
test_data = test_data.map(speech_file_to_array_fn)
inputs = processor(test_data["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_data["target_text"][:2])
```
# Code For Evaluation on OpenSLR (Hindi + Marathi : https://filebin.net/snrz6bt13usv8w2e/test_large.csv)
```python
import torchaudio
import torch
import librosa
import numpy as np
import re
test = Dataset.from_csv('test.csv')
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।]'

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array[0].numpy()
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["sentence"]
    batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
    batch["sampling_rate"] = 16_000
    return batch

test = test.map(speech_file_to_array_fn)

# Run prediction on a batch
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    # we do not want to group tokens when computing the metrics
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

test = test.map(evaluate, batched=True, batch_size=32)
print("WER: {:2f}".format(100 * wer.compute(predictions=test["pred_strings"], references=test["sentence"])))
```
#### Code for Evaluation on Common Voice Hindi (Common Voice does not have Marathi yet)
```python
import torchaudio
import torch
import librosa
import numpy as np
import re
from datasets import load_metric, load_dataset, Dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi')
model = Wav2Vec2ForCTC.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi').to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।]'

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array[0].numpy()
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["sentence"]
    batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
    batch["sampling_rate"] = 16_000
    return batch

# Run prediction on a batch
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    # we do not want to group tokens when computing the metrics
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

test_data = load_dataset("common_voice", "hi", split="test")
test_data = test_data.map(speech_file_to_array_fn)
test_data = test_data.map(evaluate, batched=True, batch_size=32)
print("WER: {:2f}".format(100 * wer.compute(predictions=test_data["pred_strings"], references=test_data["sentence"])))
```
Link to eval notebook: https://colab.research.google.com/drive/1nZRTgKfxCD9cvy90wikTHkg2il3zgcqW#scrollTo=cXWFbhb0d7DT
WER: 23.736641% (OpenSLR Hindi+Marathi test set: https://filebin.net/snrz6bt13usv8w2e/test_large.csv)
WER: 44.083527% (Common Voice Hindi test split)
|
096f0738eecaeb8255f6debec953868f
|
anas-awadalla/bart-large-few-shot-k-16-finetuned-squad-infilling-seed-4
|
anas-awadalla
|
bart
| 18 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 971 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-few-shot-k-16-finetuned-squad-infilling-seed-4
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
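A rough usage sketch: the exact infilling input format used during fine-tuning is not documented here, so the `<mask>` input below is only an illustrative BART-style example.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "anas-awadalla/bart-large-few-shot-k-16-finetuned-squad-infilling-seed-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative BART-style infilling input; the training prompt format may differ
inputs = tokenizer("The Eiffel Tower is located in <mask>.", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```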
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
80994471a94baa03b5a26ca39a37c4fc
|
mrm8488/T5-base-finetuned-cuad
|
mrm8488
|
t5
| 9 | 3 |
transformers
| 2 |
text2text-generation
| true | false | false |
mit
|
['en']
|
['cuad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,642 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-base fine-tuned on CUAD for Legal Contract Review (via QA)
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the cuad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2209
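A rough inference sketch in the T5 question-answering style (the `question: ... context: ...` prompt format is an assumption, and the contract clause is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mrm8488/T5-base-finetuned-cuad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "What is the governing law of this agreement?"
context = "This Agreement shall be governed by and construed in accordance with the laws of the State of New York."
inputs = tokenizer(f"question: {question} context: {context}", return_tensors="pt")

outputs = model.generate(inputs["input_ids"], max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```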
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2809 | 1.0 | 2795 | 0.2331 |
| 0.2459 | 2.0 | 5590 | 0.2253 |
| 0.2355 | 3.0 | 8385 | 0.2220 |
| 0.2212 | 4.0 | 11180 | 0.2203 |
| 0.2068 | 5.0 | 13975 | 0.2197 |
| 0.2085 | 6.0 | 16770 | 0.2194 |
| 0.1968 | 7.0 | 19565 | 0.2199 |
| 0.1906 | 8.0 | 22360 | 0.2200 |
| 0.1909 | 9.0 | 25155 | 0.2208 |
| 0.1788 | 10.0 | 27950 | 0.2209 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ae63325c6aeebacb810efe77a2c06a1c
|
erikycd/chatbot_hadita
|
erikycd
|
gpt2
| 9 | 6 |
transformers
| 0 |
conversational
| true | false | false |
gpl-3.0
|
['en']
|
['wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational', 'gpt2']
| false | true | true | 2,540 | false |
# DialoGPT small base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
### How to use
You can use this model directly as a conversational chatbot:
```python
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("erikycd/chatbot_hadita")
model = AutoModelWithLMHead.from_pretrained("erikycd/chatbot_hadita")
exit_commands = ('bye', 'quit')
text = ''
while text not in exit_commands:
    text = input('User: ')
    input_ids = tokenizer.encode(text + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = torch.cat([input_ids])
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=30,
        do_sample=True,
        top_p=0.95,
        top_k=0,
        temperature=0.75,
        pad_token_id=tokenizer.eos_token_id,
    )
    output = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print('Chatbot: ', output)
```
|
450269eb513b4597302422d06c042c2f
|
zates/distilbert-base-uncased-finetuned-squad-seed-9001
|
zates
|
distilbert
| 14 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad_v2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,297 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-seed-9001
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4060
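A minimal usage sketch: since the model was tuned on SQuAD v2, the pipeline's `handle_impossible_answer` flag can be used to allow "no answer" predictions (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="zates/distilbert-base-uncased-finetuned-squad-seed-9001")
result = qa(
    question="Who wrote the novel?",
    context="The novel was written by Jane Austen and published in 1813.",
    handle_impossible_answer=True,  # SQuAD v2 allows unanswerable questions
)
print(result)
```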
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2411 | 1.0 | 8235 | 1.2265 |
| 0.9797 | 2.0 | 16470 | 1.2576 |
| 0.791 | 3.0 | 24705 | 1.4060 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
3ad7c979cb653073fe096fadd6d8499d
|
fathyshalab/massive_calendar-roberta-large-v1-4-93
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,466 | false |
# fathyshalab/massive_calendar-roberta-large-v1-4-93
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_calendar-roberta-large-v1-4-93")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
8f0aa00ff80e7af08b6a292d7d7959d5
|
Helsinki-NLP/opus-mt-sv-el
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 775 | false |
### opus-mt-sv-el
* source languages: sv
* target languages: el
* OPUS readme: [sv-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-el/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-el/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-el/opus-2020-01-16.eval.txt)
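A minimal usage sketch with the `transformers` translation pipeline (the Swedish example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-el")
print(translator("Jag älskar att läsa böcker.")[0]["translation_text"])
```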
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.sv.el | 20.8 | 0.456 |
|
4c044de20ab1af601f15db3a1c78e48f
|
muhtasham/small-mlm-wikitext-target-conll2003
|
muhtasham
|
bert
| 10 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,221 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-wikitext-target-conll2003
This model is a fine-tuned version of [muhtasham/small-mlm-wikitext](https://huggingface.co/muhtasham/small-mlm-wikitext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1116
- Precision: 0.8899
- Recall: 0.9184
- F1: 0.9039
- Accuracy: 0.9785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.223 | 1.14 | 500 | 0.0903 | 0.8418 | 0.8810 | 0.8609 | 0.9720 |
| 0.0741 | 2.28 | 1000 | 0.0790 | 0.8792 | 0.8999 | 0.8894 | 0.9761 |
| 0.0429 | 3.42 | 1500 | 0.0804 | 0.8822 | 0.9135 | 0.8976 | 0.9777 |
| 0.0281 | 4.56 | 2000 | 0.0827 | 0.8969 | 0.9150 | 0.9059 | 0.9789 |
| 0.0185 | 5.69 | 2500 | 0.0908 | 0.8933 | 0.9184 | 0.9057 | 0.9784 |
| 0.013 | 6.83 | 3000 | 0.0960 | 0.8871 | 0.9179 | 0.9022 | 0.9782 |
| 0.0095 | 7.97 | 3500 | 0.0975 | 0.9013 | 0.9201 | 0.9106 | 0.9793 |
| 0.0074 | 9.11 | 4000 | 0.1094 | 0.8884 | 0.9189 | 0.9034 | 0.9776 |
| 0.0059 | 10.25 | 4500 | 0.1088 | 0.8998 | 0.9185 | 0.9091 | 0.9795 |
| 0.005 | 11.39 | 5000 | 0.1116 | 0.8899 | 0.9184 | 0.9039 | 0.9785 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
8c395df4d9854d260a6ac293796c7d10
|
haryoaw/id-recigen-bart
|
haryoaw
|
mbart
| 8 | 11 |
transformers
| 1 |
text2text-generation
| true | false | false |
mit
|
['id']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bart', 'id']
| false | true | true | 1,733 | false |
# Indonesia Recipe Ingredients Generator Model
**WARNING: inference on the Hugging Face Hub might not run since the tokenizer used is not a `transformers` tokenizer.**
Feel free to test the model [in this space](https://huggingface.co/spaces/haryoaw/id-recigen)
😎 **Have fun on generating ingredients** 😎
This is a fine-tuned model that generates Indonesian food ingredients. It is one of my personal projects that I did in my free time.
Basically, you give the name of the food and it will produce the ingredients of the food.
## Model
Data: [Indonesian Recipe Data on Kaggle](https://www.kaggle.com/datasets/canggih/indonesian-food-recipes)
Pre-trained Model: [IndoBART-v2](https://huggingface.co/indobenchmark/indobart-v2)
## How to use
We will specify the usage of the tokenizer and the model.
### Tokenizer
Since we use `indobart-v2`, we need to use their tokenizer.
First, install the tokenizer by doing `pip install indobenchmark-toolkit`.
After that, you can load the tokenizer:
```python
from indobenchmark.tokenization_indonlg import IndoNLGTokenizer
tokenizer = IndoNLGTokenizer.from_pretrained("haryoaw/id-recigen-bart")
```
**EDIT**:
It seems like the tokenizer in the package is not the same as the one that I used to fine-tune the model.
There are some noticeable bugs, such as some subword tokens not being treated as subwords. Nevertheless, it still works!
### Model
The model can be loaded by using AutoModel.
```python
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("haryoaw/id-recigen-bart")
```
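Putting the two together, a rough generation sketch (this assumes the `IndoNLGTokenizer` behaves like a standard `transformers` tokenizer; the generation parameters are illustrative, and food names should be lowercase, as noted below):
```python
from indobenchmark.tokenization_indonlg import IndoNLGTokenizer
from transformers import AutoModelForSeq2SeqLM

tokenizer = IndoNLGTokenizer.from_pretrained("haryoaw/id-recigen-bart")
model = AutoModelForSeq2SeqLM.from_pretrained("haryoaw/id-recigen-bart")

# Encode a lowercase food name and generate its ingredient list
inputs = tokenizer("sayur asam", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=256, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```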
## Input Example
Make sure to input a **LOWERCASE** food name. The tokenizer is case-sensitive!
```
sayur asam
```
```
nasi goreng ayam
```
~To be continued..
|
e00a10b18117793b881e4fdabc9eb629
|
fathyshalab/clinic-kitchen_and_dining-roberta-domain-adaptation
|
fathyshalab
|
roberta
| 14 | 4 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,492 | false |
# fathyshalab/clinic-kitchen_and_dining-roberta-domain-adaptation
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/clinic-kitchen_and_dining-roberta-domain-adaptation")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
4fd18e8925ea659645966dfd63f73a3a
|
Helsinki-NLP/opus-mt-en-eu
|
Helsinki-NLP
|
marian
| 11 | 40 |
transformers
| 1 |
translation
| true | true | false |
apache-2.0
|
['en', 'eu']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,995 | false |
### eng-eus
* source group: English
* target group: Basque
* OPUS readme: [eng-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.eus | 31.8 | 0.590 |
### System Info:
- hf_name: eng-eus
- source_languages: eng
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'eu']
- src_constituents: {'eng'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: eus
- short_pair: en-eu
- chrF2_score: 0.59
- bleu: 31.8
- brevity_penalty: 0.9440000000000001
- ref_len: 7080.0
- src_name: English
- tgt_name: Basque
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: eu
- prefer_old: False
- long_pair: eng-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
9623886a7c8f50a32b84bcfd0088820d
|
ricardo-filho/bert_base_tcm_0.8
|
ricardo-filho
|
bert
| 24 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 5,568 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_tcm_0.5
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0149
- Criterio Julgamento Precision: 0.8409
- Criterio Julgamento Recall: 0.8740
- Criterio Julgamento F1: 0.8571
- Criterio Julgamento Number: 127
- Data Sessao Precision: 0.7901
- Data Sessao Recall: 0.9143
- Data Sessao F1: 0.8477
- Data Sessao Number: 70
- Modalidade Licitacao Precision: 0.8976
- Modalidade Licitacao Recall: 0.9581
- Modalidade Licitacao F1: 0.9269
- Modalidade Licitacao Number: 430
- Numero Exercicio Precision: 0.9676
- Numero Exercicio Recall: 0.9721
- Numero Exercicio F1: 0.9698
- Numero Exercicio Number: 215
- Objeto Licitacao Precision: 0.4375
- Objeto Licitacao Recall: 0.5976
- Objeto Licitacao F1: 0.5052
- Objeto Licitacao Number: 82
- Valor Objeto Precision: 0.76
- Valor Objeto Recall: 0.8444
- Valor Objeto F1: 0.8
- Valor Objeto Number: 45
- Overall Precision: 0.8410
- Overall Recall: 0.9112
- Overall F1: 0.8747
- Overall Accuracy: 0.9963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0212 | 1.0 | 3996 | 0.0203 | 0.7483 | 0.8425 | 0.7926 | 127 | 0.5739 | 0.9429 | 0.7135 | 70 | 0.9033 | 0.9558 | 0.9288 | 430 | 0.8805 | 0.9256 | 0.9025 | 215 | 0.3445 | 0.5 | 0.4080 | 82 | 0.5846 | 0.8444 | 0.6909 | 45 | 0.7676 | 0.8896 | 0.8241 | 0.9950 |
| 0.012 | 2.0 | 7992 | 0.0158 | 0.8201 | 0.8976 | 0.8571 | 127 | 0.7174 | 0.9429 | 0.8148 | 70 | 0.8686 | 0.9535 | 0.9091 | 430 | 0.9591 | 0.9814 | 0.9701 | 215 | 0.2987 | 0.5610 | 0.3898 | 82 | 0.6364 | 0.7778 | 0.7000 | 45 | 0.7792 | 0.9102 | 0.8396 | 0.9954 |
| 0.0062 | 3.0 | 11988 | 0.0149 | 0.8409 | 0.8740 | 0.8571 | 127 | 0.7901 | 0.9143 | 0.8477 | 70 | 0.8976 | 0.9581 | 0.9269 | 430 | 0.9676 | 0.9721 | 0.9698 | 215 | 0.4375 | 0.5976 | 0.5052 | 82 | 0.76 | 0.8444 | 0.8 | 45 | 0.8410 | 0.9112 | 0.8747 | 0.9963 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
b076b209ce1c4996a1636486dd7a8101
|
Siyris/DialoGPT-medium-SIY
|
Siyris
|
gpt2
| 9 | 8 |
transformers
| 0 |
conversational
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational']
| false | true | true | 1,827 | false |
# DialoGPT trained on customized spiritual texts mixed with various character personalities
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our Discord server better. I've also trained it on various channeling experiences. I'm testing mixing this dataset with characters from popular shows, with the intention of creating more diverse dialogue.
I built a Discord AI chatbot based on this model for internal use within Siyris, Inc.
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Siyris/DialoGPT-medium-SIY")
model = AutoModelWithLMHead.from_pretrained("Siyris/DialoGPT-medium-SIY")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("SIY: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
fcb1cbd9326cb7249071900e7130bdf3
|
theojolliffe/T5-model-1-feedback-0810
|
theojolliffe
|
t5
| 13 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,785 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-model-1-feedback-0810
This model is a fine-tuned version of [theojolliffe/T5-model-1-feedback-0510](https://huggingface.co/theojolliffe/T5-model-1-feedback-0510) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1776
- Rouge1: 94.0404
- Rouge2: 91.0472
- Rougel: 93.8927
- Rougelsum: 93.9417
- Gen Len: 15.5128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 363 | 0.2000 | 93.0351 | 89.425 | 93.1359 | 93.2085 | 15.1538 |
| 0.2311 | 2.0 | 726 | 0.1835 | 93.7371 | 90.8556 | 93.7891 | 93.8622 | 15.2051 |
| 0.191 | 3.0 | 1089 | 0.1792 | 94.1894 | 91.4087 | 94.0525 | 94.0773 | 15.5128 |
| 0.191 | 4.0 | 1452 | 0.1776 | 94.0404 | 91.0472 | 93.8927 | 93.9417 | 15.5128 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
92a5691ba15b19ff99b51382d75c1b98
|
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-evn6-ntsema-colab
|
ntsema
|
wav2vec2
| 13 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['audiofolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,756 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-evn6-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2335
- Wer: 0.9431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.847 | 4.0 | 400 | 0.9836 | 0.9933 |
| 0.8626 | 8.0 | 800 | 0.8241 | 0.9666 |
| 0.536 | 12.0 | 1200 | 0.9166 | 0.9565 |
| 0.3374 | 16.0 | 1600 | 1.1043 | 0.9732 |
| 0.2251 | 20.0 | 2000 | 1.1423 | 0.9632 |
| 0.1649 | 24.0 | 2400 | 1.1648 | 0.9599 |
| 0.1244 | 28.0 | 2800 | 1.2335 | 0.9431 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
f24babd71e9a0485f601463e2b1c8410
|
muhtasham/small-mlm-glue-mnli-target-glue-qqp
|
muhtasham
|
bert
| 10 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,934 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-mnli-target-glue-qqp
This model is a fine-tuned version of [muhtasham/small-mlm-glue-mnli](https://huggingface.co/muhtasham/small-mlm-glue-mnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3263
- Accuracy: 0.8535
- F1: 0.8134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4778 | 0.04 | 500 | 0.4286 | 0.7863 | 0.7468 |
| 0.4182 | 0.09 | 1000 | 0.3862 | 0.8142 | 0.7696 |
| 0.4014 | 0.13 | 1500 | 0.3732 | 0.8225 | 0.7767 |
| 0.3851 | 0.18 | 2000 | 0.3686 | 0.8234 | 0.7887 |
| 0.3784 | 0.22 | 2500 | 0.3600 | 0.8338 | 0.7974 |
| 0.36 | 0.26 | 3000 | 0.3438 | 0.8406 | 0.7995 |
| 0.3583 | 0.31 | 3500 | 0.3361 | 0.8475 | 0.7970 |
| 0.3528 | 0.35 | 4000 | 0.3316 | 0.8472 | 0.8076 |
| 0.3567 | 0.4 | 4500 | 0.3307 | 0.8494 | 0.8089 |
| 0.3428 | 0.44 | 5000 | 0.3263 | 0.8535 | 0.8134 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
3dad19c6b1b39354d0b4d9309a3b9fa4
|
fathyshalab/massive_play-roberta-large-v1-3-71
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,458 | false |
# fathyshalab/massive_play-roberta-large-v1-3-71
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-3-71")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
cd27dcf44bdb23ccdef171c6348019a8
|
theojolliffe/bart-cnn-pubmed-arxiv-v3-e16
|
theojolliffe
|
bart
| 13 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,037 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-v3-e16
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9340
- Rouge1: 57.6388
- Rouge2: 44.834
- Rougel: 47.5043
- Rougelsum: 56.1122
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
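A minimal usage sketch with the `summarization` pipeline (the input text and generation settings are illustrative, not the values used for the reported scores):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-pubmed-arxiv-v3-e16")

report = "Replace this with the long report or article text to be summarised..."
print(summarizer(report, max_length=142, min_length=40, do_sample=False)[0]["summary_text"])
```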
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2407 | 1.0 | 795 | 0.9270 | 53.3842 | 33.8559 | 35.7393 | 50.6907 | 142.0 |
| 0.704 | 2.0 | 1590 | 0.8092 | 53.2159 | 35.0209 | 37.8641 | 50.9514 | 141.963 |
| 0.5277 | 3.0 | 2385 | 0.7588 | 52.7709 | 34.2453 | 36.6319 | 50.1137 | 142.0 |
| 0.3449 | 4.0 | 3180 | 0.7617 | 52.0249 | 34.5679 | 37.3669 | 49.7643 | 142.0 |
| 0.2668 | 5.0 | 3975 | 0.7575 | 54.3131 | 35.3985 | 38.9242 | 51.5667 | 142.0 |
| 0.1756 | 6.0 | 4770 | 0.8161 | 53.6214 | 36.4376 | 39.1745 | 51.3685 | 142.0 |
| 0.1326 | 7.0 | 5565 | 0.7848 | 55.7549 | 38.8517 | 42.0106 | 53.4243 | 142.0 |
| 0.1051 | 8.0 | 6360 | 0.7912 | 55.2709 | 39.952 | 42.7398 | 53.6479 | 142.0 |
| 0.0781 | 9.0 | 7155 | 0.8491 | 55.5698 | 40.0599 | 42.9521 | 53.6734 | 142.0 |
| 0.0685 | 10.0 | 7950 | 0.8684 | 55.1142 | 40.3136 | 43.699 | 53.5463 | 142.0 |
| 0.0494 | 11.0 | 8745 | 0.8886 | 57.7988 | 43.6659 | 46.0913 | 56.3383 | 142.0 |
| 0.0338 | 12.0 | 9540 | 0.8827 | 57.0166 | 42.7553 | 46.2344 | 55.2893 | 142.0 |
| 0.0296 | 13.0 | 10335 | 0.9111 | 56.7741 | 42.6116 | 45.1692 | 55.2065 | 142.0 |
| 0.0228 | 14.0 | 11130 | 0.9209 | 56.635 | 43.2461 | 46.314 | 55.049 | 142.0 |
| 0.0189 | 15.0 | 11925 | 0.9193 | 56.4404 | 43.4216 | 46.279 | 55.1403 | 142.0 |
| 0.0152 | 16.0 | 12720 | 0.9340 | 57.6388 | 44.834 | 47.5043 | 56.1122 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
9e5d62790340dcab12f6e3be767bb204
|
christopheyebiname/distilbert-base-uncased-finetuned-emotion
|
christopheyebiname
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2230
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
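A minimal usage sketch with the `text-classification` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="christopheyebiname/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))  # predicted emotion label and score
```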
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8356 | 1.0 | 250 | 0.3184 | 0.9055 | 0.9021 |
| 0.2559 | 2.0 | 500 | 0.2230 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
772d3696524d8fe61b905a96404f3af0
|
noflm/whisper-small-ja-cv11
|
noflm
|
whisper
| 67 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,587 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Japanese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ja dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4317
- Wer: 13.3262
## Model description
More information needed
## Intended uses & limitations
More information needed
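A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio path and chunking setting are illustrative):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="noflm/whisper-small-ja-cv11",
               chunk_length_s=30)
print(asr("sample_japanese_audio.wav")["text"])
```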
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.042 | 3.03 | 2000 | 0.3056 | 12.9174 |
| 0.0085 | 7.01 | 4000 | 0.3752 | 13.1746 |
| 0.0047 | 10.04 | 6000 | 0.4103 | 13.5817 |
| 0.0042 | 14.01 | 8000 | 0.4202 | 13.5323 |
| 0.0051 | 17.05 | 10000 | 0.4317 | 13.3262 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
ccfc098ac18ed6a6983c9231899bdefe
|
polejowska/vit-convnext-tiny-224-eurosat
|
polejowska
|
convnext
| 11 | 5 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,575 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-convnext-tiny-224-eurosat
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0576
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
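A minimal usage sketch with the `image-classification` pipeline (the image path is illustrative; inputs should be EuroSAT-style satellite patches):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="polejowska/vit-convnext-tiny-224-eurosat")
print(classifier("satellite_patch.png", top_k=3))  # top-3 land-cover classes with scores
```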
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2881 | 0.99 | 147 | 0.2325 | 0.9588 |
| 0.0869 | 1.99 | 294 | 0.0912 | 0.9753 |
| 0.0687 | 2.99 | 441 | 0.0663 | 0.9805 |
| 0.0272 | 3.99 | 588 | 0.0576 | 0.9859 |
| 0.0247 | 4.99 | 735 | 0.0532 | 0.9854 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
581de977038a554714585cc9af927d5b
|
jorge-henao/gpt2-small-spanish-disco-poetry-15
|
jorge-henao
|
gpt2
| 9 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,031 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-disco-poetry-15
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2465
## Model description
More information needed
## Intended uses & limitations
More information needed
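A minimal generation sketch with the `text-generation` pipeline (the Spanish prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jorge-henao/gpt2-small-spanish-disco-poetry-15")
print(generator("En la noche callada", max_length=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```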
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
258e56d51fb71cafa15a7b166d7054f7
|
course5i/SEAD-L-6_H-256_A-8-stsb
|
course5i
|
bert
| 11 | 11 |
transformers
| 0 |
text-classification
| true | true | true |
apache-2.0
|
['en']
|
['glue', 'stsb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SEAD']
| false | true | true | 3,640 | false |
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-256_A-8-stsb
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **stsb** task. For weights initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased)
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
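A minimal inference sketch for the STS-B regression task (the sentence pair is illustrative; the single output logit is the predicted similarity score):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("course5i/SEAD-L-6_H-256_A-8-stsb")
model = AutoModelForSequenceClassification.from_pretrained("course5i/SEAD-L-6_H-256_A-8-stsb")

inputs = tokenizer("A man is playing a guitar.",
                   "A person plays an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    similarity = model(**inputs).logits.squeeze().item()  # STS-B is a regression task
print(similarity)
```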
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import torch

hyperparameters = torch.load("training_args.bin")
```
### Evaluation results
| eval_pearson | eval_spearmanr | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:------------:|:--------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.8962 | 0.8978 | 2.1978 | 682.498 | 21.385 | 0.4679 | 1500 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
68e66cea82cfb661d2760ab331db3e10
|
gmihaila/wav2vec2-large-xlsr-53-romanian
|
gmihaila
|
wav2vec2
| 9 | 10 |
transformers
| 0 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['ro']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,590 | false |
# Wav2Vec2-Large-XLSR-53-Romanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Romanian using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ro", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gmihaila/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("gmihaila/wav2vec2-large-xlsr-53-romanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romanian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ro", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gmihaila/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("gmihaila/wav2vec2-large-xlsr-53-romanian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 28.43 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/github/gmihaila/ml_things/blob/master/notebooks/pytorch/RO_Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_🤗_Transformers.ipynb)
|
ae99d90314ac98fd138e59740086b2f4
|
kingabzpro/Helsinki-NLP-opus-yor-mul-en
|
kingabzpro
|
marian
| 9 | 7 |
transformers
| 1 |
text2text-generation
| true | false | false |
apache-2.0
|
['Yorùbá']
|
['AI4D-Africa - Yorùbá Machine Translation Challenge']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text', 'machine-translation', 'language-translation', 'seq2seq', 'helsinki-nlp']
| false | true | true | 881 | false |
## Predicting English Translation
```python
import re

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Loading tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("kingabzpro/Helsinki-NLP-opus-yor-mul-en")
model = AutoModelForSeq2SeqLM.from_pretrained("kingabzpro/Helsinki-NLP-opus-yor-mul-en").to('cuda')
# Prediction
a = model.generate(**tokenizer.prepare_seq2seq_batch('Nínú ìpè kan lẹ́yìn ìgbà náà, wọ́n sọ fún aṣojú iléeṣẹ́ BlaBlaCar pé ètò náà ti yí padà, pé',return_tensors='pt').to('cuda'))
text = tokenizer.batch_decode(a)
# Cleaning text
text = str(text)
text = re.sub("<pad> ","",text)
text = re.sub("'","",text)
text = text.replace("[", "")
text = text.replace("]", "")
text
```
## Result
```
'In a statement after that hearing, the BualaCard’s representative was told that the event had changed, that he had turned up.'
```
## ROUGE Score
**0.3025**
|
0312460ecc7dd35e20c9915dc574223a
|
tahiyacy/emotion-recognition
|
tahiyacy
|
perceiver
| 4 | 0 |
transformers
| 0 |
feature-extraction
| true | false | false |
creativeml-openrail-m
|
['en']
|
['RAVDESS']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['emotion-recognition, perceiver']
| false | true | true | 937 | false |
# Perceiver-based Emotion Recognition
This model is a Perceiver-based (https://huggingface.co/docs/transformers/model_doc/perceiver) emotion recognition model trained on the RAVDESS dataset (https://zenodo.org/record/1188976#.Y5iqPy2B1QI).
The model is trained using 3 modalities: video, audio, and text.
For details on the data collection, check here: https://zenodo.org/record/1188976
The feature extraction for each modality and training procedure follows the steps mentioned here: https://dl.acm.org/doi/10.1145/3551876.3554806
## Intended uses
You can use the raw model to directly recognize emotions (classes: 01 = neutral, 02 = calm, 03 = happy, 04 = sad, 05 = angry, 06 = fearful, 07 = disgust, 08 = surprised) or fine-tune it on a downstream task.
## Limitations
The model is trained on only one dataset and uses 8 specific classes of emotions. The limitation lies in the lack of diversity in the demographics and emotions.
|
39eaf80c4e9be04efb10357b7d4d77a5
|
clu-ling/whisper-large-v2-spanish
|
clu-ling
|
whisper
| 33 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,759 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-spanish
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1466
- Wer: 0.0855
## Model description
More information needed
## Intended uses & limitations
More information needed
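A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio path is illustrative; long recordings can be handled with `chunk_length_s`):
```python
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition",
                       model="clu-ling/whisper-large-v2-spanish",
                       chunk_length_s=30)
print(transcriber("entrevista_es.wav")["text"])
```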
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 25000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1908 | 0.03 | 1000 | 0.2235 | 0.1154 |
| 0.1888 | 0.07 | 2000 | 0.2132 | 0.1131 |
| 0.167 | 0.1 | 3000 | 0.2115 | 0.1133 |
| 0.1752 | 0.14 | 4000 | 0.2081 | 0.1146 |
| 0.1656 | 0.17 | 5000 | 0.2002 | 0.1073 |
| 0.1535 | 0.21 | 6000 | 0.1971 | 0.1086 |
| 0.1854 | 0.24 | 7000 | 0.1927 | 0.1048 |
| 0.1722 | 0.28 | 8000 | 0.1889 | 0.1043 |
| 0.166 | 0.31 | 9000 | 0.1850 | 0.1022 |
| 0.1277 | 0.35 | 10000 | 0.1820 | 0.1032 |
| 0.1457 | 0.38 | 11000 | 0.1777 | 0.0998 |
| 0.169 | 0.42 | 12000 | 0.1771 | 0.0982 |
| 0.1612 | 0.45 | 13000 | 0.1724 | 0.0976 |
| 0.1616 | 0.49 | 14000 | 0.1693 | 0.0956 |
| 0.1556 | 0.52 | 15000 | 0.1671 | 0.0942 |
| 0.1448 | 0.56 | 16000 | 0.1646 | 0.0930 |
| 0.117 | 0.59 | 17000 | 0.1613 | 0.0914 |
| 0.1441 | 0.62 | 18000 | 0.1596 | 0.0899 |
| 0.148 | 0.66 | 19000 | 0.1571 | 0.0895 |
| 0.1255 | 0.69 | 20000 | 0.1547 | 0.0874 |
| 0.1479 | 0.73 | 21000 | 0.1525 | 0.0885 |
| 0.1304 | 0.76 | 22000 | 0.1503 | 0.0861 |
| 0.1111 | 0.8 | 23000 | 0.1486 | 0.0867 |
| 0.1337 | 0.83 | 24000 | 0.1472 | 0.0854 |
| 0.1289 | 0.87 | 25000 | 0.1466 | 0.0855 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
7910c54409825eb54d780723afbdf9ea
|
vasista22/whisper-hindi-large-v2
|
vasista22
|
whisper
| 12 | 13 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event']
| true | true | true | 1,330 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Hindi Large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on Hindi data available from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
## Training and evaluation data at Speech Lab, IITM
Training Data: GramVaani ASR Corpus, ULCA ASR Corpus, Shrutilipi ASR Corpus, Google/Fleurs (Train+Dev) set.
Evaluation Data: GramVaani ASR Corpus Test, Google/Fleurs Test set.
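A minimal transcription sketch (illustrative only; forcing the decoder prompt to Hindi transcription is a common pattern for multilingual Whisper checkpoints so the model neither translates nor switches language):
```python
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition",
                       model="vasista22/whisper-hindi-large-v2",
                       chunk_length_s=30)
# Force Hindi transcription
transcriber.model.config.forced_decoder_ids = transcriber.tokenizer.get_decoder_prompt_ids(
    language="hi", task="transcribe"
)
print(transcriber("hindi_sample.wav")["text"])
```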
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.75e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25000
- training_steps: 57000 (Initially set to 116255 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at Speech Lab, IITM. The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
|
c80e60941584a1c37c3b0d821513df3c
|
YumaSaito/distilbert-base-uncased-finetuned-emotion
|
YumaSaito
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2181
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
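A minimal usage sketch that returns the full score distribution over the emotion labels (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="YumaSaito/distilbert-base-uncased-finetuned-emotion",
                      return_all_scores=True)
print(classifier("I'm so relieved everything worked out."))
```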
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8618 | 1.0 | 250 | 0.3206 | 0.903 | 0.8990 |
| 0.2549 | 2.0 | 500 | 0.2181 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
5b6581ecf8031d9b51dad27cbd32aaeb
|
eunbeee/ainize-kobart-news-eb-finetuned-meetings-papers
|
eunbeee
|
bart
| 14 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,870 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ainize-kobart-news-eb-finetuned-meetings-papers
This model is a fine-tuned version of [ainize/kobart-news](https://huggingface.co/ainize/kobart-news) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3289
- Rouge1: 17.3988
- Rouge2: 7.0454
- Rougel: 17.3877
- Rougelsum: 17.42
- Gen Len: 19.9473
## Model description
More information needed
## Intended uses & limitations
More information needed
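A minimal usage sketch with the `summarization` pipeline (the input text and length limits are illustrative; the model expects Korean meeting/paper text):
```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="eunbeee/ainize-kobart-news-eb-finetuned-meetings-papers")

document = "여기에 요약할 회의록이나 논문 본문을 입력합니다..."
print(summarizer(document, max_length=64, min_length=10)[0]["summary_text"])
```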
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.1402 | 1.0 | 7588 | 0.2930 | 17.1421 | 7.0141 | 17.1211 | 17.1473 | 19.9374 |
| 0.0997 | 2.0 | 15176 | 0.2842 | 17.1692 | 6.8824 | 17.1557 | 17.1985 | 19.9435 |
| 0.0692 | 3.0 | 22764 | 0.3052 | 17.4241 | 7.1083 | 17.4028 | 17.4472 | 19.9453 |
| 0.0556 | 4.0 | 30352 | 0.3289 | 17.3988 | 7.0454 | 17.3877 | 17.42 | 19.9473 |
| 0.0533 | 5.0 | 37940 | 0.3289 | 17.3988 | 7.0454 | 17.3877 | 17.42 | 19.9473 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
4cb32e135f92f302300b5813524fdac2
|
dbmdz/electra-base-french-europeana-cased-generator
|
dbmdz
|
electra
| 7 | 125 |
transformers
| 0 |
fill-mask
| true | true | false |
mit
|
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['historic french']
| false | true | true | 2,159 | false |
# 🤗 + 📚 dbmdz ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana ELECTRA models 🎉
# French Europeana ELECTRA
We extracted all French texts using the `language` metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
ELECTRA model weights for PyTorch and TensorFlow are available.
* French Europeana ELECTRA (discriminator): `dbmdz/electra-base-french-europeana-cased-discriminator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-discriminator/tree/main)
* French Europeana ELECTRA (generator): `dbmdz/electra-base-french-europeana-cased-generator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-generator/tree/main)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download our models from their S3 storage 🤗
|
4c6480dd5f57a63690c307953c93b6d3
|
sabasazad/finetuning-sentiment-model-3000-samples
|
sabasazad
|
distilbert
| 13 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,053 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3085
- Accuracy: 0.87
- F1: 0.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
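A minimal usage sketch with the `sentiment-analysis` pipeline (the review text is illustrative):
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="sabasazad/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was a complete waste of time."))
```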
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
aff817edf9dc6c3ffd0d543bd6eee675
|
mse30/bart-base-finetuned-pubmed
|
mse30
|
bart
| 11 | 80 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['scientific_papers']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,749 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9804
- Rouge1: 9.1984
- Rouge2: 4.3091
- Rougel: 7.9739
- Rougelsum: 8.6759
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
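A minimal summarisation sketch with explicit generation (the article text, truncation length, and generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mse30/bart-base-finetuned-pubmed")
model = AutoModelForSeq2SeqLM.from_pretrained("mse30/bart-base-finetuned-pubmed")

article = "Paste the body of a scientific paper here..."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=20, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```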
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.2869 | 1.0 | 29981 | 2.1241 | 9.0852 | 4.1152 | 7.842 | 8.5395 | 20.0 |
| 2.1469 | 2.0 | 59962 | 2.0225 | 9.1609 | 4.2437 | 7.9311 | 8.6273 | 20.0 |
| 2.113 | 3.0 | 89943 | 1.9959 | 9.3086 | 4.3305 | 8.0363 | 8.7713 | 20.0 |
| 2.0632 | 4.0 | 119924 | 1.9804 | 9.1984 | 4.3091 | 7.9739 | 8.6759 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
56e9e5072e0b84dcfaeebeb9d952db96
|
Luciano/xlm-roberta-base-finetuned-lener-br
|
Luciano
|
xlm-roberta
| 21 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
mit
|
['pt']
|
['lener_br']
| null | 3 | 0 | 3 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,694 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-lener-br
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the lener_br dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.8443
- Recall: 0.8845
- F1: 0.8639
- Accuracy: 0.9752
## Model description
More information needed
## Intended uses & limitations
More information needed
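A minimal usage sketch with the `token-classification` pipeline (the sentence is an illustrative Portuguese legal-domain example):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="Luciano/xlm-roberta-base-finetuned-lener-br",
               aggregation_strategy="simple")
print(ner("O Supremo Tribunal Federal julgou o Recurso Extraordinário 123456."))
```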
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0832 | 1.0 | 1957 | nan | 0.6752 | 0.8625 | 0.7575 | 0.9578 |
| 0.0477 | 2.0 | 3914 | nan | 0.8391 | 0.8839 | 0.8609 | 0.9704 |
| 0.029 | 3.0 | 5871 | nan | 0.7530 | 0.9059 | 0.8224 | 0.9648 |
| 0.0223 | 4.0 | 7828 | nan | 0.7488 | 0.8744 | 0.8067 | 0.9659 |
| 0.0234 | 5.0 | 9785 | nan | 0.7216 | 0.8783 | 0.7923 | 0.9644 |
| 0.0171 | 6.0 | 11742 | nan | 0.7072 | 0.8969 | 0.7908 | 0.9642 |
| 0.0121 | 7.0 | 13699 | nan | 0.7769 | 0.8775 | 0.8241 | 0.9681 |
| 0.0093 | 8.0 | 15656 | nan | 0.7218 | 0.8772 | 0.7920 | 0.9621 |
| 0.0074 | 9.0 | 17613 | nan | 0.8241 | 0.8767 | 0.8496 | 0.9739 |
| 0.0055 | 10.0 | 19570 | nan | 0.7369 | 0.8801 | 0.8021 | 0.9638 |
| 0.0055 | 11.0 | 21527 | nan | 0.8443 | 0.8845 | 0.8639 | 0.9752 |
| 0.0029 | 12.0 | 23484 | nan | 0.8338 | 0.8935 | 0.8626 | 0.9753 |
| 0.0026 | 13.0 | 25441 | nan | 0.7721 | 0.8992 | 0.8308 | 0.9694 |
| 0.004 | 14.0 | 27398 | nan | 0.7466 | 0.8886 | 0.8114 | 0.9672 |
| 0.0006 | 15.0 | 29355 | nan | 0.7518 | 0.8995 | 0.8190 | 0.9686 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ea1330a5e516d79b4967015a14d20eda
|
gustavecortal/T0_3B-8bit
|
gustavecortal
|
t5
| 4 | 34 |
transformers
| 9 |
text2text-generation
| true | false | false |
mit
|
['fr']
|
['bigscience/P3']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['en']
| false | true | true | 3,067 | false |
### Quantized BigScience's T0 3B with 8-bit weights
This is a version of [BigScience's T0](https://huggingface.co/bigscience/T0_3B) with 3 billion parameters that has been modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti)**. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
Here's how to run it: [](https://colab.research.google.com/drive/1lMja-CPc0vm5_-gXNXAWU-9c0nom7vZ9)
This model can be easily loaded using the `T5ForConditionalGeneration` functionality:
```python
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("gustavecortal/T0_3B-8bit")
```
Before loading, you have to Monkey-Patch T5:
```python
import transformers

class T5ForConditionalGeneration(transformers.models.t5.modeling_t5.T5ForConditionalGeneration):
    def __init__(self, config):
        super().__init__(config)
        convert_to_int8(self)  # `convert_to_int8` comes from the quantization utilities in the linked notebook

transformers.models.t5.modeling_t5.T5ForConditionalGeneration = T5ForConditionalGeneration
```
## Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
## Links
* [BigScience](https://bigscience.huggingface.co/)
* [Hivemind](https://training-transformers-together.github.io/)
* [Gustave Cortal](https://twitter.com/gustavecortal)
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
340a58d01df2e00cdb755382f6acce68
|
troesy/bert-base-uncased-hatexplain-label-all-tokens-True-3epoch
|
troesy
|
bert
| 12 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,283 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-hatexplain-label-all-tokens-True-3epoch
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.2211 |
| No log | 2.0 | 348 | 0.2089 |
| 0.2165 | 3.0 | 522 | 0.2139 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
b3b0f07c4994781d15e13fa35b1e2b3e
|
olpa/xlm-roberta-base-finetuned-panx-de
|
olpa
|
xlm-roberta
| 12 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,313 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 |
| 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 |
| 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bc0e0e4ba911d8608d43771261443c60
|
kasrahabib/50_100-bucket-finetunned
|
kasrahabib
|
bert
| 10 | 7 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,681 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/50_100-bucket-finetunned
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1369
- Validation Loss: 0.1561
- Epoch: 8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 590, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3347 | 1.2147 | 0 |
| 1.0525 | 0.7854 | 1 |
| 0.6743 | 0.5093 | 2 |
| 0.4330 | 0.3508 | 3 |
| 0.2934 | 0.2534 | 4 |
| 0.2156 | 0.2020 | 5 |
| 0.1750 | 0.1782 | 6 |
| 0.1494 | 0.1634 | 7 |
| 0.1369 | 0.1561 | 8 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
4aac520e840977d5f78ea7f6f5fe6fdc
|
sd-concepts-library/retro-mecha-rangers
|
sd-concepts-library
| null | 9 | 0 | null | 2 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,068 | false |
### retro mecha rangers on Stable Diffusion
This is the `<aesthetic>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
3654d58de23418f35ac8d18fca6037c8
|
sv/gpt2-finetuned-nft-shakes
|
sv
|
gpt2
| 9 | 5 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null |
[]
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,226 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-nft-shakes
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 306 | 3.9679 |
| 4.2957 | 2.0 | 612 | 3.7979 |
| 4.2957 | 3.0 | 918 | 3.7566 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
f510bcb995c1a215e896864b774fc959
|
gchhablani/fnet-base-finetuned-cola
|
gchhablani
|
fnet
| 45 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'fnet-bert-base-comparison']
| true | true | true | 2,273 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-cola
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5929
- Matthews Correlation: 0.3594
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name cola \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-cola \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5895 | 1.0 | 535 | 0.6146 | 0.1699 |
| 0.4656 | 2.0 | 1070 | 0.5667 | 0.3047 |
| 0.3329 | 3.0 | 1605 | 0.5929 | 0.3594 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
e1dc1b2e8f45884fc460e3464b9cd3d2
|
simonschoe/pokeball-machine
|
simonschoe
| null | 38 | 60 |
diffusers
| 6 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
| false | true | true | 5,265 | false |
# The Pokeball Machine
The **Pokeball Machine** is a Dreambooth model for the `pokeball` concept (represented by the `pkblz` identifier).
It applies to the *wildcard* theme.
It is fine-tuned from `CompVis/stable-diffusion-v1-4` checkpoint on a small dataset of pokeball images (i.e., images of the red-white original pokeball).
It can be used by modifying the `instance_prompt`: **a pkblz ball in the middle of a miniature jungle**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
#### Fine-Tuning Details
- Number of training images: 31
- Learning rate: 2e-06
- Training steps: 800
- Guidance Scale: 10
- Inference Steps: 50-75
#### Output Examples
<table>
<tr>
<td>a blueprint photo of a <b>pkblz</b> ball</td>
<td>a photo of a cybernetic <b>pkblz</b> ball, wide shot</td>
<td>a photo of a <b>pkblz</b> ball in the style vintage disney</td>
</tr>
<tr>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(1).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(2).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(3).png" style="height:200px"> </td>
</tr>
<tr>
<td>a photo of a mosaic <b>pkblz</b> ball lying in an antique temple</td>
<td>a photo of a detailed ornate <b>pkblz</b> ball</td>
<td>a <b>pkblz</b> ball underwater</td>
</tr>
<tr>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(4).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(5).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(6).png" style="height:200px"> </td>
</tr>
<tr>
<td>a <b>pkblz</b> ball in the middle of a miniature jungle</td>
<td>a <b>pkblz</b> ball underwater</td>
<td>a mystic <b>pkblz</b> ball, trending on artstation</td>
</tr>
<tr>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(7).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(8).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(9).png" style="height:200px"> </td>
</tr>
<tr>
<td>a <b>pkblz</b> ball underwater, trending on artstation</td>
<td>a wooden <b>pkblz</b> ball</td>
<td>a <b>pkblz</b> ball hovering over a pond</td>
</tr>
<tr>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(10).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(11).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(12).png" style="height:200px"> </td>
</tr>
<tr>
<td>a <b>pkblz</b> ball on a sunny tropical beach</td>
<td>a steampunk <b>pkblz</b> ball, trending on artstation</td>
<td>a colored pencil sketch of a <b>pkblz</b> ball</td>
</tr>
<tr>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(13).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(14).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(15).png" style="height:200px"> </td>
</tr>
<tr>
<td>a photo of a spectral ornate <b>pkblz</b> ball, trending on artstation, realistic</td>
<td>a sunset photo of a <b>pkblz</b> ball</td>
<td>a watercolor photo of a <b>pkblz</b> ball</td>
</tr>
<tr>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(16).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(17).png" style="height:200px"> </td>
<td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(18).png" style="height:200px"> </td>
</tr>
</table>
## Usage
```python
from diffusers import StableDiffusionPipeline
import torch
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pipeline = StableDiffusionPipeline.from_pretrained('simonschoe/pokeball-machine').to(device)
prompt = "a pkblz ball in the middle of a miniature jungle"
image = pipeline(
prompt,
num_inference_steps=50,
guidance_scale=10,
num_images_per_prompt=1
).images[0]
image
```
|
3f449b03efe66b891f87894135a299f9
|
paola-md/distilr2-lr2e05-wd0.1-bs64
|
paola-md
|
roberta
| 6 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilr2-lr2e05-wd0.1-bs64
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2722
- Rmse: 0.5218
- Mse: 0.2722
- Mae: 0.4090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2771 | 1.0 | 312 | 0.2742 | 0.5237 | 0.2742 | 0.4241 |
| 0.2737 | 2.0 | 624 | 0.2726 | 0.5221 | 0.2726 | 0.4079 |
| 0.2718 | 3.0 | 936 | 0.2727 | 0.5222 | 0.2727 | 0.4149 |
| 0.2696 | 4.0 | 1248 | 0.2722 | 0.5218 | 0.2722 | 0.4090 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
4ee773162574150da3e688e583faf444
|
nandysoham16/Web_browser-clustered
|
nandysoham16
|
distilbert
| 8 | 10 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,863 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Web_browser-clustered
This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1876
- Train End Logits Accuracy: 0.9792
- Train Start Logits Accuracy: 0.9375
- Validation Loss: 0.0125
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
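A minimal usage sketch with the `question-answering` pipeline (only TensorFlow weights appear to be available, hence `framework="tf"`; the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="nandysoham16/Web_browser-clustered",
              framework="tf")
result = qa(question="What does a web browser render?",
            context="A web browser retrieves content from the World Wide Web and renders web pages on a user's device.")
print(result["answer"], result["score"])
```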
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1876 | 0.9792 | 0.9375 | 0.0125 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
878f49ffef044997bfdb6ff204433fd5
|
Tushybhutt/GlassBiff
|
Tushybhutt
| null | 10 | 0 | null | 0 | null | false | false | false |
cc-by-sa-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 539 | false |
A stained-glass-themed embedding created with 8 vectors.
Textual Inversion embedding for SD 2.x, trained for 500 steps on twenty 768x768 images from various sources.
Install it by downloading the desired step embedding and placing it in the \embeddings folder.
Use keyword: GlassBiff



|
aa25088f4f3806a06129c08e5bdf90ff
|
Reverb/GPyT
|
Reverb
|
gpt2
| 11 | 1 |
transformers
| 0 |
text-generation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,755 | false |
# GPyT Project
GPyT is a GPT-2 model trained from scratch (not fine-tuned) on Python code from GitHub. Overall, the training corpus was ~200GB of pure
Python code. The current GPyT model has only seen 2 epochs of this data, so it may benefit greatly from continued training and/or fine-tuning.
Newlines are replaced by `<N>`.
Input to the model is code, up to the context length of 1024, with newlines replaced by `<N>`.
Here's a quick example of using this model:
```py
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("Reverb/GPyT")
model = AutoModelWithLMHead.from_pretrained("Reverb/GPyT")
# copy and paste some code in here
inp = """import"""
newlinechar = "<N>"
converted = inp.replace("\n", newlinechar)
tokenized = tokenizer.encode(converted, return_tensors='pt')
resp = model.generate(tokenized)
decoded = tokenizer.decode(resp[0])
reformatted = decoded.replace("<N>","\n")
print(reformatted)
```
Should produce:
```py
import numpy as np
import pytest
import pandas as pd<N
```
---
## The Journey
The model took 6 major steps which are:
1. Data Collection
2. Raw Data Cleaning
3. Data Preprocessing
4. Building & Training the Tokenizer
5. Testing the Model on Large Dataset
6. Deploying the Final Model on HuggingFace
#### Data Collection
The data was collected from python github repositories using web scraping techniques, It took nearly a day to gather 200GB worth of data.
#### Raw Data Cleaning
200GB of Python code?? Sounds ridiculous! That's why we needed to clean the downloaded repositories of any non-Python files such as PDF, idx, etc.
#### Data Preprocessing
I split the lines of code for each repository and then merged them all into a single text file named **python_text_data.txt**.
#### Building & Training the Tokenizer
For this step I used a **ByteLevelBPETokenizer**, trained it on the corpus, and then saved the resulting tokenizer model locally.
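A minimal sketch of that step with the `tokenizers` library (vocabulary size and special tokens are illustrative assumptions, not the exact values used for GPyT):
```py
import os
from tokenizers import ByteLevelBPETokenizer

# Train a byte-level BPE tokenizer on the merged corpus (illustrative settings).
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["python_text_data.txt"],
    vocab_size=52_000,  # assumption, not the exact GPyT value
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>", "<N>"],  # assumption
)
os.makedirs("gpyt_tokenizer", exist_ok=True)
tokenizer.save_model("gpyt_tokenizer")  # writes vocab.json and merges.txt
```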
#### Testing the Model on Large Dataset
After training the tokenizer on the large dataset, it was time for some tests to see how good the model is before proceeding.
---
## Considerations:
> - This model is intended for educational and research use only. Do not trust model outputs.
> - Model is highly likely to regurgitate code almost exactly as it saw it. It's up to you to determine licensing if you intend to actually use the generated code.
> - All Python code was blindly pulled from github. This means included code is both Python 2 and 3, among other more subtle differences, such as tabs being 2 spaces in some cases and 4 in others...and more non-homologous things.
> - Along with the above, this means the code generated could wind up doing or suggesting just about anything. Run the generated code at your own risk...it could be anything.
|
452016c08017e321d3ddc79ff6b6fe01
|
sagawa/PubChem-10m-t5
|
sagawa
|
t5
| 8 | 1 |
transformers
| 0 |
text2text-generation
| true | false | true |
mit
| null |
['sagawa/pubchem-10m-canonicalized']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 2,105 | false |
# PubChem-10m-t5
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/pubchem-10m-canonicalized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2121
- Accuracy: 0.9259
## Model description
We trained t5 on SMILES from PubChem using the task of masked-language modeling (MLM). Its tokenizer is also trained on PubChem.
## Intended uses & limitations
This model can be used for the prediction of molecules' properties, reactions, or interactions with proteins by changing the way of finetuning.
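As a rough illustration (not part of the original card), the checkpoint can be loaded with the standard T5 classes and used to embed a SMILES string; the class choice and the feature-extraction pattern below are assumptions about a typical downstream setup:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "sagawa/PubChem-10m-t5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Encode a canonical SMILES string; the encoder hidden states can serve as
# molecule features for a downstream property-prediction head.
inputs = tokenizer("CC(=O)OC1=CC=CC=C1C(=O)O", return_tensors="pt")  # aspirin
encoder_outputs = model.encoder(**inputs)
print(encoder_outputs.last_hidden_state.shape)
```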
## Training and evaluation data
We downloaded [PubChem data](https://drive.google.com/file/d/1ygYs8dy1-vxD1Vx6Ux7ftrXwZctFjpV3/view) and canonicalized them using RDKit. Then, we dropped duplicates. The total number of data is 9999960, and they were randomly split into train:validation=10:1.
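The canonicalization and de-duplication steps described above can be reproduced with RDKit roughly as follows (a sketch, not the authors' exact preprocessing script):
```python
from rdkit import Chem

def canonicalize(smiles: str):
    """Return the RDKit canonical SMILES, or None if the string cannot be parsed."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

raw = ["C1=CC=CC=C1", "c1ccccc1", "not a molecule"]  # the first two are the same molecule (benzene)
canonical = {c for c in (canonicalize(s) for s in raw) if c is not None}  # the set drops duplicates
print(canonical)  # {'c1ccccc1'}
```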
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-03
- train_batch_size: 30
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
| Training Loss | Step | Accuracy | Validation Loss |
|:-------------:|:------:|:--------:|:---------------:|
| 0.3866 | 25000 | 0.8830 | 0.3631 |
| 0.3352 | 50000 | 0.8996 | 0.3049 |
| 0.2834 | 75000 | 0.9057 | 0.2825 |
| 0.2685 | 100000 | 0.9099 | 0.2675 |
| 0.2591 | 125000 | 0.9124 | 0.2587 |
| 0.2620 | 150000 | 0.9144 | 0.2512 |
| 0.2806 | 175000 | 0.9161 | 0.2454 |
| 0.2468 | 200000 | 0.9179 | 0.2396 |
| 0.2669 | 225000 | 0.9194 | 0.2343 |
| 0.2611 | 250000 | 0.9210 | 0.2283 |
| 0.2346 | 275000 | 0.9226 | 0.2230 |
| 0.1972 | 300000 | 0.9238 | 0.2191 |
| 0.2344 | 325000 | 0.9250 | 0.2152 |
| 0.2164 | 350000 | 0.9259 | 0.2121 |
|
36cae126ed4e3cee192c50e32cc7fc72
|
agnesluhtaru/whisper-large-et-ERR2020-v2
|
agnesluhtaru
|
whisper
| 24 | 9 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,926 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-et-ERR2020-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2915
- Wer: 13.8640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.2158 | 0.1 | 1000 | 0.3205 | 23.8154 |
| 0.0897 | 0.2 | 2000 | 0.2961 | 18.3340 |
| 0.0785 | 0.3 | 3000 | 0.2839 | 17.5230 |
| 0.0653 | 0.4 | 4000 | 0.2847 | 17.8752 |
| 0.0541 | 0.5 | 5000 | 0.2906 | 15.2645 |
| 0.0566 | 0.6 | 6000 | 0.2845 | 15.2081 |
| 0.051 | 0.7 | 7000 | 0.2888 | 14.4668 |
| 0.049 | 1.03 | 8000 | 0.2927 | 15.3130 |
| 0.044 | 1.13 | 9000 | 0.2915 | 13.8640 |
| 0.0379 | 1.23 | 10000 | 0.2913 | 16.5773 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+rocm5.1.1
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
5467582055d6711368dafeb09a8ce991
|
joheras/flan-t5-base-clara-med
|
joheras
|
t5
| 20 | 9 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['simplification', 'generated_from_trainer']
| true | true | true | 4,076 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-clara-med
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2699
- Rouge1: 30.1376
- Rouge2: 16.8424
- Rougel: 27.9649
- Rougelsum: 27.9946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 380 | 1.4710 | 27.6278 | 15.5057 | 25.9917 | 26.0601 |
| No log | 2.0 | 760 | 1.3863 | 28.4324 | 15.8032 | 26.8023 | 26.8387 |
| 1.6476 | 3.0 | 1140 | 1.3494 | 28.6807 | 16.0854 | 26.9253 | 26.9743 |
| 1.6476 | 4.0 | 1520 | 1.3170 | 28.3434 | 15.6852 | 26.58 | 26.5937 |
| 1.3695 | 5.0 | 1900 | 1.3009 | 28.8006 | 15.819 | 26.8122 | 26.8756 |
| 1.3695 | 6.0 | 2280 | 1.2797 | 29.0521 | 16.4032 | 27.1802 | 27.1988 |
| 1.3695 | 7.0 | 2660 | 1.2744 | 29.2339 | 16.4583 | 27.3799 | 27.4091 |
| 1.2162 | 8.0 | 3040 | 1.2557 | 28.8177 | 16.2513 | 26.9967 | 27.028 |
| 1.2162 | 9.0 | 3420 | 1.2553 | 29.0411 | 16.4606 | 27.2912 | 27.3004 |
| 1.1232 | 10.0 | 3800 | 1.2540 | 29.0367 | 16.3896 | 27.2911 | 27.324 |
| 1.1232 | 11.0 | 4180 | 1.2500 | 29.3928 | 16.6718 | 27.4638 | 27.4877 |
| 1.1232 | 12.0 | 4560 | 1.2487 | 29.6046 | 16.7906 | 27.6814 | 27.6977 |
| 1.0389 | 13.0 | 4940 | 1.2542 | 29.4922 | 16.5255 | 27.5363 | 27.5904 |
| 1.0389 | 14.0 | 5320 | 1.2384 | 29.6472 | 16.707 | 27.6808 | 27.6988 |
| 0.9794 | 15.0 | 5700 | 1.2476 | 29.3771 | 16.2381 | 27.3751 | 27.3876 |
| 0.9794 | 16.0 | 6080 | 1.2437 | 29.4158 | 16.4003 | 27.3116 | 27.3409 |
| 0.9794 | 17.0 | 6460 | 1.2466 | 29.2787 | 16.4136 | 27.3256 | 27.3622 |
| 0.9276 | 18.0 | 6840 | 1.2530 | 29.4183 | 16.4244 | 27.325 | 27.3583 |
| 0.9276 | 19.0 | 7220 | 1.2582 | 29.743 | 16.7631 | 27.6997 | 27.7752 |
| 0.8851 | 20.0 | 7600 | 1.2560 | 29.5645 | 16.5834 | 27.5395 | 27.5622 |
| 0.8851 | 21.0 | 7980 | 1.2544 | 29.4893 | 16.4478 | 27.3961 | 27.4465 |
| 0.8851 | 22.0 | 8360 | 1.2593 | 29.785 | 16.6023 | 27.6214 | 27.6394 |
| 0.8578 | 23.0 | 8740 | 1.2588 | 30.008 | 16.8796 | 27.882 | 27.8989 |
| 0.8578 | 24.0 | 9120 | 1.2672 | 30.0112 | 16.6782 | 27.8556 | 27.8934 |
| 0.8347 | 25.0 | 9500 | 1.2668 | 29.6945 | 16.431 | 27.4398 | 27.4956 |
| 0.8347 | 26.0 | 9880 | 1.2642 | 29.9327 | 16.6105 | 27.798 | 27.8497 |
| 0.8347 | 27.0 | 10260 | 1.2674 | 30.0747 | 16.7768 | 27.9137 | 27.9609 |
| 0.8156 | 28.0 | 10640 | 1.2712 | 29.9504 | 16.6466 | 27.8371 | 27.8742 |
| 0.8156 | 29.0 | 11020 | 1.2692 | 30.2209 | 16.9038 | 28.0454 | 28.0982 |
| 0.8055 | 30.0 | 11400 | 1.2699 | 30.1376 | 16.8424 | 27.9649 | 27.9946 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.12.1
|
a115d78b2feea3b6bbe8d5115f7009b2
|
hivemind/gpt-j-6B-8bit
|
hivemind
|
gptj
| 6 | 12,362 |
transformers
| 88 |
text-generation
| true | false | false |
apache-2.0
|
['en']
|
['The Pile']
| null | 1 | 0 | 1 | 0 | 11 | 10 | 1 |
['pytorch', 'causal-lm']
| false | true | true | 4,720 | false |
Note: this model was superseded by the [`load_in_8bit=True` feature in transformers](https://github.com/huggingface/transformers/pull/17901)
by Younes Belkada and Tim Dettmers. Please see [this usage example](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4#scrollTo=W8tQtyjp75O).
This legacy model was built for [transformers v4.15.0](https://github.com/huggingface/transformers/releases/tag/v4.15.0) and pytorch 1.11. Newer versions could work, but are not supported.
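For reference, the newer built-in path looks roughly like this (a sketch that assumes a recent `transformers` with `bitsandbytes` and `accelerate` installed; it loads the original EleutherAI checkpoint rather than this repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    device_map="auto",   # requires accelerate
    load_in_8bit=True,   # requires bitsandbytes
)
```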
### Quantized EleutherAI/gpt-j-6b with 8-bit weights
This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate **and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti)**.
Here's how to run it: [](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)
__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.
Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- using gradient checkpointing to store only one activation per layer: dramatically less memory at the cost of ~30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)
In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).
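The adapter idea can be sketched in plain PyTorch as follows; this is a conceptual illustration of a frozen base layer plus a trainable low-rank residual, not the actual 8-bit layers shipped in this repo:
```python
import torch
import torch.nn as nn

class FrozenLinearWithLoRA(nn.Module):
    """A frozen base linear layer plus a trainable low-rank (LoRA) adapter."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # base weights stay frozen (stored in 8-bit in the real model)
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # adapter starts as an exact no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

layer = FrozenLinearWithLoRA(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable adapter parameters: {trainable}")  # only the two small LoRA matrices
```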

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. Quantized model is even slightly better, but that is not statistically significant.
Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.
__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
### How should I fine-tune the model?
We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger the batch size you can fit, the more efficiently you will train.
### Where can I train for free?
You can train fine in colab, but if you get a K80, it's probably best to switch to other free GPU providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance.
### Can I use this technique with other models?
The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
|
15e059ff50d769648b9b298a1f681eeb
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_pretrain_mrpc
|
gokuls
|
mobilebert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,850 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_pretrain_mrpc
This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-complete](https://huggingface.co/gokuls/mobilebert_add_pre-training-complete) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.3162
- F1: 0.0
- Combined Score: 0.1581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:--------------:|
| 0.0 | 1.0 | 29 | nan | 0.3162 | 0.0 | 0.1581 |
| 0.0 | 2.0 | 58 | nan | 0.3162 | 0.0 | 0.1581 |
| 0.0 | 3.0 | 87 | nan | 0.3162 | 0.0 | 0.1581 |
| 0.0 | 4.0 | 116 | nan | 0.3162 | 0.0 | 0.1581 |
| 0.0 | 5.0 | 145 | nan | 0.3162 | 0.0 | 0.1581 |
| 0.0 | 6.0 | 174 | nan | 0.3162 | 0.0 | 0.1581 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bb6d679a38b7bcef99e81817890633a7
|
ericntay/stbl_clinical_bert_ft_rs1
|
ericntay
|
bert
| 12 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,879 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0789
- F1: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2742 | 1.0 | 101 | 0.0959 | 0.8413 |
| 0.0698 | 2.0 | 202 | 0.0635 | 0.8923 |
| 0.0335 | 3.0 | 303 | 0.0630 | 0.9013 |
| 0.0171 | 4.0 | 404 | 0.0635 | 0.9133 |
| 0.0096 | 5.0 | 505 | 0.0671 | 0.9171 |
| 0.0058 | 6.0 | 606 | 0.0701 | 0.9210 |
| 0.0037 | 7.0 | 707 | 0.0762 | 0.9231 |
| 0.0034 | 8.0 | 808 | 0.0771 | 0.9168 |
| 0.0021 | 9.0 | 909 | 0.0751 | 0.9268 |
| 0.0013 | 10.0 | 1010 | 0.0770 | 0.9277 |
| 0.0011 | 11.0 | 1111 | 0.0784 | 0.9259 |
| 0.0008 | 12.0 | 1212 | 0.0789 | 0.9267 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
6f116e1108f4f6d3de7930d5d0bd7f9c
|
Intel/MiniLM-L12-H384-uncased-mrpc-int8-qat
|
Intel
|
bert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['en']
|
['mrpc']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classfication', 'int8', 'Intel® Neural Compressor', 'QuantizationAwareTraining']
| false | true | true | 1,071 | false |
# INT8 MiniLM finetuned MRPC
### QuantizationAwareTraining
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/MiniLM-L12-H384-uncased-mrpc](https://huggingface.co/Intel/MiniLM-L12-H384-uncased-mrpc).
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9068|0.9097|
| **Model size (MB)** |33.1|127|
### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
'Intel/MiniLM-L12-H384-uncased-mrpc-int8-qat',
)
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- train_batch_size: 16
- eval_batch_size: 8
|
60aa952fbfa267e9c316db4dc7f2d51a
|
GinaYang/distilbert-base-uncased-finetuned-emotion
|
GinaYang
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2248
- Accuracy: 0.9235
- F1: 0.9234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8242 | 1.0 | 250 | 0.3230 | 0.9 | 0.8960 |
| 0.2497 | 2.0 | 500 | 0.2248 | 0.9235 | 0.9234 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
21ea04db8e60da4e4f39efde96a50779
|
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-0_england-10_s870
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 498 | false |
# exp_w2v2r_en_vp-100k_accent_us-0_england-10_s870
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
9a81d5017e8abb7db777177676220152
|
MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7
|
MoritzLaurer
|
deberta-v2
| 9 | 4,838 |
transformers
| 35 |
zero-shot-classification
| true | false | false |
mit
|
['multilingual', 'zh', 'ja', 'ar', 'ko', 'de', 'fr', 'es', 'pt', 'hi', 'id', 'it', 'tr', 'ru', 'bn', 'ur', 'mr', 'ta', 'vi', 'fa', 'pl', 'uk', 'nl', 'sv', 'he', 'sw', 'ps']
|
['MoritzLaurer/multilingual-NLI-26lang-2mil7', 'xnli', 'multi_nli', 'anli', 'fever', 'lingnli', 'alisawuffles/WANLI']
| null | 0 | 0 | 0 | 0 | 2 | 1 | 1 |
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
| true | true | true | 9,541 | false |
# Model card for mDeBERTa-v3-base-xnli-multilingual-nli-2mil7
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying mDeBERTa-v3-base model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100) with 100 languages. The model was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli) and on the [multilingual-NLI-26lang-2mil7 dataset](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7). Both datasets contain more than 2.7 million hypothesis-premise pairs in 27 languages spoken by more than 4 billion people.
As of December 2021, mDeBERTa-v3-base is the best performing multilingual base-sized transformer model introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the [multilingual-nli-26lang-2mil7 dataset](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) and the [XNLI](https://huggingface.co/datasets/xnli) validation dataset.
The multilingual-nli-26lang-2mil7 dataset contains 2 730 000 NLI hypothesis-premise pairs in 26 languages spoken by more than 4 billion people. The dataset contains 105 000 text pairs per language. It is based on the English datasets [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) and was created using the latest open-source machine translation models. The languages in the dataset are: ['ar', 'bn', 'de', 'es', 'fa', 'fr', 'he', 'hi', 'id', 'it', 'ja', 'ko', 'mr', 'nl', 'pl', 'ps', 'pt', 'ru', 'sv', 'sw', 'ta', 'tr', 'uk', 'ur', 'vi', 'zh'] (see [ISO language codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). For more details, see the [datasheet](XXX). In addition, a sample of 105 000 text pairs was also added for English following the same sampling method as the other languages, leading to 27 languages.
Moreover, for each language a random set of 10% of the hypothesis-premise pairs was added where an English hypothesis was paired with the premise in the other language (and the same for English premises and other language hypotheses). This mix of languages in the text pairs should enable users to formulate a hypothesis in English for a target text in another language.
The [XNLI](https://huggingface.co/datasets/xnli) validation set consists of 2490 professionally translated texts from English to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). Note that XNLI also contains a training set of 14 machine translated versions of the MultiNLI dataset for 14 languages, but this data was excluded due to quality issues with the machine translations from 2018.
Note that for evaluation purposes, three languages were excluded from the XNLI training data and only included in the test data: ["bg","el","th"]. This was done in order to test the performance of the model on languages it has not seen during NLI fine-tuning on 27 languages, but only during pre-training on 100 languages - see evaluation metrics below.
The total training dataset had a size of 3 287 280 hypothesis-premise pairs.
### Training procedure
The model was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
gradient_accumulation_steps=2, # doubles the effective batch size
warmup_ratio=0.06, # fraction of training steps used for learning-rate warmup
weight_decay=0.01, # strength of weight decay
fp16=False
)
```
### Eval results
The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75150 in total) and the English test sets of [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on the other 73 languages mDeBERTa was pre-trained on, but performance is most likely lower than for those languages seen during NLI fine-tuning. The performance on the languages ["bg","el","th"] in the table below is a good indicator of this cross-lingual transfer, as these languages were not included in the training data.
|XNLI subsets|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: |:---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.794|0.822|0.824|0.809|0.871|0.832|0.823|0.769|0.803|0.746|0.786|0.792|0.744|0.793|0.803|
|Speed (text/sec, A100-GPU)|1344.0|1355.0|1472.0|1149.0|1697.0|1446.0|1278.0|1115.0|1380.0|1463.0|1713.0|1594.0|1189.0|877.0|1887.0|
|English Datasets|mnli_test_m|mnli_test_mm|anli_test|anli_test_r3|fever_test|ling_test|wanli_test|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.857|0.856|0.537|0.497|0.761|0.788|0.732|0.794|
|Speed (text/sec, A100-GPU)|1000.0|1009.0|794.0|672.0|374.0|1177.0|1468.0|
Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing since none of the latest papers show a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
## Limitations and bias
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases. Moreover, note that the multilingual-nli-26lang-2mil7 dataset was created using machine translation, which reduces the quality of the data for a complex task like NLI. You can inspect the data via the Hugging Face [dataset viewer](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) for languages you are interested in. Note that grammatical errors introduced by machine translation are less of an issue for zero-shot classification, for which grammar is less important.
## Citation
If the dataset is useful for you, please cite the following article:
```
@article{laurer_less_2022,
title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
url = {https://osf.io/74b8k},
language = {en-us},
urldate = {2022-07-28},
journal = {Preprint},
author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
month = jun,
year = {2022},
note = {Publisher: Open Science Framework},
}
```
## Ideas for cooperation or questions?
For updates on new models and datasets, follow me on [Twitter](https://twitter.com/MoritzLaurer).
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
## Debugging and issues
Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 or higher might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
|
2be522add3e69c7b35e00159064b28b0
|
ksoky/whisper-large-km
|
ksoky
|
whisper
| 21 | 18 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['km']
|
['openslr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,533 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Khmer - Kak Soky
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the SLR42 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2375
- Wer: 29.5183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0102 | 12.34 | 1000 | 0.2228 | 38.2659 |
| 0.0003 | 24.69 | 2000 | 0.2260 | 30.7900 |
| 0.0001 | 37.04 | 3000 | 0.2310 | 30.0578 |
| 0.0 | 49.38 | 4000 | 0.2375 | 29.5183 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
4c2c492afc5fead30b4ffc601f4c3631
|
google/realm-orqa-wq-openqa
|
google
|
realm
| 7 | 10 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 460 | false |
# realm-orqa-wq-openqa
## Model description
The REALM checkpoint fine-tuned on the Web Questions (WQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmForOpenQA
openqa = RealmForOpenQA.from_pretrained("google/realm-orqa-wq-openqa")
```
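For end-to-end open-domain QA, the model is typically paired with its retriever and tokenizer. A hedged sketch (the class names follow the Transformers REALM API; verify argument names against your installed version):
```python
from transformers import RealmForOpenQA, RealmRetriever, RealmTokenizer

model_id = "google/realm-orqa-wq-openqa"
retriever = RealmRetriever.from_pretrained(model_id)  # evidence block records used for retrieval
tokenizer = RealmTokenizer.from_pretrained(model_id)
model = RealmForOpenQA.from_pretrained(model_id, retriever=retriever)

question_ids = tokenizer(["Who wrote the origin of species?"], return_tensors="pt")
```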
|
6c1b3f3f9c8d7ec132b0c302cbe14caf
|
gustavecortal/camembert-base-cae-ressentis
|
gustavecortal
|
camembert
| 8 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,052 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-cae-ressentis
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8112
- Precision: 0.8116
- Recall: 0.8034
- F1: 0.8060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 1.2699 | 1.0 | 59 | 1.1005 | 0.2718 | 0.5214 | 0.3573 |
| 1.0852 | 2.0 | 118 | 0.8127 | 0.6403 | 0.7179 | 0.6708 |
| 0.7006 | 3.0 | 177 | 0.6582 | 0.7407 | 0.7436 | 0.7310 |
| 0.4187 | 4.0 | 236 | 0.5833 | 0.8075 | 0.7863 | 0.7817 |
| 0.2017 | 5.0 | 295 | 0.5869 | 0.8537 | 0.8376 | 0.8400 |
| 0.1142 | 6.0 | 354 | 0.6433 | 0.8125 | 0.8034 | 0.8064 |
| 0.0735 | 7.0 | 413 | 0.7700 | 0.8027 | 0.7949 | 0.7959 |
| 0.0572 | 8.0 | 472 | 0.8023 | 0.7915 | 0.7863 | 0.7877 |
| 0.0445 | 9.0 | 531 | 0.8010 | 0.8116 | 0.8034 | 0.8060 |
| 0.033 | 10.0 | 590 | 0.8112 | 0.8116 | 0.8034 | 0.8060 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.0
- Tokenizers 0.13.1
|
30325e716210824dd5c251420fb50874
|
sd-concepts-library/abstract-concepts
|
sd-concepts-library
| null | 10 | 0 | null | 4 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,172 | false |
### abstract concepts on Stable Diffusion
This is the `<art-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
1400c4b33feefb49081582bbb913960d
|
jonatasgrosman/exp_w2v2t_et_vp-sv_s807
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['et']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'et']
| false | true | true | 469 | false |
# exp_w2v2t_et_vp-sv_s807
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
c6a0ed901a0559f360e3fdcfe0fedac9
|
vasilis/wav2vec2-large-xlsr-53-swedish
|
vasilis
|
wav2vec2
| 8 | 10 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sv-SE']
|
['common_voice', 'NST Swedish ASR Database']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 4,864 | false |
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) and parts of the [NST Swedish ASR Database](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-16/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"  # special characters removed from the transcripts
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 14.695793 %
## Training
As a first step, the Common Voice train dataset and parts of NST were used,
as can be found [here](https://github.com/se-asr/nst/tree/master).
Parts of NST were removed using this mask:
```python
mask = [(5 < len(x.split()) < 20) and np.average([len(entry) for entry in x.split()]) > 5 for x in dataset['transcript'].tolist()]
```
After training like this for 20,000 steps, the model was fine-tuned on all of the NST data using the mask
```python
mask = [(1 < len(x.split()) < 25) and np.average([len(entry) for entry in x.split()]) > 3 for x in dataset['transcript'].tolist()]
```
and on all of Common Voice for 100,000 more steps (approximately 16 epochs).
|
b6a3010ed17a4df46615287031e631cc
|
scite/roberta-base-squad2-nq-bioasq
|
scite
|
roberta
| 18 | 1,281 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-answering', 'generated_from_trainer']
| true | true | true | 1,356 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-nq-bioasq
## Model description
This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on the BioASQ 10b dataset.
## Intended uses & limitations
Cross-domain question answering!
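For example, with the standard question-answering pipeline (a usage sketch not included in the original card; the question/context pair is made up):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="scite/roberta-base-squad2-nq-bioasq")
result = qa(
    question="Which gene is mutated in cystic fibrosis?",
    context="Cystic fibrosis is caused by mutations in the CFTR gene, "
            "which encodes an ion channel expressed in epithelial cells.",
)
print(result["answer"], round(result["score"], 3))
```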
## Training and evaluation data
Training: BioASQ 10B with SQUAD sampled evenly to match the same samples as BioASQ 10B
Eval: BioASQ 9B Eval with SQUAD Eval sampled evenly to match the same samples as BioASQ 9B Eval
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
Went from untrained exact match: 60.9% (f1 71.8%) to exact match: 95.2% (96.6% f1) on BioASQ 9B held out training set.
Scores on SQUAD+BioASQ remained stable at exact match: 72.5% (f1 81.4%) to 88.5% (f1 93.3%).
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
4ce4e42c3ab07629665f0ae1aa670839
|
henryu-lin/t5-large-samsum-deepspeed
|
henryu-lin
|
t5
| 8 | 10 |
transformers
| 1 |
summarization
| true | false | false |
apache-2.0
|
['en']
|
['samsum']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['azureml', 't5', 'summarization', 'deepspeed']
| false | true | true | 3,979 | false |
## `t5-large-samsum-deepspeed`
This model was trained using Microsoft's `AzureML` and `DeepSpeed`'s ZeRO 2 optimization. It was fine-tuned on the `SAMSum` corpus from the `t5-large` checkpoint.
More information on the fine-tuning process (includes samples and benchmarks):
*(currently still WIP, major updates coming soon: 7/6/21~7/9/21)*
## Resource Usage
These results are retrieved from AzureML Studio's resource monitoring module. All experiments were run on AzureML's low-priority clusters.
| key | value |
| --- | ----- |
| AzureML SKU | ND40rs_v2 (8 X V100 32GB) |
| Region | US West 2 |
| Run Duration | 12m 47.13s |
| Compute Cost (LowPriority/Dedicated) | $0.94/$4.69 (USD) |
| Average CPU Utilization | 51.2% |
| Average GPU Utilization | 42.0% |
| GPU Memory Usage (Avg/Peak) | 24.85/28.79 (GB) |
| Total GPU Energy Usage | 670.38 (kJ) |
*Compute cost is calculated from run duration and SKU's price per hour. Updated SKU pricing could be found here: https://azure.microsoft.com/en-us/pricing/details/machine-learning/
*Peak memory usage is calculated from average peak across all utilized GPUs.
### Carbon Emissions
These results are obtained using `codecarbon`. The carbon emission is estimated from training runtime only (excluding setup and evaluation runtime).
CodeCarbon: https://github.com/mlco2/codecarbon
| key | value |
| --- | ----- |
| timestamp | 2021-07-08T06:29:27 |
| duration | 515.5018835067749 |
| emissions | 0.043562840982919106 |
| energy_consumed | 0.14638051405550773 |
| country_name | USA |
| region | Washington |
| cloud_provider | azure |
| cloud_region | westus2 |
## Hyperparameters
```yaml
fp16: True
per device batch size: 8
effective batch size: 64
epoch: 3.0
learning rate: 1e-4
weight decay: 0.1
seed: 1
```
*Same `per device batch size` for evaluations
### DeepSpeed
Optimizer = `AdamW`, Scheduler = `WarmupDecayLR`, Offload = `none`
```json
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 1300000000,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 1300000000,
"contiguous_gradients": true
}
```
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="henryu-lin/t5-large-samsum-deepspeed")
conversation = '''Kevin: Hey man, are you excited to watch Finding Nemo tonight?
Henry: Yea, I can't wait to watch that same movie for the 89th time. Is Nate coming over to watch it with us tonight?
Kevin: Yep, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet? It's starting to make the kitchen really smell.
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class. I didn't get to start on it until an hour ago, and it's due in 30 minutes.
Kevin: Okay dude, you should take it out as soon as possible. By the way, Nate is bringing his girlfriend and their cat too.
Henry: Nice, I'm really looking forward to seeing them again.
'''
summarizer(conversation)
```
## Results
| ROUGE | Score |
| ----- | ----- |
| eval_rouge1 | 53.0823 |
| eval_rouge2 | 28.7097 |
| eval_rougeL | 43.939 |
| eval_rougeLsum | 49.067 |
| predict_rouge1 | 51.6716 |
| predict_rouge2 | 26.5372 |
| predict_rougeL | 42.9681 |
| predict_rougeLsum | 47.4084 |
| Metric | Value |
| ------ | ----- |
| eval_gen_len | 26.4071 |
| predict_gen_len | 25.9451 |
| train_loss | 1.3212629926497115 |
| eval_loss | 1.23828125 |
| predict_loss | 1.2333984375 |
| train_runtime | 515.2198 |
| train_samples | 14732 |
| train_samples_per_second | 85.781 |
| train_steps_per_second | 1.345 |
| eval_runtime | 61.275 |
| eval_samples | 818 |
| eval_samples_per_second | 13.35 |
| eval_steps_per_second | 0.212 |
| predict_runtime | 63.3732 |
| predict_samples | 819 |
| predict_samples_per_second | 12.923 |
| predict_steps_per_second | 0.205 |
| total_steps | 693 |
| total_flos | 7.20140924616704e+16 |
|
dfad8611045f88caf521cb8c04a51db4
|
Amloii/gpt2-reviewspanish
|
Amloii
|
gpt2
| 9 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
mit
|
['es']
|
['amazon_reviews_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['GPT-2', 'Spanish', 'review', 'fake']
| false | true | true | 2,108 | false |
# GPT-2 - reviewspanish
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of text data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
In our case, we created a fine-tuned model of [Spanish GPT-2](https://huggingface.co/DeepESP/gpt2-spanish) combined with
the Spanish reviews of Amazon from the HF dataset [Amazon-reviews-multi](https://huggingface.co/datasets/amazon_reviews_multi).
With this strategy, we obtain a text-generation model able to create realistic product reviews, which is useful for detecting
bot-generated fake reviews.
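A rough sketch of how the Spanish split of that dataset can be pulled for fine-tuning (field names follow the `amazon_reviews_multi` schema; this is not the authors' exact script, and the dataset's upstream availability may have changed):
```python
from datasets import load_dataset

# Spanish subset of the multilingual Amazon reviews corpus.
reviews = load_dataset("amazon_reviews_multi", "es", split="train")
texts = [row["review_body"] for row in reviews]  # review bodies used as language-model training text
print(len(texts), texts[0][:80])
```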
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation',
model='Amloii/gpt2-reviewspanish',
tokenizer='Amloii/gpt2-reviewspanish')
set_seed(42)
generator("Me ha gustado su", max_length=30, num_return_sequences=5)
[{'generated_text': 'Me ha gustado su tamaño y la flexibilidad de las correas, al ser de plastico las hebillas que lleva para sujetar las cadenas me han quitado el'},
{'generated_text': 'Me ha gustado su color y calidad. Lo peor de todo, es que las gafas no se pegan nada. La parte de fuera es finita'},
{'generated_text': 'Me ha gustado su rapidez y los ajustes de la correa, lo único que para mí, es poco manejable. Además en el bolso tiene una goma'},
{'generated_text': 'Me ha gustado su diseño y las dimensiones, pero el material es demasiado duro. Se nota bastante el uso pero me parece un poco caro para lo que'},
{'generated_text': 'Me ha gustado su aspecto aunque para lo que yo lo quería no me ha impresionado mucho. Las hojas tienen un tacto muy agradable que hace que puedas'}]
```
|
a2687a23710bf388b92bd8fb0a45b2d3
|
ALM/whisper-el-medium-augmented
|
ALM
|
whisper
| 20 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['el']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 2,271 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Greek - Robust
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 el dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2807
- Wer: 17.7099
**IMPORTANT** The model has been trained using *data augmentation* to improve its generalization capabilities and robustness.
The results on the eval set during training are biased towards data augmentation applied to evaluation data.
**Results on eval set**
- Mozilla CV 11.0 - Greek: 13.250 WER (using official script)
- Google Fluers - Greek: 39.59 WER (using official script)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0407 | 4.69 | 2000 | 0.2484 | 20.8767 |
| 0.0128 | 9.39 | 4000 | 0.2795 | 21.2017 |
| 0.0041 | 14.08 | 6000 | 0.2744 | 19.1308 |
| 0.0017 | 18.78 | 8000 | 0.2759 | 17.9978 |
| 0.0005 | 23.47 | 10000 | 0.2751 | 18.5457 |
| 0.0015 | 28.17 | 12000 | 0.2928 | 19.2051 |
| 0.0004 | 32.86 | 14000 | 0.2819 | 18.2857 |
| 0.0002 | 37.56 | 16000 | 0.2831 | 17.7285 |
| 0.0007 | 42.25 | 18000 | 0.2776 | 17.8399 |
| 0.0 | 46.95 | 20000 | 0.2792 | 17.0970 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.7.1
- Tokenizers 0.12.1
|
0951cf72f6d1ebad418aaf3badbd845d
|
aiautomationlab/wtwm-gpt2-based-mentions-detector
|
aiautomationlab
|
gpt2
| 7 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['de']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classication']
| false | true | true | 6,460 | false |
# WTWM Newsroom Mentions Detector
Please note that this model originates from the ["What's there, what's missing"](https://interaktiv.br.de/ai-detect-newsroom-mentions-in-comments/) collaboration of the [AI & Automation Lab of Bayerischer Rundfunk (BR hereafter)](https://www.br.de/extra/ai-automation-lab/index.html) and [Mitteldeutscher Rundfunk (mdr hereafter)](https://www.mdr.de/) as well as [ida](https://idalab.de/). The collaboration took place during the [JournalismAI fellowship '22](https://www.lse.ac.uk/media-and-communications/polis/JournalismAI/Fellowship-Programme) (see chapter **The fellowship** below). The model presented is part of the documentation of the half year of project time. The related technical framework can be found at [github](https://github.com/br-data/wtwm-topic-modelling).
## The task
This is a model for the task of classifying whether or not an article's comment addresses the moderation team/authors of the media house that published the article. In this prototype stage the media houses are Bayerischer Rundfunk and Mitteldeutscher Rundfunk.
This classification task is implemented as a binary classification into:
- label 0: the comment holds no mention
- label 1: the comment addresses the moderation team/authors of the media house
We decided to use [german-gpt2](https://huggingface.co/dbmdz/german-gpt2) by MDZ of Bayerische Staatsbibliothek as the foundation model.
**This model is still work in progress and might be updated in the future.**
## Dataset & preprocessing
This model was finetuned on a corpus of 18.860 user comments with a share of user comments from BR and mdr websites and social media channels. The ratio of comments without mentions and with mentions is 92% to 8%. With the initial annotated data the share of comments with mentions was 2% of the data. To run the first round of training during the time of the [JournalismAI fellowship '22](https://www.lse.ac.uk/media-and-communications/polis/JournalismAI/Fellowship-Programme), we decided to augment the corpus by 1421 generated comments with mentions. The generated comments were annotated the same way as the initial data.
Please note, that the generated comments are merely meant to kick off the training of the prototype model. Retraining of the model in later iterations of our system will ignore the generated comments and solely depend on authentic comments.
The preprocessing of the data included:
- remove linebreaks
- remove html tags
- remove emojis
- remove formatting fragments (e.g. "---------", "......")
- remove gaps (~ two or more adjacent spaces)
- strip comments for whitespaces at the begin and end of the corpus
We advise performing the same preprocessing steps when working with the model; a rough sketch is given below.
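The following function is our own illustration of the listed steps, not the project's original preprocessing pipeline; in particular, the emoji pattern is a simplification:
```python
import re

def preprocess(comment: str) -> str:
    comment = comment.replace("\n", " ")                       # remove linebreaks
    comment = re.sub(r"<[^>]+>", " ", comment)                 # remove html tags
    comment = re.sub(r"[\U0001F300-\U0001FAFF]", "", comment)  # remove (most) emojis
    comment = re.sub(r"[-.]{2,}", " ", comment)                # remove formatting fragments like "-----" or "....."
    comment = re.sub(r"\s{2,}", " ", comment)                  # remove gaps (two or more adjacent spaces)
    return comment.strip()                                     # strip whitespace at begin and end of the comment
```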
## Training
After multiple test runs of finetuning the present model was further trained using the following parameters:
- foundation_model: [german-gpt2](https://huggingface.co/dbmdz/german-gpt2)
- num_train_epochs: 4
- learning_rate: 2e-7
- weight_decay: 0.1
- metric_for_best_model: precision
### Example: Direct model evaluation
```python
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer,
pipeline,
)
model_path = "aiautomationlab/wtwm-gpt2-based-mentions-detector"  # this repository
comment = "The preprocessed comment to classify"
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForSequenceClassification.from_pretrained(model_path)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = pipe(comment)
label = result[0]["label"]
if label == "LABEL_1":
has_mention = True
elif label == "LABEL_0":
has_mention = False
print(f"Comment includes mention {has_mention}")
```
## Limitations
Clearly, the amount of training data was too small for a state-of-the-art result, as can be seen in the evaluation chapter. Future rounds of retraining have to be performed. For the sake of completeness we publish this model here as part of [the project's documentation](https://interaktiv.br.de/ai-detect-newsroom-mentions-in-comments/).
An analysis of possible biases reproduced by the present model, regardless of whether they originate from our finetuning or the underlying gpt2 model, is beyond the scope of this work. We assume that biases exist within the model and an analysis will be a task for future work.
## Evaluation
The model was evaluated on a held-out test set consisting of 10% of the corpus.
### Quantitative
As a general training approach we decided to optimize for the precision of the detection of mentions in comments. This strategy best fits the high-speed moderation challenge the moderation teams face in everyday work. Our goal is to focus their attention only on comments that are very likely to contain a mention and not to confuse the moderation team with comments that don't contain mentions.
In addition, we decided not to include the accuracy score in our evaluation because its high values are misleading. This effect is due to the strong imbalance in the distribution between comments with and without mentions: a classifier that labels every comment as containing no mention would still achieve an accuracy of 0.92.
| mentions total | mentions predicted | precision | recall | f1 |
|-|-|-|-|-|
| 148 | 130 | 0.74 | 0.65 | 0.69 |
### Qualitative
A qualitative evaluation conducted by members of BR and mdr in the daily context of the live comment moderation system resulted in an 88% human agreement on the published comments.
## Conclusion
The qualitative evaluation of [this project](https://interaktiv.br.de/ai-detect-newsroom-mentions-in-comments/) makes us confident that the mediocre quantitative results can be overcome with a sufficiently large corpus and that the overall prototype can be a useful addition to comment moderation tools.
## The fellowship
[JournalismAI](https://www.lse.ac.uk/media-and-communications/polis/JournalismAI) is a project of [Polis](https://www.lse.ac.uk/media-and-communications/polis) – the journalism think-tank at the London School of Economics and Political Science – and it is sponsored by the [Google News Initiative](https://newsinitiative.withgoogle.com/). If you want to know more about the Fellowship and the other JournalismAI activities, [sign up for the newsletter](https://mailchi.mp/lse.ac.uk/journalismai) or get in touch with the team via hello@journalismai.info.
|
36f5f92c1b90b54c7ea9f4fb4b55b35f
|
caffsean/distilbert-base-uncased-finetuned-for-tweet-sentiment
|
caffsean
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,355 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-for-tweet-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.925
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3561 | 1.0 | 250 | 0.3072 | 0.9115 | 0.9098 |
| 0.2195 | 2.0 | 500 | 0.2161 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
21761f6b640a0fa35a507532bfaab63e
|
google/tapas-tiny-finetuned-sqa
|
google
|
tapas
| 8 | 16 |
transformers
| 0 |
table-question-answering
| true | true | false |
apache-2.0
|
['en']
|
['msr_sqa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tapas']
| false | true | true | 7,613 | false |
# TAPAS tiny model fine-tuned on Sequential Question Answering (SQA)
This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_tiny_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_sqa_inter_masklm_tiny` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results on SQA - Dev Accuracy
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset)
LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main)
BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset)
BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main)
MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset)
MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main)
SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset)
SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main)
MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset)
MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)
**TINY** | **noreset** | **0.2004** | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset)
**TINY** | **reset** | **0.2375** | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model and then jointly
training this randomly initialized classification head with the base model on SQA.
## Intended uses & limitations
You can use this model for answering questions related to a table in a conversational set-up.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
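As a rough sketch (the table below is our own toy example, not from the original card), the model can be queried through the `table-question-answering` pipeline; note that all table cells must be passed as strings:
```python
import pandas as pd
from transformers import pipeline

# a small example table; every cell value is a string
table = pd.DataFrame({
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
})

tqa = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-sqa")

# SQA is conversational, so follow-up questions can refer to earlier ones
print(tqa(table=table, query="How many movies has George Clooney played in?")["answer"])
```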
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio
of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@InProceedings{iyyer2017search-based,
author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
title = {Search-based Neural Structured Learning for Sequential Question Answering},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
year = {2017},
month = {July},
abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
publisher = {Association for Computational Linguistics},
url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
```
|
d7a17fd0b25ea5a179b21761a5278331
|
jonatasgrosman/exp_w2v2t_uk_unispeech-sat_s335
|
jonatasgrosman
|
unispeech-sat
| 10 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['uk']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'uk']
| false | true | true | 463 | false |
# exp_w2v2t_uk_unispeech-sat_s335
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
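A minimal transcription sketch with HuggingSound (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_uk_unispeech-sat_s335")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

# returns a list of transcription dictionaries, one per input file
transcriptions = model.transcribe(audio_paths)
```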
|
0cdfb7dcf7d915e0f30e9cd76dfcda37
|
andreypurwanto/opus-mt-en-ro-finetuned-en-to-ro
|
andreypurwanto
|
marian
| 13 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['wmt16']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,313 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1505
- Gen Len: 34.1036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1505 | 34.1036 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
6aaaf178f00cbaefce95d014ed9de462
|
Helsinki-NLP/opus-mt-srn-es
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-srn-es
* source languages: srn
* target languages: es
* OPUS readme: [srn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.es | 30.4 | 0.481 |
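A minimal usage sketch with the `transformers` Marian classes (the Sranan Tongo example sentence is a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-srn-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Sranan Tongo -> Spanish
batch = tokenizer(["Mi lobi yu."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```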
|
31138deb4906ddf95921dbac93f9c8ea
|
SashkaHavr/NLP4Web_Home_Exercise6_Group13
|
SashkaHavr
|
bert
| 19 | 19 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 980 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP4Web_Home_Exercise6_Group13
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
914b70251e86e7625fb825aff4b6d6aa
|
MultiversexPeeps/duskfalls-artificial-photography
|
MultiversexPeeps
| null | 70 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 7,590 | false |
### Duskfalls Artificial Photography Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Information on this model will be here: https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
Training data example token:
`rtrophto1` (use that in your prompt)

|
1ad21a6d7f71220d2e7414869fb0f26b
|
dnautiyal/bert_model_reddit_tsla_tracked
|
dnautiyal
|
distilbert
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 920 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_model_reddit_tsla_tracked
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
a80cb751a524634c43769552c72ae5fd
|
vasilis/wav2vec2-large-xlsr-53-estonian
|
vasilis
|
wav2vec2
| 8 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['et']
|
['common_voice', 'NST Estonian ASR Database']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 4,429 | false |
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "et", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"  # special characters removed from the data
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluation.
# Run inference batch by batch and collect the predictions as strings
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 30.658320 %
## Training
Common voice `train` and `validation` sets were used for finetuning
for 20000 steps (approx. 116 epochs). Both the `feature extractor` (`Wav2Vec2FeatureExtractor`) and
`feature projection` (`Wav2Vec2FeatureProjection`) layers were frozen. Only the `encoder` layer (`Wav2Vec2EncoderStableLayerNorm`) was finetuned.
|
37158398fdf0ee13fc6335999ca3c746
|
aajrami/bert-rand-base
|
aajrami
|
roberta
| 9 | 0 |
transformers
| 0 |
feature-extraction
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert']
| false | true | true | 804 | false |
## bert-rand-base
A BERT base Language Model with a **random** pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://aclanthology.org/2022.acl-short.16/)
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{alajrami2022does,
title={How does the pre-training objective affect what large language models learn about linguistic properties?},
author={Alajrami, Ahmed and Aletras, Nikolaos},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
pages={131--147},
year={2022}
}
```
|
cb6f8f773b87ba7094fd9f18dcc907a5
|
jakeyoo/whisper-medium-ja
|
jakeyoo
|
whisper
| 22 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,567 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Japanese
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 ja dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Wer: 62.6897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2264 | 0.2 | 1000 | 0.3102 | 79.3588 |
| 0.3195 | 0.4 | 2000 | 0.2830 | 78.1955 |
| 0.3905 | 0.6 | 3000 | 0.2508 | 72.9181 |
| 0.2478 | 0.8 | 4000 | 0.2407 | 68.8466 |
| 0.0922 | 1.1 | 5000 | 0.2165 | 62.6897 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
0cf873018c94fea0f8595c7974a8d8bd
|
Raffay/org_speech_processing_project_wav2vec2
|
Raffay
|
wav2vec2
| 27 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 981 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# org_speech_processing_project_wav2vec2
This model is a fine-tuned version of [kingabzpro/wav2vec2-urdu](https://huggingface.co/kingabzpro/wav2vec2-urdu) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
aea085aaf14ef41b65c7e777469bb2f0
|
javilonso/Mex_Rbta_Opinion_Polarity
|
javilonso
|
roberta
| 9 | 4 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,423 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/Mex_Rbta_Opinion_Polarity
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4033
- Validation Loss: 0.5572
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5989 | 0.5516 | 0 |
| 0.4033 | 0.5572 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
7a202ccd21418d3b16cd5b9d9740e4a4
|
muhtasham/tiny-mlm-glue-wnli-target-glue-qqp
|
muhtasham
|
bert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,931 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-wnli-target-glue-qqp
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4204
- Accuracy: 0.7892
- F1: 0.7460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5839 | 0.04 | 500 | 0.5193 | 0.7299 | 0.6543 |
| 0.5179 | 0.09 | 1000 | 0.4861 | 0.7508 | 0.6874 |
| 0.5047 | 0.13 | 1500 | 0.4916 | 0.7406 | 0.7097 |
| 0.4871 | 0.18 | 2000 | 0.4647 | 0.7584 | 0.7182 |
| 0.4789 | 0.22 | 2500 | 0.4564 | 0.7637 | 0.7240 |
| 0.4622 | 0.26 | 3000 | 0.4496 | 0.7668 | 0.7296 |
| 0.4617 | 0.31 | 3500 | 0.4468 | 0.7678 | 0.7343 |
| 0.454 | 0.35 | 4000 | 0.4415 | 0.7718 | 0.7376 |
| 0.4553 | 0.4 | 4500 | 0.4371 | 0.7755 | 0.7415 |
| 0.4438 | 0.44 | 5000 | 0.4204 | 0.7892 | 0.7460 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
ce07b9de5e0dba5352e26e43d8152e3d
|
ConvLab/t5-small-dst-multiwoz21_sgd_tm1_tm2_tm3
|
ConvLab
|
t5
| 7 | 9 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en']
|
['ConvLab/multiwoz21', 'ConvLab/sgd', 'ConvLab/tm1', 'ConvLab/tm2', 'ConvLab/tm3']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['t5-small', 'text2text-generation', 'dialog state tracking', 'conversational system', 'task-oriented dialog']
| true | true | true | 976 | false |
# t5-small-dst-multiwoz21_sgd_tm1_tm2_tm3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21), [Schema-Guided Dialog](https://huggingface.co/datasets/ConvLab/sgd), [Taskmaster-1](https://huggingface.co/datasets/ConvLab/tm1), [Taskmaster-2](https://huggingface.co/datasets/ConvLab/tm2), and [Taskmaster-3](https://huggingface.co/datasets/ConvLab/tm3).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
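As a loading sketch only (the exact serialization of the dialogue context is defined by ConvLab-3; the context string below is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "ConvLab/t5-small-dst-multiwoz21_sgd_tm1_tm2_tm3"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# placeholder context; follow ConvLab-3 for the expected input format
context = "user: I need a cheap hotel in the north of Cambridge."
outputs = model.generate(**tokenizer(context, return_tensors="pt"), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```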
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
8f36ceda72cbc064e5de17979b993f58
|
kumarprashant556/checkpoints
|
kumarprashant556
|
marian
| 19 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 929 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.8.1+cpu
- Datasets 2.8.0
- Tokenizers 0.13.2
|
e2dc9c99ed6ad188a71e5dd126d4c723
|
sanskar/DepressionAnalysis
|
sanskar
|
distilbert
| 10 | 1 |
transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,527 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DepressionAnalysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.8367
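A minimal inference sketch (the input sentence is a placeholder; the returned label names depend on the training config and can be inspected via `model.config.id2label`):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="sanskar/DepressionAnalysis")
print(clf("I haven't felt like myself for weeks and nothing seems to help."))
```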
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6091 | 1.0 | 151 | 0.5593 | 0.7082 |
| 0.4041 | 2.0 | 302 | 0.4295 | 0.8055 |
| 0.3057 | 3.0 | 453 | 0.4023 | 0.8367 |
| 0.1921 | 4.0 | 604 | 0.4049 | 0.8454 |
| 0.1057 | 5.0 | 755 | 0.4753 | 0.8479 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
50b1a81c0ab86bee6b7ba81a8056b888
|
arun100/whisper-small-vi
|
arun100
|
whisper
| 22 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['vi']
|
['mozilla-foundation/common_voice_11_0']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,568 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Vietnamese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 vi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8001
- Wer: 27.7034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0002 | 124.0 | 1000 | 0.8001 | 27.7034 |
| 0.0001 | 249.0 | 2000 | 0.8835 | 33.8561 |
| 0.0 | 374.0 | 3000 | 0.9383 | 36.0386 |
| 0.0 | 499.0 | 4000 | 0.9755 | 36.2689 |
| 0.0 | 624.0 | 5000 | 0.9923 | 38.3746 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
0e7c37458dbefebb6b531bcd0cc0842c
|
Culmenus/XLMR-ENIS-finetuned-ner
|
Culmenus
|
xlm-roberta
| 12 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
agpl-3.0
| null |
['mim_gold_ner']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,534 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0891
- Precision: 0.8804
- Recall: 0.8517
- F1: 0.8658
- Accuracy: 0.9837
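A minimal inference sketch (the Icelandic example sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Culmenus/XLMR-ENIS-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Jón Sigurðsson fæddist á Hrafnseyri árið 1811."))
```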
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0573 | 1.0 | 2904 | 0.1024 | 0.8608 | 0.8003 | 0.8295 | 0.9799 |
| 0.0307 | 2.0 | 5808 | 0.0899 | 0.8707 | 0.8380 | 0.8540 | 0.9825 |
| 0.0198 | 3.0 | 8712 | 0.0891 | 0.8804 | 0.8517 | 0.8658 | 0.9837 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
fbe5bd929eb9a9224779d917f5f6d640
|
google/t5-efficient-tiny-nh32
|
google
|
t5
| 12 | 10 |
transformers
| 1 |
text2text-generation
| true | true | true |
apache-2.0
|
['en']
|
['c4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['deep-narrow']
| false | true | true | 6,248 | false |
# T5-Efficient-TINY-NH32 (Deep-Narrow version)
T5-Efficient-TINY-NH32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-tiny-nh32** - is of model type **Tiny** with the following variations:
- **nh** is **32**
It has **37.6** million parameters and thus requires *ca.* **150.41 MB** of memory in full precision (*fp32*)
or **75.2 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers corresponds to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
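A minimal loading sketch (assuming the checkpoint ships the standard T5 tokenizer files):
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-tiny-nh32")
tokenizer = T5TokenizerFast.from_pretrained("google/t5-efficient-tiny-nh32")

print(f"{model.num_parameters() / 1e6:.1f}M parameters")  # should roughly match the 37.6M quoted above
```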
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
1a4824de5084bc8f427eed09dcf8b85d
|
egumasa/roberta-base-academic
|
egumasa
|
roberta
| 37 | 21 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-sa-4.0
| null |
['orieg/elsevier-oa-cc-by']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,978 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-academic
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on a combination of Elsevier OA CC-by dataset and other corpora of university essays such as [BAWE](https://www.coventry.ac.uk/research/research-directories/current-projects/2015/british-academic-written-english-corpus-bawe/) and [MICUSP](https://elicorpora.info/main).
It achieves the following results on the evaluation set:
- Loss: 1.4229
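A minimal fill-mask sketch (the example sentence is a placeholder; RoBERTa uses `<mask>` as its mask token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="egumasa/roberta-base-academic")
for pred in fill("The results <mask> that the proposed method outperforms the baseline."):
    print(pred["token_str"], round(pred["score"], 3))
```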
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.671 | 1.0 | 338 | 1.5581 |
| 1.6395 | 1.99 | 676 | 1.5276 |
| 1.5991 | 2.99 | 1014 | 1.5108 |
| 1.5659 | 3.99 | 1352 | 1.4903 |
| 1.5393 | 4.99 | 1690 | 1.4668 |
| 1.5178 | 5.98 | 2028 | 1.4621 |
| 1.4962 | 6.98 | 2366 | 1.4388 |
| 1.4783 | 7.98 | 2704 | 1.4320 |
| 1.4652 | 8.97 | 3042 | 1.4216 |
| 1.4542 | 9.97 | 3380 | 1.4180 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
22f5bf17484b490f13c4de8bbe4fd94e
|
anas-awadalla/roberta-base-compacter-squad
|
anas-awadalla
| null | 22 | 0 | null | 0 | null | false | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,027 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-compacter-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cfbca3e493c6330e11312c6a0f0e8384
|
sschet/ner-disease-ncbi-bionlp-bc5cdr-pubmed
|
sschet
|
roberta
| 9 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['en']
|
['ncbi-disease', 'bc5cdr', 'tner/bc5cdr', 'commanderstrife/jnlpba', 'bc2gm_corpus', 'drAbreu/bc4chemd_ner', 'linnaeus', 'chintagunta85/ncbi_disease']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['ner', 'ncbi', 'disease', 'pubmed', 'bioinfomatics']
| false | true | true | 2,353 | false |
# NER to find diseases
> The model was trained on the ncbi-disease and BC5CDR datasets, pretrained on this [pubmed-pretrained roberta model](/raynardj/roberta-pubmed)
All the labels, the possible token classes.
```json
{"label2id": {
"O": 0,
"Disease":1,
}
}
```
Notice that we removed the 'B-', 'I-' etc. prefixes from the data labels. 🗡
## This is the template we suggest for using the model
```python
from transformers import pipeline
PRETRAINED = "raynardj/ner-disease-ncbi-bionlp-bc5cdr-pubmed"
ner = pipeline(task="ner",model=PRETRAINED, tokenizer=PRETRAINED)
ner("Your text", aggregation_strategy="first")
```
And here is a helper to make your output more consecutive ⭐️
```python
import pandas as pd
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
def clean_output(outputs):
results = []
current = []
last_idx = 0
# make to sub group by position
for output in outputs:
if output["index"]-1==last_idx:
current.append(output)
else:
results.append(current)
current = [output, ]
last_idx = output["index"]
if len(current)>0:
results.append(current)
# from tokens to string
strings = []
for c in results:
tokens = []
starts = []
ends = []
for o in c:
tokens.append(o['word'])
starts.append(o['start'])
ends.append(o['end'])
new_str = tokenizer.convert_tokens_to_string(tokens)
if new_str!='':
strings.append(dict(
word=new_str,
start = min(starts),
end = max(ends),
entity = c[0]['entity']
))
return strings
def entity_table(pipeline, **pipeline_kw):
if "aggregation_strategy" not in pipeline_kw:
pipeline_kw["aggregation_strategy"] = "first"
def create_table(text):
return pd.DataFrame(
clean_output(
pipeline(text, **pipeline_kw)
)
)
return create_table
# will return a dataframe
entity_table(ner)("Your very contentful text here")
```
> check our NER models on
* [gene and gene products](/raynardj/ner-gene-dna-rna-jnlpba-pubmed)
* [chemical substance](/raynardj/ner-chemical-bionlp-bc5cdr-pubmed).
* [disease](/raynardj/ner-disease-ncbi-bionlp-bc5cdr-pubmed)
|
0ffdb0a37d322faacf6ce1d511e456da
|
henryscheible/eval_v2_wnli
|
henryscheible
|
bert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 888 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_v2_wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
c52d0409bfe9967aae285bb473f7c0e1
|
sd-concepts-library/neon-pastel
|
sd-concepts-library
| null | 14 | 0 | null | 5 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,097 | false |
### Neon Pastel on Stable Diffusion
This is the `<neon-pastel>` style taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
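Alternatively, a minimal `diffusers` sketch (assuming a diffusers version with `load_textual_inversion` support and Stable Diffusion v1-5 as the base model; both are our assumptions, not part of the original concept release):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# load the learned <neon-pastel> embedding from this concept repository
pipe.load_textual_inversion("sd-concepts-library/neon-pastel")

image = pipe("the taj mahal in <neon-pastel> style").images[0]
image.save("neon_pastel_taj_mahal.png")
```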
Here are some of the outputs from this model:
Prompt: the taj mahal in `<neon-pastel>` style

Prompt: portrait of barack obama in `<neon-pastel>` style

Prompt: a beautiful beach landscape in `<neon-pastel>` style

|
4ebf2f9653f50c2ef6559ce584d73c08
|
jordyvl/bert-base-portuguese-cased_harem-selective-sm-first-ner
|
jordyvl
|
bert
| 13 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['harem']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,618 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased_harem-sm-first-ner
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the harem dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1952
- Precision: 0.7456
- Recall: 0.8053
- F1: 0.7743
- Accuracy: 0.9649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1049 | 1.0 | 2517 | 0.1955 | 0.6601 | 0.7710 | 0.7113 | 0.9499 |
| 0.0622 | 2.0 | 5034 | 0.2097 | 0.7314 | 0.7901 | 0.7596 | 0.9554 |
| 0.0318 | 3.0 | 7551 | 0.1952 | 0.7456 | 0.8053 | 0.7743 | 0.9649 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
a480a7d21b329b7c76ca55a6c3cd66db
|