modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-29 18:27:06) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 526 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-29 18:26:56) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
pmch/fgflex
|
pmch
| 2022-07-07T14:15:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-07T10:53:17Z |
# FgFlex: A flexible multitasking sequence-labeler for fine-grained sentiment analysis
|
cherrypaca/puppies_classify
|
cherrypaca
| 2022-07-07T13:25:43Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-07T13:25:31Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: puppies_classify
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9701492786407471
---
# puppies_classify
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### husky

#### pomeranian

|
dminiotas05/distilbert-base-uncased-finetuned-ft500_6class600
|
dminiotas05
| 2022-07-07T13:23:59Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-07T12:40:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft500_6class600
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft500_6class600
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6317
- Accuracy: 0.35
- F1: 0.3327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.5717 | 1.0 | 188 | 1.5375 | 0.3067 | 0.2820 |
| 1.4338 | 2.0 | 376 | 1.5354 | 0.3207 | 0.2824 |
| 1.3516 | 3.0 | 564 | 1.4852 | 0.3573 | 0.3287 |
| 1.2722 | 4.0 | 752 | 1.4997 | 0.366 | 0.3534 |
| 1.1923 | 5.0 | 940 | 1.5474 | 0.362 | 0.3454 |
| 1.1156 | 6.0 | 1128 | 1.5998 | 0.3547 | 0.3387 |
| 1.0522 | 7.0 | 1316 | 1.6154 | 0.3473 | 0.3316 |
| 1.0148 | 8.0 | 1504 | 1.6317 | 0.35 | 0.3327 |
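The Accuracy and F1 columns above can be computed directly from label predictions; a minimal sketch (the averaging scheme for F1 is an assumption, since the card does not state it):

```python
def accuracy(y_true, y_pred):
    # Fraction of exactly matching labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels):
    # Unweighted mean of per-class F1 scores
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```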
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kabelomalapane/En-Nso
|
kabelomalapane
| 2022-07-07T13:11:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-07-07T11:32:38Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Nso
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Nso
This model is a fine-tuned version of [kabelomalapane/en_nso_ukuxhumana_model](https://huggingface.co/kabelomalapane/en_nso_ukuxhumana_model) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9067
- Bleu: 23.5436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 14 | 3.7614 | 8.0360 |
| No log | 2.0 | 28 | 3.3181 | 20.7201 |
| No log | 3.0 | 42 | 3.1627 | 21.5932 |
| No log | 4.0 | 56 | 3.0935 | 22.0268 |
| No log | 5.0 | 70 | 3.0227 | 21.0859 |
| No log | 6.0 | 84 | 2.9740 | 21.6963 |
| No log | 7.0 | 98 | 2.9419 | 23.2214 |
| No log | 8.0 | 112 | 2.9227 | 24.4649 |
| No log | 9.0 | 126 | 2.9102 | 23.5293 |
| No log | 10.0 | 140 | 2.9067 | 23.5516 |
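The Bleu column is a corpus-level n-gram precision score with a brevity penalty; a heavily simplified, unigram-only, single-sentence sketch (real BLEU combines clipped 1- to 4-gram precisions over the whole corpus):

```python
import math
from collections import Counter

def unigram_bleu(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Clipped unigram matches: a hypothesis word counts at most as often as it appears in the reference
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    precision = overlap / len(hyp)
    # Brevity penalty discourages overly short hypotheses
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * precision
```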
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Zengwei/icefall-asr-librispeech-pruned-transducer-stateless5-2022-07-07
|
Zengwei
| 2022-07-07T13:03:44Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2022-07-07T07:51:32Z |
Introduction
See https://github.com/k2-fsa/icefall/pull/330
and https://github.com/k2-fsa/icefall/pull/452
It uses a random combiner internally.
Note: there was an issue in the log file; it has been fixed in https://github.com/k2-fsa/icefall/pull/468.
|
Zengwei/icefall-asr-librispeech-pruned-transducer-stateless5-B-2022-07-07
|
Zengwei
| 2022-07-07T12:34:11Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2022-07-07T09:00:28Z |
Introduction
See https://github.com/k2-fsa/icefall/pull/330
and https://github.com/k2-fsa/icefall/pull/452
It uses a random combiner internally.
|
Zengwei/icefall-asr-librispeech-pruned-transducer-stateless5-M-2022-07-07
|
Zengwei
| 2022-07-07T12:30:37Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2022-07-07T10:17:44Z |
Introduction
See https://github.com/k2-fsa/icefall/pull/330
and https://github.com/k2-fsa/icefall/pull/452
It uses a random combiner internally.
|
paola-md/recipe-roberta-is
|
paola-md
| 2022-07-07T11:53:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-07T08:40:25Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: recipe-roberta-is
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-roberta-is
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.334 | 1.0 | 961 | 1.1217 |
| 1.1638 | 2.0 | 1922 | 1.0369 |
| 1.0936 | 3.0 | 2883 | 0.9922 |
| 1.0503 | 4.0 | 3844 | 0.9606 |
| 1.0188 | 5.0 | 4805 | 0.9314 |
| 0.9953 | 6.0 | 5766 | 0.9256 |
| 0.9769 | 7.0 | 6727 | 0.9109 |
| 0.9599 | 8.0 | 7688 | 0.8978 |
| 0.9461 | 9.0 | 8649 | 0.8813 |
| 0.9377 | 10.0 | 9610 | 0.8777 |
| 0.9253 | 11.0 | 10571 | 0.8755 |
| 0.918 | 12.0 | 11532 | 0.8601 |
| 0.9112 | 13.0 | 12493 | 0.8541 |
| 0.9043 | 14.0 | 13454 | 0.8548 |
| 0.8984 | 15.0 | 14415 | 0.8470 |
| 0.8958 | 16.0 | 15376 | 0.8412 |
| 0.8914 | 17.0 | 16337 | 0.8345 |
| 0.8882 | 18.0 | 17298 | 0.8353 |
| 0.8871 | 19.0 | 18259 | 0.8344 |
| 0.8839 | 20.0 | 19220 | 0.8382 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dminiotas05/distilbert-base-uncased-finetuned-ft500_6class
|
dminiotas05
| 2022-07-07T11:11:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-07T10:45:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft500_6class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft500_6class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5162
- Accuracy: 0.356
- F1: 0.3347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.579 | 1.0 | 188 | 1.5575 | 0.2933 | 0.2521 |
| 1.4527 | 2.0 | 376 | 1.5043 | 0.3227 | 0.2821 |
| 1.3767 | 3.0 | 564 | 1.4982 | 0.34 | 0.2938 |
| 1.3122 | 4.0 | 752 | 1.4784 | 0.368 | 0.3454 |
| 1.2678 | 5.0 | 940 | 1.5162 | 0.356 | 0.3347 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zhifei/autotrain-autotrain-chinese-title-summarization-9-1101340178
|
zhifei
| 2022-07-07T10:49:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"unk",
"dataset:zhifei/autotrain-data-autotrain-chinese-title-summarization-9",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-07T10:48:04Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zhifei/autotrain-data-autotrain-chinese-title-summarization-9
co2_eq_emissions: 1.565396518204961
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1101340178
- CO2 Emissions (in grams): 1.565396518204961
## Validation Metrics
- Loss: 0.00012778821110259742
- Rouge1: 29.2308
- Rouge2: 0.0
- RougeL: 29.2308
- RougeLsum: 29.2308
- Gen Len: 18.4462
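Rouge1 above measures unigram overlap between a generated summary and its reference; a minimal sketch (that AutoTrain reports the F-measure variant is an assumption):

```python
from collections import Counter

def rouge1_f(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Clipped unigram overlap between reference and hypothesis
    overlap = sum((Counter(ref) & Counter(hyp)).values())
    if overlap == 0:
        return 0.0
    recall = overlap / len(ref)
    precision = overlap / len(hyp)
    # Harmonic mean of unigram precision and recall
    return 2 * precision * recall / (precision + recall)
```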
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zhifei/autotrain-autotrain-chinese-title-summarization-9-1101340178
```
|
Fulccrum/trainii_ac94u-label-classification
|
Fulccrum
| 2022-07-07T10:48:17Z | 0 | 0 |
sklearn
|
[
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2022-07-07T10:48:16Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on trainii_ac94u to apply classification on label
**Metrics of the best model:**

| Metric | Value |
|:--|--:|
| accuracy | 0.361046 |
| recall_macro | 0.353192 |
| precision_macro | 0.240667 |
| f1_macro | 0.278231 |

Best model: `LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)`
**See model plot below:**

```
Pipeline(steps=[('easypreprocessor',
                 EasyPreprocessor(types=  continuous dirty_float low_card_int ... date free_string useless
                                  id            True       False        False ... False       False   False
                                  text         False       False        False ... False        True   False
                                  [2 rows x 7 columns])),
                ('logisticregression',
                 LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000))])
```
**Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training**, including the models tried in the process, can be found in logs.txt.
|
TestZee/t5-small-finetuned-custom-wion-test-BIG
|
TestZee
| 2022-07-07T10:31:54Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-07T10:30:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TestZee/t5-small-finetuned-custom-wion-test-BIG
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TestZee/t5-small-finetuned-custom-wion-test-BIG
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1165
- Validation Loss: 0.4609
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9622 | 0.8875 | 0 |
| 1.9276 | 0.8601 | 1 |
| 1.8301 | 0.8342 | 2 |
| 1.7776 | 0.8104 | 3 |
| 1.7345 | 0.7878 | 4 |
| 1.7733 | 0.7660 | 5 |
| 1.5626 | 0.7448 | 6 |
| 1.6111 | 0.7245 | 7 |
| 1.6754 | 0.7050 | 8 |
| 1.5030 | 0.6867 | 9 |
| 1.5101 | 0.6696 | 10 |
| 1.4328 | 0.6536 | 11 |
| 1.4311 | 0.6383 | 12 |
| 1.3917 | 0.6232 | 13 |
| 1.4102 | 0.6071 | 14 |
| 1.3732 | 0.5948 | 15 |
| 1.3468 | 0.5828 | 16 |
| 1.2817 | 0.5712 | 17 |
| 1.2920 | 0.5600 | 18 |
| 1.2696 | 0.5491 | 19 |
| 1.2552 | 0.5385 | 20 |
| 1.1859 | 0.5285 | 21 |
| 1.1995 | 0.5188 | 22 |
| 1.1690 | 0.5094 | 23 |
| 1.1678 | 0.5003 | 24 |
| 1.1420 | 0.4916 | 25 |
| 1.0959 | 0.4830 | 26 |
| 1.0848 | 0.4750 | 27 |
| 1.1248 | 0.4677 | 28 |
| 1.1165 | 0.4609 | 29 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
hugginglearners/malayalam-blurr-xlm-roberta-base
|
hugginglearners
| 2022-07-07T10:17:28Z | 0 | 2 |
fastai
|
[
"fastai",
"text-generation",
"ml",
"dataset:rajeshradhakrishnan/malayalam_wiki",
"region:us"
] |
text-generation
| 2022-07-06T11:10:26Z |
---
tags:
- fastai
- text-generation
language: ml
widget:
- text: "ഓഹരി വിപണി തകരുമ്പോള് നിക്ഷേപം എങ്ങനെ സുരക്ഷിതമാക്കാം"
example_title: "Malayalam Casual Language Model"
datasets:
- rajeshradhakrishnan/malayalam_wiki
---
# Blurr x Causal Language Model trained on Malayalam (മലയാളം) text. (Work in Progress)
[](https://nbviewer.org/github/rajeshradhakrishnanmvk/kitchen2.0/blob/main/ml/malayalam_blurr_xlm_roberta_base.ipynb)
---
# malayalam-blurr-xlm-roberta-base (base-sized model)
The malayalam-blurr-xlm-roberta-base model is a causal language model trained from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) with the [blurr](https://ohmeow.github.io/blurr/) library, which combines the fastai and Hugging Face frameworks.
Ref: [Causal Language Modeling](https://ohmeow.github.io/blurr/text-modeling-language-modeling.html#Causal-language-modeling).
## Usage
```
!pip install -Uqq huggingface_hub["fastai"] ohmeow-blurr

from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("hugginglearners/malayalam-blurr-xlm-roberta-base")
learner.blurr_generate("ബ്ളൂർ പഠിക്കാൻ വളെരെ എളുപ്പമാണ് എന്തുകൊണ്ട് എന്നാൽ", max_length=50, do_sample=True, top_k=25)
```
## Intended uses & limitations
The model is not fine-tuned to state-of-the-art accuracy.
## Training and evaluation data
[Wiki 2020 Malayalam Dataset](https://huggingface.co/datasets/rajeshradhakrishnan/malayalam_wiki)
|
osanseviero/ppo-LunarLander-v10
|
osanseviero
| 2022-07-07T09:42:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-07T09:38:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -574.85 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
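The mean_reward reported in the metadata (-574.85 +/- 0.00) is the mean and standard deviation of episode returns over the evaluation episodes; a sketch with hypothetical returns:

```python
import statistics

# Hypothetical per-episode returns from an evaluation run
episode_returns = [-574.85, -574.85, -574.85]

mean_reward = statistics.mean(episode_returns)
std_reward = statistics.pstdev(episode_returns)
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")  # prints "-574.85 +/- 0.00"
```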
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption about how the model was saved to the Hub):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub("osanseviero/ppo-LunarLander-v10", "ppo-LunarLander-v10.zip")
model = PPO.load(checkpoint)
```
|
huggingtweets/marsajal
|
huggingtweets
| 2022-07-07T09:42:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/marsajal/1657186931820/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1463196823728771079/wZc0m7cd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ajeng🦦</div>
<div style="text-align: center; font-size: 14px;">@marsajal</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ajeng🦦.
| Data | ajeng🦦 |
| --- | --- |
| Tweets downloaded | 214 |
| Retweets | 37 |
| Short tweets | 41 |
| Tweets kept | 136 |
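The counts in the table are consistent: retweets and short tweets are filtered out before fine-tuning, so the kept count is the difference:

```python
downloaded, retweets, short_tweets = 214, 37, 41

# Tweets remaining after filtering out retweets and short tweets
tweets_kept = downloaded - retweets - short_tweets
print(tweets_kept)  # prints 136
```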
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kdiymty/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marsajal's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lfk0v9ey) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lfk0v9ey/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/marsajal')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
osanseviero/ppo-LunarLander-v9
|
osanseviero
| 2022-07-07T09:37:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-07T09:36:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -30.40 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption about how the model was saved to the Hub):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub("osanseviero/ppo-LunarLander-v9", "ppo-LunarLander-v9.zip")
model = PPO.load(checkpoint)
```
|
osanseviero/ppo-LunarLander-v6
|
osanseviero
| 2022-07-07T09:29:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-07T09:07:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -443.18 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption about how the model was saved to the Hub):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub("osanseviero/ppo-LunarLander-v6", "ppo-LunarLander-v6.zip")
model = PPO.load(checkpoint)
```
|
gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53
|
gary109
| 2022-07-07T09:10:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-01T03:42:00Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53](https://huggingface.co/gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8797
- Wer: 0.5513
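The Wer value is word error rate: the word-level edit distance between hypothesis and reference transcripts, divided by the reference length; a minimal sketch:

```python
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (insertions, deletions, substitutions)
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(r)][len(h)] / len(r)
```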
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
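With `lr_scheduler_type: linear` and 100 warmup steps, the learning rate ramps from 0 to 5e-06 over the first 100 steps and then decays linearly to 0 at the final step (69270, from the last row of the training table); a sketch:

```python
def linear_lr_with_warmup(step, base_lr=5e-06, warmup_steps=100, total_steps=69270):
    # Linear warmup from 0 to base_lr, then linear decay to 0 at total_steps
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```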
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.9613 | 1.0 | 2309 | 1.0171 | 0.7271 |
| 0.8254 | 2.0 | 4618 | 0.9771 | 0.6650 |
| 0.7406 | 3.0 | 6927 | 0.9174 | 0.6420 |
| 0.74 | 4.0 | 9236 | 0.9551 | 0.6371 |
| 0.5855 | 5.0 | 11545 | 0.9262 | 0.6453 |
| 0.5536 | 6.0 | 13854 | 0.9056 | 0.5894 |
| 0.505 | 7.0 | 16163 | 0.9166 | 0.6029 |
| 0.449 | 8.0 | 18472 | 0.8816 | 0.5873 |
| 0.4219 | 9.0 | 20781 | 0.8970 | 0.5589 |
| 0.5764 | 10.0 | 23090 | 0.9189 | 0.5649 |
| 0.5075 | 11.0 | 25399 | 0.8797 | 0.5513 |
| 0.4366 | 12.0 | 27708 | 0.9011 | 0.5567 |
| 0.4915 | 13.0 | 30017 | 0.9248 | 0.5455 |
| 0.3554 | 14.0 | 32326 | 0.9309 | 0.5374 |
| 0.3975 | 15.0 | 34635 | 0.9103 | 0.5259 |
| 0.4119 | 16.0 | 36944 | 0.9402 | 0.5290 |
| 0.267 | 17.0 | 39253 | 0.9479 | 0.5115 |
| 0.3107 | 18.0 | 41562 | 0.9428 | 0.5099 |
| 0.2684 | 19.0 | 43871 | 0.9508 | 0.5133 |
| 0.2125 | 20.0 | 46180 | 0.9737 | 0.5097 |
| 0.3149 | 21.0 | 48489 | 0.9992 | 0.5095 |
| 0.2313 | 22.0 | 50798 | 1.0037 | 0.5059 |
| 0.2674 | 23.0 | 53107 | 1.0091 | 0.5040 |
| 0.2056 | 24.0 | 55416 | 1.0082 | 0.5076 |
| 0.2781 | 25.0 | 57725 | 1.0160 | 0.5015 |
| 0.2005 | 26.0 | 60034 | 1.0390 | 0.5131 |
| 0.2221 | 27.0 | 62343 | 1.0401 | 0.5074 |
| 0.1857 | 28.0 | 64652 | 1.0484 | 0.5096 |
| 0.1562 | 29.0 | 66961 | 1.0516 | 0.5064 |
| 0.3027 | 30.0 | 69270 | 1.0543 | 0.5049 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
m-newhauser/distilbert-political-tweets
|
m-newhauser
| 2022-07-07T09:07:44Z | 75 | 23 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"en",
"dataset:m-newhauser/senator-tweets",
"license:lgpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: lgpl-3.0
library_name: transformers
tags:
- text-classification
- transformers
- pytorch
- generated_from_keras_callback
metrics:
- accuracy
- f1
datasets:
- m-newhauser/senator-tweets
widget:
- text: "This pandemic has shown us clearly the vulgarity of our healthcare system. Highest costs in the world, yet not enough nurses or doctors. Many millions uninsured, while insurance company profits soar. The struggle continues. Healthcare is a human right. Medicare for all."
example_title: "Bernie Sanders (D)"
- text: "Team Biden would rather fund the Ayatollah's Death to America regime than allow Americans to produce energy for our own domestic consumption."
example_title: "Ted Cruz (R)"
---
# distilbert-political-tweets 🗣 🇺🇸
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [m-newhauser/senator-tweets](https://huggingface.co/datasets/m-newhauser/senator-tweets) dataset, which contains all tweets made by United States senators during the first year of the Biden Administration.
It achieves the following results on the evaluation set:
* Accuracy: 0.9076
* F1: 0.9117
## Model description
The goal of this model is to classify short pieces of text as having either Democratic or Republican sentiment. The model was fine-tuned on 99,693 tweets (51.6% Democrat, 48.4% Republican) made by US senators in 2021.
Model accuracy may not hold up on pieces of text longer than a tweet.
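A minimal usage sketch with the standard `text-classification` pipeline (the label names returned depend on the model's config and are not assumed here):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="m-newhauser/distilbert-political-tweets",
)

# Returns a list of {"label": ..., "score": ...} dicts, one per input.
result = classifier("Healthcare is a human right. Medicare for all.")
```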
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam
- training_precision: float32
- learning_rate = 5e-5
- num_epochs = 5
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
osanseviero/ppo-LunarLander-v5
|
osanseviero
| 2022-07-07T08:59:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-07T08:47:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -479.21 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — verify it in the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption; check the repo before running.
checkpoint = load_from_hub("osanseviero/ppo-LunarLander-v5", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
osanseviero/ppo-LunarLander-v4
|
osanseviero
| 2022-07-07T08:47:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-05T19:12:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -247.76 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — verify it in the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption; check the repo before running.
checkpoint = load_from_hub("osanseviero/ppo-LunarLander-v4", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep100
|
hsohn3
| 2022-07-07T08:33:59Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-06T16:29:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep100
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep100
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9559
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
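Since this is a masked-language model, a usage sketch with the `fill-mask` pipeline may help; the example sentence below is purely illustrative (the tokenizer is uncased and word-level):

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep100",
)

# Use the tokenizer's own mask token rather than hard-coding "[MASK]".
preds = fill(f"the patient was seen for {fill.tokenizer.mask_token}")
```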
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 4.1247 | 0 |
| 3.5129 | 1 |
| 3.4726 | 2 |
| 3.4483 | 3 |
| 3.4395 | 4 |
| 3.4301 | 5 |
| 3.4260 | 6 |
| 3.4131 | 7 |
| 3.3831 | 8 |
| 3.2925 | 9 |
| 3.2454 | 10 |
| 3.2092 | 11 |
| 3.1695 | 12 |
| 3.1346 | 13 |
| 3.0797 | 14 |
| 3.0154 | 15 |
| 2.9557 | 16 |
| 2.8814 | 17 |
| 2.7720 | 18 |
| 2.5472 | 19 |
| 2.3193 | 20 |
| 2.1005 | 21 |
| 1.9331 | 22 |
| 1.7971 | 23 |
| 1.6859 | 24 |
| 1.6062 | 25 |
| 1.5310 | 26 |
| 1.4706 | 27 |
| 1.4203 | 28 |
| 1.3681 | 29 |
| 1.3222 | 30 |
| 1.2939 | 31 |
| 1.2726 | 32 |
| 1.2494 | 33 |
| 1.2330 | 34 |
| 1.2161 | 35 |
| 1.1998 | 36 |
| 1.1874 | 37 |
| 1.1767 | 38 |
| 1.1641 | 39 |
| 1.1550 | 40 |
| 1.1407 | 41 |
| 1.1363 | 42 |
| 1.1272 | 43 |
| 1.1227 | 44 |
| 1.1163 | 45 |
| 1.1065 | 46 |
| 1.1008 | 47 |
| 1.0957 | 48 |
| 1.0837 | 49 |
| 1.0844 | 50 |
| 1.0778 | 51 |
| 1.0741 | 52 |
| 1.0693 | 53 |
| 1.0662 | 54 |
| 1.0608 | 55 |
| 1.0521 | 56 |
| 1.0526 | 57 |
| 1.0476 | 58 |
| 1.0454 | 59 |
| 1.0452 | 60 |
| 1.0348 | 61 |
| 1.0333 | 62 |
| 1.0342 | 63 |
| 1.0293 | 64 |
| 1.0249 | 65 |
| 1.0241 | 66 |
| 1.0194 | 67 |
| 1.0177 | 68 |
| 1.0102 | 69 |
| 1.0055 | 70 |
| 1.0052 | 71 |
| 1.0038 | 72 |
| 1.0005 | 73 |
| 0.9981 | 74 |
| 0.9991 | 75 |
| 0.9950 | 76 |
| 0.9928 | 77 |
| 0.9898 | 78 |
| 0.9906 | 79 |
| 0.9873 | 80 |
| 0.9849 | 81 |
| 0.9808 | 82 |
| 0.9804 | 83 |
| 0.9792 | 84 |
| 0.9789 | 85 |
| 0.9797 | 86 |
| 0.9741 | 87 |
| 0.9781 | 88 |
| 0.9678 | 89 |
| 0.9686 | 90 |
| 0.9651 | 91 |
| 0.9652 | 92 |
| 0.9613 | 93 |
| 0.9599 | 94 |
| 0.9566 | 95 |
| 0.9571 | 96 |
| 0.9577 | 97 |
| 0.9536 | 98 |
| 0.9559 | 99 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
avichr/Legal-heBERT_ft
|
avichr
| 2022-07-07T07:31:58Z | 28 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:1911.03090",
"arxiv:2010.02559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-05T06:49:36Z |
# Legal-HeBERT
Legal-HeBERT is a BERT model for the Hebrew legal and legislative domains. It is intended to improve legal NLP research and tool development in Hebrew. We release two versions of Legal-HeBERT. The first version is a fine-tuned model of [HeBERT](https://github.com/avichaychriqui/HeBERT) applied to legal and legislative documents. The second version uses [HeBERT](https://github.com/avichaychriqui/HeBERT)'s architectural guidelines to train a BERT model from scratch. <br>
We continue to collect legal data, examine different architectural designs, and build tagged datasets and legal tasks for evaluating and developing Hebrew legal tools.
## Training Data
Our training datasets are:
| Name | Hebrew Description | Size (GB) | Documents | Sentences | Words | Notes |
|----------------------------------------------------------------------------------------------------------------------------------- |-------------------------------------------------------------------------- |----------- |----------- |------------ |------------- |----------------------------------------- |
| The Israeli Law Book | ספר החוקים הישראלי | 0.05 | 2338 | 293352 | 4851063 | |
| Judgments of the Supreme Court | מאגר פסקי הדין של בית המשפט העליון | 0.7 | 212348 | 5790138 | 79672415 | |
| custody courts | החלטות בתי הדין למשמורת | 2.46 | 169,708 | 8,555,893 | 213,050,492 | |
| Law memoranda, drafts of secondary legislation and drafts of support tests that have been distributed to the public for comment | תזכירי חוק, טיוטות חקיקת משנה וטיוטות מבחני תמיכה שהופצו להערות הציבור | 0.4 | 3,291 | 294,752 | 7,218,960 | |
| Supervisors of Land Registration judgments | מאגר פסקי דין של המפקחים על רישום המקרקעין | 0.02 | 559 | 67,639 | 1,785,446 | |
| Decisions of the Labor Court - Corona | מאגר החלטות בית הדין לעניין שירות התעסוקה – קורונה | 0.001 | 146 | 3505 | 60195 | |
| Decisions of the Israel Lands Council | החלטות מועצת מקרקעי ישראל | | 118 | 11283 | 162692 | aggregate file |
| Judgments of the Disciplinary Tribunal and the Israel Police Appeals Tribunal | פסקי דין של בית הדין למשמעת ובית הדין לערעורים של משטרת ישראל | 0.02 | 54 | 83724 | 1743419 | aggregate files |
| Disciplinary Appeals Committee in the Ministry of Health | ועדת ערר לדין משמעתי במשרד הבריאות | 0.004 | 252 | 21010 | 429807 | 465 files are scans and could not be parsed |
| Attorney General's Positions | מאגר התייצבויות היועץ המשפטי לממשלה | 0.008 | 281 | 32724 | 813877 | |
| Legal-Opinion of the Attorney General | מאגר חוות דעת היועץ המשפטי לממשלה | 0.002 | 44 | 7132 | 188053 | |
| | | | | | | |
| total | | 3.665 | 389,139 | 15,161,152 | 309,976,419 | |
We thank <b>Yair Gardin</b> for referring us to the governance data, <b>Elhanan Schwarts</b> for collecting and parsing the Israeli Law Book, and <b>Jonathan Schler</b> for collecting the judgments of the Supreme Court.
## Training process
* Vocabulary size: 50,000 tokens
* 4 epochs (1M steps±)
* lr=5e-5
* mlm_probability=0.15
* batch size = 32 (for each gpu)
* NVIDIA GeForce RTX 2080 TI + NVIDIA GeForce RTX 3090 (1 week training)
### Additional training settings:
<b>Fine-tuned [HeBERT](https://github.com/avichaychriqui/HeBERT) model:</b> the first eight layers were frozen (as [Lee et al. (2019)](https://arxiv.org/abs/1911.03090) suggest)<br>
<b>Legal-HeBERT trained from scratch:</b> The training process is similar to [HeBERT](https://github.com/avichaychriqui/HeBERT) and inspired by [Chalkidis et al. (2020)](https://arxiv.org/abs/2010.02559) <br>
## How to use
The models can be found on the Hugging Face Hub and can be fine-tuned for any downstream task:
```
# !pip install transformers==4.14.1
from transformers import AutoTokenizer, AutoModel

# Pick one of the two checkpoints:
model_name = 'avichr/Legal-heBERT_ft'  # the fine-tuned HeBERT model
# model_name = 'avichr/Legal-heBERT'   # the Legal-HeBERT model trained from scratch

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model=model_name,
)
fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.")
```
## Stay tuned!
We are still working on our models and datasets, and will update this page as we progress. We are open to collaborations.
## If you used this model please cite us as :
Chriqui, Avihay, Yahav, Inbal and Bar-Siman-Tov, Ittai, Legal HeBERT: A BERT-based NLP Model for Hebrew Legal, Judicial and Legislative Texts (June 27, 2022). Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4147127
```
@article{chriqui2021hebert,
title={Legal HeBERT: A BERT-based NLP Model for Hebrew Legal, Judicial and Legislative Texts},
  author={Chriqui, Avihay and Yahav, Inbal and Bar-Siman-Tov, Ittai},
journal={SSRN preprint:4147127},
year={2022}
}
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il), The Coller AI Lab <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il), The Coller AI Lab <br>
[Ittai Bar-Siman-Tov](mailto:Ittai.Bar-Siman-Tov@biu.ac.il), the BIU Innovation Lab for Law, Data-Science and Digital Ethics <br>
Thank you, תודה, شكرا <br>
|
ScarlettSun9/autotrain-ZuoZhuan-1100540141
|
ScarlettSun9
| 2022-07-07T07:08:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"autotrain",
"unk",
"dataset:ScarlettSun9/autotrain-data-ZuoZhuan",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-07T07:02:53Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ScarlettSun9/autotrain-data-ZuoZhuan
co2_eq_emissions: 8.343592303925112
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1100540141
- CO2 Emissions (in grams): 8.343592303925112
## Validation Metrics
- Loss: 0.38094884157180786
- Accuracy: 0.8795777325860159
- Precision: 0.8171375141922127
- Recall: 0.8417033571821684
- F1: 0.8292385373953709
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ScarlettSun9/autotrain-ZuoZhuan-1100540141
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ScarlettSun9/autotrain-ZuoZhuan-1100540141", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ScarlettSun9/autotrain-ZuoZhuan-1100540141", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
go2k/q-Taxi-v3
|
go2k
| 2022-07-07T05:45:11Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-07T05:39:36Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebooks; `gym` supplies the environment.
import gym

model = load_from_hub(repo_id="go2k/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
go2k/q-FrozenLake-v1-4x4-noSlippery
|
go2k
| 2022-07-07T05:26:00Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-07T05:25:54Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebooks; `gym` supplies the environment.
import gym

model = load_from_hub(repo_id="go2k/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Evelyn18/distilbert-base-uncased-becasv2-6
|
Evelyn18
| 2022-07-07T04:44:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-07T04:39:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becasv2-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv2-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8936
## Model description
More information needed
## Intended uses & limitations
More information needed
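A usage sketch with the `question-answering` pipeline; becasv2 appears to be a Spanish scholarship dataset, so the example question and context below are illustrative assumptions:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/distilbert-base-uncased-becasv2-6",
)

# Extractive QA: the answer is a span copied out of the context.
out = qa(
    question="¿Qué tipo de becas se ofrecen?",
    context="La universidad ofrece becas de excelencia académica a los estudiantes destacados.",
)
```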
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 4.0542 |
| No log | 2.0 | 18 | 3.0865 |
| No log | 3.0 | 27 | 2.8069 |
| No log | 4.0 | 36 | 3.3330 |
| No log | 5.0 | 45 | 3.4108 |
| No log | 6.0 | 54 | 3.5562 |
| No log | 7.0 | 63 | 3.8846 |
| No log | 8.0 | 72 | 3.8936 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/distilbert-base-uncased-becasv2-3
|
Evelyn18
| 2022-07-07T04:00:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-07T03:55:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becasv2-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv2-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 4.6377 |
| No log | 2.0 | 18 | 3.8511 |
| No log | 3.0 | 27 | 3.3758 |
| No log | 4.0 | 36 | 3.1910 |
| No log | 5.0 | 45 | 3.1187 |
| No log | 6.0 | 54 | 3.1009 |
| No log | 7.0 | 63 | 3.1131 |
| No log | 8.0 | 72 | 3.1218 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/distilbert-base-uncased-becasv2-2
|
Evelyn18
| 2022-07-07T03:47:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-07T03:43:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becasv2-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv2-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 4.8334 |
| No log | 2.0 | 18 | 3.9395 |
| No log | 3.0 | 27 | 3.4886 |
| No log | 4.0 | 36 | 3.2190 |
| No log | 5.0 | 45 | 3.0781 |
| No log | 6.0 | 54 | 2.9878 |
| No log | 7.0 | 63 | 2.9336 |
| No log | 8.0 | 72 | 2.9170 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/distilbert-base-uncased-becasv2-1
|
Evelyn18
| 2022-07-07T03:38:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-07T03:34:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becasv2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv2-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 4.6722 |
| No log | 2.0 | 18 | 3.9450 |
| No log | 3.0 | 27 | 3.4890 |
| No log | 4.0 | 36 | 3.2251 |
| No log | 5.0 | 45 | 2.9906 |
| No log | 6.0 | 54 | 3.0790 |
| No log | 7.0 | 63 | 2.8791 |
| No log | 8.0 | 72 | 2.9654 |
| No log | 9.0 | 81 | 2.9460 |
| No log | 10.0 | 90 | 2.9472 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ChauNguyen23/distilbert-base-uncased-finetuned-imdb
|
ChauNguyen23
| 2022-07-07T02:54:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-07T02:48:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
taln-ls2n/POET
|
taln-ls2n
| 2022-07-06T23:49:35Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"Transformers",
"sequence-tagger-model",
"fr",
"dataset:qanastek/ANTILLES",
"arxiv:1911.03894",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-11T09:33:05Z |
---
tags:
- Transformers
- token-classification
- sequence-tagger-model
language: fr
datasets:
- qanastek/ANTILLES
widget:
- text: "George Washington est allé à Washington"
---
# POET: A French Extended Part-of-Speech Tagger
- Corpora: [ANTILLES](https://github.com/qanastek/ANTILLES)
- Embeddings & Sequence Labelling: [CamemBERT](https://arxiv.org/abs/1911.03894)
- Number of Epochs: 115
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
* [DUFOUR Richard](https://cv.archives-ouvertes.fr/richard-dufour) (2)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
2. [LS2N, TALN team](https://www.ls2n.fr/equipe/taln/), Nantes University, Nantes, France.
## Demo: How to use in HuggingFace Transformers
Requires [transformers](https://pypi.org/project/transformers/): ```pip install transformers```
```python
from transformers import CamembertTokenizer, CamembertForTokenClassification, TokenClassificationPipeline
tokenizer = CamembertTokenizer.from_pretrained('taln-ls2n/POET')
model = CamembertForTokenClassification.from_pretrained('taln-ls2n/POET')
pos = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
def make_prediction(sentence):
labels = [l['entity'] for l in pos(sentence)]
return list(zip(sentence.split(" "), labels))
res = make_prediction("George Washington est allé à Washington")
```
Output:

## Training data
`ANTILLES` is a part-of-speech tagging corpus based on [UD_French-GSD](https://universaldependencies.org/treebanks/fr_gsd/index.html), which was originally created in 2015 and is based on the [universal dependency treebank v2.0](https://github.com/ryanmcd/uni-dep-tb).
Originally, the corpus consisted of 400,399 words (16,341 sentences) with 17 different classes. After applying our tag augmentation, we obtain 60 different classes, which add linguistic and semantic information such as the gender, number, mood, person, tense, or verb form given in the different CoNLL-03 fields of the original corpus.
We based our tags on the level of detail given by the [LIA_TAGG](http://pageperso.lif.univ-mrs.fr/frederic.bechet/download.html) statistical POS tagger written by [Frédéric Béchet](http://pageperso.lif.univ-mrs.fr/frederic.bechet/index-english.html) in 2001.
The corpus used for this model is available on [Github](https://github.com/qanastek/ANTILLES) in the [CoNLL-U format](https://universaldependencies.org/format.html).
Training data are fed to the model as free text without a normalization phase, which makes the model case- and punctuation-sensitive.
## Original Tags
```plain
PRON VERB SCONJ ADP CCONJ DET NOUN ADJ AUX ADV PUNCT PROPN NUM SYM PART X INTJ
```
## New additional POS tags
| Abbreviation | Description | Examples |
|:--------:|:--------:|:--------:|
| PREP | Preposition | de |
| AUX | Auxiliary Verb | est |
| ADV | Adverb | toujours |
| COSUB | Subordinating conjunction | que |
| COCO | Coordinating Conjunction | et |
| PART | Demonstrative particle | -t |
| PRON | Pronoun | qui ce quoi |
| PDEMMS | Demonstrative Pronoun - Singular Masculine | ce |
| PDEMMP | Demonstrative Pronoun - Plural Masculine | ceux |
| PDEMFS | Demonstrative Pronoun - Singular Feminine | cette |
| PDEMFP | Demonstrative Pronoun - Plural Feminine | celles |
| PINDMS | Indefinite Pronoun - Singular Masculine | tout |
| PINDMP | Indefinite Pronoun - Plural Masculine | autres |
| PINDFS | Indefinite Pronoun - Singular Feminine | chacune |
| PINDFP | Indefinite Pronoun - Plural Feminine | certaines |
| PROPN | Proper noun | Houston |
| XFAMIL | Last name | Levy |
| NUM | Numerical Adjective | trentaine vingtaine |
| DINTMS | Masculine Numerical Adjective | un |
| DINTFS | Feminine Numerical Adjective | une |
| PPOBJMS | Pronoun complements of objects - Singular Masculine | le lui |
| PPOBJMP | Pronoun complements of objects - Plural Masculine | eux y |
| PPOBJFS | Pronoun complements of objects - Singular Feminine | moi la |
| PPOBJFP | Pronoun complements of objects - Plural Feminine | en y |
| PPER1S | Personal Pronoun First-Person - Singular | je |
| PPER2S | Personal Pronoun Second-Person - Singular | tu |
| PPER3MS | Personal Pronoun Third-Person - Singular Masculine | il |
| PPER3MP | Personal Pronoun Third-Person - Plural Masculine | ils |
| PPER3FS | Personal Pronoun Third-Person - Singular Feminine | elle |
| PPER3FP | Personal Pronoun Third-Person - Plural Feminine | elles |
| PREFS | Reflexive Pronoun First-Person - Singular | me m' |
| PREF | Reflexive Pronoun Third-Person - Singular | se s' |
| PREFP | Reflexive Pronoun First / Second-Person - Plural | nous vous |
| VERB | Verb | obtient |
| VPPMS | Past Participle - Singular Masculine | formulé |
| VPPMP | Past Participle - Plural Masculine | classés |
| VPPFS | Past Participle - Singular Feminine | appelée |
| VPPFP | Past Participle - Plural Feminine | sanctionnées |
| DET | Determinant | les l' |
| DETMS | Determinant - Singular Masculine | les |
| DETFS | Determinant - Singular Feminine | la |
| ADJ | Adjective | capable sérieux |
| ADJMS | Adjective - Singular Masculine | grand important |
| ADJMP | Adjective - Plural Masculine | grands petits |
| ADJFS | Adjective - Singular Feminine | française petite |
| ADJFP | Adjective - Plural Feminine | légères petites |
| NOUN | Noun | temps |
| NMS | Noun - Singular Masculine | drapeau |
| NMP | Noun - Plural Masculine | journalistes |
| NFS | Noun - Singular Feminine | tête |
| NFP | Noun - Plural Feminine | ondes |
| PREL | Relative Pronoun | qui dont |
| PRELMS | Relative Pronoun - Singular Masculine | lequel |
| PRELMP | Relative Pronoun - Plural Masculine | lesquels |
| PRELFS | Relative Pronoun - Singular Feminine | laquelle |
| PRELFP | Relative Pronoun - Plural Feminine | lesquelles |
| INTJ | Interjection | merci bref |
| CHIF | Numbers | 1979 10 |
| SYM | Symbol | € % |
| YPFOR | Endpoint | . |
| PUNCT | Punctuation | : , |
| MOTINC | Unknown words | Technology Lady |
| X | Typos & others | sfeir 3D statu |
## Evaluation results
The test corpus used for this evaluation is available on [Github](https://github.com/qanastek/ANTILLES/blob/main/ANTILLES/test.conllu).
```plain
precision recall f1-score support
ADJ 0.9040 0.8828 0.8933 128
ADJFP 0.9811 0.9585 0.9697 434
ADJFS 0.9606 0.9826 0.9715 918
ADJMP 0.9613 0.9357 0.9483 451
ADJMS 0.9561 0.9611 0.9586 952
ADV 0.9870 0.9948 0.9908 1524
AUX 0.9956 0.9964 0.9960 1124
CHIF 0.9798 0.9774 0.9786 1239
COCO 1.0000 0.9989 0.9994 884
COSUB 0.9939 0.9939 0.9939 328
DET 0.9972 0.9972 0.9972 2897
DETFS 0.9990 1.0000 0.9995 1007
DETMS 1.0000 0.9993 0.9996 1426
DINTFS 0.9967 0.9902 0.9934 306
DINTMS 0.9923 0.9948 0.9935 387
INTJ 0.8000 0.8000 0.8000 5
MOTINC 0.5049 0.5827 0.5410 266
NFP 0.9807 0.9675 0.9740 892
NFS 0.9778 0.9699 0.9738 2588
NMP 0.9687 0.9495 0.9590 1367
NMS 0.9759 0.9560 0.9659 3181
NOUN 0.6164 0.8673 0.7206 113
NUM 0.6250 0.8333 0.7143 6
PART 1.0000 0.9375 0.9677 16
PDEMFP 1.0000 1.0000 1.0000 3
PDEMFS 1.0000 1.0000 1.0000 89
PDEMMP 1.0000 1.0000 1.0000 20
PDEMMS 1.0000 1.0000 1.0000 222
PINDFP 1.0000 1.0000 1.0000 3
PINDFS 0.8571 1.0000 0.9231 12
PINDMP 0.9000 1.0000 0.9474 9
PINDMS 0.9286 0.9701 0.9489 67
PINTFS 0.0000 0.0000 0.0000 2
PPER1S 1.0000 1.0000 1.0000 62
PPER2S 0.7500 1.0000 0.8571 3
PPER3FP 1.0000 1.0000 1.0000 9
PPER3FS 1.0000 1.0000 1.0000 96
PPER3MP 1.0000 1.0000 1.0000 31
PPER3MS 1.0000 1.0000 1.0000 377
PPOBJFP 1.0000 0.7500 0.8571 4
PPOBJFS 0.9167 0.8919 0.9041 37
PPOBJMP 0.7500 0.7500 0.7500 12
PPOBJMS 0.9371 0.9640 0.9504 139
PREF 1.0000 1.0000 1.0000 332
PREFP 1.0000 1.0000 1.0000 64
PREFS 1.0000 1.0000 1.0000 13
PREL 0.9964 0.9964 0.9964 277
PRELFP 1.0000 1.0000 1.0000 5
PRELFS 0.8000 1.0000 0.8889 4
PRELMP 1.0000 1.0000 1.0000 3
PRELMS 1.0000 1.0000 1.0000 11
PREP 0.9971 0.9977 0.9974 6161
PRON 0.9836 0.9836 0.9836 61
PROPN 0.9468 0.9503 0.9486 4310
PUNCT 1.0000 1.0000 1.0000 4019
SYM 0.9394 0.8158 0.8732 76
VERB 0.9956 0.9921 0.9938 2273
VPPFP 0.9145 0.9469 0.9304 113
VPPFS 0.9562 0.9597 0.9580 273
VPPMP 0.8827 0.9728 0.9256 147
VPPMS 0.9778 0.9794 0.9786 630
VPPRE 0.0000 0.0000 0.0000 1
X 0.9604 0.9935 0.9766 1073
XFAMIL 0.9386 0.9113 0.9248 1342
YPFOR 1.0000 1.0000 1.0000 2750
accuracy 0.9778 47574
macro avg 0.9151 0.9285 0.9202 47574
weighted avg 0.9785 0.9778 0.9780 47574
```
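For reference, the per-class scores above follow the standard token-level precision/recall/F1 definitions. A minimal sketch of how such a report can be computed from parallel gold and predicted tag sequences (an illustration only, not the evaluation script that produced the numbers above; the toy sentences and tags are hypothetical):

```python
from collections import Counter

def per_class_scores(gold, pred):
    """Compute (precision, recall, f1, support) per tag from parallel gold/predicted tag lists."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    scores = {}
    for tag in set(gold) | set(pred):
        prec = tp[tag] / (tp[tag] + fp[tag]) if tp[tag] + fp[tag] else 0.0
        rec = tp[tag] / (tp[tag] + fn[tag]) if tp[tag] + fn[tag] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[tag] = (prec, rec, f1, tp[tag] + fn[tag])  # support = gold occurrences
    return scores

# Toy example (hypothetical tag sequences)
gold = ["DET", "NMS", "VERB", "YPFOR", "DET", "NFS"]
pred = ["DET", "NMS", "AUX",  "YPFOR", "DET", "NMS"]
print(per_class_scores(gold, pred)["DET"])  # (1.0, 1.0, 1.0, 2)
```

The macro average is then the unweighted mean of the per-class scores, while the weighted average weights each class by its support, as in the table above.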
## BibTeX Citations
Please cite the following paper when using this model.
ANTILLES corpus and POET taggers:
```latex
@inproceedings{labrak:hal-03696042,
TITLE = {{ANTILLES: An Open French Linguistically Enriched Part-of-Speech Corpus}},
AUTHOR = {Labrak, Yanis and Dufour, Richard},
URL = {https://hal.archives-ouvertes.fr/hal-03696042},
BOOKTITLE = {{25th International Conference on Text, Speech and Dialogue (TSD)}},
ADDRESS = {Brno, Czech Republic},
PUBLISHER = {{Springer}},
YEAR = {2022},
MONTH = Sep,
KEYWORDS = {Part-of-speech corpus ; POS tagging ; Open tools ; Word embeddings ; Bi-LSTM ; CRF ; Transformers},
PDF = {https://hal.archives-ouvertes.fr/hal-03696042/file/ANTILLES_A_freNch_linguisTIcaLLy_Enriched_part_of_Speech_corpus.pdf},
HAL_ID = {hal-03696042},
HAL_VERSION = {v1},
}
```
UD_French-GSD corpora:
```latex
@misc{
universaldependencies,
title={UniversalDependencies/UD_French-GSD},
url={https://github.com/UniversalDependencies/UD_French-GSD}, journal={GitHub},
author={UniversalDependencies}
}
```
LIA TAGG:
```latex
@techreport{LIA_TAGG,
author = {Frédéric Béchet},
title = {LIA_TAGG: a statistical POS tagger + syntactic bracketer},
    institution = {Aix-Marseille University \& CNRS},
year = {2001}
}
```
Flair Embeddings:
```latex
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
## Acknowledgment
This work was financially supported by [Zenidoc](https://zenidoc.fr/) and the [ANR project DIETS](https://anr-diets.univ-avignon.fr) under the contract [ANR-20-CE23-0005](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-fd7e69d902/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=cb6d54d24c9e21e0d50fabf46bd56646).
|
qanastek/pos-french-camembert-flair
|
qanastek
| 2022-07-06T23:49:12Z | 52 | 3 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"dataset:qanastek/ANTILLES",
"arxiv:1911.03894",
"arxiv:1011.4088",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: fr
datasets:
- qanastek/ANTILLES
widget:
- text: "George Washington est allé à Washington"
---
# POET: A French Extended Part-of-Speech Tagger
- Corpora: [ANTILLES](https://github.com/qanastek/ANTILLES)
- Embeddings: [Flair](https://aclanthology.org/C18-1139.pdf) & [CamemBERT](https://arxiv.org/abs/1911.03894)
- Sequence Labelling: [Bi-LSTM-CRF](https://arxiv.org/abs/1011.4088)
- Number of Epochs: 50
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
* [DUFOUR Richard](https://cv.archives-ouvertes.fr/richard-dufour) (2)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
2. [LS2N, TALN team](https://www.ls2n.fr/equipe/taln/), Nantes University, Nantes, France.
## Demo: How to use in Flair
Requires [Flair](https://pypi.org/project/flair/): ```pip install flair```
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the model
model = SequenceTagger.load("qanastek/pos-french-camembert-flair")
sentence = Sentence("George Washington est allé à Washington")
# predict tags
model.predict(sentence)
# print predicted pos tags
print(sentence.to_tagged_string())
```
Output:

## Training data
`ANTILLES` is a part-of-speech tagging corpus based on [UD_French-GSD](https://universaldependencies.org/treebanks/fr_gsd/index.html), which was originally created in 2015 and is itself based on the [universal dependency treebank v2.0](https://github.com/ryanmcd/uni-dep-tb).
Originally, the corpus consisted of 400,399 words (16,341 sentences) annotated with 17 different classes. After applying our tag augmentation, we obtain 60 different classes, which add linguistic and semantic information such as the gender, number, mood, person, tense or verb form given in the different CoNLL-U fields of the original corpus.
We based our tags on the level of detail given by the [LIA_TAGG](http://pageperso.lif.univ-mrs.fr/frederic.bechet/download.html) statistical POS tagger written by [Frédéric Béchet](http://pageperso.lif.univ-mrs.fr/frederic.bechet/index-english.html) in 2001.
The corpus used for this model is available on [Github](https://github.com/qanastek/ANTILLES) in the [CoNLL-U format](https://universaldependencies.org/format.html).
Training data are fed to the model as raw text, without any normalization phase, which makes the model case- and punctuation-sensitive.
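As a sketch of how such CoNLL-U files can be consumed: each token is one tab-separated line, sentences are separated by blank lines, and comment lines start with `#`. The sample snippet below, and the assumption that the extended ANTILLES tags sit in the fourth (UPOS) column, are illustrative:

```python
def read_conllu(text):
    """Yield sentences as lists of (form, tag) pairs from CoNLL-U formatted text."""
    sentence = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#"):        # comment lines (sent_id, text, ...)
            continue
        if not line:                    # a blank line ends a sentence
            if sentence:
                yield sentence
                sentence = []
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:  # skip multiword ranges / empty nodes
            continue
        sentence.append((cols[1], cols[3]))   # FORM and UPOS columns
    if sentence:
        yield sentence

sample = (
    "# text = Elle chante\n"
    "1\tElle\telle\tPPER3FS\t_\t_\t0\t_\t_\t_\n"
    "2\tchante\tchanter\tVERB\t_\t_\t1\t_\t_\t_\n"
)
print(list(read_conllu(sample)))  # [[('Elle', 'PPER3FS'), ('chante', 'VERB')]]
```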
## Original Tags
```plain
PRON VERB SCONJ ADP CCONJ DET NOUN ADJ AUX ADV PUNCT PROPN NUM SYM PART X INTJ
```
## New additional POS tags
| Abbreviation | Description | Examples |
|:--------:|:--------:|:--------:|
| PREP | Preposition | de |
| AUX | Auxiliary Verb | est |
| ADV | Adverb | toujours |
| COSUB | Subordinating conjunction | que |
| COCO | Coordinating Conjunction | et |
| PART | Demonstrative particle | -t |
| PRON | Pronoun | qui ce quoi |
| PDEMMS | Demonstrative Pronoun - Singular Masculine | ce |
| PDEMMP | Demonstrative Pronoun - Plural Masculine | ceux |
| PDEMFS | Demonstrative Pronoun - Singular Feminine | cette |
| PDEMFP | Demonstrative Pronoun - Plural Feminine | celles |
| PINDMS | Indefinite Pronoun - Singular Masculine | tout |
| PINDMP | Indefinite Pronoun - Plural Masculine | autres |
| PINDFS | Indefinite Pronoun - Singular Feminine | chacune |
| PINDFP | Indefinite Pronoun - Plural Feminine | certaines |
| PROPN | Proper noun | Houston |
| XFAMIL | Last name | Levy |
| NUM | Numerical Adjective | trentaine vingtaine |
| DINTMS | Masculine Numerical Adjective | un |
| DINTFS | Feminine Numerical Adjective | une |
| PPOBJMS | Pronoun complements of objects - Singular Masculine | le lui |
| PPOBJMP | Pronoun complements of objects - Plural Masculine | eux y |
| PPOBJFS | Pronoun complements of objects - Singular Feminine | moi la |
| PPOBJFP | Pronoun complements of objects - Plural Feminine | en y |
| PPER1S | Personal Pronoun First-Person - Singular | je |
| PPER2S | Personal Pronoun Second-Person - Singular | tu |
| PPER3MS | Personal Pronoun Third-Person - Singular Masculine | il |
| PPER3MP | Personal Pronoun Third-Person - Plural Masculine | ils |
| PPER3FS | Personal Pronoun Third-Person - Singular Feminine | elle |
| PPER3FP | Personal Pronoun Third-Person - Plural Feminine | elles |
| PREFS | Reflexive Pronoun First-Person - Singular | me m' |
| PREF | Reflexive Pronoun Third-Person - Singular | se s' |
| PREFP | Reflexive Pronoun First / Second-Person - Plural | nous vous |
| VERB | Verb | obtient |
| VPPMS | Past Participle - Singular Masculine | formulé |
| VPPMP | Past Participle - Plural Masculine | classés |
| VPPFS | Past Participle - Singular Feminine | appelée |
| VPPFP | Past Participle - Plural Feminine | sanctionnées |
| DET | Determinant | les l' |
| DETMS | Determinant - Singular Masculine | les |
| DETFS | Determinant - Singular Feminine | la |
| ADJ | Adjective | capable sérieux |
| ADJMS | Adjective - Singular Masculine | grand important |
| ADJMP | Adjective - Plural Masculine | grands petits |
| ADJFS | Adjective - Singular Feminine | française petite |
| ADJFP | Adjective - Plural Feminine | légères petites |
| NOUN | Noun | temps |
| NMS | Noun - Singular Masculine | drapeau |
| NMP | Noun - Plural Masculine | journalistes |
| NFS | Noun - Singular Feminine | tête |
| NFP | Noun - Plural Feminine | ondes |
| PREL | Relative Pronoun | qui dont |
| PRELMS | Relative Pronoun - Singular Masculine | lequel |
| PRELMP | Relative Pronoun - Plural Masculine | lesquels |
| PRELFS | Relative Pronoun - Singular Feminine | laquelle |
| PRELFP | Relative Pronoun - Plural Feminine | lesquelles |
| INTJ | Interjection | merci bref |
| CHIF | Numbers | 1979 10 |
| SYM | Symbol | € % |
| YPFOR | Endpoint | . |
| PUNCT | Punctuation | : , |
| MOTINC | Unknown words | Technology Lady |
| X | Typos & others | sfeir 3D statu |
## Evaluation results
The test corpus used for this evaluation is available on [Github](https://github.com/qanastek/ANTILLES/blob/main/ANTILLES/test.conllu).
```plain
Results:
- F-score (micro) 0.9797
- F-score (macro) 0.9178
- Accuracy 0.9797
By class:
precision recall f1-score support
PREP 0.9966 0.9987 0.9976 1483
PUNCT 1.0000 1.0000 1.0000 833
NMS 0.9634 0.9801 0.9717 753
DET 0.9923 0.9984 0.9954 645
VERB 0.9913 0.9811 0.9862 583
NFS 0.9667 0.9839 0.9752 560
ADV 0.9940 0.9821 0.9880 504
PROPN 0.9541 0.8937 0.9229 395
DETMS 1.0000 1.0000 1.0000 362
AUX 0.9860 0.9915 0.9888 355
YPFOR 1.0000 1.0000 1.0000 353
NMP 0.9666 0.9475 0.9570 305
COCO 0.9959 1.0000 0.9980 245
ADJMS 0.9463 0.9385 0.9424 244
DETFS 1.0000 1.0000 1.0000 240
CHIF 0.9648 0.9865 0.9755 222
NFP 0.9515 0.9849 0.9679 199
ADJFS 0.9657 0.9286 0.9468 182
VPPMS 0.9387 0.9745 0.9563 157
COSUB 1.0000 0.9844 0.9921 128
DINTMS 0.9918 0.9918 0.9918 122
XFAMIL 0.9298 0.9217 0.9258 115
PPER3MS 1.0000 1.0000 1.0000 87
ADJMP 0.9294 0.9634 0.9461 82
PDEMMS 1.0000 1.0000 1.0000 75
ADJFP 0.9861 0.9342 0.9595 76
PREL 0.9859 1.0000 0.9929 70
DINTFS 0.9839 1.0000 0.9919 61
PREF 1.0000 1.0000 1.0000 52
PPOBJMS 0.9565 0.9362 0.9462 47
PREFP 0.9778 1.0000 0.9888 44
PINDMS 1.0000 0.9773 0.9885 44
VPPFS 0.8298 0.9750 0.8966 40
PPER1S 1.0000 1.0000 1.0000 42
SYM 1.0000 0.9474 0.9730 38
NOUN 0.8824 0.7692 0.8219 39
PRON 1.0000 0.9677 0.9836 31
PDEMFS 1.0000 1.0000 1.0000 29
VPPMP 0.9286 1.0000 0.9630 26
ADJ 0.9524 0.9091 0.9302 22
PPER3MP 1.0000 1.0000 1.0000 20
VPPFP 1.0000 1.0000 1.0000 19
PPER3FS 1.0000 1.0000 1.0000 18
MOTINC 0.3333 0.4000 0.3636 15
PREFS 1.0000 1.0000 1.0000 10
PPOBJMP 1.0000 0.8000 0.8889 10
PPOBJFS 0.6250 0.8333 0.7143 6
INTJ 0.5000 0.6667 0.5714 6
PART 1.0000 1.0000 1.0000 4
PDEMMP 1.0000 1.0000 1.0000 3
PDEMFP 1.0000 1.0000 1.0000 3
PPER3FP 1.0000 1.0000 1.0000 2
NUM 1.0000 0.3333 0.5000 3
PPER2S 1.0000 1.0000 1.0000 2
PPOBJFP 0.5000 0.5000 0.5000 2
PRELMS 1.0000 1.0000 1.0000 2
PINDFS 0.5000 1.0000 0.6667 1
PINDMP 1.0000 1.0000 1.0000 1
X 0.0000 0.0000 0.0000 1
PINDFP 1.0000 1.0000 1.0000 1
micro avg 0.9797 0.9797 0.9797 10019
macro avg 0.9228 0.9230 0.9178 10019
weighted avg 0.9802 0.9797 0.9798 10019
samples avg 0.9797 0.9797 0.9797 10019
```
## BibTeX Citations
Please cite the following paper when using this model.
ANTILLES corpus and POET taggers:
```latex
@inproceedings{labrak:hal-03696042,
TITLE = {{ANTILLES: An Open French Linguistically Enriched Part-of-Speech Corpus}},
AUTHOR = {Labrak, Yanis and Dufour, Richard},
URL = {https://hal.archives-ouvertes.fr/hal-03696042},
BOOKTITLE = {{25th International Conference on Text, Speech and Dialogue (TSD)}},
ADDRESS = {Brno, Czech Republic},
PUBLISHER = {{Springer}},
YEAR = {2022},
MONTH = Sep,
KEYWORDS = {Part-of-speech corpus ; POS tagging ; Open tools ; Word embeddings ; Bi-LSTM ; CRF ; Transformers},
PDF = {https://hal.archives-ouvertes.fr/hal-03696042/file/ANTILLES_A_freNch_linguisTIcaLLy_Enriched_part_of_Speech_corpus.pdf},
HAL_ID = {hal-03696042},
HAL_VERSION = {v1},
}
```
UD_French-GSD corpora:
```latex
@misc{
universaldependencies,
title={UniversalDependencies/UD_French-GSD},
url={https://github.com/UniversalDependencies/UD_French-GSD}, journal={GitHub},
author={UniversalDependencies}
}
```
LIA TAGG:
```latex
@techreport{LIA_TAGG,
author = {Frédéric Béchet},
title = {LIA_TAGG: a statistical POS tagger + syntactic bracketer},
    institution = {Aix-Marseille University \& CNRS},
year = {2001}
}
```
Flair Embeddings:
```latex
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
## Acknowledgment
This work was financially supported by [Zenidoc](https://zenidoc.fr/)
|
ricardo-filho/bert_base_tcm_teste
|
ricardo-filho
| 2022-07-06T23:23:13Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-06T18:05:49Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert_base_tcm_teste
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_tcm_teste
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Criterio Julgamento Precision: 0.7209
- Criterio Julgamento Recall: 0.8942
- Criterio Julgamento F1: 0.7983
- Criterio Julgamento Number: 104
- Data Sessao Precision: 0.6351
- Data Sessao Recall: 0.8545
- Data Sessao F1: 0.7287
- Data Sessao Number: 55
- Modalidade Licitacao Precision: 0.9224
- Modalidade Licitacao Recall: 0.9596
- Modalidade Licitacao F1: 0.9406
- Modalidade Licitacao Number: 421
- Numero Exercicio Precision: 0.8872
- Numero Exercicio Recall: 0.9351
- Numero Exercicio F1: 0.9105
- Numero Exercicio Number: 185
- Objeto Licitacao Precision: 0.2348
- Objeto Licitacao Recall: 0.4576
- Objeto Licitacao F1: 0.3103
- Objeto Licitacao Number: 59
- Valor Objeto Precision: 0.5424
- Valor Objeto Recall: 0.7805
- Valor Objeto F1: 0.64
- Valor Objeto Number: 41
- Overall Precision: 0.7683
- Overall Recall: 0.8971
- Overall F1: 0.8277
- Overall Accuracy: 0.9948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0346 | 0.96 | 2750 | 0.0329 | 0.6154 | 0.8462 | 0.7126 | 104 | 0.5495 | 0.9091 | 0.6849 | 55 | 0.8482 | 0.9287 | 0.8866 | 421 | 0.7438 | 0.9730 | 0.8431 | 185 | 0.0525 | 0.3220 | 0.0903 | 59 | 0.4762 | 0.7317 | 0.5769 | 41 | 0.5565 | 0.8763 | 0.6807 | 0.9880 |
| 0.0309 | 1.92 | 5500 | 0.0322 | 0.6694 | 0.7788 | 0.72 | 104 | 0.5976 | 0.8909 | 0.7153 | 55 | 0.9178 | 0.9549 | 0.9360 | 421 | 0.8211 | 0.8432 | 0.8320 | 185 | 0.15 | 0.2034 | 0.1727 | 59 | 0.2203 | 0.3171 | 0.26 | 41 | 0.7351 | 0.8243 | 0.7771 | 0.9934 |
| 0.0179 | 2.88 | 8250 | 0.0192 | 0.7209 | 0.8942 | 0.7983 | 104 | 0.6351 | 0.8545 | 0.7287 | 55 | 0.9224 | 0.9596 | 0.9406 | 421 | 0.8872 | 0.9351 | 0.9105 | 185 | 0.2348 | 0.4576 | 0.3103 | 59 | 0.5424 | 0.7805 | 0.64 | 41 | 0.7683 | 0.8971 | 0.8277 | 0.9948 |
| 0.0174 | 3.84 | 11000 | 0.0320 | 0.7522 | 0.8173 | 0.7834 | 104 | 0.5741 | 0.5636 | 0.5688 | 55 | 0.8881 | 0.9430 | 0.9147 | 421 | 0.8490 | 0.8811 | 0.8647 | 185 | 0.2436 | 0.3220 | 0.2774 | 59 | 0.5370 | 0.7073 | 0.6105 | 41 | 0.7719 | 0.8370 | 0.8031 | 0.9946 |
| 0.0192 | 4.8 | 13750 | 0.0261 | 0.6744 | 0.8365 | 0.7468 | 104 | 0.6190 | 0.7091 | 0.6610 | 55 | 0.9169 | 0.9430 | 0.9297 | 421 | 0.8404 | 0.8541 | 0.8472 | 185 | 0.2059 | 0.3559 | 0.2609 | 59 | 0.5088 | 0.7073 | 0.5918 | 41 | 0.7521 | 0.8451 | 0.7959 | 0.9949 |
| 0.0158 | 5.76 | 16500 | 0.0250 | 0.6641 | 0.8173 | 0.7328 | 104 | 0.5610 | 0.8364 | 0.6715 | 55 | 0.9199 | 0.9549 | 0.9371 | 421 | 0.9167 | 0.9514 | 0.9337 | 185 | 0.1912 | 0.4407 | 0.2667 | 59 | 0.4828 | 0.6829 | 0.5657 | 41 | 0.7386 | 0.8821 | 0.8040 | 0.9948 |
| 0.0126 | 6.72 | 19250 | 0.0267 | 0.6694 | 0.7981 | 0.7281 | 104 | 0.6386 | 0.9636 | 0.7681 | 55 | 0.8723 | 0.9572 | 0.9128 | 421 | 0.8812 | 0.9622 | 0.9199 | 185 | 0.2180 | 0.4915 | 0.3021 | 59 | 0.5323 | 0.8049 | 0.6408 | 41 | 0.7308 | 0.9006 | 0.8068 | 0.9945 |
| 0.0162 | 7.68 | 22000 | 0.0328 | 0.675 | 0.7788 | 0.7232 | 104 | 0.6604 | 0.6364 | 0.6481 | 55 | 0.9263 | 0.9549 | 0.9404 | 421 | 0.8535 | 0.9135 | 0.8825 | 185 | 0.2471 | 0.3559 | 0.2917 | 59 | 0.5091 | 0.6829 | 0.5833 | 41 | 0.7788 | 0.8509 | 0.8133 | 0.9948 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier_testing
|
domenicrosati
| 2022-07-06T21:12:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-06T20:34:04Z |
---
license: mit
tags:
- text-classification
- generated_from_trainer
model-index:
- name: deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier_testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier_testing
This model is a fine-tuned version of [domenicrosati/deberta-v3-xsmall-finetuned-review_classifier](https://huggingface.co/domenicrosati/deberta-v3-xsmall-finetuned-review_classifier) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BigTimeCoderSean/q-Taxi-v3
|
BigTimeCoderSean
| 2022-07-06T18:13:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-06T18:13:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="BigTimeCoderSean/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
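For background, a tabular Q-learning agent like this one maintains a state-by-action value table updated with the standard Bellman backup, and typically acts epsilon-greedily during training. A minimal sketch (the hyperparameter values and the toy table below are illustrative, not those used to train this model):

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.7, gamma=0.95):
    """One tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

def epsilon_greedy(Q, s, epsilon=0.05):
    """Explore with probability epsilon, otherwise take the greedy action."""
    if random.random() < epsilon:
        return random.randrange(len(Q[s]))
    return max(range(len(Q[s])), key=lambda a: Q[s][a])

# Tiny 2-state, 2-action table
Q = [[0.0, 0.0], [1.0, 0.0]]
print(q_update(Q, s=0, a=0, r=1.0, s_next=1))  # 0.7 * (1.0 + 0.95 * 1.0) = 1.365
```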
|
BigTimeCoderSean/q-FrozenLake-v1-4x4-noSlippery
|
BigTimeCoderSean
| 2022-07-06T17:57:12Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-06T17:57:05Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 0.74 +/- 0.44
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="BigTimeCoderSean/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
bigscience/tr11-176B-logs
|
bigscience
| 2022-07-06T17:01:14Z | 0 | 250 | null |
[
"tensorboard",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"region:us"
] | null | 2022-03-03T04:38:09Z |
---
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
---
# BigScience Large Language Model Training
Training a multilingual 176-billion-parameter model in the open

[BigScience](https://bigscience.huggingface.co) is an open and collaborative workshop around the study and creation of very large language models, gathering more than 1000 researchers around the world. You can find more information on the main website at https://bigscience.huggingface.co.
The training of BigScience’s main model started on **March 11, 2022 11:42am PST** and will continue for 3-4 months on 384 A100 80GB GPUs of the Jean Zay public supercomputer.
You can follow the training at [https://twitter.com/BigScienceLLM](https://twitter.com/BigScienceLLM) or on [the Tensorboards tab above](https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss).
## More information on the model, dataset, hardware, environmental consideration:
### **The model**
- 176B parameters decoder-only architecture (GPT-like)
- 70 layers - 112 attention heads per layer - hidden dimensionality of 14336 - 2048-token sequence length
- ALiBi positional embeddings - GeLU activation function
- **More information**:
  - Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: [https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours](https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours)
- More details on the architecture/optimizer: [https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml)
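For background on the ALiBi choice above: instead of learned position embeddings, ALiBi adds a fixed, per-head linear penalty to attention scores based on the query-key distance. A sketch of the bias computation for a power-of-two head count (this model's 112 heads require ALiBi's scheme for non-power-of-two counts; the simple case is shown here for illustration only):

```python
def alibi_slopes(n_heads):
    """Geometric sequence of per-head slopes: 2^(-8/n), 2^(-16/n), ... (power-of-two n)."""
    start = 2 ** (-8.0 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

def alibi_bias(slope, seq_len):
    """Lower-triangular bias added to attention scores: -slope * (query_pos - key_pos)."""
    return [[-slope * (i - j) for j in range(i + 1)] for i in range(seq_len)]

slopes = alibi_slopes(8)
print(slopes[0])                 # 0.5
print(alibi_bias(slopes[0], 3))  # [[-0.0], [-0.5, -0.0], [-1.0, -0.5, -0.0]]
```

Because the penalty grows with distance, heads with small slopes attend further back than heads with large slopes, which is what gives ALiBi its length-extrapolation behavior.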
### **The dataset**
- Multilingual: 46 languages; the full list is here: [https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling](https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling)
- 341.6 billion tokens (1.5 TB of text data)
- Tokenizer vocabulary: 250,680 tokens
- More information:
- Blog post detailing the design choices during the dataset creation: [https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling](https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling)
### **The engineering side**
- number of GPUs used for the training: 384 A100 GPUs with 80 GB of memory each
- one copy of the model takes 48 GPUs (using 60 GB of memory on each GPU)
- checkpoint size: the bf16 weights are 329GB, the full checkpoint with optimizer states is 2.3TB
- training throughput: ~150 TFLOPs
- estimated training time: 3-4 months depending on throughput and unexpected events
- **More information**:
- Blog post on the hardware/engineering side: [https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model](https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model)
- Details on the distributed setup used for the training: [https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml)
- Tensorboard updated during the training: [https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss](https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss)
- Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): [https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md](https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md)
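The checkpoint sizes quoted above can be sanity-checked from the parameter count. A rough back-of-the-envelope, assuming 2 bytes per bf16 weight, a common mixed-precision Adam layout of about 14 bytes per parameter for the full checkpoint (bf16 weights plus fp32 master weights and the two Adam moments), and binary units; this is an estimate only, since the exact per-parameter footprint depends on the DeepSpeed/Megatron configuration:

```python
params = 176e9  # ~176B parameters

# bf16 weights: 2 bytes per value
bf16_weights_gib = params * 2 / 2**30

# full checkpoint: bf16 weights + fp32 master weights + Adam first/second moments
full_ckpt_tib = params * (2 + 4 + 4 + 4) / 2**40

print(round(bf16_weights_gib))   # ~328, consistent with the stated 329GB
print(round(full_ckpt_tib, 2))   # ~2.24, consistent with the stated 2.3TB
```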
### **Environmental considerations**
- [Jean Zay](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html), the supercomputer we are using for model training, is mostly powered by nuclear energy, which is a low carbon energy source.
- Significant efforts were made to make sure that the computing infrastructure is as efficient as possible — the heat generated by the hardware even gets used for heating buildings on campus!
- **More information**:
- We are currently working on making a precise estimate of the carbon emitted during all of the steps of model training, including intermediate experiments as well as inference.
- More soon!
|
hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep10
|
hsohn3
| 2022-07-06T15:57:53Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-06T14:22:52Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep10
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2895
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 4.1298 | 0 |
| 3.5157 | 1 |
| 3.4732 | 2 |
| 3.4565 | 3 |
| 3.4444 | 4 |
| 3.4349 | 5 |
| 3.4197 | 6 |
| 3.4109 | 7 |
| 3.3493 | 8 |
| 3.2895 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
saekomdalkom/t5-small-finetuned-xsum
|
saekomdalkom
| 2022-07-06T15:25:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-06T13:04:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.3577
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4783
- Rouge1: 28.3577
- Rouge2: 7.759
- Rougel: 22.274
- Rougelsum: 22.2869
- Gen Len: 18.8298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.7158 | 1.0 | 12753 | 2.4783 | 28.3577 | 7.759 | 22.274 | 22.2869 | 18.8298 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
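The Rouge1 value reported above is a unigram-overlap F-measure between generated and reference summaries. A minimal sketch of the idea (a simplified re-implementation for illustration only; the reported scores come from the `rouge` metric, which also applies stemming and other preprocessing):

```python
from collections import Counter

def rouge1_f1(reference: str, hypothesis: str) -> float:
    """Clipped unigram-overlap F1 between reference and hypothesis."""
    ref_counts = Counter(reference.lower().split())
    hyp_counts = Counter(hypothesis.lower().split())
    overlap = sum((ref_counts & hyp_counts).values())  # clipped counts
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref_counts.values())
    precision = overlap / sum(hyp_counts.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat is on the mat")
print(f"ROUGE-1 F1: {score:.3f}")
```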
|
cacauvicosa/heart1ohr2x9e-target-classification
|
cacauvicosa
| 2022-07-06T15:11:05Z | 0 | 0 |
sklearn
|
[
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2022-07-06T15:11:03Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on heart1ohr2x9e to apply classification on target
**Metrics of the best model:**
```
accuracy             0.885854
average_precision    0.949471
roc_auc              0.050633
recall_macro         0.885324
f1_macro             0.885610
```

(selected model: `LogisticRegression(class_weight='balanced', max_iter=1000)`)
**See model plot below:**
<style>#sk-container-id-8 {color: black;background-color: white;}#sk-container-id-8 pre{padding: 0;}#sk-container-id-8 div.sk-toggleable {background-color: white;}#sk-container-id-8 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-8 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-8 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-8 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-8 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-8 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-8 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-8 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-8 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-8 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-8 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-8 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-8 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-8 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-8 div.sk-label:hover 
label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-8 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-8 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-8 div.sk-item {position: relative;z-index: 1;}#sk-container-id-8 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-8 div.sk-item::before, #sk-container-id-8 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-8 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-8 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-8 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-8 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-8 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-8 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-8 div.sk-label-container {text-align: center;}#sk-container-id-8 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. 
See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-8 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-8" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[('easypreprocessor',EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
age False False False ... False False False
sex False False False ... False False False
cp False False False ... False False False
trestbps True False False ... False False False
chol True False False ... False False False
fbs False False False ... False False False
restecg False Fa...... False False False
thalach True False False ... False False False
exang False False False ... False False False
oldpeak True False False ... False False False
slope False False False ... False False False
ca False False False ... False False False
thal False False False ... False False False[13 rows x 7 columns])),('logisticregression',LogisticRegression(C=1, class_weight='balanced',max_iter=1000))])</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-24" type="checkbox" ><label for="sk-estimator-id-24" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[('easypreprocessor',EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
age False False False ... False False False
sex False False False ... False False False
cp False False False ... False False False
trestbps True False False ... False False False
chol True False False ... False False False
fbs False False False ... False False False
restecg False Fa...... False False False
thalach True False False ... False False False
exang False False False ... False False False
oldpeak True False False ... False False False
slope False False False ... False False False
ca False False False ... False False False
thal False False False ... False False False[13 rows x 7 columns])),('logisticregression',LogisticRegression(C=1, class_weight='balanced',max_iter=1000))])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-25" type="checkbox" ><label for="sk-estimator-id-25" class="sk-toggleable__label sk-toggleable__label-arrow">EasyPreprocessor</label><div class="sk-toggleable__content"><pre>EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
age False False False ... False False False
sex False False False ... False False False
cp False False False ... False False False
trestbps True False False ... False False False
chol True False False ... False False False
fbs False False False ... False False False
restecg False False False ... False False False
thalach True False False ... False False False
exang False False False ... False False False
oldpeak True False False ... False False False
slope False False False ... False False False
ca False False False ... False False False
thal False False False ... False False False[13 rows x 7 columns])</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-26" type="checkbox" ><label for="sk-estimator-id-26" class="sk-toggleable__label sk-toggleable__label-arrow">LogisticRegression</label><div class="sk-toggleable__content"><pre>LogisticRegression(C=1, class_weight='balanced', max_iter=1000)</pre></div></div></div></div></div></div></div>
**Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training** including the models tried in the process can be found in logs.txt
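The winning estimator above is a plain scikit-learn `LogisticRegression(C=1, class_weight='balanced', max_iter=1000)` inside a preprocessing pipeline. A minimal sketch of an equivalent setup (the data here is synthetic, not the heart dataset, and `StandardScaler` stands in for dabl's `EasyPreprocessor`):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 13-feature heart dataset.
X, y = make_classification(n_samples=300, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    StandardScaler(),  # stand-in for dabl's EasyPreprocessor
    LogisticRegression(C=1, class_weight="balanced", max_iter=1000),
)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```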
|
huggingtweets/frnsw-nswrfs-nswses
|
huggingtweets
| 2022-07-06T14:32:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-06T14:32:45Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1150678663265832960/ujqrCyuu_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/895892720194957313/RVLTWlDI_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500778204294180868/3B6rKocs_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NSW RFS & NSW SES & Fire and Rescue NSW</div>
<div style="text-align: center; font-size: 14px;">@frnsw-nswrfs-nswses</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NSW RFS & NSW SES & Fire and Rescue NSW.
| Data | NSW RFS | NSW SES | Fire and Rescue NSW |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3248 | 3249 |
| Retweets | 275 | 2093 | 875 |
| Short tweets | 12 | 12 | 48 |
| Tweets kept | 2963 | 1143 | 2326 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cxt6027/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @frnsw-nswrfs-nswses's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tjbhow2z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tjbhow2z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/frnsw-nswrfs-nswses')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TestZee/t5-small-finetuned-custom-wion-test
|
TestZee
| 2022-07-06T13:28:44Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-06T13:23:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TestZee/t5-small-finetuned-custom-wion-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TestZee/t5-small-finetuned-custom-wion-test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9773
- Validation Loss: 0.8028
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.2933 | 0.9052 | 0 |
| 2.3077 | 0.8923 | 1 |
| 2.1972 | 0.8797 | 2 |
| 2.1740 | 0.8677 | 3 |
| 2.1535 | 0.8564 | 4 |
| 2.1772 | 0.8452 | 5 |
| 2.1227 | 0.8342 | 6 |
| 2.0875 | 0.8234 | 7 |
| 2.0279 | 0.8129 | 8 |
| 1.9773 | 0.8028 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Tokenizers 0.12.1
|
luizapzbn/titanicht_mp88q-Survived-classification
|
luizapzbn
| 2022-07-06T13:25:48Z | 0 | 0 |
sklearn
|
[
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2022-07-06T13:25:46Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on titanicht_mp88q to apply classification on Survived
**Metrics of the best model:**
```
accuracy             0.803597
average_precision    0.801332
roc_auc              0.848079
recall_macro         0.795883
f1_macro             0.793746
```

(selected model: `DecisionTreeClassifier(class_weight='balanced', max_depth=5)`)
**See model plot below:**
<style>#sk-container-id-7 {color: black;background-color: white;}#sk-container-id-7 pre{padding: 0;}#sk-container-id-7 div.sk-toggleable {background-color: white;}#sk-container-id-7 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-7 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-7 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-7 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-7 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-7 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-7 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-7 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-7 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-7 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-7 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-7 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-7 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-7 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-7 div.sk-label:hover 
label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-7 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-7 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-7 div.sk-item {position: relative;z-index: 1;}#sk-container-id-7 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-7 div.sk-item::before, #sk-container-id-7 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-7 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-7 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-7 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-7 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-7 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-7 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-7 div.sk-label-container {text-align: center;}#sk-container-id-7 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. 
See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-7 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-7" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[('easypreprocessor',EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
Pclass False False False ... False False False
Name False False False ... False True False
Sex False False False ... False False False
Age True False False ... False False False
SibSp False False False ... False False False
Parch False False False ... False False False
Ticket False False False ... False True False
Fare True False False ... False False False
Cabin False False False ... False True False
Embarked False False False ... False False False[10 rows x 7 columns])),('decisiontreeclassifier',DecisionTreeClassifier(class_weight='balanced', max_depth=5))])</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-21" type="checkbox" ><label for="sk-estimator-id-21" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[('easypreprocessor',EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
Pclass False False False ... False False False
Name False False False ... False True False
Sex False False False ... False False False
Age True False False ... False False False
SibSp False False False ... False False False
Parch False False False ... False False False
Ticket False False False ... False True False
Fare True False False ... False False False
Cabin False False False ... False True False
Embarked False False False ... False False False[10 rows x 7 columns])),('decisiontreeclassifier',DecisionTreeClassifier(class_weight='balanced', max_depth=5))])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-22" type="checkbox" ><label for="sk-estimator-id-22" class="sk-toggleable__label sk-toggleable__label-arrow">EasyPreprocessor</label><div class="sk-toggleable__content"><pre>EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
Pclass False False False ... False False False
Name False False False ... False True False
Sex False False False ... False False False
Age True False False ... False False False
SibSp False False False ... False False False
Parch False False False ... False False False
Ticket False False False ... False True False
Fare True False False ... False False False
Cabin False False False ... False True False
Embarked False False False ... False False False[10 rows x 7 columns])</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-23" type="checkbox" ><label for="sk-estimator-id-23" class="sk-toggleable__label sk-toggleable__label-arrow">DecisionTreeClassifier</label><div class="sk-toggleable__content"><pre>DecisionTreeClassifier(class_weight='balanced', max_depth=5)</pre></div></div></div></div></div></div></div>
**Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training** including the models tried in the process can be found in logs.txt
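The `class_weight='balanced'` setting on the decisionion tree above reweights each class inversely to its frequency, as `n_samples / (n_classes * count(class))`. A minimal sketch of that formula on illustrative survived/died counts (the counts below are the classic Titanic balance, not taken from this training run):

```python
from collections import Counter

# Illustrative survived/died label counts (not the actual training split).
labels = [0] * 549 + [1] * 342
counts = Counter(labels)
n_samples, n_classes = len(labels), len(counts)

# sklearn's 'balanced' heuristic: n_samples / (n_classes * count(class))
weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}
print(weights)
```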
|
sumitrsch/muril_base_multiconer22_bn
|
sumitrsch
| 2022-07-06T12:33:20Z | 4 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-30T07:24:11Z |
---
license: afl-3.0
---
Put this model path in the variable `best_model_path` in the first cell of the Colab notebook below to test the SemEval MultiCoNER task on the Bangla track.
https://colab.research.google.com/drive/1P9827acdS7i6eZTi4B0cOms5qLREqvUO
|
srg/outhimar_64-Close-regression
|
srg
| 2022-07-06T12:33:04Z | 0 | 4 |
sklearn
|
[
"sklearn",
"tabular-regression",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-regression
| 2022-07-06T12:33:02Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-regression
- baseline-trainer
---
## Baseline Model trained on outhimar_64 to apply regression on Close
**Metrics of the best model:**
```
r2                        0.999858
neg_mean_squared_error   -1.067685
```

(selected model: `Ridge(alpha=10)`)
**See model plot below:**
<style>#sk-container-id-6 {color: black;background-color: white;}#sk-container-id-6 pre{padding: 0;}#sk-container-id-6 div.sk-toggleable {background-color: white;}#sk-container-id-6 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-6 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-6 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-6 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-6 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-6 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-6 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-6 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-6 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-6 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-6 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-6 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-6 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-6 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-6 div.sk-label:hover 
label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-6 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-6 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-6 div.sk-item {position: relative;z-index: 1;}#sk-container-id-6 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-6 div.sk-item::before, #sk-container-id-6 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-6 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-6 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-6 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-6 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-6 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-6 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-6 div.sk-label-container {text-align: center;}#sk-container-id-6 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. 
See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-6 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-6" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[('easypreprocessor',EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
Date False False False ... True False False
Open True False False ... False False False
High True False False ... False False False
Low True False False ... False False False
Adj Close True False False ... False False False
Volume True False False ... False False False[6 rows x 7 columns])),('ridge', Ridge(alpha=10))])</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-18" type="checkbox" ><label for="sk-estimator-id-18" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[('easypreprocessor',EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
Date False False False ... True False False
Open True False False ... False False False
High True False False ... False False False
Low True False False ... False False False
Adj Close True False False ... False False False
Volume True False False ... False False False[6 rows x 7 columns])),('ridge', Ridge(alpha=10))])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-19" type="checkbox" ><label for="sk-estimator-id-19" class="sk-toggleable__label sk-toggleable__label-arrow">EasyPreprocessor</label><div class="sk-toggleable__content"><pre>EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless
Date False False False ... True False False
Open True False False ... False False False
High True False False ... False False False
Low True False False ... False False False
Adj Close True False False ... False False False
Volume True False False ... False False False[6 rows x 7 columns])</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-20" type="checkbox" ><label for="sk-estimator-id-20" class="sk-toggleable__label sk-toggleable__label-arrow">Ridge</label><div class="sk-toggleable__content"><pre>Ridge(alpha=10)</pre></div></div></div></div></div></div></div>
**Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training** including the models tried in the process can be found in logs.txt
|
sumitrsch/Indic-bert_multiconer22_bn
|
sumitrsch
| 2022-07-06T12:32:40Z | 3 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"token-classification",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-06T10:07:47Z |
---
license: afl-3.0
---
Put this model path into the variable `best_model_path` in the first cell of the Colab notebook below to test the SemEval MultiCoNER task for the Bangla track.
https://colab.research.google.com/drive/1P9827acdS7i6eZTi4B0cOms5qLREqvUO
|
sumitrsch/xlm_R_large_multiconer22_bn
|
sumitrsch
| 2022-07-06T12:32:05Z | 3 | 2 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-06T10:33:33Z |
---
license: afl-3.0
---
Put this model path into the variable `best_model_path` in the first cell of the Colab notebook below to test the SemEval MultiCoNER task for the Bangla track.
https://colab.research.google.com/drive/1P9827acdS7i6eZTi4B0cOms5qLREqvUO
|
sumitrsch/muril_base_multiconer22_hi
|
sumitrsch
| 2022-07-06T12:27:42Z | 3 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-28T07:57:21Z |
---
license: afl-3.0
---
Put this model path into the variable `best_model_path` in the first cell of the given Colab notebook to test the SemEval MultiCoNER task. https://colab.research.google.com/drive/17WyqwdoRNnzImeik6wTRE5uuj9QQnkXA#scrollTo=nYtUtmyDFAqP
|
dandelin/vilt-b32-mlm
|
dandelin
| 2022-07-06T12:18:37Z | 66,336 | 11 |
transformers
|
[
"transformers",
"pytorch",
"vilt",
"fill-mask",
"arxiv:2102.03334",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
# Vision-and-Language Transformer (ViLT), pre-trained only
Vision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Note: this model only includes the language modeling head.
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the raw model for masked language modeling given an image and a piece of text with [MASK] tokens.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import ViltProcessor, ViltForMaskedLM
import requests
from PIL import Image
import re
import torch

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a bunch of [MASK] laying on a [MASK]."

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltForMaskedLM.from_pretrained("dandelin/vilt-b32-mlm")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# prepare inputs
encoding = processor(image, text, return_tensors="pt").to(device)
pixel_values = encoding.pixel_values

# forward pass
outputs = model(**encoding)

tl = len(re.findall(r"\[MASK\]", text))
inferred_token = [text]

# gradually fill in the [MASK] tokens, one by one
with torch.no_grad():
    for i in range(tl):
        encoded = processor.tokenizer(inferred_token)
        input_ids = torch.tensor(encoded.input_ids).to(device)
        encoded = encoded["input_ids"][0][1:-1]
        outputs = model(input_ids=input_ids, pixel_values=pixel_values)
        mlm_logits = outputs.logits[0]  # shape (seq_len, vocab_size)
        # only take text features into account (minus the CLS and SEP tokens)
        mlm_logits = mlm_logits[1 : input_ids.shape[1] - 1, :]
        mlm_values, mlm_ids = mlm_logits.softmax(dim=-1).max(dim=-1)
        # only consider positions that are still [MASK] (token id 103)
        mlm_values[torch.tensor(encoded) != 103] = 0
        select = mlm_values.argmax().item()
        encoded[select] = mlm_ids[select].item()
        inferred_token = [processor.decode(encoded)]

encoded = processor.tokenizer(inferred_token)
print(processor.decode(encoded.input_ids[0], skip_special_tokens=True))
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
```
|
Lakshya/q-Taxi-v3
|
Lakshya
| 2022-07-06T12:06:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-06T12:06:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.46 +/- 2.73
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the course notebook
model = load_from_hub(repo_id="Lakshya/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
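The loaded `qtable` is simply a states × actions array produced by tabular Q-learning. The underlying update rule can be sketched as follows; the `alpha` and `gamma` values here are illustrative, not necessarily the ones used to train this model:

```python
import numpy as np

def q_update(qtable, state, action, reward, next_state, alpha=0.7, gamma=0.95):
    """One tabular Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(qtable[next_state])
    qtable[state, action] += alpha * (td_target - qtable[state, action])
    return qtable

q = np.zeros((500, 6))  # Taxi-v3 has 500 discrete states and 6 actions
q = q_update(q, state=0, action=1, reward=-1.0, next_state=2)
```

Repeating this update over many episodes, with an epsilon-greedy exploration policy, yields the Q-table stored in `q-learning.pkl`.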
|
SiddharthaM/beit-base-patch16-224-pt22k-ft22k-rim_one-new
|
SiddharthaM
| 2022-07-06T11:17:32Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-06T10:31:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-pt22k-ft22k-rim_one-new
results:
- task:
type: image-classification
name: Image Classification
dataset:
type: rimonedl
name: RIM ONE DL
split: test
metrics:
- type: f1
value: 0.9197860962566845
name: F1
- task:
type: image-classification
name: Image Classification
dataset:
type: rim one
name: RIMONEDL
split: test
metrics:
- type: precision
value: 0.9247311827956989
name: precision
- type: recall
value: 0.9148936170212766
name: Recall
- type: accuracy
value: 0.8972602739726028
name: Accuracy
- type: roc_auc
value: 0.8901391162029461
name: AUC
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-ft22k-rim_one-new
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4550
- Accuracy: 0.8767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.73 | 2 | 0.2411 | 0.9178 |
| No log | 1.73 | 4 | 0.2182 | 0.8973 |
| No log | 2.73 | 6 | 0.3085 | 0.8973 |
| No log | 3.73 | 8 | 0.2794 | 0.8973 |
| 0.1392 | 4.73 | 10 | 0.2398 | 0.9110 |
| 0.1392 | 5.73 | 12 | 0.2925 | 0.8973 |
| 0.1392 | 6.73 | 14 | 0.2798 | 0.9110 |
| 0.1392 | 7.73 | 16 | 0.2184 | 0.9178 |
| 0.1392 | 8.73 | 18 | 0.3007 | 0.9110 |
| 0.0416 | 9.73 | 20 | 0.3344 | 0.9041 |
| 0.0416 | 10.73 | 22 | 0.3626 | 0.9110 |
| 0.0416 | 11.73 | 24 | 0.4842 | 0.8904 |
| 0.0416 | 12.73 | 26 | 0.3664 | 0.8973 |
| 0.0416 | 13.73 | 28 | 0.3458 | 0.9110 |
| 0.0263 | 14.73 | 30 | 0.2810 | 0.9110 |
| 0.0263 | 15.73 | 32 | 0.4695 | 0.8699 |
| 0.0263 | 16.73 | 34 | 0.3723 | 0.9041 |
| 0.0263 | 17.73 | 36 | 0.3447 | 0.9041 |
| 0.0263 | 18.73 | 38 | 0.3708 | 0.8904 |
| 0.0264 | 19.73 | 40 | 0.4052 | 0.9110 |
| 0.0264 | 20.73 | 42 | 0.4492 | 0.9041 |
| 0.0264 | 21.73 | 44 | 0.4649 | 0.8904 |
| 0.0264 | 22.73 | 46 | 0.4061 | 0.9178 |
| 0.0264 | 23.73 | 48 | 0.4136 | 0.9110 |
| 0.0139 | 24.73 | 50 | 0.4183 | 0.8973 |
| 0.0139 | 25.73 | 52 | 0.4504 | 0.8904 |
| 0.0139 | 26.73 | 54 | 0.4368 | 0.8973 |
| 0.0139 | 27.73 | 56 | 0.4711 | 0.9110 |
| 0.0139 | 28.73 | 58 | 0.3928 | 0.9110 |
| 0.005 | 29.73 | 60 | 0.4550 | 0.8767 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kws/q-Taxi-v3
|
kws
| 2022-07-06T10:24:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-06T10:23:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the course notebook
model = load_from_hub(repo_id="kws/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
sumitrsch/Indic-bert_multiconer22_hi
|
sumitrsch
| 2022-07-06T10:00:34Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"token-classification",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-06T09:43:28Z |
---
license: afl-3.0
---
Put this model path into the variable `best_model_path` in the first cell of the given Colab notebook to test the SemEval MultiCoNER task. https://colab.research.google.com/drive/17WyqwdoRNnzImeik6wTRE5uuj9QQnkXA#scrollTo=nYtUtmyDFAqP
|
dnouri/brats_mri_segmentation
|
dnouri
| 2022-07-06T09:54:53Z | 0 | 1 | null |
[
"monai",
"arxiv:1810.11654",
"region:us"
] | null | 2022-07-06T09:13:12Z |
---
tags:
- monai
---
# Model Overview
A pre-trained model for volumetric (3D) segmentation of brain tumor subregions from multimodal MRIs based on BraTS 2018 data. The whole pipeline is modified from [clara_pt_brain_mri_segmentation](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/med/models/clara_pt_brain_mri_segmentation).
## Workflow
The model is trained to segment 3 nested subregions of primary brain tumors (gliomas): the "enhancing tumor" (ET), the "tumor core" (TC), the "whole tumor" (WT) based on 4 aligned input MRI scans (T1c, T1, T2, FLAIR).
- The ET is described by areas that show hyper intensity in T1c when compared to T1, but also when compared to "healthy" white matter in T1c.
- The TC describes the bulk of the tumor, which is what is typically resected. The TC entails the ET, as well as the necrotic (fluid-filled) and the non-enhancing (solid) parts of the tumor.
- The WT describes the complete extent of the disease, as it entails the TC and the peritumoral edema (ED), which is typically depicted by hyper-intense signal in FLAIR.
## Data
The training data is from the [Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018](https://www.med.upenn.edu/sbia/brats2018/data.html).
- Target: 3 tumor subregions
- Task: Segmentation
- Modality: MRI
- Size: 285 3D volumes (4 channels each)
The provided labelled data was partitioned, based on our own split, into training (200 studies), validation (42 studies) and testing (43 studies) datasets.
Please run `scripts/prepare_datalist.py` to produce the data list. The command looks like:
```
python scripts/prepare_datalist.py --path your-brats18-dataset-path
```
## Training configuration
This model uses an approach similar to the one described in "3D MRI brain tumor segmentation using autoencoder regularization", a winning method in BraTS 2018 [1]. The training was performed with the following:
- GPU: At least 16GB of GPU memory.
- Actual Model Input: 224 x 224 x 144
- AMP: True
- Optimizer: Adam
- Learning Rate: 1e-4
- Loss: DiceLoss
## Input
Input: 4 channel MRI (4 aligned MRIs T1c, T1, T2, FLAIR at 1x1x1 mm)
1. Normalizing to unit std with zero mean
2. Randomly cropping to (224, 224, 144)
3. Randomly spatial flipping
4. Randomly scaling and shifting intensity of the volume
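Step 1 above (normalizing to zero mean and unit std) can be sketched as below. Restricting the statistics to nonzero (brain) voxels is a common choice for skull-stripped MRI and is an assumption of this sketch, not necessarily the bundle's exact transform:

```python
import numpy as np

def normalize_channels(volume: np.ndarray) -> np.ndarray:
    """Normalize a (C, H, W, D) volume to zero mean / unit std per channel,
    computed over nonzero voxels only (an assumption for this sketch)."""
    out = volume.astype(np.float32).copy()
    for c in range(out.shape[0]):
        mask = out[c] != 0
        if mask.any():
            mean, std = out[c][mask].mean(), out[c][mask].std()
            out[c][mask] = (out[c][mask] - mean) / max(float(std), 1e-8)
    return out

vol = np.random.rand(4, 8, 8, 8).astype(np.float32) + 0.5  # 4 MRI channels
norm = normalize_channels(vol)
```

In the actual bundle, steps 1-4 are expressed as MONAI dictionary transforms in `configs/train.json`.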
## Output
Output: 3 channels
- Label 0: TC tumor subregion
- Label 1: WT tumor subregion
- Label 2: ET tumor subregion
## Model Performance
The achieved Dice scores on the validation data are:
- Tumor core (TC): 0.8559
- Whole tumor (WT): 0.9026
- Enhancing tumor (ET): 0.7905
- Average: 0.8518
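The Dice scores above measure the overlap between predicted and reference segmentation masks. A minimal illustration of the metric on binary masks (MONAI's `DiceMetric` additionally handles batches and channels):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) on binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

a = np.zeros((4, 4), dtype=bool); a[:2] = True   # top two rows
b = np.zeros((4, 4), dtype=bool); b[1:3] = True  # middle two rows
# one shared row of 4 pixels -> 2*4 / (8 + 8) = 0.5
print(dice_score(a, b))
```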
## Example commands
Execute training:
```
python -m monai.bundle run training --meta_file configs/metadata.json --config_file configs/train.json --logging_file configs/logging.conf
```
Override the `train` config to execute multi-GPU training:
```
torchrun --standalone --nnodes=1 --nproc_per_node=8 -m monai.bundle run training --meta_file configs/metadata.json --config_file "['configs/train.json','configs/multi_gpu_train.json']" --logging_file configs/logging.conf
```
Override the `train` config to execute evaluation with the trained model:
```
python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file "['configs/train.json','configs/evaluate.json']" --logging_file configs/logging.conf
```
Execute inference:
```
python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
```
# Disclaimer
This is an example, not to be used for diagnostic purposes.
# References
[1] Myronenko, Andriy. "3D MRI brain tumor segmentation using autoencoder regularization." International MICCAI Brainlesion Workshop. Springer, Cham, 2018. https://arxiv.org/abs/1810.11654.
|
vinayak361/token_fine_tunned_flipkart
|
vinayak361
| 2022-07-06T09:32:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-06T07:42:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: token_fine_tunned_flipkart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token_fine_tunned_flipkart
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0992
- Precision: 0.9526
- Recall: 0.9669
- F1: 0.9597
- Accuracy: 0.9730
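For reference, precision, recall, and F1 over token labels can be illustrated as below. Real NER evaluation (e.g. seqeval, which the Trainer typically uses) scores entity spans rather than individual tokens, so this is a simplification:

```python
def prf(gold, pred, positive_labels):
    """Token-level precision/recall/F1 for a set of positive labels (simplified sketch)."""
    tp = sum(g == p and p in positive_labels for g, p in zip(gold, pred))
    fp = sum(g != p and p in positive_labels for g, p in zip(gold, pred))
    fn = sum(g != p and g in positive_labels for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = prf(["B", "O", "B", "I"], ["B", "B", "B", "O"], {"B", "I"})
```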
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 135 | 0.5967 | 0.7227 | 0.7830 | 0.7516 | 0.7932 |
| No log | 2.0 | 270 | 0.3673 | 0.8105 | 0.8623 | 0.8356 | 0.8747 |
| No log | 3.0 | 405 | 0.2679 | 0.8676 | 0.8854 | 0.8764 | 0.9094 |
| 0.6219 | 4.0 | 540 | 0.1972 | 0.8955 | 0.9217 | 0.9084 | 0.9355 |
| 0.6219 | 5.0 | 675 | 0.1500 | 0.9229 | 0.9374 | 0.9301 | 0.9525 |
| 0.6219 | 6.0 | 810 | 0.1240 | 0.9341 | 0.9509 | 0.9424 | 0.9609 |
| 0.6219 | 7.0 | 945 | 0.1041 | 0.9516 | 0.9650 | 0.9582 | 0.9720 |
| 0.2085 | 8.0 | 1080 | 0.0992 | 0.9526 | 0.9669 | 0.9597 | 0.9730 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
laurian/pouet
|
laurian
| 2022-07-06T08:44:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-06T08:42:05Z |
valkiry robot
desert technology
|
messham/ppo-LunarLander-v2_1pt5m
|
messham
| 2022-07-06T08:33:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-06T08:33:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 275.55 +/- 24.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ArneD/xlm-roberta-base-finetuned-panx-de
|
ArneD
| 2022-07-06T07:23:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-06T06:47:26Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
espnet/aishell2_transducer
|
espnet
| 2022-07-06T07:11:48Z | 2 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aishell2",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-07-06T06:55:04Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- aishell2
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/aishell2_transducer`
This model was trained by jctian98 using aishell2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 40c5f6919244c2ec8eac14b9011854dd02511a04
pip install -e .
cd egs2/aishell2/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/aishell2_transducer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Jul 5 22:02:55 CST 2022`
- python version: `3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]`
- espnet version: `espnet 202205`
- pytorch version: `pytorch 1.7.1`
- Git hash: `40c5f6919244c2ec8eac14b9011854dd02511a04`
- Commit date: `Fri Jun 17 11:07:26 2022 +0800`
## asr_train_conformer-rnn_transducer_raw_zh_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.cer_transducer.ave/test_android|5000|5002|63.2|36.8|0.0|0.0|36.8|36.8|
|decode_asr_model_valid.cer_transducer.ave/test_ios|5000|5002|66.2|33.7|0.0|0.0|33.8|33.8|
|decode_asr_model_valid.cer_transducer.ave/test_mic|5000|5002|63.9|36.1|0.0|0.0|36.1|36.1|
|decode_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.cer_transducer.ave/test_android|5000|5002|64.4|35.5|0.0|0.0|35.6|35.6|
|decode_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.cer_transducer.ave/test_ios|5000|5002|67.4|32.5|0.0|0.0|32.6|32.6|
|decode_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.cer_transducer.ave/test_mic|5000|5002|65.3|34.6|0.0|0.0|34.7|34.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.cer_transducer.ave/test_android|5000|49534|94.0|5.7|0.2|0.1|6.1|36.8|
|decode_asr_model_valid.cer_transducer.ave/test_ios|5000|49534|94.8|5.0|0.2|0.1|5.4|33.8|
|decode_asr_model_valid.cer_transducer.ave/test_mic|5000|49534|94.1|5.7|0.2|0.1|6.0|36.1|
|decode_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.cer_transducer.ave/test_android|5000|49534|94.2|5.5|0.3|0.1|5.9|35.6|
|decode_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.cer_transducer.ave/test_ios|5000|49534|94.9|4.9|0.2|0.1|5.2|32.6|
|decode_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.cer_transducer.ave/test_mic|5000|49534|94.3|5.4|0.2|0.1|5.8|34.7|
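The CER figures above are character-level edit distances divided by the reference length. A minimal sketch of how such a rate can be computed (illustrative, not ESPnet's scoring script, which also tracks substitutions/deletions/insertions separately):

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (0 if equal)
        prev = cur
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate = edit distance / reference length."""
    return edit_distance(ref, hyp) / max(len(ref), 1)
```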
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/transducer/train_conformer-rnn_transducer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_conformer-rnn_transducer_raw_zh_char_sp
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 51051
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_transducer
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 20000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char_sp/train/speech_shape
- exp/asr_stats_raw_zh_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char_sp/valid/speech_shape
- exp/asr_stats_raw_zh_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_noeng_sp/wav.scp
- speech
- sound
- - dump/raw/train_noeng_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_ios/wav.scp
- speech
- sound
- - dump/raw/dev_ios/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- 的
- 一
- 十
- 二
- 三
- 有
- 我
- 在
- 度
- 五
- 是
- 四
- 人
- 六
- 七
- 八
- 九
- 中
- 百
- 不
- 了
- 零
- 大
- 到
- 为
- 开
- 上
- 国
- 调
- 市
- 点
- 业
- 歌
- 么
- 来
- 个
- 这
- 年
- 要
- 公
- 什
- 会
- 出
- 地
- 发
- 行
- 能
- 温
- 电
- 空
- 万
- 千
- 成
- 和
- 分
- 时
- 下
- 你
- 场
- 新
- 家
- 打
- 产
- 机
- 对
- 以
- 房
- 生
- 把
- 小
- 首
- 放
- 之
- 现
- 日
- 动
- 高
- 子
- 后
- 多
- 们
- 者
- 方
- 前
- 也
- 他
- 视
- 资
- 将
- 关
- 金
- 天
- 于
- 进
- 过
- 经
- 听
- 月
- 可
- 用
- 自
- 最
- 司
- 幺
- 车
- 比
- 体
- 手
- 目
- 化
- 道
- 作
- 部
- 被
- 给
- 报
- 加
- 就
- 第
- 全
- 乐
- 定
- 得
- 还
- 事
- 城
- 本
- 想
- 女
- 赛
- 面
- 工
- 设
- 都
- 音
- 力
- 品
- 理
- 保
- 记
- 心
- 好
- 而
- 企
- 法
- 实
- 帮
- 价
- 长
- 看
- 合
- 已
- 海
- 但
- 与
- 名
- 北
- 同
- 入
- 元
- 商
- 通
- 量
- 区
- 学
- 情
- 京
- 网
- 所
- 务
- 主
- 说
- 两
- 政
- 播
- 利
- 重
- 制
- 员
- 平
- 其
- 交
- 内
- 风
- 提
- 器
- 间
- 没
- 请
- 去
- 相
- 台
- 美
- 期
- 增
- 明
- 信
- 式
- 次
- 爱
- 曲
- 建
- 安
- 当
- 管
- 表
- 东
- 店
- 里
- 起
- 并
- 从
- 果
- 回
- 民
- 影
- 展
- 据
- 着
- 示
- 更
- 等
- 应
- 很
- 无
- 门
- 外
- 数
- 运
- 因
- 投
- 正
- 今
- 收
- 路
- 些
- 需
- 儿
- 性
- 南
- 计
- 色
- 如
- 然
- 世
- 亿
- 物
- 光
- 项
- 特
- 联
- 智
- 持
- 随
- 向
- 搜
- 老
- 西
- 位
- 院
- 模
- 规
- 身
- 气
- 消
- 达
- 意
- 切
- 男
- 队
- 斯
- 米
- 低
- 格
- 水
- 张
- 此
- 布
- 灯
- 华
- 那
- 住
- 步
- 集
- 受
- 基
- 换
- 整
- 险
- 科
- 续
- 让
- 线
- 广
- 股
- 求
- 转
- 强
- 演
- 件
- 息
- 费
- 变
- 做
- 样
- 该
- 未
- 近
- 她
- 系
- 至
- 代
- 技
- 查
- 证
- 少
- 接
- 山
- 统
- 楼
- 节
- 标
- 只
- 战
- 及
- 文
- 总
- 王
- 局
- 己
- 再
- 问
- 监
- 处
- 传
- 服
- 州
- 显
- 销
- 快
- 由
- 频
- 改
- 便
- 卫
- 题
- 购
- 林
- 告
- 创
- 限
- 售
- 讯
- 常
- 界
- 营
- 原
- 单
- 超
- 认
- 种
- 流
- 亮
- 净
- 排
- 案
- 知
- 推
- 降
- 环
- 获
- 程
- 走
- 友
- 源
- 立
- 马
- 客
- 称
- 速
- 剧
- 周
- 决
- 尔
- 别
- 跑
- 取
- 完
- 片
- 警
- 头
- 球
- 选
- 士
- 级
- 拉
- 解
- 策
- 结
- 术
- 约
- 银
- 江
- 星
- 活
- 口
- 直
- 备
- 支
- 供
- 户
- 医
- 存
- 花
- 易
- 各
- 造
- 置
- 准
- 任
- 非
- 红
- 游
- 专
- 较
- 款
- 预
- 积
- 站
- 园
- 升
- 先
- 牌
- 社
- 办
- 每
- 李
- 村
- 型
- 使
- 难
- 势
- 真
- 带
- 指
- 停
- 构
- 导
- 深
- 唱
- 参
- 清
- 见
- 龙
- 研
- 团
- 照
- 确
- 阳
- 响
- 太
- 亚
- 克
- 闭
- 火
- 央
- 微
- 感
- 组
- 减
- 或
- 委
- 领
- 军
- 率
- 伤
- 始
- 类
- 书
- 融
- 具
- 济
- 土
- 施
- 望
- 教
- 奥
- 吗
- 际
- 育
- 权
- 涨
- 德
- 几
- 控
- 师
- 热
- 死
- 共
- 则
- 话
- 汽
- 许
- 份
- 府
- 居
- 态
- 连
- 黄
- 白
- 烦
- 引
- 英
- 声
- 狐
- 何
- 划
- 除
- 媒
- 季
- 继
- 孩
- 眼
- 财
- 岁
- 买
- 越
- 健
- 责
- 卡
- 助
- 索
- 宝
- 负
- 镇
- 争
- 松
- 况
- 半
- 条
- 税
- 注
- 校
- 终
- 仅
- 刘
- 某
- 号
- 福
- 才
- 额
- 博
- 包
- 优
- 众
- 质
- 究
- 反
- 农
- 苹
- 晚
- 紧
- 县
- 景
- 诉
- 酒
- 落
- 离
- 观
- 青
- 致
- 装
- 又
- 仍
- 套
- 亲
- 复
- 河
- 依
- 飞
- 故
- 极
- 娱
- 普
- 失
- 范
- 效
- 互
- 启
- 神
- 左
- 湖
- 击
- 值
- 绩
- 陈
- 语
- 段
- 兴
- 容
- 采
- 充
- 右
- 曾
- 往
- 票
- 均
- 举
- 域
- 形
- 维
- 找
- 像
- 纪
- 属
- 图
- 断
- 贷
- 省
- 康
- 试
- 杨
- 港
- 喜
- 街
- 益
- 拿
- 幅
- 功
- 苏
- 药
- 杰
- 足
- 考
- 疑
- 觉
- 配
- 香
- 宅
- 厂
- 根
- 议
- 境
- 双
- 宁
- 练
- 露
- 罗
- 吧
- 货
- 远
- 却
- 边
- 冠
- 钱
- 板
- 云
- 乡
- 审
- 算
- 丽
- 护
- 且
- 严
- 卖
- 奇
- 论
- 底
- 破
- 满
- 券
- 竞
- 拍
- 职
- 救
- 食
- 希
- 善
- 核
- 锦
- 检
- 突
- 哪
- 夜
- 言
- 麻
- 官
- 候
- 跟
- 够
- 它
- 妈
- 精
- 料
- 治
- 付
- 状
- 巴
- 止
- 早
- 稳
- 即
- 戏
- 象
- 录
- 群
- 必
- 婚
- 黑
- 田
- 验
- 养
- 库
- 欢
- 赶
- 送
- 协
- 绿
- 涉
- 例
- 昨
- 轻
- 室
- 武
- 盘
- 历
- 病
- 刚
- 春
- 留
- 尼
- 按
- 批
- 跳
- 志
- 怎
- 移
- 退
- 闻
- 摄
- 古
- 租
- 威
- 字
- 秒
- 石
- 夫
- 占
- 爸
- 压
- 登
- 思
- 虽
- 厅
- 雨
- 软
- 汉
- 摇
- 替
- 否
- 围
- 朋
- 雪
- 革
- 波
- 余
- 列
- 胜
- 债
- 临
- 遇
- 层
- 测
- 激
- 障
- 修
- 罪
- 假
- 忙
- 防
- 滚
- 介
- 判
- 承
- 钟
- 遭
- 执
- 角
- 征
- 铁
- 担
- 童
- 版
- 油
- 爆
- 补
- 史
- 杀
- 冲
- 待
- 吃
- 湾
- 训
- 母
- 律
- 亡
- 命
- 森
- 富
- 佳
- 略
- 蓝
- 义
- 庆
- 评
- 润
- 阿
- 镜
- 午
- 端
- 托
- 适
- 密
- 庭
- 浪
- 馆
- 差
- 尽
- 干
- 初
- 独
- 丝
- 洲
- 兰
- 旅
- 座
- 愿
- 艺
- 宣
- 短
- 块
- 彩
- 违
- 害
- 餐
- 追
- 辆
- 舞
- 良
- 菜
- 父
- 伟
- 择
- 嫌
- 念
- 识
- 似
- 副
- 访
- 圳
- 逐
- 令
- 奖
- 档
- 透
- 紫
- 味
- 孙
- 谈
- 籍
- 滑
- 犯
- 顺
- 络
- 穿
- 韩
- 巨
- 冷
- 乎
- 申
- 甚
- 惠
- 派
- 幸
- 永
- 暗
- 吉
- 素
- 梦
- 迪
- 瑞
- 绝
- 纷
- 笑
- 桥
- 血
- 刑
- 谢
- 材
- 另
- 夏
- 写
- 弟
- 纳
- 席
- 硬
- 画
- 夺
- 免
- 轮
- 幕
- 倒
- 毒
- 欧
- 脑
- 航
- 屋
- 跌
- 疗
- 玩
- 杯
- 哥
- 吸
- 戴
- 伦
- 届
- 睡
- 扫
- 错
- 习
- 背
- 吴
- 川
- 述
- 聚
- 促
- 尚
- 抢
- 恋
- 豪
- 班
- 析
- 径
- 读
- 伙
- 静
- 抓
- 漫
- 细
- 扩
- 妻
- 括
- 饭
- 衣
- 借
- 董
- 迷
- 庄
- 探
- 冰
- 插
- 阶
- 呢
- 损
- 粉
- 骗
- 休
- 秀
- 织
- 峰
- 谁
- 肯
- 鲁
- 谷
- 陆
- 娘
- 岛
- 励
- 迎
- 础
- 察
- 晨
- 朝
- 丰
- 诗
- 驾
- 异
- 招
- 印
- 草
- 惊
- 坚
- 沙
- 摆
- 久
- 私
- 措
- 劳
- 宗
- 池
- 洋
- 泳
- 须
- 圈
- 泰
- 肉
- 针
- 币
- 享
- 拳
- 窗
- 津
- 乘
- 梅
- 弱
- 罚
- 困
- 链
- 虑
- 延
- 顶
- 拥
- 玉
- 缺
- 姐
- 危
- 氏
- 柯
- 急
- 汤
- 丹
- 慧
- 操
- 廉
- 竟
- 趋
- 贴
- 裁
- 嘉
- 怀
- 旧
- 赔
- 盛
- 签
- 灵
- 鼓
- 典
- 释
- 掉
- 忘
- 暂
- 贵
- 叫
- 郑
- 归
- 挥
- 晓
- 徐
- 牛
- 雅
- 抱
- 靠
- 妹
- 载
- 偿
- 慢
- 卢
- 悉
- 综
- 简
- 植
- 筑
- 暴
- 尤
- 兄
- 礼
- 鱼
- 伊
- 序
- 厦
- 伴
- 木
- 野
- 脸
- 烈
- 潮
- 顾
- 雄
- 杭
- 藏
- 族
- 魔
- 撞
- 汇
- 娜
- 冬
- 枪
- 邓
- 患
- 截
- 累
- 暖
- 堂
- 浦
- 秘
- 珠
- 渐
- 丁
- 笔
- 唐
- 培
- 距
- 烟
- 返
- 束
- 晒
- 若
- 坐
- 刺
- 熟
- 婆
- 驶
- 翰
- 贸
- 诺
- 麦
- 讨
- 缘
- 挑
- 督
- 绍
- 码
- 勇
- 攻
- 浙
- 虹
- 讲
- 贝
- 迅
- 寻
- 洗
- 曝
- 斗
- 尘
- 蒙
- 莱
- 昆
- 毛
- 订
- 雷
- 兵
- 估
- 词
- 恩
- 荣
- 刻
- 泽
- 误
- 刀
- 树
- 胡
- 朱
- 输
- 避
- 呼
- 架
- 附
- 吹
- 遗
- 宇
- 侠
- 键
- 宏
- 哈
- 皮
- 筹
- 渠
- 叶
- 姑
- 盖
- 逃
- 阅
- 梁
- 泪
- 予
- 狂
- 羊
- 摩
- 徽
- 赵
- 倍
- 莉
- 凌
- 披
- 郭
- 偷
- 缓
- 齐
- 宽
- 拟
- 储
- 赞
- 凤
- 爵
- 编
- 涛
- 污
- 抗
- 秋
- 败
- 折
- 肥
- 帘
- 鲜
- 鸟
- 郎
- 凯
- 询
- 映
- 菲
- 守
- 旋
- 脱
- 旗
- 阵
- 遍
- 禁
- 脚
- 屏
- 染
- 概
- 曼
- 奶
- 棋
- 昌
- 苦
- 琪
- 梯
- 般
- 虚
- 混
- 募
- 恶
- 拘
- 妇
- 锁
- 烧
- 钢
- 毕
- 顿
- 页
- 虎
- 玲
- 召
- 辉
- 洛
- 痛
- 符
- 隐
- 鸡
- 弹
- 炸
- 震
- 弃
- 迹
- 账
- 隆
- 趣
- 坏
- 眠
- 挂
- 蛋
- 龄
- 鬼
- 厨
- 焦
- 牙
- 恐
- 章
- 杂
- 扬
- 跨
- 汪
- 封
- 幼
- 蔡
- 授
- 盗
- 俄
- 拆
- 芯
- 敢
- 狗
- 宾
- 末
- 船
- 烤
- 翻
- 辑
- 途
- 冒
- 锅
- 宫
- 答
- 扣
- 盈
- 莫
- 祝
- 丈
- 诈
- 帅
- 缩
- 泉
- 巢
- 怕
- 宜
- 沈
- 盟
- 恒
- 床
- 努
- 散
- 锋
- 弗
- 振
- 拒
- 逆
- 塞
- 诚
- 喝
- 洁
- 触
- 捕
- 炒
- 侵
- 君
- 既
- 泡
- 颜
- 娃
- 懂
- 骨
- 猫
- 仪
- 伍
- 沟
- 跃
- 献
- 援
- 祖
- 乱
- 尸
- 胎
- 奏
- 剑
- 骑
- 寿
- 呈
- 酷
- 溪
- 潜
- 陷
- 艾
- 坛
- 孕
- 舒
- 抽
- 徒
- 劲
- 纯
- 掌
- 佛
- 搞
- 亏
- 奔
- 翔
- 冻
- 圣
- 扶
- 添
- 熊
- 邮
- 醒
- 莲
- 琴
- 唯
- 陪
- 甜
- 谱
- 赢
- 衡
- 含
- 偏
- 撑
- 尾
- 册
- 榜
- 萨
- 怪
- 课
- 疯
- 咖
- 茶
- 燃
- 踪
- 诊
- 射
- 燕
- 党
- 固
- 纸
- 坦
- 卓
- 灾
- 阻
- 洪
- 腾
- 纠
- 递
- 猪
- 塔
- 晶
- 著
- 恢
- 蜜
- 楚
- 啦
- 姆
- 捐
- 饰
- 鉴
- 祥
- 卷
- 乌
- 幻
- 敏
- 疾
- 缴
- 琳
- 豆
- 皇
- 箱
- 湿
- 凡
- 麟
- 句
- 玛
- 拼
- 抵
- 沫
- 甲
- 覆
- 搭
- 爷
- 谣
- 饮
- 薪
- 芝
- 欲
- 忆
- 谓
- 啡
- 搏
- 哭
- 握
- 婷
- 隔
- 铜
- 刷
- 袭
- 矿
- 腿
- 岗
- 厕
- 滨
- 哲
- 岸
- 亦
- 漂
- 偶
- 鞋
- 鸭
- 宋
- 馨
- 朗
- 揭
- 枚
- 惯
- 陶
- 械
- 赚
- 耳
- 扰
- 乔
- 泥
- 棒
- 井
- 忧
- 杜
- 剩
- 旬
- 醉
- 拓
- 迁
- 颖
- 澳
- 瓦
- 扮
- 兆
- 闪
- 奋
- 闹
- 聊
- 鑫
- 辛
- 坡
- 淡
- 吻
- 诸
- 伯
- 欣
- 晋
- 仙
- 芳
- 旦
- 沉
- 症
- 扎
- 署
- 残
- 狼
- 洞
- 毫
- 辅
- 迫
- 闲
- 尝
- 谋
- 舍
- 鸿
- 桩
- 纽
- 灰
- 伏
- 赫
- 耗
- 液
- 啊
- 碍
- 慎
- 帝
- 赌
- 横
- 涵
- 姓
- 滴
- 凉
- 圆
- 迟
- 毁
- 牵
- 捷
- 俱
- 侧
- 厚
- 剂
- 橙
- 杆
- 柳
- 绑
- 妙
- 霍
- 凰
- 卧
- 甘
- 羽
- 侦
- 莞
- 彭
- 淘
- 旁
- 宿
- 繁
- 仁
- 窃
- 炼
- 煮
- 魂
- 砸
- 俊
- 墙
- 乏
- 勒
- 荷
- 煤
- 兼
- 呀
- 劫
- 悲
- 寺
- 霸
- 恰
- 旺
- 仓
- 拜
- 脏
- 茜
- 泛
- 吕
- 婴
- 凶
- 扇
- 邀
- 湘
- 仑
- 沃
- 欠
- 滩
- 寓
- 坠
- 拖
- 萌
- 桌
- 塑
- 炫
- 艳
- 忍
- 贤
- 赖
- 肖
- 锡
- 殊
- 猛
- 誉
- 殴
- 潘
- 漏
- 敌
- 废
- 柏
- 塘
- 逼
- 糖
- 浩
- 摘
- 敬
- 轩
- 桃
- 妍
- 黎
- 坊
- 允
- 畅
- 垃
- 圾
- 萧
- 玮
- 敦
- 轨
- 挺
- 辽
- 绪
- 浮
- 姜
- 铺
- 悬
- 柔
- 乒
- 倾
- 碎
- 槛
- 咨
- 凭
- 兹
- 稿
- 绕
- 斤
- 邦
- 庞
- 瓶
- 彻
- 屈
- 拨
- 堡
- 丢
- 鼠
- 粮
- 炳
- 浓
- 您
- 秦
- 怒
- 仔
- 栏
- 尊
- 沿
- 谭
- 姿
- 巧
- 阴
- 蒋
- 嫁
- 鹏
- 撤
- 迈
- 荐
- 碰
- 壁
- 喊
- 押
- 肃
- 墨
- 冯
- 曹
- 祸
- 辞
- 莎
- 循
- 轿
- 桂
- 贡
- 赴
- 忠
- 俩
- 薄
- 孤
- 挖
- 忽
- 贩
- 朵
- 匹
- 溢
- 默
- 嘴
- 狱
- 抛
- 篮
- 涯
- 歉
- 竹
- 渡
- 斌
- 墅
- 弄
- 泄
- 睛
- 珍
- 苑
- 堵
- 仕
- 苗
- 腐
- 裂
- 疆
- 茂
- 牧
- 虫
- 璃
- 垄
- 贾
- 稀
- 览
- 辣
- 霞
- 颁
- 僵
- 搬
- 番
- 佩
- 聘
- 姻
- 赏
- 妮
- 逊
- 串
- 玻
- 砍
- 岳
- 遥
- 堪
- 邻
- 飘
- 奸
- 赁
- 酬
- 纵
- 诞
- 灭
- 旭
- 碳
- 慈
- 拦
- 匆
- 仿
- 闯
- 猜
- 蒂
- 蓄
- 摸
- 驱
- 瑟
- 悦
- 讼
- 蕾
- 胶
- 悄
- 惜
- 淀
- 恨
- 宴
- 寂
- 刊
- 栋
- 尖
- 怡
- 氛
- 贿
- 岭
- 糕
- 碑
- 炉
- 埃
- 吓
- 辈
- 役
- 肇
- 劝
- 摔
- 饼
- 惨
- 吐
- 拔
- 携
- 卸
- 瑰
- 寸
- 朴
- 吨
- 磨
- 驻
- 孔
- 玫
- 鼎
- 伪
- 惹
- 韦
- 郁
- 肌
- 霆
- 烂
- 伸
- 蝶
- 戒
- 渔
- 艰
- 咬
- 崇
- 颗
- 贯
- 塌
- 勤
- 篇
- 攀
- 诱
- 娇
- 契
- 袁
- 陵
- 割
- 厉
- 酸
- 驰
- 甄
- 腰
- 裤
- 胖
- 瘦
- 巡
- 敲
- 瓜
- 魏
- 芬
- 莹
- 磊
- 踏
- 贺
- 浴
- 薇
- 剪
- 摊
- 催
- 奕
- 壮
- 郊
- 拐
- 咏
- 胞
- 匪
- 氧
- 沪
- 盾
- 姚
- 阔
- 寄
- 盐
- 肩
- 熙
- 阎
- 澎
- 夕
- 菌
- 伐
- 劵
- 聪
- 仰
- 兽
- 裸
- 陕
- 癌
- 叔
- 堆
- 雯
- 汰
- 傅
- 窄
- 佐
- 潭
- 涌
- 吊
- 坤
- 骂
- 臣
- 窝
- 袋
- 樊
- 寞
- 乳
- 愈
- 抑
- 岩
- 挤
- 傻
- 腹
- 吵
- 逸
- 奈
- 谎
- 颇
- 详
- 欺
- 捞
- 锻
- 丑
- 澄
- 虐
- 谨
- 孟
- 鹿
- 填
- 戈
- 靓
- 蓉
- 爬
- 疼
- 耀
- 寨
- 翅
- 爽
- 寒
- 耐
- 猎
- 悔
- 扭
- 芒
- 怖
- 俗
- 趁
- 矛
- 廷
- 址
- 宠
- 棉
- 描
- 淇
- 膜
- 煌
- 喷
- 尺
- 帕
- 桑
- 媛
- 碧
- 胸
- 瞬
- 铃
- 柜
- 蔬
- 毅
- 庙
- 颠
- 憾
- 贫
- 壳
- 冕
- 佑
- 葛
- 辩
- 噪
- 夹
- 侣
- 蜂
- 犹
- 抚
- 纹
- 惑
- 脉
- 虾
- 抄
- 钻
- 梨
- 嘛
- 删
- 蹈
- 胁
- 瓷
- 肤
- 魅
- 赠
- 琦
- 弯
- 兔
- 暑
- 蛇
- 稍
- 卜
- 荡
- 惩
- 涂
- 楠
- 恭
- 萍
- 邱
- 秩
- 臂
- 帽
- 犬
- 辰
- 挪
- 葡
- 乓
- 杉
- 劣
- 柱
- 履
- 貌
- 陌
- 疲
- 屡
- 萄
- 疫
- 屠
- 淫
- 乃
- 妆
- 躺
- 茹
- 芸
- 盲
- 舰
- 巩
- 傲
- 汗
- 贼
- 鸣
- 擦
- 彼
- 鼻
- 炮
- 肚
- 倩
- 雾
- 雇
- 扑
- 柴
- 疏
- 佣
- 框
- 啥
- 践
- 淮
- 墓
- 玄
- 湃
- 侯
- 裕
- 棚
- 殖
- 耶
- 馈
- 挡
- 晴
- 珊
- 饺
- 掘
- 辖
- 扔
- 眉
- 膀
- 鹤
- 沧
- 杠
- 屯
- 捧
- 翠
- 擅
- 雕
- 锂
- 晰
- 遵
- 碗
- 痕
- 怨
- 笼
- 舆
- 媳
- 尿
- 冀
- 牢
- 厘
- 痴
- 巫
- 颈
- 埋
- 逾
- 翼
- 锐
- 桠
- 衔
- 纱
- 纺
- 饱
- 棍
- 荒
- 逮
- 贪
- 妥
- 昂
- 谊
- 槽
- 孝
- 坪
- 粗
- 掀
- 呆
- 崔
- 撒
- 崛
- 糟
- 皆
- 滞
- 躲
- 绮
- 硕
- 刮
- 滋
- 阁
- 兑
- 踢
- 帖
- 峡
- 浅
- 靖
- 溺
- 尬
- 弥
- 幽
- 狠
- 咱
- 丛
- 绵
- 勿
- 炎
- 尴
- 抬
- 叹
- 铅
- 勾
- 胆
- 削
- 掩
- 蟹
- 捉
- 箭
- 筷
- 粤
- 纤
- 逢
- 菱
- 奎
- 肠
- 吁
- 淳
- 颂
- 俏
- 御
- 愤
- 谐
- 闫
- 赋
- 垒
- 闸
- 淑
- 娟
- 盒
- 蓬
- 轰
- 厌
- 赤
- 豫
- 垫
- 逝
- 泼
- 妃
- 昏
- 谅
- 纬
- 挣
- 亨
- 穷
- 糊
- 衰
- 狮
- 萝
- 逻
- 铭
- 晕
- 旨
- 倡
- 衷
- 缝
- 漠
- 坑
- 揽
- 抒
- 稽
- 巷
- 亭
- 哦
- 喆
- 廊
- 鹰
- 樱
- 勃
- 坝
- 仇
- 茨
- 贞
- 耕
- 飙
- 韶
- 脂
- 肢
- 梳
- 乞
- 椅
- 肿
- 壤
- 臭
- 喂
- <space>
- 斜
- 渝
- 跪
- 灌
- 巍
- 悠
- 慰
- 枝
- 奉
- 译
- 浏
- 驳
- 谍
- 睿
- 砖
- 酿
- 驹
- 捡
- 蔚
- 渤
- 娅
- 垂
- 轴
- 腕
- 舟
- 夸
- 吞
- 鲸
- 弦
- 厢
- 斥
- 渴
- 趴
- 钓
- 霖
- 帆
- 芭
- 吟
- 彦
- 辱
- 愁
- 耍
- 恼
- 瑜
- 笨
- 侨
- 逗
- 缠
- 戚
- 桶
- 乖
- 胀
- 慕
- 硅
- 丧
- 钰
- 灿
- 缉
- 冤
- 罐
- 斩
- 叠
- 斋
- 裙
- 坞
- 蜀
- 囚
- 稻
- 叛
- 叉
- 藤
- 绘
- 膝
- 烫
- 擂
- 坎
- 悟
- 钧
- 燥
- 撕
- 扳
- 龚
- 辟
- 绳
- 艇
- 钥
- 苍
- 豹
- 逛
- 裹
- 匙
- 纲
- 蝴
- 誓
- 薛
- 蛮
- 禹
- 胃
- 邵
- 丘
- 阙
- 盆
- 砂
- 骏
- 瞒
- 桐
- 芦
- 磁
- 谜
- 凸
- 猴
- 婉
- 筋
- 荆
- 漆
- 昕
- 罕
- 驼
- 亩
- 谦
- 呦
- 甩
- 峻
- 巅
- 钉
- 猥
- 肺
- 榆
- 牲
- 萎
- 蛛
- 钩
- 袖
- 骄
- 佰
- 拾
- 鹅
- 祁
- 遂
- 歧
- 掏
- 祈
- 孵
- 洒
- 雳
- 螺
- 弊
- 韵
- 踩
- 沸
- 炭
- 桦
- 闺
- 扯
- 瑶
- 霹
- 盼
- 罩
- 穆
- 斑
- 杏
- 芙
- 骚
- 葬
- 侃
- 橘
- 咒
- 菇
- 盯
- 慌
- 妨
- 宰
- 喀
- 翁
- 勘
- 滥
- 瞩
- 咪
- 卦
- 伞
- 烯
- 衍
- 崩
- 昔
- 邢
- 爹
- 晖
- 佬
- 牺
- 拯
- 娴
- 妖
- 仲
- 邑
- 馥
- 饿
- 棠
- 渣
- 宪
- 贬
- 瘾
- 鲍
- 芜
- 奠
- 瞄
- 奢
- 渗
- 郝
- 函
- 楷
- 潇
- 淋
- 澡
- 榄
- 辨
- 巾
- 溜
- 芮
- 浆
- 瘫
- 咸
- 啤
- 蜡
- 亵
- 菊
- 羞
- 茫
- 姨
- 矶
- 捅
- 凝
- 卉
- 叙
- 氮
- 蜘
- 舱
- 弘
- 醛
- 堰
- 嗯
- 挫
- 挽
- 雁
- 酝
- 鞭
- 惧
- 肝
- 粹
- 蚁
- 竖
- 卵
- 灶
- 剥
- 陀
- 彰
- 蔓
- 廖
- 鄂
- 讽
- 遣
- 僧
- 嵩
- 俯
- 葩
- 蛙
- 睐
- 碾
- 蘑
- 饲
- 甸
- 脆
- 莓
- 遏
- 詹
- 蒸
- 刹
- 磅
- 囊
- 芽
- 锈
- 粒
- 竣
- 璇
- 辐
- 瑄
- 酱
- 顽
- 瘤
- 喻
- 伽
- 铝
- 琛
- 殿
- 腊
- 粥
- 兜
- 汝
- 宵
- 撼
- 洽
- 娶
- 斐
- 饶
- 澜
- 缆
- 嫂
- 橄
- 禾
- 撰
- 挨
- 枯
- 蒲
- 倪
- 骤
- 龟
- 冈
- 姗
- 哀
- 簿
- 遮
- 羡
- 坍
- 汁
- 煎
- 脖
- 赣
- 愉
- 唤
- 泊
- 匿
- 邹
- 舌
- 雀
- 畜
- 邪
- 狄
- 尹
- 烹
- 夷
- 腔
- 闷
- 闽
- 彪
- 宙
- 鸽
- 竭
- 睹
- 眨
- 阜
- 趟
- 禅
- 埔
- 熬
- 铸
- 幂
- 畴
- 泷
- 咚
- 湛
- 肾
- 嘲
- 翘
- 抹
- 卿
- 崎
- 溃
- 琼
- 梓
- 隋
- 饪
- 隧
- 霾
- 艘
- 帷
- 嫖
- 钞
- 鞍
- 淄
- 涩
- 炜
- 凑
- 彤
- 擎
- 琐
- 衫
- 浸
- 濠
- 绎
- 潼
- 镖
- 哎
- 枫
- 慨
- 浇
- 狸
- 辜
- 滤
- 屁
- 棵
- 禄
- 齿
- 魄
- 窑
- 帐
- 丸
- 肘
- 裴
- 栈
- 讶
- 昊
- 荧
- 哄
- 乙
- 蕉
- 株
- 愧
- 沂
- 岚
- 叮
- 徨
- 冶
- 葱
- 泸
- 谌
- 汕
- 蜗
- 姣
- 彷
- 祭
- 坟
- 奴
- 牡
- 姬
- 傍
- 茅
- 懒
- 侄
- 兮
- 罢
- 碟
- 绣
- 忌
- 仗
- 钦
- 祷
- 歪
- 歇
- 锣
- 哑
- 猝
- 庾
- 掐
- 崖
- 曙
- 狙
- 黛
- 窦
- 唇
- 椒
- 赂
- 氨
- 茵
- 悍
- 硫
- 葫
- 庸
- 喉
- 俪
- 峪
- 筒
- 赎
- 橡
- 哺
- 彬
- 盔
- 毙
- 颐
- 渊
- 驴
- 衬
- 毯
- 剖
- 钮
- 捆
- 鳄
- 骆
- 跻
- 佟
- 焰
- 嗨
- 怜
- 粱
- 堤
- 沥
- 剔
- 扒
- 蕴
- 嬛
- 媚
- 玟
- 蹲
- 肆
- 凳
- 贱
- 汀
- 靡
- 畔
- 焚
- 匠
- 呃
- 弈
- 绯
- 苛
- 摧
- 肋
- 溯
- 蠢
- 玖
- 勋
- 迄
- 捍
- 阱
- 呵
- 丙
- 猩
- 宛
- 捣
- 铠
- 焊
- 淹
- 掷
- 歹
- 禺
- 闵
- 晗
- 葵
- 泓
- 牟
- 泣
- 舅
- 饥
- 霏
- 躁
- 壹
- 碌
- 矩
- 璨
- 咕
- 庐
- 犀
- 坨
- 咋
- 缔
- 酵
- 萤
- 矮
- 缸
- 禽
- 哇
- 沦
- 刃
- 孪
- 俞
- 蝠
- 驿
- 呕
- 筛
- 涮
- 剿
- 迭
- 睁
- 秉
- 徘
- 徊
- 屿
- 捂
- 丞
- 顷
- 惕
- 肪
- 皓
- 寡
- 粘
- 垮
- 烨
- 昭
- 囧
- 蝙
- 壶
- 潢
- 襄
- 蔽
- 沛
- 炽
- ,
- 嵌
- 疤
- 侈
- 渭
- 笛
- 腻
- 彝
- 枣
- 鸦
- 曦
- 苇
- 珂
- ?
- 狭
- 诀
- 魁
- 膨
- 倚
- 墩
- 诙
- 郸
- 崭
- 耻
- 愚
- 窜
- 秽
- 蹭
- 璐
- 霉
- 旱
- 铲
- 氢
- 蓓
- 暨
- 锤
- 埠
- 倦
- 吾
- 丫
- 裘
- 铮
- 蜢
- 桓
- 隶
- 蝉
- 焕
- 卑
- 婿
- 恺
- 栗
- 舶
- 搅
- 爪
- 慑
- 窥
- 瞻
- 敞
- 茗
- 嘟
- 妞
- 颅
- 脊
- 侬
- 儒
- 浑
- 缅
- 诡
- 撬
- 甫
- 搁
- 畏
- 拱
- 弓
- 懈
- 峥
- 嚣
- 丐
- 赃
- 榕
- 珀
- 勉
- 汶
- 枕
- 屹
- 萱
- 髓
- 栖
- 妒
- 茄
- 脾
- 啸
- 谴
- 侮
- 隙
- 耽
- 柄
- 逍
- 仆
- 孚
- 鲨
- 螂
- 蚂
- 晃
- 晏
- 呛
- 挟
- 粪
- 昧
- 炯
- 袍
- 穴
- 抖
- 殡
- 邯
- 雍
- 悼
- 梗
- 穗
- 痪
- 韧
- 漳
- 绸
- 擒
- 瑾
- 涡
- 耷
- 痒
- 聂
- 捏
- 乾
- 蝎
- 沾
- 嫩
- 荔
- 弼
- 颓
- 嫉
- 敛
- 诠
- 殷
- 踹
- 惫
- 篡
- 姥
- 泾
- 婧
- 隍
- 敷
- 矣
- 瞎
- 玥
- 烽
- 阐
- 讳
- 衅
- 讹
- 蔷
- 耿
- 哨
- 醋
- 朔
- 幢
- 瞧
- 喔
- 膏
- 阮
- 膊
- 郡
- 觅
- 磷
- 熏
- 灼
- 翡
- 蟑
- 蝇
- 赐
- 悚
- 硝
- 荃
- 抉
- 汛
- 冥
- 咘
- 哗
- 锯
- 榴
- 螃
- 惟
- 绒
- 蚕
- 琥
- 涤
- 蚊
- 杖
- 豚
- 濮
- 拢
- 磕
- 霄
- 栽
- 粟
- 滕
- 拽
- 嗓
- 馅
- 晟
- 鹭
- 狩
- 羚
- 屎
- 邰
- 梧
- 吼
- 汹
- 哒
- 绰
- 绽
- 臀
- 棕
- 瑛
- 浒
- 琶
- 聋
- 搂
- 刁
- 咽
- 炙
- 拎
- 菩
- 沐
- 岔
- 涧
- 皱
- 婕
- 睫
- 炖
- 矫
- 昱
- 碱
- 洼
- 玺
- 篷
- 黏
- 淼
- 膳
- 羹
- 旷
- 枢
- 撇
- 勺
- 溅
- 蜕
- 漓
- 劈
- 浣
- 戳
- 庚
- 蓟
- 觞
- 烛
- 椎
- 僻
- 胳
- 霜
- 呐
- 冉
- 柿
- 铐
- 絮
- 瀚
- 扁
- 祠
- 喘
- 湉
- 宥
- 腺
- 翩
- 暧
- 蹄
- 嘱
- 喇
- 铬
- 溶
- 揣
- 岌
- 禧
- 蒜
- 跷
- 尧
- 咳
- 绅
- 扛
- 畸
- 淤
- 罄
- 臻
- 绞
- 矢
- 瀑
- 屌
- 倘
- 麒
- 咯
- 嘀
- 莒
- 辄
- 峨
- 攒
- 氰
- 醇
- 弧
- 斧
- 墟
- 憬
- 薯
- 矜
- 窍
- 郴
- 阀
- 栅
- 绊
- 鞠
- 娼
- 琢
- 剃
- 暮
- 瑚
- 竿
- 皂
- 挠
- 沮
- 莺
- 馍
- 腥
- 蚀
- 窘
- 檬
- 羁
- 饽
- 炬
- 瑕
- 雏
- 沽
- 寝
- 辙
- 漩
- 袱
- 匈
- 煞
- 猿
- 囤
- 癫
- 辗
- 揍
- 拇
- 诟
- 窒
- 憧
- 垦
- 寰
- 铀
- 潍
- 沼
- 绷
- 憨
- 窟
- 嘿
- 揪
- 疵
- 梭
- 敖
- 耘
- 蒿
- 翟
- 镑
- 莘
- 莽
- 孽
- 滔
- 苯
- 滢
- 胰
- 氯
- 厮
- 缪
- 麓
- 寇
- 诬
- 噬
- 嘘
- 匕
- 呗
- 槟
- 渎
- 涪
- 榨
- 鸥
- 轧
- 氓
- 舵
- 泵
- 堕
- 陨
- 呷
- 猖
- 熔
- 嬉
- 稚
- 亟
- 忐
- 豁
- 韬
- 赘
- 恳
- 陡
- 蚌
- 俨
- 娥
- 娄
- 焱
- 颤
- 眷
- 町
- 嘻
- 棱
- 琵
- 匀
- 躬
- 椰
- 耒
- 沁
- 坻
- 邂
- 筝
- 簸
- 陋
- 嗅
- 橱
- 踝
- 喧
- 黯
- 趾
- 凿
- 烘
- 掴
- 缚
- 啃
- 罂
- 瞳
- 蹦
- 鸳
- 毋
- 忑
- 靴
- 泻
- 樟
- 伺
- 跤
- 甥
- 熄
- 菠
- 瓯
- 啼
- 裔
- 骸
- 埭
- 捶
- 煲
- 缭
- 蹬
- 遛
- 寅
- 叭
- 隅
- 帜
- 磋
- 酪
- 馒
- 茉
- 陂
- 岂
- 嫣
- 妓
- 桔
- 珑
- 滁
- 谬
- 厄
- 珏
- 忏
- 逅
- 噱
- 幌
- 柠
- 窖
- 淆
- 锏
- 璧
- 菡
- 汾
- 荫
- 鳝
- 疚
- 蹊
- 哽
- 蕊
- 祺
- 鸯
- 钠
- 鳌
- 芋
- 挚
- 秤
- 阪
- 凹
- 嗒
- 茧
- 涅
- 盱
- 眙
- 鄙
- 饵
- 芹
- 莆
- 飓
- 帼
- 簧
- 骇
- 榔
- 蜓
- 宕
- 穹
- 疹
- 骁
- 诫
- 殇
- 迦
- 濑
- 寥
- 嗡
- 恙
- 妄
- 渌
- 薰
- 慷
- 怠
- 惺
- 峙
- 诅
- 羲
- 岱
- 踞
- 镶
- 笋
- 哟
- 恤
- 秆
- 扼
- 枭
- 剽
- 锰
- 亥
- 俺
- 阚
- 骥
- 痫
- 菏
- 荼
- 芷
- 釜
- 鹊
- 坷
- 糙
- 髦
- 俘
- 崴
- 坂
- 嘎
- 苟
- 獒
- 棘
- 箍
- 郫
- 拧
- 攸
- 呱
- 咙
- 琉
- 圭
- 蕙
- 诿
- 卤
- 檐
- 赡
- 栓
- 煜
- 唬
- 拙
- 夯
- 袜
- 秸
- 憋
- 漕
- 缇
- 篱
- 溉
- 嗽
- 咧
- 酋
- 绛
- 哩
- 拴
- 蜇
- 蟒
- 拣
- 缤
- 宸
- 呜
- 驯
- 筠
- 辍
- 伶
- 熠
- 菁
- 礁
- 哮
- 烙
- 陇
- 荟
- 枉
- 蛟
- 吒
- 雌
- 橇
- 酯
- 嬅
- 舫
- 拷
- 拌
- 竺
- 峭
- 铛
- 邬
- 溧
- 戎
- 锌
- 钾
- 遴
- 畊
- 撮
- 譬
- 濒
- 噩
- 蹿
- 殃
- 圩
- 惶
- 纶
- 唰
- 桨
- 倔
- 鹂
- 尉
- 沓
- 觑
- 钊
- 麋
- 匮
- 淌
- 瀛
- 锵
- 酌
- 獗
- 驭
- 杞
- 羯
- 俐
- 戮
- 诵
- 姊
- 脐
- 绢
- 涞
- 嚷
- 馗
- 谤
- 暇
- 渺
- 庇
- 懋
- 佘
- 泌
- 圃
- 恕
- 籁
- 胺
- 瑙
- 赓
- 膛
- 抠
- 啪
- 砰
- 铎
- 棺
- 砺
- 梵
- 筵
- 佼
- 殉
- 涿
- 琅
- 咫
- 瞪
- 媲
- 嗷
- 眈
- 湄
- 眶
- 栾
- 簋
- 昼
- 腼
- 腆
- 伎
- 炊
- 癖
- 鄞
- 侥
- 掺
- 璀
- 躯
- 渍
- 剐
- 耸
- 搡
- 瓣
- 廓
- 焖
- 焉
- 诽
- 摒
- 卯
- 睦
- 泗
- 虞
- 稣
- 锄
- 骡
- 喵
- 侏
- 蜻
- 喋
- (
- )
- 甬
- 璋
- 拄
- 膺
- 轶
- 柬
- 岖
- 檀
- 袂
- 缜
- 垣
- 蛰
- 秃
- 匡
- 吝
- 咎
- 扉
- 昙
- 诧
- 鲤
- 晤
- 绚
- 毗
- 辫
- 跆
- 藕
- 雹
- 藩
- 飚
- 嘞
- 隽
- 篓
- 梆
- 掠
- 泔
- 懊
- 坯
- 肴
- 嚼
- 鳅
- 毽
- 浚
- 蔑
- 痰
- 沣
- 亢
- 蜚
- 踵
- 蚝
- 瞅
- 崂
- 戛
- 翎
- 怦
- 惋
- 谙
- 胧
- 懿
- 茱
- 靶
- 藻
- 羔
- 哼
- 酉
- 喽
- 锚
- 眩
- 碘
- 侍
- 咔
- 叼
- 谩
- 裳
- 洱
- 徙
- 掂
- 踊
- 磐
- 嗑
- 榈
- 槐
- 皖
- 歆
- 怯
- 昀
- 汲
- 缮
- 挎
- 剁
- 瞿
- 朦
- 啧
- 觎
- 峦
- 蜈
- 祯
- 栩
- 忡
- 瘟
- 砾
- 叨
- 嗜
- 痞
- 藉
- 鳞
- 肛
- 腌
- 锭
- 铿
- 岐
- 漾
- 熹
- 汞
- 馋
- 窈
- 窕
- 焯
- 钵
- 髅
- 奚
- 榭
- 狡
- 禀
- 珉
- 茸
- 籽
- 掰
- 镀
- 庵
- 寐
- 掣
- 笆
- 迸
- 睽
- 唠
- 鹃
- 钣
- 覃
- 噢
- 婺
- 镐
- 蹶
- 胭
- 咤
- 婵
- 厥
- 簇
- 矗
- 胫
- 璞
- 黔
- 锆
- 皙
- 孜
- 骷
- 襟
- 抨
- 咐
- 衢
- 傣
- 煦
- 镍
- 屑
- 漯
- 灞
- 嘹
- 颊
- 遐
- 涝
- 瓮
- 觊
- 仨
- 萃
- 俭
- 胥
- 舔
- 枸
- 翊
- 烁
- 赦
- 缕
- 霓
- 辕
- 镁
- 钗
- 唧
- 滦
- 醺
- 迥
- 硚
- 乍
- 惦
- 懵
- 靳
- 垤
- 垢
- 浊
- 褐
- 婪
- 嚎
- 烊
- 袄
- 惬
- 蔗
- 馊
- 摁
- 榷
- 哆
- 匝
- 痘
- 夭
- 笃
- 僚
- 咆
- 悖
- 褪
- 铉
- 镉
- 蜷
- 柚
- 拭
- 卞
- 眸
- 捻
- 蚣
- 匾
- 酥
- 畈
- 茬
- 噜
- 驸
- 酮
- 鹦
- 鹉
- 燎
- 痹
- 屉
- 腩
- 婶
- 瓢
- 郜
- 虔
- 搀
- 嵋
- 抡
- 肮
- 祛
- 紊
- 奂
- 戟
- 迂
- 悸
- 枞
- 叩
- 逞
- 痊
- 鲶
- 晔
- 酣
- 飒
- 忱
- 襁
- 褓
- 怂
- 馄
- 饨
- 睬
- 嗤
- 寮
- 蜿
- 蜒
- 滘
- 拂
- 祉
- 镰
- 沱
- 笈
- 灏
- 孰
- 毓
- 钙
- 淅
- 涟
- 鞘
- 牒
- 诶
- 蹼
- 钜
- 壕
- 痼
- 镯
- 愕
- 崃
- 惮
- 哉
- 熨
- 螳
- 鸠
- 撂
- 糯
- 铨
- 朽
- 碚
- 胯
- 袒
- 琰
- 舸
- 樨
- 骅
- 唏
- 晾
- 酗
- 沌
- 汐
- 炅
- 淞
- 茎
- 煊
- 唾
- 瘠
- 皎
- 骊
- 缨
- 盏
- 铂
- 斛
- 贮
- 腑
- 萦
- 眯
- 煽
- 鱿
- 梢
- 唆
- 阄
- 岑
- 挞
- 搐
- 吱
- 犁
- 祎
- 缢
- 硼
- 忤
- 翱
- 柘
- 骋
- 邛
- 攥
- 褚
- 叱
- 邺
- 锥
- 斟
- 钝
- 鹫
- 憔
- 悴
- 蹴
- 嬷
- 吆
- 褒
- 瑁
- 瞰
- 匣
- 楂
- 裆
- 唉
- 兢
- 褂
- 邸
- 辘
- 钛
- 缀
- 鹜
- 砌
- 锹
- 咀
- 稠
- 胤
- 亳
- 蛐
- 饕
- 佯
- 犸
- 缰
- 跋
- 忻
- 酶
- 芊
- 孢
- 虏
- 刨
- 珈
- 枷
- 咭
- 懦
- 狒
- 榫
- 蔼
- 邋
- 遢
- 秧
- 拮
- 莜
- 沅
- 锷
- 羿
- 陛
- 琬
- 氦
- 焙
- 讪
- 衙
- 囍
- 岷
- 搪
- 殒
- 莴
- 苣
- 珮
- 裟
- 榻
- 啬
- 玳
- 稼
- 诩
- 嘶
- 臼
- 骼
- 瘀
- 箴
- 涕
- 杳
- 恬
- 颍
- 聆
- ''''
- 捎
- 砝
- 钨
- 貂
- 铤
- 淝
- 脍
- 赝
- 摹
- 蚤
- 韭
- 琨
- 弑
- 崆
- 痱
- 砥
- 钏
- 沭
- 汨
- 苓
- 垛
- 涠
- 砒
- 箩
- 筐
- 姝
- 烃
- 迢
- 鏖
- 伢
- 茁
- 遁
- 垡
- 椿
- 鲟
- 涎
- 楞
- 罹
- 凋
- 芍
- 咄
- 窨
- 闰
- 莠
- 吩
- 浜
- 苔
- 荞
- 殆
- 燊
- 盹
- 鳖
- 胚
- 洙
- 曰
- 娲
- 瘸
- 餮
- 娆
- 卒
- 腱
- 湫
- 砚
- 盎
- 钳
- 铷
- 崮
- 湍
- 骜
- 藜
- 蟋
- 蟀
- 垭
- 疡
- 臧
- 灸
- 脓
- 昵
- 偎
- 愣
- 叽
- 憎
- 掳
- 蜃
- 鄱
- 腈
- 嵊
- 鲈
- 昶
- 笙
- 舜
- 啕
- 涓
- 胛
- 槌
- 荤
- 靛
- 溥
- 臃
- 蛀
- 拗
- 嗦
- 黝
- 袈
- 揉
- 炕
- 珲
- 虱
- 腋
- 筱
- 舛
- 猾
- 噎
- 綦
- 鄢
- 夙
- 眺
- 喱
- 徉
- 贰
- 渚
- 桎
- 梏
- 谛
- 吭
- 坳
- 晦
- 锴
- 弩
- 搓
- 贻
- 惭
- 逯
- 娉
- 箫
- 杈
- 俑
- 洮
- 掮
- 摞
- 栀
- 妾
- 痧
- 骝
- 漉
- 崽
- 儋
- 柑
- 埝
- 啄
- 蛊
- 椭
- 淬
- 轼
- 喃
- 帚
- 跺
- 漱
- 蕃
- 氟
- 渲
- 吏
- 塾
- 癣
- 媞
- 嫦
- 蔺
- 伉
- 啰
- 翌
- 茆
- 娓
- 澈
- 讧
- 暹
- 镳
- 隘
- 恿
- 狰
- 狞
- 麾
- 漪
- 瞌
- 轲
- 滇
- 缄
- 泮
- 瞭
- 璟
- 傀
- 儡
- 魇
- 掖
- 皋
- 塍
- 疃
- 惰
- 葆
- 犇
- 泯
- 烩
- 妩
- 潞
- 晞
- 咩
- 赈
- 撅
- 惆
- 怅
- 斓
- 兀
- 睾
- 绥
- 糠
- 讥
- 菀
- 衩
- 纭
- 诏
- 嘈
- 琊
- 癜
- 砣
- 帧
- 痣
- 泫
- 洵
- 砀
- 涸
- 奄
- 庶
- 烬
- 撵
- 酊
- 蛾
- 唢
- 燮
- 潦
- 篆
- 冗
- 瞥
- 珞
- 猷
- 粳
- 苋
- 嗖
- 犟
- 睇
- 鼹
- 唛
- 毡
- 碴
- 颚
- 泞
- 谕
- 噼
- 犒
- 碉
- 佶
- 垅
- 磺
- 铆
- 侑
- 貔
- 貅
- 嚏
- 悯
- 畿
- 恍
- 蜥
- 蜴
- 彗
- 闳
- 蚯
- 蚓
- 瘪
- 俾
- 腓
- 邃
- 凄
- 茴
- 趸
- 弛
- 颢
- 溆
- 楔
- 蠕
- 怵
- 篪
- 臆
- 疙
- 瘩
- 擞
- 鹞
- 粽
- 隼
- 珺
- 墉
- 桢
- 仟
- 荨
- 笠
- 钯
- 壑
- 樽
- 骐
- 赊
- 楸
- 蓥
- 矸
- 歩
- 锨
- 铡
- 叁
- 缱
- 绻
- 鳍
- 豌
- 褥
- 龈
- 剌
- 锒
- 嚓
- 旌
- 喳
- 皿
- 煳
- 鲳
- 筏
- 轳
- 鲠
- 嶙
- 峋
- 冢
- 郧
- 鬟
- 疮
- 垩
- 鲭
- 蕲
- 挝
- 钿
- 琏
- 糗
- 戬
- 霁
- 宦
- 锢
- 撩
- 髋
- 楣
- 佃
- 捺
- 螨
- 猬
- 萋
- 妊
- 抿
- 阂
- 俚
- 阆
- 踉
- 跄
- 砷
- 绌
- 苡
- 仄
- 樯
- 哐
- 柞
- 镭
- 殓
- 霎
- 犄
- 暄
- 唁
- 粕
- 噗
- 铟
- 濡
- 庖
- 柒
- 脯
- 扪
- 赳
- 擀
- 冽
- 谧
- 踱
- 踌
- 躇
- 诋
- 蟊
- 褶
- 皑
- 祐
- 蝌
- 蚪
- 硌
- 鹌
- 鹑
- 蝈
- 铖
- 娣
- 妤
- 撸
- 壬
- 攘
- 诣
- 阕
- 矾
- 胗
- 酩
- 甭
- 蹂
- 躏
- 疟
- 诃
- 溏
- 阑
- 俸
- 雒
- 睢
- 澍
- 桉
- 窿
- 荇
- 钴
- 哔
- 嵇
- 饷
- 耙
- 劭
- 峒
- 搔
- 瞑
- 祀
- 徜
- 恻
- 蟾
- 蹩
- 蕨
- 酰
- 薏
- 绫
- 濂
- 茛
- 囱
- 鲑
- 粑
- 鳗
- 札
- 觐
- 醍
- 掸
- 逑
- 阖
- 菖
- 嗲
- 幡
- 缙
- 逵
- 蔫
- 崧
- 惚
- 铰
- 嫔
- 倌
- 罡
- 邝
- 婀
- 纨
- 绔
- 嵘
- 孛
- 铣
- 娠
- 槿
- 厩
- 犷
- 朐
- 疝
- 狈
- 黍
- 幄
- 荚
- 淖
- 犊
- 塬
- 艮
- 胱
- 蝗
- 圪
- 擘
- 旮
- 旯
- 憩
- 孺
- 瞟
- 啵
- 焘
- 嗣
- 忿
- 嬗
- 蘸
- 纫
- 喟
- 慵
- 祟
- 踺
- 孳
- 棣
- 埸
- 淦
- 炔
- 纰
- 轫
- 偕
- 奘
- 纣
- 孀
- 舷
- 羌
- 圻
- 拈
- 鲅
- 镌
- 恃
- 骛
- 旻
- 煨
- 婊
- 雉
- 蔻
- 霈
- 垚
- 铩
- 莪
- 揩
- 枰
- 痢
- 庹
- 瘴
- 钎
- 腮
- 嵬
- 谯
- 嫡
- 埂
- 捋
- 纾
- 蛤
- 瑭
- 螈
- 邙
- 罔
- 郯
- 樵
- 茌
- 郓
- 枳
- 咦
- 讴
- 厝
- 砼
- 茯
- 衲
- 潋
- 噘
- 谚
- 烷
- 斡
- 嫫
- 嗪
- 邳
- 铄
- 歼
- 堇
- 渑
- 疽
- 怄
- 涣
- 囹
- 稔
- 弋
- 篝
- 蹋
- 窠
- 谟
- 浠
- 悱
- 蜍
- 孬
- 芥
- 馏
- 屐
- 栎
- 玷
- 萸
- 扞
- 阡
- 荀
- 曳
- 邕
- 诛
- 阉
- 堀
- 骠
- 琤
- 盂
- 妲
- 虻
- 醐
- 谀
- 舀
- 鳟
- 绀
- 呲
- 娩
- 牾
- 僮
- 笳
- 渥
- 仡
- 镊
- 嶝
- 泱
- 汴
- 咣
- 嘭
- 锟
- 咂
- 宓
- 侗
- 洹
- 妫
- A
- 峁
- 蜊
- 攫
- 膑
- 毂
- 秣
- 泠
- 尅
- 冼
- 嶂
- 浈
- 陬
- 啖
- 兖
- 褴
- 褛
- 妯
- 娌
- 恣
- 恸
- 掬
- 篦
- 蹚
- 逡
- 鲷
- 叵
- 驷
- 飧
- 釉
- 粼
- 踯
- 躅
- 讷
- 吮
- 琮
- 啾
- 粲
- 佻
- 疸
- 臊
- 蓦
- 椋
- 眬
- 憷
- 绋
- 珙
- 揶
- 谏
- 䶮
- 帛
- 衮
- 晷
- 裨
- 鸾
- 槎
- 讣
- 嫚
- 遨
- 瘙
- 疱
- 呻
- 鞅
- 痉
- 挛
- 骰
- 瘳
- 棂
- 偃
- 鸢
- 钲
- 尕
- 呸
- 埇
- 浃
- 濯
- 坩
- 埚
- 嗝
- 炀
- 隗
- 扈
- 谆
- 丕
- 魉
- 噙
- 圹
- 埕
- 恪
- 孱
- 凛
- 曜
- 拚
- 浔
- 吖
- 轱
- 搽
- 芪
- 箕
- 箔
- 戊
- 蛆
- 蜱
- 嗔
- 榛
- 蹒
- 跚
- 镣
- 鲫
- 镂
- 摈
- 愫
- 纂
- 麝
- 趺
- 碜
- 馁
- 唷
- 悻
- 伫
- 樾
- 剜
- 咝
- 銮
- 撺
- 掇
- 哂
- 咻
- 酐
- 訾
- 鳕
- 稷
- 嘣
- 碣
- 扦
- 柩
- 蟠
- 芩
- 鬓
- 裱
- 嗄
- 枋
- 钇
- 怼
- 喁
- 龊
- 疴
- 蛹
- 偓
- 蓼
- 汩
- 疖
- 蛎
- 诘
- 焓
- 荠
- 闩
- 噌
- 苷
- 藓
- 蚱
- 亘
- 缎
- 鼬
- 籼
- 疣
- 轸
- 玹
- 潺
- 妪
- 馀
- 啶
- 耄
- 耋
- 鬃
- 滹
- 莅
- 倜
- 傥
- 蓁
- 岬
- 貉
- 獾
- 敝
- 瘁
- 蒯
- 碓
- 殚
- 漭
- 嵛
- 榉
- 诓
- 泖
- 艋
- 凇
- 靑
- 沏
- 磴
- 氪
- 诲
- 忪
- 炷
- 杓
- 暾
- 藿
- T
- M
- 洺
- 擢
- 藠
- 晌
- 瞠
- 桁
- 遑
- 囗
- 谑
- 嗬
- 卲
- 硒
- 鼾
- 觥
- 茳
- 枇
- 杷
- 邡
- 桷
- 椁
- 鹳
- 饴
- 跶
- 绉
- 浐
- 迩
- 啲
- 颌
- 泺
- 睑
- 踮
- 荛
- 镔
- 祢
- 韫
- 笸
- 俎
- 羸
- 怿
- 昝
- 艿
- 薷
- 赅
- 怆
- 刍
- 獭
- 蚴
- 噶
- 噤
- 氤
- 氲
- 豺
- 倭
- 豉
- 葺
- 珥
- 痨
- 蹁
- 跹
- 蚬
- 唳
- 舐
- 竽
- 馑
- 徇
- 垌
- 魍
- 葚
- 涑
- 跛
- 荏
- 吋
- 髌
- 髂
- 骓
- 悌
- 戌
- 揄
- 矽
- 钒
- 𫖯
- 谶
- 捌
- 矍
- 铧
- 骈
- 枥
- 殁
- 鲢
- 腭
- 弭
- 镕
- 篑
- 馕
- 堃
- 锑
- 搧
- 闾
- 囫
- 囵
- 鞑
- 辊
- 魟
- 𫚉
- 鲼
- 郅
- 坭
- 栌
- 佗
- 驮
- 哕
- 颦
- 偌
- 颀
- 耜
- 仞
- 贲
- 烀
- 瘢
- 祚
- 悭
- 沢
- 瑠
- 钼
- 鹧
- 鸪
- 蛳
- 苞
- 柃
- 麂
- 暌
- 刎
- 溟
- 菘
- 钐
- 蹉
- 跎
- 篁
- 耆
- 纡
- 熵
- 簪
- 铋
- 幔
- 巳
- 陉
- 増
- 鹁
- 矬
- 锉
- 偈
- 篼
- 龃
- 龉
- 郇
- 孑
- 忒
- 龌
- 稞
- 囔
- 蝮
- 蠊
- 苫
- 菅
- 霪
- 藁
- 膈
- 敕
- 潸
- 槃
- 湎
- 椟
- 茼
- 戗
- 奁
- 芗
- 褔
- 稹
- 澧
- 嬴
- 铍
- 潆
- 橐
- 堺
- 佚
- 嫒
- 葳
- 氚
- 酚
- 椤
- 赉
- 砭
- 匏
- 戾
- 恁
- 腴
- 蛉
- 麸
- 玑
- 痍
- 啜
- 劾
- 忖
- 蛔
- 芾
- 餍
- 诤
- 逋
- 鸵
- 荸
- 夔
- 懑
- 嘏
- 檗
- 牠
- 痔
- 酞
- 猹
- 盅
- 旖
- 鸫
- 椴
- 戍
- 耪
- 豇
- 牍
- 铑
- 噻
- 龅
- 猁
- 蝽
- 欸
- 肱
- 桴
- 镏
- 缬
- 怫
- 唑
- 曈
- 缛
- 吠
- 歙
- 谖
- 俟
- 刽
- 槭
- 硖
- 髯
- 饯
- 藐
- 娈
- 勐
- 颧
- 荻
- 焗
- 鳃
- 昴
- 黟
- 羧
- 趵
- 澶
- 骞
- 鸩
- 婢
- 圄
- 佝
- 偻
- 嗫
- 囯
- 跬
- 朕
- 袅
- 锲
- 杵
- 豢
- 骺
- 诹
- 椹
- 谮
- 㶧
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf:
joint_space_size: 512
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.0
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transducer
decoder_conf:
rnn_type: lstm
num_layers: 1
hidden_size: 512
dropout: 0.1
dropout_embed: 0.2
required:
- output_dir
- token_list
version: '202205'
distributed: true
```
</details>
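The SpecAugment settings in the config above (`num_freq_mask: 2` with widths 0–30, `num_time_mask: 2` with widths 0–40) can be sketched with a small stdlib-only illustration of the masking logic. This is not ESPnet's implementation (which operates on tensors); the feature shape is an assumption for the example.

```python
import random

def apply_masks(features, num_masks, max_width, axis):
    """Zero out `num_masks` random bands of width up to `max_width`.

    `features` is a list of time frames, each a list of frequency bins;
    axis=0 masks time steps, axis=1 masks frequency bins.
    """
    n_time, n_freq = len(features), len(features[0])
    size = n_time if axis == 0 else n_freq
    for _ in range(num_masks):
        width = random.randint(0, max_width)
        start = random.randint(0, size - width)
        for t in range(n_time):
            for f in range(n_freq):
                idx = t if axis == 0 else f
                if start <= idx < start + width:
                    features[t][f] = 0.0
    return features

# Mirror the config: 2 frequency masks (width 0-30), 2 time masks (width 0-40).
feats = [[1.0] * 80 for _ in range(200)]  # 200 frames x 80 mel bins (example shape)
feats = apply_masks(feats, num_masks=2, max_width=30, axis=1)
feats = apply_masks(feats, num_masks=2, max_width=40, axis=0)
```

Time warping (`apply_time_warp: true`) is omitted here, as it requires interpolation over the feature axis.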
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
miyoung/newProject
|
miyoung
| 2022-07-06T01:16:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-17T04:39:53Z |
### What's Hugging Face?!!!
https://towardsdatascience.com/whats-hugging-face-122f4e7eb11a
Hugging Face is a community and data science platform that provides tools that enable users to build, train and deploy ML models based on open source (OS) code and technologies.
|
domenicrosati/deberta-v3-xsmall-finetuned-review_classifier
|
domenicrosati
| 2022-07-06T01:09:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-05T20:16:35Z |
---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-xsmall-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-xsmall-finetuned-review_classifier
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1441
- Accuracy: 0.9513
- F1: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.1518 | 1.0 | 6667 | 0.1575 | 0.9510 | 0.7155 |
| 0.1247 | 2.0 | 13334 | 0.1441 | 0.9513 | 0.7458 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rbawden/CCASS-semi-auto-titrages-base
|
rbawden
| 2022-07-05T21:42:57Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"fsmt",
"fr",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-06-16T09:32:27Z |
---
language: fr
license: cc-by-4.0
---
# Cour de Cassation semi-automatic *titrage* prediction model
Model for the semi-automatic prediction of *titrages* (keyword sequence) from *sommaires* (synthesis of legal cases).
The models are similar to the automatic models described in [this paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf) and to the model available [here](https://huggingface.co/rbawden/CCASS-pred-titrages-base). If you use this semi-automatic model, please cite our research paper (see [below](#cite)).
## Model description
The model is a transformer-base model trained on parallel data (sommaires-titrages) provided by the Cour de Cassation. The model was intially trained using the Fairseq toolkit, converted to HuggingFace and then fine-tuned on the original training data to smooth out minor differences that arose during the conversion process. Tokenisation is performed using a SentencePiece model, the BPE strategy and a vocab size of 8000.
### Intended uses & limitations
This model is to be used to help in the production of *titrages* for those *sommaires* that do not have them or to complement existing (manually) created *titrages*.
### How to use
Contrary to the [automatic *titrage* prediction model](https://huggingface.co/rbawden/CCASS-pred-titrages-base) (designed to predict the entire sequence), this model is designed to help in the manual production of *titrages*, by proposing the next *titre* (keyword) in the sequence given a *sommaire* and the beginning of the *titrage*.
Model input is the *matière* (matter), followed by the *titres* already decided on, followed by the text of the *sommaire*, all separated by the token `<t>`. Each example should be on a single line. E.g. `bail <t> résiliation <t> causes <t> La recommendation du tribunal selon l'article...` (a fictitious example for illustrative purposes, where the matter is bail and the beginning of the *titrage* is résiliation <t> causes). The maximum input length of the model is 1024 tokens (after tokenisation).
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokeniser = AutoTokenizer.from_pretrained("rbawden/CCASS-semi-auto-titrages-base")
model = AutoModelForSeq2SeqLM.from_pretrained("rbawden/CCASS-semi-auto-titrages-base")
matiere_and_titrage_prefix = "matter <t> titre"
sommaire = "full text from the sommaire on a single line"
inputs = tokeniser([matiere_and_titrage_prefix + " <t> " + sommaire], return_tensors='pt')
outputs = model.generate(inputs['input_ids'])
tokeniser.batch_decode(outputs, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
### Limitations and bias
The models' predictions should not be taken as ground-truth *titrages* and the final decision should be the expert's. The model is not constrained to predict *titres* that have previously been seen, so this should be taken into account in the deployment of this model as a *titrage* tool in order to avoid the multiplication of different *titres*.
## Training data
Training data is provided by the Cour de Cassation (the original source being Jurinet data, but with pseudo-anonymisation applied). For training, we use a total of 159,836 parallel examples (each example is a sommaire-titrage pair). Our development data consists of 1,833 held-out examples.
## Training procedure
### Preprocessing
We use SentencePiece, the BPE strategy and a joint vocabulary of 8000 tokens. This model was converted into the HuggingFace format and integrates a number of normalisation processes (e.g. removing doubled apostrophes and quotes, normalising different accent encodings, lowercasing).
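The normalisation steps mentioned above (unifying apostrophes/quotes and accent encodings, lowercasing) can be illustrated with a stdlib-only sketch; the exact rules baked into the released tokeniser may differ:

```python
import re
import unicodedata

def normalise(text: str) -> str:
    # Unify accent encodings (combining characters -> precomposed forms).
    text = unicodedata.normalize("NFC", text)
    # Map typographic apostrophes and quotes to plain ASCII equivalents.
    text = text.replace("\u2019", "'").replace("\u2018", "'")
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    # Collapse runs of doubled apostrophes or quotes.
    text = re.sub(r"'{2,}", "'", text)
    text = re.sub(r'"{2,}', '"', text)
    return text.lower()

print(normalise("L\u2019Arre\u0302t ''cite\u2019\u2019 la DE\u0301CISION"))
# -> l'arrêt 'cite' la décision
```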
### Training
The model was initially trained using Fairseq until convergence on the development set (according to our customised weighted accuracy measure - please see [the paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf) for more details). The model was then converted to HuggingFace and training continued to smooth out incoherences introduced during the conversion procedure (incompatibilities in the way the SentencePiece and NMT vocabularies are defined, linked to HuggingFace vocabularies being necessarily the same as the tokeniser vocabulary, a constraint that is not imposed in Fairseq).
### Evaluation results
Full results for the initial (automatic) Fairseq models can be found in [the paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf).
Results on this semi-automatic model coming soon!
## BibTex entry and citation info
<a name="cite"></a>
If you use this work, please cite the following article:
Thibault Charmet, Inès Cherichi, Matthieu Allain, Urszula Czerwinska, Amaury Fouret, Benoît Sagot and Rachel Bawden, 2022. [**Complex Labelling and Similarity Prediction in Legal Texts: Automatic Analysis of France’s Court of Cassation Rulings**](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.509.pdf). In Proceedings of the 13th Language Resources and Evaluation Conference, Marseille, France.
```
@inproceedings{charmet-et-al-2022-complex,
title = {Complex Labelling and Similarity Prediction in Legal Texts: Automatic Analysis of France’s Court of Cassation Rulings},
author = {Charmet, Thibault and Cherichi, Inès and Allain, Matthieu and Czerwinska, Urszula and Fouret, Amaury and Sagot, Benoît and Bawden, Rachel},
booktitle = {Proceedings of the 13th Language Resources and Evaluation Conference},
year = {2022},
address = {Marseille, France},
pages = {4754--4766},
url = {http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.509.pdf}
}
```
|
pm390/Reinforce-pong-01
|
pm390
| 2022-07-05T19:49:27Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-05T19:49:16Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pong-01
results:
- metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Tinchoroman/distilbert-base-uncased-finetuned-imdb
|
Tinchoroman
| 2022-07-05T19:25:58Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-05T13:43:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Tinchoroman/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Tinchoroman/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8509
- Validation Loss: 2.5629
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
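The `WarmUp` + `PolynomialDecay` schedule encoded in the optimizer config above can be sketched in plain Python (`decay_steps` below is an illustrative value, since the config's own value depends on the total training length; with `power=1.0` the decay is linear):

```python
def lr_at_step(step, peak_lr=2e-05, warmup_steps=1000, decay_steps=12000, power=1.0):
    """Linear warmup from 0 to peak_lr, then polynomial decay back to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = min(1.0, (step - warmup_steps) / decay_steps)
    return peak_lr * (1.0 - progress) ** power

print(lr_at_step(0))      # start of warmup: 0.0
print(lr_at_step(1000))   # end of warmup: peak learning rate
print(lr_at_step(13000))  # fully decayed: 0.0
```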
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8509 | 2.5629 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
osanseviero/ppo-LunarLander-v2
|
osanseviero
| 2022-07-05T19:07:18Z | 4 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-03-02T23:29:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -580.22 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
coledie/reinforce-Pixelcopter-PLE-v0
|
coledie
| 2022-07-05T18:37:39Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-05T18:04:20Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-Pixelcopter-PLE-v0
results:
- metrics:
- type: mean_reward
value: 17.20 +/- 18.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
osanseviero/tipsuhtxfu-sex-classification
|
osanseviero
| 2022-07-05T17:18:06Z | 0 | 0 |
sklearn
|
[
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2022-07-05T17:18:04Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on tipsuhtxfu to apply classification on sex
**Metrics of the best model (MultinomialNB):**

| Metric | Value |
|---|---|
| accuracy | 0.647364 |
| average_precision | 0.507660 |
| roc_auc | 0.625546 |
| recall_macro | 0.589832 |
| f1_macro | 0.585292 |
**See model plot below:**
```
Pipeline(steps=[('easypreprocessor',
                 EasyPreprocessor(types=
                            continuous  dirty_float  low_card_int  ...   date  free_string  useless
                 total_bill       True        False         False  ...  False        False    False
                 tip              True        False         False  ...  False        False    False
                 smoker          False        False         False  ...  False        False    False
                 day             False        False         False  ...  False        False    False
                 time            False        False         False  ...  False        False    False
                 size            False        False         False  ...  False        False    False

                 [6 rows x 7 columns])),
                ('pipeline',
                 Pipeline(steps=[('minmaxscaler', MinMaxScaler()),
                                 ('multinomialnb', MultinomialNB())]))])
```
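MultinomialNB only accepts non-negative feature values, which is presumably why the pipeline min-max scales the continuous columns first. A minimal plain-Python sketch of that scaling step, using hypothetical `total_bill` values:

```python
def min_max_scale(values):
    """Scale a list of numbers into [0, 1], as MinMaxScaler does per column."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant column: map everything to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical 'total_bill' values; after scaling, all features are non-negative
scaled = min_max_scale([10.0, 20.0, 40.0])
print(scaled)  # [0.0, 0.3333..., 1.0]
```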
**Disclaimer:** This model was trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training**, including the models tried in the process, can be found in `logs.txt`.
|
abhishek/autotrain-adult-census-xgboost
|
abhishek
| 2022-07-05T17:14:07Z | 28 | 3 |
transformers
|
[
"transformers",
"joblib",
"xgboost",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:abhishek/autotrain-data-adult-train",
"dataset:scikit-learn/adult-census-income",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
tabular-classification
| 2022-07-05T12:06:35Z |
---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- abhishek/autotrain-data-adult-train
- scikit-learn/adult-census-income
co2_eq_emissions: 0.12693590577861977
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 9725286
- CO2 Emissions (in grams): 0.12693590577861977
## Validation Metrics
- Loss: 0.26716182056213406
- Accuracy: 0.8750191923844618
- Precision: 0.7840481565086531
- Recall: 0.6641172721478649
- AUC: 0.9345322809861784
- F1: 0.7191166321601105
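The reported F1 is the harmonic mean of the precision and recall listed above, which can be checked directly:

```python
# Verify that the reported F1 is the harmonic mean of precision and recall
precision = 0.7840481565086531
recall = 0.6641172721478649

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 6))  # 0.719117, matching the reported F1
```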
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load the data you want predictions for
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
```
|
Eleven/xlm-roberta-base-finetuned-panx-it
|
Eleven
| 2022-07-05T16:53:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-05T16:37:09Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8247845711940912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.809 | 1.0 | 70 | 0.3380 | 0.7183 |
| 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 |
| 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
a-doering/MLAgents-Pyramids
|
a-doering
| 2022-07-05T16:49:09Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-05T16:49:02Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: a-doering/MLAgents-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Eleven/xlm-roberta-base-finetuned-panx-fr
|
Eleven
| 2022-07-05T16:36:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-05T16:20:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.835464333781965
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2867
- F1: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5817 | 1.0 | 191 | 0.3395 | 0.7854 |
| 0.2617 | 2.0 | 382 | 0.2856 | 0.8278 |
| 0.1708 | 3.0 | 573 | 0.2867 | 0.8355 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Eleven/xlm-roberta-base-finetuned-panx-de-fr
|
Eleven
| 2022-07-05T15:59:42Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-05T15:37:17Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
enoriega/rule_learning_margin_1mm_spanpred_nospec
|
enoriega
| 2022-07-05T13:56:15Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:enoriega/odinsynth_dataset",
"endpoints_compatible",
"region:us"
] | null | 2022-07-05T03:00:49Z |
---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm_spanpred_nospec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm_spanpred_nospec
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3972
- Margin Accuracy: 0.8136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
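Gradient accumulation means the optimizer only steps after gradients from many micro-batches have been averaged, so the effective batch size is the per-device batch size times the accumulation steps. A schematic plain-Python sketch with stand-in gradient values:

```python
per_device_batch_size = 4
gradient_accumulation_steps = 2000
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
assert effective_batch_size == 8000  # matches total_train_batch_size above

# Schematic accumulation: micro-batch gradients (stand-in numbers) are
# averaged before a single optimizer step is taken.
micro_batch_grads = [0.5, -0.2, 0.1, 0.4]
accumulated = sum(g / len(micro_batch_grads) for g in micro_batch_grads)
print(accumulated)  # approximately 0.2, the gradient used for one step
```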
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.5864 | 0.16 | 20 | 0.5454 | 0.7564 |
| 0.4995 | 0.32 | 40 | 0.4761 | 0.7867 |
| 0.4866 | 0.48 | 60 | 0.4353 | 0.8057 |
| 0.4568 | 0.64 | 80 | 0.4229 | 0.8098 |
| 0.4409 | 0.8 | 100 | 0.4136 | 0.8140 |
| 0.4369 | 0.96 | 120 | 0.4124 | 0.8118 |
| 0.4172 | 1.12 | 140 | 0.4043 | 0.8118 |
| 0.4208 | 1.28 | 160 | 0.4072 | 0.8119 |
| 0.4256 | 1.44 | 180 | 0.4041 | 0.8124 |
| 0.4201 | 1.6 | 200 | 0.4041 | 0.8127 |
| 0.4159 | 1.76 | 220 | 0.4006 | 0.8125 |
| 0.4103 | 1.92 | 240 | 0.4004 | 0.8131 |
| 0.4282 | 2.08 | 260 | 0.3999 | 0.8138 |
| 0.4169 | 2.24 | 280 | 0.4006 | 0.8136 |
| 0.4263 | 2.4 | 300 | 0.3962 | 0.8133 |
| 0.4252 | 2.56 | 320 | 0.3994 | 0.8137 |
| 0.4202 | 2.72 | 340 | 0.3965 | 0.8137 |
| 0.4146 | 2.88 | 360 | 0.3967 | 0.8139 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ramonzaca/dqn-SpaceInvadersNoFrameskip-v4
|
ramonzaca
| 2022-07-05T13:32:19Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-05T13:31:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 480.00 +/- 135.11
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ramonzaca -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ramonzaca
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
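The exploration settings above describe a linear ε-greedy schedule: ε decays to `exploration_final_eps` over the first `exploration_fraction` of the 1M training steps, then stays flat. A plain-Python sketch (the initial ε of 1.0 is an assumption, SB3's default, since it is not listed):

```python
def epsilon(step, n_timesteps=1_000_000, fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linear epsilon decay over the first `fraction` of training, then flat."""
    progress = min(step / (fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))        # 1.0 (full exploration at the start)
print(epsilon(50_000))   # ~0.505, halfway through the decay window
print(epsilon(200_000))  # ~0.01, decay finished after 100k steps
```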
|
amyeroberts/resnet-18-finetuned-eurosat
|
amyeroberts
| 2022-07-05T12:36:20Z | 51 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"resnet",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-05T12:25:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: amyeroberts/resnet-18-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amyeroberts/resnet-18-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5582
- Validation Loss: 2.1533
- Validation Accuracy: 0.2059
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:-----:|
| 3.0662 | 2.7376 | 0.1374 | 0 |
| 1.3977 | 2.3876 | 0.1685 | 1 |
| 0.5582 | 2.1533 | 0.2059 | 2 |
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.9.1
- Datasets 2.3.3.dev0
- Tokenizers 0.11.0
|
datien228/distilbart-ftn-wiki_lingua
|
datien228
| 2022-07-05T12:12:07Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:wiki_lingua",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-07-03T16:21:47Z |
---
language:
- en
tags:
- summarization
license: mit
datasets:
- wiki_lingua
metrics:
- rouge
---
#### Pre-trained BART Model Fine-tuned on the WikiLingua Dataset
This repository contains a fine-tuned version of the BART model (by sshleifer), trained on the English portion of the **wiki_lingua** dataset.
**Purpose:** Examine the performance of a fine-tuned model for research purposes
**Observations:**
- The pre-trained model was trained on the XSum dataset, which summarizes moderately long documents into one-line summaries
- Fine-tuning this model on WikiLingua is appropriate, since the summaries in that dataset are also short
- In the end, however, the model fails to capture the key points clearly and instead mostly extracts the opening sentence
- Some data pre-processing steps and model hyperparameters also need to be tuned more carefully.
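The card's metadata lists ROUGE as its metric; ROUGE-1 measures unigram overlap between a candidate summary and a reference. A simplified sketch with made-up sentences (real ROUGE counts token multiplicities; this version uses unique tokens):

```python
def rouge1_recall(candidate, reference):
    """Fraction of unique reference unigrams that also appear in the candidate."""
    cand_tokens = candidate.lower().split()
    ref_tokens = set(reference.lower().split())
    overlap = sum(1 for tok in ref_tokens if tok in cand_tokens)
    return overlap / len(ref_tokens)

# Made-up candidate/reference pair: 4 of 6 unique reference tokens overlap
score = rouge1_recall("the cat sat on the mat", "the cat lay on a mat")
print(score)  # 0.666...
```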
|
arashba/xlm-roberta-base-finetuned-panx-de
|
arashba
| 2022-07-05T12:05:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-05T11:41:42Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
abhishek/autotrain-iris-knn
|
abhishek
| 2022-07-05T11:59:16Z | 9 | 0 |
transformers
|
[
"transformers",
"joblib",
"knn",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:abhishek/autotrain-data-iris-train",
"dataset:scikit-learn/iris",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
tabular-classification
| 2022-07-05T11:37:31Z |
---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- abhishek/autotrain-data-iris-train
- scikit-learn/iris
co2_eq_emissions: 0.15028701199056024
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 9705277
- CO2 Emissions (in grams): 0.15028701199056024
## Validation Metrics
- Loss: 0.15622713916762193
- Accuracy: 0.9
- Macro F1: 0.899749373433584
- Micro F1: 0.9
- Weighted F1: 0.8997493734335841
- Macro Precision: 0.9023569023569024
- Micro Precision: 0.9
- Weighted Precision: 0.9023569023569024
- Macro Recall: 0.9
- Micro Recall: 0.9
- Weighted Recall: 0.9
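For single-label multi-class problems, micro F1 pools true/false positives across classes (and equals accuracy), while macro F1 averages per-class F1 scores, which is why the two differ slightly above. A sketch with hypothetical per-class counts, chosen so they reproduce the reported values:

```python
# Hypothetical per-class (TP, FP, FN) counts for a 3-class problem like iris
counts = {"setosa": (10, 0, 0), "versicolor": (9, 2, 1), "virginica": (8, 1, 2)}

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Macro F1: unweighted mean of per-class F1 scores
macro_f1 = sum(f1_score(*c) for c in counts.values()) / len(counts)

# Micro F1: F1 over globally pooled counts (equals accuracy for single-label tasks)
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = f1_score(tp, fp, fn)

print(round(macro_f1, 4), round(micro_f1, 4))  # 0.8997 0.9
```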
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load the data you want predictions for
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
```
|
abhishek/autotrain-iris-logistic-regression
|
abhishek
| 2022-07-05T11:58:57Z | 13 | 0 |
transformers
|
[
"transformers",
"joblib",
"logistic_regression",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:abhishek/autotrain-data-iris-train",
"dataset:scikit-learn/iris",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
tabular-classification
| 2022-07-05T11:36:06Z |
---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- abhishek/autotrain-data-iris-train
- scikit-learn/iris
co2_eq_emissions: 0.0006300767567816624
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 9705273
- CO2 Emissions (in grams): 0.0006300767567816624
## Validation Metrics
- Loss: 0.15987505325856152
- Accuracy: 0.9
- Macro F1: 0.899749373433584
- Micro F1: 0.9
- Weighted F1: 0.8997493734335841
- Macro Precision: 0.9023569023569024
- Micro Precision: 0.9
- Weighted Precision: 0.9023569023569025
- Macro Recall: 0.9
- Micro Recall: 0.9
- Weighted Recall: 0.9
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load the data you want predictions for
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
```
|
HekmatTaherinejad/swin-tiny-patch4-window7-224-finetuned-eurosat
|
HekmatTaherinejad
| 2022-07-05T09:17:32Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-05T08:15:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.98
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0653
- Accuracy: 0.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
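`lr_scheduler_warmup_ratio: 0.1` means the learning rate ramps linearly from 0 to 5e-5 over the first 10% of optimizer steps, then decays linearly back to 0. A sketch of that schedule, assuming 570 total steps (3 epochs × 190 steps, per the results table):

```python
def linear_warmup_lr(step, total_steps=570, warmup_ratio=0.1, peak_lr=5e-5):
    """Linear warmup to peak_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 57 steps here
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_warmup_lr(0))    # 0.0 at the very first step
print(linear_warmup_lr(57))   # peak value (5e-5) at the end of warmup
print(linear_warmup_lr(570))  # 0.0 at the end of training
```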
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.203 | 1.0 | 190 | 0.1294 | 0.9574 |
| 0.2017 | 2.0 | 380 | 0.0773 | 0.9763 |
| 0.1563 | 3.0 | 570 | 0.0653 | 0.98 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kws/ppo-LunarLander-v2
|
kws
| 2022-07-05T07:50:05Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-05T06:55:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 252.49 +/- 42.04
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
shubhamitra/TinyBERT_General_4L_312D-finetuned-toxic-classification
|
shubhamitra
| 2022-07-05T07:29:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-03T13:23:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: TinyBERT_General_4L_312D-finetuned-toxic-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyBERT_General_4L_312D-finetuned-toxic-classification
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 498 | 0.0483 | 0.7486 | 0.8563 | 0.9171 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mmeet611/finetuning-sentiment-model-3000-samples
|
mmeet611
| 2022-07-05T07:16:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-15T07:33:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8628762541806019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3052
- Accuracy: 0.8633
- F1: 0.8629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
hsohn3/cchs-bert-event-uncased-wordlevel-block512-batch8-ep10
|
hsohn3
| 2022-07-05T06:37:28Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-05T05:33:36Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hsohn3/cchs-bert-event-uncased-wordlevel-block512-batch8-ep10
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hsohn3/cchs-bert-event-uncased-wordlevel-block512-batch8-ep10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9667
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 4.3518 | 0 |
| 3.1030 | 1 |
| 3.0459 | 2 |
| 3.0120 | 3 |
| 2.9969 | 4 |
| 2.9879 | 5 |
| 2.9823 | 6 |
| 2.9811 | 7 |
| 2.9722 | 8 |
| 2.9667 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Samlit/rare-puppers2
|
Samlit
| 2022-07-05T06:14:13Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-05T05:49:48Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6222222447395325
---
# rare-puppers2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### La Goulue Toulouse-Lautrec

#### Marcelle Lender Bolero

#### aristide bruant Lautrec

#### la goulue Toulouse-Lautrec

|
steven123/Check_GoodBad_Teeth
|
steven123
| 2022-07-05T03:52:40Z | 126 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-05T03:52:30Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Check_GoodBad_Teeth
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# Check_GoodBad_Teeth
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bad Teeth

#### Good Teeth

|
liuxuefei01/q-Taxi-v3
|
liuxuefei01
| 2022-07-05T02:35:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-05T02:35:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Deep RL Class notebooks (https://github.com/huggingface/deep-rl-class),
# not part of a published package.
model = load_from_hub(repo_id="liuxuefei01/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
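For reference, the tabular Q-learning update that trains such an agent can be sketched as follows (an illustrative example only, not the exact course code; the step sizes are assumed values):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Toy 2-state, 2-action table.
Q = np.zeros((2, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.1 * (1.0 + 0.99 * 0 - 0) = 0.1
```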
|
liuxuefei01/q-FrozenLake-v1-4x4-noSlippery
|
liuxuefei01
| 2022-07-05T02:20:02Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-05T02:19:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Deep RL Class notebooks (https://github.com/huggingface/deep-rl-class),
# not part of a published package.
model = load_from_hub(repo_id="liuxuefei01/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
coledie/reinforce-CartPole-v1
|
coledie
| 2022-07-05T01:27:42Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-05T00:39:33Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1
results:
- metrics:
- type: mean_reward
value: 273.60 +/- 40.64
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
NAOKITY/bert-squad
|
NAOKITY
| 2022-07-05T01:05:50Z | 15 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-04T23:36:55Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: NAOKITY/bert-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NAOKITY/bert-squad
This model is a fine-tuned version of [pierreguillou/bert-base-cased-squad-v1.1-portuguese](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9778
- Validation Loss: 0.0
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 987, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
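With `power: 1.0` and `cycle: False`, the `PolynomialDecay` schedule above reduces to a linear ramp from 2e-05 down to 0 over 987 steps. A small sketch of that schedule (hypothetical helper name, mirroring the Keras formula):

```python
def polynomial_decay_lr(step, initial_lr=2e-05, decay_steps=987, end_lr=0.0, power=1.0):
    """Learning rate per Keras PolynomialDecay with cycle=False: step is clamped at decay_steps."""
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay_lr(0))    # 2e-05 at the first step
print(polynomial_decay_lr(987))  # 0.0 once the schedule is exhausted
```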
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5286 | 0.0 | 0 |
| 0.9778 | 0.0 | 1 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
teven/all_bs160_allneg
|
teven
| 2022-07-05T00:14:56Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-07-05T00:14:48Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# teven/all_bs160_allneg
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/all_bs160_allneg')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/all_bs160_allneg)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 780828 with parameters:
```
{'batch_size': 20, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 315504 with parameters:
```
{'batch_size': 20, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 300017 with parameters:
```
{'batch_size': 20, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
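`MultipleNegativesRankingLoss` treats each anchor's paired positive as the correct "class" among all in-batch positives and applies cross-entropy over scaled cosine similarities. A minimal numpy sketch of that computation (illustrative only, not the sentence-transformers implementation):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """Multiple-negatives ranking loss: cross-entropy over scaled cosine similarities,
    where the diagonal entries of the similarity matrix are the true (anchor, positive) pairs."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = scale * (a @ p.T)  # (batch, batch) scaled cosine similarities
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
loss_matched = mnr_loss(anchors, anchors)                    # identical pairs -> low loss
loss_random = mnr_loss(anchors, rng.normal(size=(4, 8)))     # unrelated "positives" -> higher loss
print(loss_matched, loss_random)
```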
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 2000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
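The `Pooling` (mean) and `Normalize()` stages above can be sketched in numpy: a mask-aware mean over token embeddings followed by L2 normalization (an illustrative approximation of the pipeline, not the library code):

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Average the token vectors that the attention mask marks as real,
    then L2-normalize the resulting sentence vector."""
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)
    sentence = summed / counts
    return sentence / np.linalg.norm(sentence, axis=1, keepdims=True)

emb = np.ones((1, 4, 768))  # fake transformer output
emb[0, 3] = 100.0           # a padded position that should be ignored
vec = mean_pool_and_normalize(emb, np.array([[1, 1, 1, 0]]))
print(vec.shape)            # (1, 768)
```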
## Citing & Authors
<!--- Describe where people can find more information -->
|
hsohn3/mayo-bert-uncased-wordlevel-block512-ep10
|
hsohn3
| 2022-07-04T22:52:37Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-04T01:17:58Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hsohn3/mayo-bert-uncased-wordlevel-block512-ep10
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hsohn3/mayo-bert-uncased-wordlevel-block512-ep10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3171
- Epoch: 9
## Model description
- base_model: bert-base-uncased
- block_size: 512
- tokenizer: ehr-bert-wordlevel-uncased
## Intended uses & limitations
More information needed
## Training and evaluation data
- MAYO visit-level texts
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
- mlm_probability: 0.15
- batch_size: 8
- epochs: 10
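The `mlm_probability: 0.15` setting follows BERT-style masking: each selected position is replaced by `[MASK]` 80% of the time, by a random vocabulary token 10% of the time, and left unchanged the remaining 10%. An illustrative sketch with toy tokens and a toy vocabulary (not the actual data collator):

```python
import random

def mask_tokens(tokens, vocab, mlm_probability=0.15, seed=42):
    """BERT-style dynamic masking sketch: selected positions get [MASK] 80% of the
    time, a random vocab token 10%, and stay unchanged the remaining 10%."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mlm_probability:
            labels.append(tok)          # this position contributes to the MLM loss
            roll = rng.random()
            if roll < 0.8:
                masked.append("[MASK]")
            elif roll < 0.9:
                masked.append(rng.choice(vocab))
            else:
                masked.append(tok)
        else:
            labels.append(None)         # ignored by the loss
            masked.append(tok)
    return masked, labels

masked, labels = mask_tokens(["diagnosis", "of", "acute", "asthma"] * 50, ["a", "b", "c"])
print(sum(l is not None for l in labels))  # roughly 15% of the 200 positions
```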
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.0885 | 0 |
| 2.8340 | 1 |
| 2.7975 | 2 |
| 2.6720 | 3 |
| 2.4868 | 4 |
| 2.1750 | 5 |
| 1.8143 | 6 |
| 1.0948 | 7 |
| 0.4915 | 8 |
| 0.3171 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Danitg95/autotrain-kaggle-effective-arguments-1086739296
|
Danitg95
| 2022-07-04T21:53:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:Danitg95/autotrain-data-kaggle-effective-arguments",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-04T21:49:45Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Danitg95/autotrain-data-kaggle-effective-arguments
co2_eq_emissions: 5.2497206864306065
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1086739296
- CO2 Emissions (in grams): 5.2497206864306065
## Validation Metrics
- Loss: 0.744236171245575
- Accuracy: 0.6719238613188308
- Macro F1: 0.5450301061253738
- Micro F1: 0.6719238613188308
- Weighted F1: 0.6349879540623229
- Macro Precision: 0.6691326843926052
- Micro Precision: 0.6719238613188308
- Weighted Precision: 0.6706209016443158
- Macro Recall: 0.5426627824078865
- Micro Recall: 0.6719238613188308
- Weighted Recall: 0.6719238613188308
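Note that for single-label multi-class classification, micro-averaged precision, recall, and F1 all equal accuracy, which is why Accuracy and the Micro metrics above coincide. A small sketch of the macro/micro distinction (hypothetical helper, not the AutoTrain evaluation code):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Macro vs micro F1 for single-label multi-class predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    def f1(c):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    macro = sum(f1(c) for c in labels) / len(labels)
    micro = sum(tp.values()) / len(y_true)  # micro F1 == accuracy in this setting
    return macro, micro

macro, micro = f1_scores(["a", "a", "b", "c"], ["a", "b", "b", "b"])
print(macro, micro)  # macro averages per-class F1; micro equals accuracy (0.5)
```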
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Danitg95/autotrain-kaggle-effective-arguments-1086739296
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Danitg95/autotrain-kaggle-effective-arguments-1086739296", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Danitg95/autotrain-kaggle-effective-arguments-1086739296", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Farshid/distilbert-base-uncased_allagree3
|
Farshid
| 2022-07-04T21:04:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-04T17:35:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased_allagree3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_allagree
metrics:
- name: Accuracy
type: accuracy
value: 0.9778761061946902
- name: F1
type: f1
value: 0.9780006392634297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_allagree3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0937
- Accuracy: 0.9779
- F1: 0.9780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6418 | 1.0 | 57 | 0.3340 | 0.8805 | 0.8768 |
| 0.1821 | 2.0 | 114 | 0.1088 | 0.9690 | 0.9691 |
| 0.0795 | 3.0 | 171 | 0.0822 | 0.9823 | 0.9823 |
| 0.0385 | 4.0 | 228 | 0.0939 | 0.9646 | 0.9646 |
| 0.0218 | 5.0 | 285 | 0.1151 | 0.9735 | 0.9737 |
| 0.0149 | 6.0 | 342 | 0.1126 | 0.9690 | 0.9694 |
| 0.006 | 7.0 | 399 | 0.0989 | 0.9779 | 0.9780 |
| 0.0093 | 8.0 | 456 | 0.1009 | 0.9779 | 0.9780 |
| 0.0063 | 9.0 | 513 | 0.0899 | 0.9779 | 0.9780 |
| 0.0039 | 10.0 | 570 | 0.0937 | 0.9779 | 0.9780 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.3.2
- Tokenizers 0.12.1
|
samuelrince/bert-base-cased-finetuned-panx-en
|
samuelrince
| 2022-07-04T20:08:03Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-04T19:46:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: bert-base-cased-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-panx-en
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2941 | 1.0 | 1250 | 0.2432 |
| 0.186 | 2.0 | 2500 | 0.2214 |
| 0.1387 | 3.0 | 3750 | 0.2478 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ramonzaca/q-FrozenLake-v1-4x4-noSlippery
|
ramonzaca
| 2022-07-04T19:53:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-04T19:53:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Deep RL Class notebooks (https://github.com/huggingface/deep-rl-class),
# not part of a published package.
model = load_from_hub(repo_id="ramonzaca/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
reso/DialoGPT-medium-v3ga
|
reso
| 2022-07-04T19:39:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-04T18:49:13Z |
---
thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
import torch  # needed for torch.cat below
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
    # pretty print last output tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
pcuenq/lpips-jax
|
pcuenq
| 2022-07-04T18:47:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-07-04T18:24:46Z |
---
license: apache-2.0
---
## Weights for JAX/Flax version of VGG
- VGG16 weights, taken from [the `flaxmodels` repo](https://github.com/matthias-wright/flaxmodels/blob/main/flaxmodels/vgg/vgg.py).
- Additional weights to use VGG16 as a feature extractor for LPIPS. They were downloaded in PyTorch format from [the URL referenced in the Taming Transformers repo](https://github.com/CompVis/taming-transformers/blob/master/taming/modules/losses/lpips.py), and converted to hdf5 format.
## License
Apache 2, for this compilation.
Please, refer to the original licenses of the source repos.
- [Taming Transformers License](https://github.com/CompVis/taming-transformers/blob/master/License.txt). Weights for additional layers.
- [Perceptual Similarity License](https://github.com/richzhang/PerceptualSimilarity/blob/master/LICENSE). Weights for additional layers.
- [Flaxmodels / VGG License](https://github.com/matthias-wright/flaxmodels/tree/main/flaxmodels/vgg#license), for the VGG model and (I presume) VGG weights.
|
YKXBCi/vit-base-patch16-224-in21k-aidSat
|
YKXBCi
| 2022-07-04T18:46:44Z | 29 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-04T13:39:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: YKXBCi/vit-base-patch16-224-in21k-aidSat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# YKXBCi/vit-base-patch16-224-in21k-aidSat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4026
- Train Accuracy: 0.9981
- Train Top-3-accuracy: 0.9998
- Validation Loss: 0.4715
- Validation Accuracy: 0.9796
- Validation Top-3-accuracy: 0.9980
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1325, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 2.3544 | 0.7383 | 0.8687 | 1.5415 | 0.9266 | 0.9857 | 0 |
| 1.1313 | 0.9522 | 0.9942 | 0.8788 | 0.9613 | 0.9966 | 1 |
| 0.6741 | 0.9841 | 0.9985 | 0.6268 | 0.9640 | 0.9986 | 2 |
| 0.4785 | 0.9953 | 0.9995 | 0.5058 | 0.9755 | 0.9980 | 3 |
| 0.4026 | 0.9981 | 0.9998 | 0.4715 | 0.9796 | 0.9980 | 4 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|