| modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars) |
---|---|---|---|---|---|---|
Cdial/hausa-asr
|
[
"wav2vec2",
"automatic-speech-recognition",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: cc
datasets:
- Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat
language:
- en
inference: false
---
<html>
<head>
<style>
.alert {
padding: 15px;
background-color: #f44336;
color: white;
}
</style>
</head>
<body>
<div class="alert">
<strong>Warning:</strong> This fine-tuned model has only undergone 200 steps of fine-tuning and may not be reliable. The final model will be released in the coming days.
</div>
</body>
</html>
<br>
# RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4
The RedPajama-INCITE-Chat-3B model fine-tuned <a href="https://huggingface.co/datasets/Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat">on this dataset</a>
## Reproduction
The code used to fine-tune this model can be found at https://github.com/fredi-python/Fine-tune-RedPajama-Chat-3B
## Usage and License Notices
The model is intended and licensed for research use only. It is released under the CC BY-NC 4.0 license, which allows only non-commercial use.
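A minimal generation sketch with 🤗 Transformers, assuming the weights are published under a repo id matching this card's title and the usual RedPajama-INCITE chat prompt format (both are assumptions, not stated in the card):
```python
# Hypothetical repo id inferred from the card title; verify it before use.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16).to("cuda")

# RedPajama-INCITE chat models expect a "<human>: ... <bot>:" prompt format.
prompt = "<human>: Explain instruction tuning in one sentence.\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```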
|
dccuchile/albert-base-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 34 | null |
---
license: apache-2.0
language:
- en
---
testmodel README
|
dccuchile/albert-base-spanish-finetuned-pos
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: living_nature_classification_wnt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# living_nature_classification_wnt
This model is a fine-tuned version of [emanjavacas/GysBERT](https://huggingface.co/emanjavacas/GysBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
- Precision: 0.9795
- Recall: 0.9719
- F1: 0.9757
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 438 | 0.0045 | 0.9747 | 0.9594 | 0.9670 | 0.9989 |
| 0.0192 | 2.0 | 876 | 0.0036 | 0.9795 | 0.9719 | 0.9757 | 0.9992 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dccuchile/albert-base-spanish-finetuned-qa-mlqa
|
[
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum-fi
metrics:
- rouge
model-index:
- name: my_awesome_mt5-sum-fi-v6
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum-fi
type: xlsum-fi
config: finnish
split: test
args: finnish
metrics:
- name: Rouge1
type: rouge
value: 0.1324
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mt5-sum-fi-v6
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum-fi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2568
- Rouge1: 0.1324
- Rouge2: 0.0354
- Rougel: 0.1237
- Rougelsum: 0.1237
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 56 | 3.4251 | 0.1253 | 0.0307 | 0.1141 | 0.1147 | 18.78 |
| No log | 1.99 | 112 | 3.3721 | 0.1204 | 0.0246 | 0.1116 | 0.1116 | 19.0 |
| No log | 2.99 | 168 | 3.3277 | 0.0979 | 0.0208 | 0.0939 | 0.0939 | 18.9 |
| No log | 4.0 | 225 | 3.3172 | 0.1193 | 0.028 | 0.1135 | 0.1133 | 19.0 |
| No log | 5.0 | 281 | 3.2864 | 0.1288 | 0.0339 | 0.1197 | 0.1198 | 19.0 |
| No log | 5.99 | 337 | 3.2662 | 0.1361 | 0.0368 | 0.1214 | 0.1207 | 19.0 |
| No log | 6.99 | 393 | 3.2664 | 0.1313 | 0.0306 | 0.1206 | 0.1202 | 19.0 |
| No log | 8.0 | 450 | 3.2666 | 0.1276 | 0.0329 | 0.1197 | 0.1199 | 18.99 |
| 3.7095 | 9.0 | 506 | 3.2567 | 0.1349 | 0.0361 | 0.1273 | 0.1268 | 19.0 |
| 3.7095 | 9.96 | 560 | 3.2568 | 0.1324 | 0.0354 | 0.1237 | 0.1237 | 19.0 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dccuchile/albert-large-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
Access to model LPL-far/Pokemon-Lora is restricted and you are not in the authorized list. Visit https://huggingface.co/LPL-far/Pokemon-Lora to ask for access.
|
dccuchile/albert-large-spanish-finetuned-pos
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-large-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: mio_Dataset2
split: validation
args: mio_Dataset2
metrics:
- name: Accuracy
type: accuracy
value: 0.7485380116959064
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-large-224-finetuned-eurosat
This model is a fine-tuned version of [facebook/convnext-large-224](https://huggingface.co/facebook/convnext-large-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6440
- Accuracy: 0.7485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 19 | 1.0763 | 0.4386 |
| No log | 2.0 | 38 | 0.9918 | 0.5322 |
| No log | 3.0 | 57 | 0.8919 | 0.6725 |
| No log | 4.0 | 76 | 0.8088 | 0.7135 |
| No log | 5.0 | 95 | 0.7502 | 0.7368 |
| No log | 6.0 | 114 | 0.7037 | 0.7310 |
| No log | 7.0 | 133 | 0.6792 | 0.7427 |
| No log | 8.0 | 152 | 0.6507 | 0.7368 |
| No log | 9.0 | 171 | 0.6440 | 0.7485 |
| No log | 10.0 | 190 | 0.6415 | 0.7485 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dccuchile/albert-large-spanish-finetuned-qa-mlqa
|
[
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | 2023-05-16T15:24:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-tiny.en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-tiny.en
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3883
- Wer: 14.9814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
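These settings might be expressed with the 🤗 Transformers trainer API roughly as in the sketch below (the output directory is illustrative; unlisted arguments keep their defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Illustrative output directory; the remaining values mirror the list above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-en-finetuned",
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # mixed_precision_training: Native AMP
)
```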
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4727 | 2.04 | 500 | 0.4062 | 16.9369 |
| 0.2717 | 5.04 | 1000 | 0.3271 | 15.3713 |
| 0.2022 | 8.04 | 1500 | 0.3251 | 15.2362 |
| 0.1483 | 11.03 | 2000 | 0.3438 | 14.1539 |
| 0.0833 | 14.03 | 2500 | 0.3583 | 15.5601 |
| 0.0848 | 17.03 | 3000 | 0.3755 | 14.8514 |
| 0.0475 | 20.02 | 3500 | 0.3849 | 15.0281 |
| 0.0424 | 23.02 | 4000 | 0.3883 | 14.9814 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.14.0a0+44dac51
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dccuchile/albert-xlarge-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 91 | null |
# Pretrained checkpoint: roberta-large-mnli
# Training hyperparameters:
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- prompt_format: sentence aspect - sentiment
# Training results
| Epoch | Train loss | Subtask 3 F1 | Subtask 3 precision | Subtask 3 recall | Subtask 4 accuracy |
|:----:|:---------:|:------------:|:-------------------:|:----------------:|:-----------------:|
|1|341.82094554277137|0.8827514330380406|0.9474272930648769|0.8263414634146341|0.8429268292682927|
|2|164.69039884814993|0.9055276381909548|0.933678756476684|0.8790243902439024|0.8839024390243903|
|3|79.89190268042148|0.9282533399307275|0.9417670682730924|0.9151219512195122|0.8692682926829268|
|4|34.275944823923055|0.9211045364891519|0.9312063808574277|0.911219512195122|0.8751219512195122|
|
Chalponkey/DialoGPT-small-Barry
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-no_ear-loto_jews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-no_ear-loto_jews
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 72.9514 | 0.03 | 10 | 65.3603 |
| 32.1127 | 0.06 | 20 | 18.4547 |
| 8.2152 | 0.08 | 30 | 6.7267 |
| 3.4911 | 0.11 | 40 | 2.9798 |
| 1.6339 | 0.14 | 50 | 1.2141 |
| 0.8984 | 0.17 | 60 | 0.9466 |
| 0.8166 | 0.2 | 70 | 0.7656 |
| 0.6533 | 0.23 | 80 | 0.6374 |
| 0.6208 | 0.25 | 90 | 0.6106 |
| 0.6942 | 0.28 | 100 | 0.5941 |
| 0.6241 | 0.31 | 110 | 0.5825 |
| 0.736 | 0.34 | 120 | 0.5790 |
| 0.5359 | 0.37 | 130 | 0.5745 |
| 0.6451 | 0.4 | 140 | 0.5694 |
| 0.5871 | 0.42 | 150 | 0.5625 |
| 0.6146 | 0.45 | 160 | 0.5635 |
| 0.5091 | 0.48 | 170 | 0.5578 |
| 0.5911 | 0.51 | 180 | 0.5580 |
| 0.5398 | 0.54 | 190 | 0.5528 |
| 0.6379 | 0.57 | 200 | 0.5484 |
| 0.5205 | 0.59 | 210 | 0.5481 |
| 0.5752 | 0.62 | 220 | 0.5448 |
| 0.6035 | 0.65 | 230 | 0.5419 |
| 0.5582 | 0.68 | 240 | 0.5417 |
| 0.5331 | 0.71 | 250 | 0.5407 |
| 0.5062 | 0.74 | 260 | 0.5398 |
| 0.562 | 0.76 | 270 | 0.5375 |
| 0.5845 | 0.79 | 280 | 0.5332 |
| 0.4904 | 0.82 | 290 | 0.5317 |
| 0.596 | 0.85 | 300 | 0.5303 |
| 0.5976 | 0.88 | 310 | 0.5298 |
| 0.5614 | 0.91 | 320 | 0.5284 |
| 0.6057 | 0.93 | 330 | 0.5287 |
| 0.4378 | 0.96 | 340 | 0.5290 |
| 0.6069 | 0.99 | 350 | 0.5267 |
| 0.4918 | 1.02 | 360 | 0.5291 |
| 0.5506 | 1.05 | 370 | 0.5315 |
| 0.4013 | 1.08 | 380 | 0.5270 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.11.0+cu113
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Chun/w-en2zh-hsk
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.23 +/- 0.42
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebooks.
model = load_from_hub(repo_id="Gaivoronsky/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
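Continuing from the snippet above, a minimal greedy rollout sketch, assuming the pickled dictionary stores the learned table under a `"qtable"` key (the course convention) and the classic `gym` reset/step API:
```python
import numpy as np

qtable = np.array(model["qtable"])   # assumed key; inspect model.keys() if unsure

state = env.reset()                  # classic gym API; newer gym/gymnasium returns (obs, info)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))         # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)   # classic 4-tuple step API
    total_reward += reward
print("episode return:", total_reward)
```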
|
CleveGreen/FieldClassifier
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 34 | null |
---
license: apache-2.0
language:
- en
tags:
- sentiment
- multiclass
- emotion
- multimodal
---
# Usage
This repository contains only the trained weights for the model. For initialization and a usage example, see [this repository](https://github.com/T-Sh/EmoSentiRecognition).
|
CleveGreen/JobClassifier_v2
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 37 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mxalmeida/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mxalmeida/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.4556
- Validation Loss: 3.3695
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 17608, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.4019 | 3.9033 | 0 |
| 5.1452 | 3.5258 | 1 |
| 4.4556 | 3.3695 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CleveGreen/JobClassifier_v2_gpt
|
[
"pytorch",
"gpt2",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.38 +/- 0.85
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
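A minimal sketch of what that usage might look like, assuming the model was saved with the standard SB3 workflow (the repo id and filename are placeholders, not taken from this card):
```python
import gym
import panda_gym  # noqa: F401  (registers the PandaReachDense environments)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename; replace with the actual values for this model.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```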
|
CoderBoy432/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: skuan/cat_dog_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# skuan/cat_dog_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5070
- Validation Loss: 0.5766
- Train Accuracy: 1.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 10, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6654 | 0.6772 | 1.0 | 0 |
| 0.6124 | 0.6468 | 1.0 | 1 |
| 0.5732 | 0.6197 | 1.0 | 2 |
| 0.5481 | 0.5959 | 1.0 | 3 |
| 0.5070 | 0.5766 | 1.0 | 4 |
### Framework versions
- Transformers 4.29.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ConstellationBoi/Oop
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-thaisum-title-mt5tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-thaisum-title-mt5tokenizer
This model is a fine-tuned version of [Nopphakorn/t5-small-thaisum-title-mt5tokenizer](https://huggingface.co/Nopphakorn/t5-small-thaisum-title-mt5tokenizer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4541
- Rouge1: 0.0552
- Rouge2: 0.0069
- Rougel: 0.0547
- Rougelsum: 0.0546
- Gen Len: 18.9956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 4.7436 | 1.0 | 765 | 4.8643 | 0.0029 | 0.0 | 0.0029 | 0.0029 | 18.9897 |
| 4.7069 | 2.0 | 1530 | 4.8482 | 0.0114 | 0.0015 | 0.0111 | 0.0115 | 18.9941 |
| 4.6588 | 3.0 | 2295 | 4.8272 | 0.0054 | 0.0 | 0.0055 | 0.0055 | 18.9883 |
| 4.6664 | 4.0 | 3060 | 4.8044 | 0.0116 | 0.0 | 0.0113 | 0.0116 | 18.9853 |
| 4.6283 | 5.0 | 3825 | 4.7903 | 0.0094 | 0.0 | 0.0095 | 0.0093 | 18.9868 |
| 4.6145 | 6.0 | 4590 | 4.7669 | 0.009 | 0.0 | 0.0088 | 0.0088 | 18.9941 |
| 4.5759 | 7.0 | 5355 | 4.7432 | 0.008 | 0.0 | 0.0079 | 0.0079 | 18.9883 |
| 4.5419 | 8.0 | 6120 | 4.7275 | 0.012 | 0.0015 | 0.012 | 0.012 | 18.9941 |
| 4.5486 | 9.0 | 6885 | 4.7043 | 0.0098 | 0.0015 | 0.0099 | 0.0097 | 18.9941 |
| 4.5095 | 10.0 | 7650 | 4.6773 | 0.0085 | 0.0 | 0.0088 | 0.0086 | 18.9941 |
| 4.4682 | 11.0 | 8415 | 4.6561 | 0.0115 | 0.0 | 0.0112 | 0.0114 | 18.9927 |
| 4.4783 | 12.0 | 9180 | 4.6326 | 0.0063 | 0.0 | 0.0063 | 0.0061 | 18.9868 |
| 4.4825 | 13.0 | 9945 | 4.6069 | 0.0111 | 0.0 | 0.0109 | 0.0111 | 18.9838 |
| 4.4455 | 14.0 | 10710 | 4.5836 | 0.0086 | 0.0 | 0.0084 | 0.0085 | 18.9941 |
| 4.4328 | 15.0 | 11475 | 4.5587 | 0.0089 | 0.0 | 0.0091 | 0.0088 | 18.9912 |
| 4.3982 | 16.0 | 12240 | 4.5294 | 0.0111 | 0.0 | 0.0112 | 0.0112 | 18.9868 |
| 4.3463 | 17.0 | 13005 | 4.5069 | 0.0135 | 0.0 | 0.0137 | 0.0137 | 18.9853 |
| 4.3379 | 18.0 | 13770 | 4.4717 | 0.011 | 0.0007 | 0.0109 | 0.011 | 18.9897 |
| 4.3303 | 19.0 | 14535 | 4.4460 | 0.0117 | 0.0007 | 0.0119 | 0.0118 | 18.9853 |
| 4.2983 | 20.0 | 15300 | 4.4110 | 0.0107 | 0.0 | 0.0109 | 0.0108 | 18.9868 |
| 4.265 | 21.0 | 16065 | 4.3800 | 0.0095 | 0.0 | 0.0096 | 0.0095 | 18.9941 |
| 4.2507 | 22.0 | 16830 | 4.3528 | 0.0105 | 0.0 | 0.0106 | 0.0104 | 18.9941 |
| 4.2184 | 23.0 | 17595 | 4.3246 | 0.0125 | 0.0007 | 0.0128 | 0.0125 | 18.9941 |
| 4.1746 | 24.0 | 18360 | 4.3013 | 0.0116 | 0.0 | 0.0118 | 0.0115 | 19.0 |
| 4.1744 | 25.0 | 19125 | 4.2724 | 0.0127 | 0.0 | 0.0128 | 0.0127 | 18.9956 |
| 4.1123 | 26.0 | 19890 | 4.2408 | 0.0127 | 0.0 | 0.0128 | 0.0128 | 19.0 |
| 4.131 | 27.0 | 20655 | 4.2045 | 0.0118 | 0.0 | 0.0121 | 0.0118 | 18.9897 |
| 4.0725 | 28.0 | 21420 | 4.1815 | 0.0112 | 0.0 | 0.0113 | 0.011 | 19.0 |
| 4.0439 | 29.0 | 22185 | 4.1452 | 0.0154 | 0.0024 | 0.0154 | 0.0152 | 19.0 |
| 4.0814 | 30.0 | 22950 | 4.1151 | 0.0142 | 0.0007 | 0.0143 | 0.014 | 19.0 |
| 4.009 | 31.0 | 23715 | 4.0963 | 0.015 | 0.0007 | 0.015 | 0.015 | 18.9985 |
| 4.0326 | 32.0 | 24480 | 4.0679 | 0.016 | 0.0 | 0.016 | 0.0161 | 19.0 |
| 3.948 | 33.0 | 25245 | 4.0460 | 0.0137 | 0.0015 | 0.0133 | 0.0132 | 18.9971 |
| 3.9401 | 34.0 | 26010 | 4.0070 | 0.0162 | 0.0024 | 0.0161 | 0.016 | 18.9985 |
| 3.9111 | 35.0 | 26775 | 3.9874 | 0.0209 | 0.0042 | 0.0209 | 0.021 | 18.9985 |
| 3.899 | 36.0 | 27540 | 3.9650 | 0.0185 | 0.0 | 0.0186 | 0.0186 | 18.9985 |
| 3.8821 | 37.0 | 28305 | 3.9457 | 0.0236 | 0.0029 | 0.024 | 0.024 | 19.0 |
| 3.8687 | 38.0 | 29070 | 3.9105 | 0.0241 | 0.0034 | 0.0239 | 0.0238 | 19.0 |
| 3.8076 | 39.0 | 29835 | 3.9029 | 0.0199 | 0.002 | 0.0199 | 0.0201 | 19.0 |
| 3.8063 | 40.0 | 30600 | 3.8750 | 0.0251 | 0.0034 | 0.0244 | 0.0246 | 19.0 |
| 3.7896 | 41.0 | 31365 | 3.8448 | 0.025 | 0.0028 | 0.0248 | 0.0249 | 18.9941 |
| 3.7672 | 42.0 | 32130 | 3.8287 | 0.0336 | 0.0029 | 0.0333 | 0.0332 | 19.0 |
| 3.7919 | 43.0 | 32895 | 3.8022 | 0.0268 | 0.0039 | 0.0268 | 0.0268 | 18.9956 |
| 3.75 | 44.0 | 33660 | 3.7723 | 0.0286 | 0.0044 | 0.0286 | 0.0284 | 18.9971 |
| 3.7263 | 45.0 | 34425 | 3.7630 | 0.0308 | 0.0039 | 0.0308 | 0.0307 | 19.0 |
| 3.7053 | 46.0 | 35190 | 3.7412 | 0.0341 | 0.0037 | 0.0335 | 0.0335 | 19.0 |
| 3.7022 | 47.0 | 35955 | 3.7214 | 0.0347 | 0.0044 | 0.0335 | 0.0336 | 18.9897 |
| 3.6528 | 48.0 | 36720 | 3.7059 | 0.0318 | 0.0044 | 0.032 | 0.032 | 19.0 |
| 3.6614 | 49.0 | 37485 | 3.6833 | 0.0313 | 0.0044 | 0.031 | 0.0309 | 18.9956 |
| 3.6339 | 50.0 | 38250 | 3.6691 | 0.0357 | 0.0051 | 0.0353 | 0.0352 | 18.9853 |
| 3.6153 | 51.0 | 39015 | 3.6500 | 0.0373 | 0.0044 | 0.0363 | 0.0365 | 18.9912 |
| 3.6083 | 52.0 | 39780 | 3.6360 | 0.0358 | 0.0051 | 0.0354 | 0.0354 | 18.9985 |
| 3.5857 | 53.0 | 40545 | 3.6272 | 0.0409 | 0.0044 | 0.0397 | 0.0396 | 19.0 |
| 3.5903 | 54.0 | 41310 | 3.6141 | 0.0455 | 0.0039 | 0.044 | 0.0439 | 18.9956 |
| 3.5429 | 55.0 | 42075 | 3.6044 | 0.0405 | 0.0054 | 0.0393 | 0.0394 | 18.9883 |
| 3.5526 | 56.0 | 42840 | 3.5933 | 0.0379 | 0.0049 | 0.037 | 0.0367 | 18.9883 |
| 3.5075 | 57.0 | 43605 | 3.5820 | 0.0431 | 0.0071 | 0.041 | 0.041 | 18.9985 |
| 3.5233 | 58.0 | 44370 | 3.5698 | 0.045 | 0.0064 | 0.0433 | 0.0434 | 18.9897 |
| 3.5022 | 59.0 | 45135 | 3.5680 | 0.0432 | 0.0061 | 0.0419 | 0.0416 | 18.9941 |
| 3.5258 | 60.0 | 45900 | 3.5604 | 0.047 | 0.0069 | 0.0452 | 0.0453 | 18.9956 |
| 3.4763 | 61.0 | 46665 | 3.5532 | 0.0465 | 0.0069 | 0.0452 | 0.0451 | 18.9985 |
| 3.4591 | 62.0 | 47430 | 3.5468 | 0.0429 | 0.0078 | 0.0425 | 0.0422 | 18.9971 |
| 3.471 | 63.0 | 48195 | 3.5359 | 0.0426 | 0.0078 | 0.0428 | 0.0426 | 18.9971 |
| 3.4671 | 64.0 | 48960 | 3.5348 | 0.0441 | 0.0064 | 0.0437 | 0.0436 | 18.9941 |
| 3.4588 | 65.0 | 49725 | 3.5291 | 0.0436 | 0.0064 | 0.0427 | 0.0426 | 18.9941 |
| 3.4214 | 66.0 | 50490 | 3.5168 | 0.0409 | 0.0071 | 0.0407 | 0.0408 | 18.9956 |
| 3.4531 | 67.0 | 51255 | 3.5091 | 0.0476 | 0.0082 | 0.0477 | 0.0479 | 18.9956 |
| 3.3936 | 68.0 | 52020 | 3.5016 | 0.044 | 0.0073 | 0.0441 | 0.0442 | 18.9956 |
| 3.4113 | 69.0 | 52785 | 3.5028 | 0.0473 | 0.0069 | 0.0472 | 0.0472 | 18.9956 |
| 3.4092 | 70.0 | 53550 | 3.4993 | 0.0483 | 0.0078 | 0.0488 | 0.0485 | 18.9985 |
| 3.3847 | 71.0 | 54315 | 3.4959 | 0.053 | 0.0078 | 0.0524 | 0.0524 | 18.9956 |
| 3.4099 | 72.0 | 55080 | 3.4906 | 0.0549 | 0.0069 | 0.0541 | 0.0542 | 18.9985 |
| 3.3774 | 73.0 | 55845 | 3.4821 | 0.0527 | 0.0064 | 0.052 | 0.052 | 18.9971 |
| 3.3677 | 74.0 | 56610 | 3.4790 | 0.0542 | 0.0069 | 0.0534 | 0.0534 | 18.9956 |
| 3.3707 | 75.0 | 57375 | 3.4747 | 0.0562 | 0.0069 | 0.0556 | 0.0557 | 18.9956 |
| 3.3953 | 76.0 | 58140 | 3.4713 | 0.0567 | 0.0069 | 0.056 | 0.056 | 18.9956 |
| 3.3767 | 77.0 | 58905 | 3.4695 | 0.0559 | 0.0069 | 0.0552 | 0.0551 | 18.9956 |
| 3.3455 | 78.0 | 59670 | 3.4668 | 0.0518 | 0.0069 | 0.0514 | 0.0508 | 18.9985 |
| 3.3749 | 79.0 | 60435 | 3.4648 | 0.055 | 0.0069 | 0.0546 | 0.0545 | 18.9941 |
| 3.3447 | 80.0 | 61200 | 3.4648 | 0.0534 | 0.0069 | 0.0529 | 0.0526 | 18.9956 |
| 3.3892 | 81.0 | 61965 | 3.4643 | 0.0572 | 0.0078 | 0.0568 | 0.0564 | 18.9985 |
| 3.3681 | 82.0 | 62730 | 3.4640 | 0.0545 | 0.0069 | 0.0542 | 0.0537 | 18.9956 |
| 3.3186 | 83.0 | 63495 | 3.4595 | 0.0574 | 0.0069 | 0.0569 | 0.0569 | 18.9956 |
| 3.3422 | 84.0 | 64260 | 3.4598 | 0.0553 | 0.0069 | 0.0549 | 0.0548 | 18.9956 |
| 3.3511 | 85.0 | 65025 | 3.4565 | 0.0563 | 0.0078 | 0.0561 | 0.0561 | 18.9956 |
| 3.3469 | 86.0 | 65790 | 3.4576 | 0.0569 | 0.0069 | 0.0562 | 0.0563 | 18.9956 |
| 3.345 | 87.0 | 66555 | 3.4579 | 0.0553 | 0.0069 | 0.0549 | 0.0548 | 18.9956 |
| 3.3611 | 88.0 | 67320 | 3.4558 | 0.0553 | 0.0069 | 0.0549 | 0.0548 | 18.9956 |
| 3.3423 | 89.0 | 68085 | 3.4559 | 0.0569 | 0.0069 | 0.0562 | 0.0563 | 18.9956 |
| 3.3575 | 90.0 | 68850 | 3.4560 | 0.0553 | 0.0069 | 0.0549 | 0.0548 | 18.9956 |
| 3.3322 | 91.0 | 69615 | 3.4560 | 0.0569 | 0.0069 | 0.0562 | 0.0563 | 18.9956 |
| 3.3303 | 92.0 | 70380 | 3.4551 | 0.0569 | 0.0069 | 0.0562 | 0.0563 | 18.9956 |
| 3.3676 | 93.0 | 71145 | 3.4542 | 0.0569 | 0.0069 | 0.0562 | 0.0563 | 18.9956 |
| 3.3219 | 94.0 | 71910 | 3.4541 | 0.0552 | 0.0069 | 0.0547 | 0.0546 | 18.9956 |
| 3.3563 | 95.0 | 72675 | 3.4540 | 0.0569 | 0.0069 | 0.0562 | 0.0563 | 18.9956 |
| 3.3616 | 96.0 | 73440 | 3.4541 | 0.0569 | 0.0069 | 0.0562 | 0.0563 | 18.9956 |
| 3.3417 | 97.0 | 74205 | 3.4543 | 0.0552 | 0.0069 | 0.0547 | 0.0546 | 18.9956 |
| 3.3683 | 98.0 | 74970 | 3.4541 | 0.0552 | 0.0069 | 0.0547 | 0.0546 | 18.9956 |
| 3.3402 | 99.0 | 75735 | 3.4541 | 0.0552 | 0.0069 | 0.0547 | 0.0546 | 18.9956 |
| 3.3413 | 100.0 | 76500 | 3.4541 | 0.0552 | 0.0069 | 0.0547 | 0.0546 | 18.9956 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Contrastive-Tension/BERT-Base-CT-STSb
|
[
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_30Kphish_benignFall_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_30Kphish_benignFall_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0374
- Accuracy: 0.9962
- F1: 0.9589
- Precision: 0.9998
- Recall: 0.9212
- Roc Auc Score: 0.9606
- Tpr At Fpr 0.01: 0.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0045 | 1.0 | 19688 | 0.0304 | 0.9933 | 0.9241 | 0.9993 | 0.8594 | 0.9297 | 0.874 |
| 0.0029 | 2.0 | 39376 | 0.0210 | 0.9967 | 0.9643 | 0.9953 | 0.9352 | 0.9675 | 0.917 |
| 0.0003 | 3.0 | 59064 | 0.0434 | 0.9947 | 0.9407 | 0.9980 | 0.8896 | 0.9448 | 0.8936 |
| 0.0016 | 4.0 | 78752 | 0.0408 | 0.9952 | 0.9468 | 0.9998 | 0.8992 | 0.9496 | 0.9336 |
| 0.0008 | 5.0 | 98440 | 0.0374 | 0.9962 | 0.9589 | 0.9998 | 0.9212 | 0.9606 | 0.9438 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Contrastive-Tension/BERT-Base-CT
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.90 +/- 21.15
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Coolhand/Abuela
|
[
"en",
"image_restoration",
"superresolution",
"license:mit"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-16T20:52:55Z |
Without pretrained weights for the classifier on top of the RoBERTa encoder.
|
Coolhand/Sentiment
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-16T21:01:42Z |
---
datasets:
- aalksii/ml-arxiv-papers
- CShorten/ML-ArXiv-Papers
language:
- en
metrics:
- perplexity
pipeline_tag: fill-mask
---
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
base_model: andite/anything-v4.0
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Nani Maker
|
D3vil/DialoGPT-smaall-harrypottery
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
datasets:
- bigscience/P3
language:
- en
---
A 3B T5 model trained on the [P3](https://huggingface.co/datasets/bigscience/P3) (T0 split) dataset for 20,000 steps with a batch size of 2048, a maximum input sequence length of 1024, a maximum output sequence length of 256, and the Adafactor optimizer with a constant learning rate of 0.001.
The model is trained from the [T5 v1.1 lm-adapt checkpoint](https://huggingface.co/google/t5-xl-lm-adapt) and fully finetuned.
For more details, see [HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation](https://arxiv.org/abs/2212.10315).
Performance on T0 held-out tasks (average accuracy across prompts using rank classification):
| Model | ANLI (avg) | HellaSwag | StoryCloze | CB | COPA | RTE | WiC | WSC | WinoGrande | Average |
|--|--|--|--|--|--|--|--|--|--|--|
| [T0-3B](https://huggingface.co/bigscience/T0_3B) | 33.4 | 27.2 | 84.0 | 45.4 | 75.9 | 64.6 | 50.7 | 65.1 | 51.0 | 55.2 |
| hypertask_T0_3B (this model) | 41.7 | 30.1 | 96.9 | 72.7 | 89.1 | 81.2 | 51.7 | 57.2 | 59.2 | 64.4 |
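A minimal sketch of the rank-classification scoring used for these evaluations: each answer choice is scored by its log-likelihood under the model and the highest-scoring choice is taken as the prediction. The card does not give this model's repo id, so the base lm-adapt checkpoint is used below as a stand-in; the prompt and answer choices are illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Stand-in checkpoint (the base lm-adapt model); swap in this model's repo id.
name = "google/t5-xl-lm-adapt"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

def choice_log_likelihood(prompt: str, choice: str) -> float:
    """Sum of label-token log-probabilities, used to rank answer choices."""
    inputs = tokenizer(prompt, return_tensors="pt")
    labels = tokenizer(choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(**inputs, labels=labels).logits
    logprobs = torch.log_softmax(logits, dim=-1)
    return logprobs.gather(-1, labels.unsqueeze(-1)).sum().item()

prompt = "Premise: The cat sat on the mat.\nHypothesis: The mat is occupied.\nYes, no, or maybe?"
choices = ["yes", "maybe", "no"]
print(max(choices, key=lambda c: choice_log_likelihood(prompt, c)))
```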
|
D3xter1922/distilbert-base-uncased-finetuned-cola
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.80 +/- 14.90
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DCU-NLP/bert-base-irish-cased-v1
|
[
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,244 | null |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - ZavGeorge/SD_1.4_simpson_tune_lora
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the Norod78/simpsons-blip-captions dataset. Some example images are shown below.






|
DCU-NLP/electra-base-irish-cased-discriminator-v1
|
[
"pytorch",
"electra",
"pretraining",
"ga",
"transformers",
"irish",
"license:apache-2.0"
] | null |
{
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.59 +/- 27.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
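A minimal sketch, assuming the standard SB3 + huggingface_sb3 workflow (the repo id and filename are placeholders, not taken from this card):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename; replace with the actual values for this model.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```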
|
alexandrainst/da-emotion-classification-base
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 837 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-carpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Darren/darren
|
[
"pytorch"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-truncated
results: []
---
# distilbert-truncated
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [20 Newsgroups dataset](http://qwone.com/~jason/20Newsgroups/).
Its evaluation results are listed under the Training results section below.
## Training and evaluation data
The data was split into training and testing sets: the model was trained on 90% of the data, with the remaining 10% of the original dataset held out for testing.
## Training procedure
DistilBERT has a maximum input length of 512, so with this in mind the following was performed:
1. I used the `distilbert-base-uncased` pretrained model to initialize an `AutoTokenizer`.
2. With a maximum length of 256, each entry in the training, testing, and validation data was truncated if it exceeded the limit and padded if it fell short of it, as sketched below.
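A minimal sketch of that tokenization step, assuming the standard 🤗 tokenizer call (the example texts are illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

texts = ["an example newsgroup post", "another post"]
encoded = tokenizer(
    texts,
    max_length=256,        # hard cap described above
    truncation=True,       # truncate entries that exceed the limit
    padding="max_length",   # pad entries that fall short of the limit
    return_tensors="tf",    # the card reports TensorFlow / Keras training
)
print(encoded["input_ids"].shape)  # (2, 256)
```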
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
- Epochs: 3
- Batches per epoch: 636
- Total training steps: 1908
- Model accuracy: 0.8337758779525757
- Model loss: 0.568471074104309
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Darya/layoutlmv2-finetuned-funsd-test
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: building-with-llms
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# building-with-llms
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Daryaflp/roberta-retrained_ru_covid
|
[
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: openrail++
tags:
- stable-diffusion
- image-to-image
pinned: true
duplicated_from: stabilityai/stable-diffusion-2-1-unclip
pipeline_tag: image-to-image
---
# Stable Diffusion v2-1-unclip Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-1-unclip` model is a finetuned version of Stable Diffusion 2.1, modified to accept a (noisy) CLIP image embedding in addition to the text prompt, and can be used to create image variations (see Examples) or can be chained with text-to-image CLIP priors. The amount of noise added to the image embedding can be specified via the `noise_level` argument (0 means no noise, 1000 means full noise).
- Use it with 🧨 [`diffusers`](#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Examples
Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion UnCLIP 2-1-small in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (the example below keeps the default scheduler; you can optionally swap in another scheduler such as DPMSolverMultistepScheduler):
```python
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16)
pipe.to("cuda")
# get image
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
image = load_image(url)
# run image variation
image = pipe(image).images[0]
```
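The `noise_level` described above can be passed directly to the pipeline call. A minimal sketch that reuses `pipe` and `url` from the example above; recent `diffusers` releases expose `noise_level` on the unCLIP image-variation pipeline, but verify the argument name against your installed version:
```python
from diffusers.utils import load_image

# Reload the conditioning image (the previous snippet overwrote `image` with the output)
init_image = load_image(url)

# A higher noise_level discards more of the CLIP image embedding and yields looser
# variations of the input (0 = no added noise, 1000 = fully noised embedding)
variation = pipe(init_image, noise_level=100).images[0]
```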

# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
DataikuNLP/average_word_embeddings_glove.6B.300d
|
[
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirements
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: the generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python are not supported by torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
DataikuNLP/camembert-base
|
[
"pytorch",
"tf",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: llms_in_production_session_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# llms_in_production_session_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3333
- Train Accuracy: 0.6764
- Validation Loss: 0.6242
- Validation Accuracy: 0.8295
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.3333 | 0.6764 | 0.6242 | 0.8295 | 0 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DataikuNLP/distiluse-base-multilingual-cased-v1
|
[
"pytorch",
"distilbert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
datasets:
- tatsu-lab/alpaca
- wbbbbb/pclue
language:
- en
widget:
- text: |-
### Instruction:
我們如何減少空氣污染
### Response:
tags:
- text-generation-inference
---
A bloomz model with Traditional Chinese instruction-following capability.
Please use it with the Alpaca prompt template:
```python
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
```
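A minimal usage sketch built on the template above. The repository id and generation settings are placeholders, not values taken from this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-account/your-bloomz-instruct-model"  # placeholder: substitute this repo's id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Fill the Alpaca "no input" template with an instruction
prompt = PROMPT_DICT["prompt_no_input"].format(instruction="我們如何減少空氣污染")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```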
|
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
|
[
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,517 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Sam Dreambooth model trained by samwell with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Davlan/bert-base-multilingual-cased-finetuned-igbo
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_50Kphish_benignFall_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_50Kphish_benignFall_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0282
- Accuracy: 0.9962
- F1: 0.9580
- Precision: 0.9996
- Recall: 0.9198
- Roc Auc Score: 0.9599
- Tpr At Fpr 0.01: 0.94
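The "Tpr At Fpr 0.01" metric reported above is the true-positive rate obtained when the decision threshold is chosen so that the false-positive rate does not exceed 1%. A minimal sketch of how such a value can be computed with scikit-learn; the labels and scores below are placeholders, not data from this repository:
```python
import numpy as np
from sklearn.metrics import roc_curve

# y_true: ground-truth labels (1 = phishing), y_score: predicted probability of the positive class
y_true = np.array([0, 0, 1, 1, 0, 1])                     # placeholder labels
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.05, 0.90])  # placeholder scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Highest TPR among all thresholds whose FPR stays at or below 1%
mask = fpr <= 0.01
tpr_at_fpr_001 = tpr[mask].max() if mask.any() else 0.0
print(tpr_at_fpr_001)
```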
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0045 | 1.0 | 32813 | 0.0247 | 0.9960 | 0.9561 | 0.9937 | 0.9212 | 0.9605 | 0.8662 |
| 0.002 | 2.0 | 65626 | 0.0205 | 0.9965 | 0.9624 | 0.9987 | 0.9286 | 0.9643 | 0.9376 |
| 0.0021 | 3.0 | 98439 | 0.0302 | 0.9961 | 0.9569 | 0.9993 | 0.918 | 0.9590 | 0.9378 |
| 0.0017 | 4.0 | 131252 | 0.0297 | 0.9970 | 0.9672 | 0.9975 | 0.9388 | 0.9693 | 0.9368 |
| 0.0007 | 5.0 | 164065 | 0.0282 | 0.9962 | 0.9580 | 0.9996 | 0.9198 | 0.9599 | 0.94 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | 2023-05-17T01:35:41Z |
---
license: apache-2.0
datasets:
- allenai/s2orc
tags:
- medical
---
This repo contains the latest version of PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in the S2ORC dataset.
Notably, different from `chaoyi-wu/PMC_LLAMA_7B`, this model is further trained for 10 epochs.
The model was trained with the following hyperparameters:
* Epochs: **10**
* Batch size: 128
* Cutoff length: 512
* Learning rate: 2e-5
In each epoch, we sample 512 tokens per paper for training.
The model can be loaded as follows:
```python
import transformers
import torch
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
sentence = 'Hello, doctor'
batch = tokenizer(
sentence,
return_tensors="pt",
add_special_tokens=False
)
with torch.no_grad():
generated = model.generate(inputs = batch["input_ids"], max_length=200, do_sample=True, top_k=50)
print('model predict: ',tokenizer.decode(generated[0]))
```
|
Davlan/bert-base-multilingual-cased-masakhaner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 88 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: knat/bert_eli5_mlm_model_kt
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# knat/bert_eli5_mlm_model_kt
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9916
- Validation Loss: 1.8110
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9916 | 1.8110 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Davlan/xlm-roberta-base-finetuned-igbo
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 68 | null |
## Model Info
Merge of my [VicUnlocked-alpaca-half-30b LoRA](https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA)
**Important Note**: While this is trained on a cleaned ShareGPT dataset like Vicuna used, this was trained in the *Alpaca* format, so prompting should be something like:
```
### Instruction:
<prompt> (without the <>)
### Response:
```
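A minimal sketch of assembling that prompt in Python; the instruction text is only an illustration:
```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style single-turn prompt, following the format described above
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Summarize the plot of Hamlet in two sentences.")
print(prompt)
```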
## Benchmarks
- wikitext2: 4.372413635253906
- ptb-new: 24.69171714782715
- c4-new: 6.469308853149414
Results generated with GPTQ evals (not quantized) thanks to [Neko-Institute-of-Science](https://huggingface.co/Neko-Institute-of-Science)
|
Davlan/xlm-roberta-base-finetuned-luganda
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- recall
- precision
- accuracy
- f1
model-index:
- name: kematangan-pisang-vit-l-16-100eph-224-v1.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kematangan-pisang-vit-l-16-100eph-224-v1.9
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0483
- Recall: 0.9843
- Specificity: 0.9978
- Precision: 0.9886
- Npv: 0.9979
- Accuracy: 0.9927
- F1: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Specificity | Precision | Npv | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-----------:|:---------:|:------:|:--------:|:------:|
| No log | 1.0 | 273 | 0.0410 | 0.9750 | 0.9957 | 0.9714 | 0.9956 | 0.9853 | 0.9731 |
| 0.1779 | 2.0 | 546 | 0.0775 | 0.9722 | 0.9930 | 0.9603 | 0.9933 | 0.9780 | 0.9641 |
| 0.1779 | 3.0 | 819 | 0.0998 | 0.9216 | 0.9893 | 0.9495 | 0.9903 | 0.9651 | 0.9313 |
| 0.1317 | 4.0 | 1092 | 0.0933 | 0.9396 | 0.9921 | 0.9672 | 0.9929 | 0.9743 | 0.9496 |
| 0.1317 | 5.0 | 1365 | 0.0853 | 0.9509 | 0.9937 | 0.9769 | 0.9945 | 0.9798 | 0.9607 |
| 0.074 | 6.0 | 1638 | 0.1223 | 0.9239 | 0.9890 | 0.9659 | 0.9909 | 0.9670 | 0.9378 |
| 0.074 | 7.0 | 1911 | 0.0295 | 0.9864 | 0.9978 | 0.9864 | 0.9978 | 0.9927 | 0.9864 |
| 0.066 | 8.0 | 2184 | 0.1323 | 0.9468 | 0.9882 | 0.9272 | 0.9875 | 0.9578 | 0.9276 |
| 0.066 | 9.0 | 2457 | 0.0710 | 0.9529 | 0.9933 | 0.9653 | 0.9937 | 0.9780 | 0.9582 |
| 0.0973 | 10.0 | 2730 | 0.1141 | 0.9330 | 0.9914 | 0.9695 | 0.9926 | 0.9725 | 0.9451 |
| 0.0635 | 11.0 | 3003 | 0.0633 | 0.9530 | 0.9938 | 0.9733 | 0.9944 | 0.9798 | 0.9611 |
| 0.0635 | 12.0 | 3276 | 0.1567 | 0.9107 | 0.9886 | 0.9609 | 0.9902 | 0.9633 | 0.9245 |
| 0.0516 | 13.0 | 3549 | 0.0713 | 0.9686 | 0.9955 | 0.9770 | 0.9958 | 0.9853 | 0.9724 |
| 0.0516 | 14.0 | 3822 | 0.0876 | 0.9530 | 0.9938 | 0.9733 | 0.9944 | 0.9798 | 0.9611 |
| 0.0506 | 15.0 | 4095 | 0.1739 | 0.9218 | 0.9898 | 0.9595 | 0.9910 | 0.9670 | 0.9337 |
| 0.0506 | 16.0 | 4368 | 0.0363 | 0.9820 | 0.9973 | 0.9840 | 0.9973 | 0.9908 | 0.9830 |
| 0.0507 | 17.0 | 4641 | 0.0320 | 0.9907 | 0.9980 | 0.9833 | 0.9977 | 0.9927 | 0.9867 |
| 0.0507 | 18.0 | 4914 | 0.0452 | 0.9816 | 0.9960 | 0.9759 | 0.9961 | 0.9872 | 0.9784 |
| 0.0498 | 19.0 | 5187 | 0.2982 | 0.8882 | 0.9848 | 0.9457 | 0.9871 | 0.9523 | 0.9020 |
| 0.0498 | 20.0 | 5460 | 0.2174 | 0.9039 | 0.9875 | 0.9520 | 0.9891 | 0.9596 | 0.9169 |
| 0.0475 | 21.0 | 5733 | 0.0744 | 0.9663 | 0.9950 | 0.9723 | 0.9952 | 0.9835 | 0.9691 |
| 0.0543 | 22.0 | 6006 | 0.1414 | 0.9016 | 0.9869 | 0.9507 | 0.9885 | 0.9578 | 0.9148 |
| 0.0543 | 23.0 | 6279 | 0.0716 | 0.9575 | 0.9943 | 0.9754 | 0.9949 | 0.9817 | 0.9648 |
| 0.0338 | 24.0 | 6552 | 0.0809 | 0.9486 | 0.9932 | 0.9713 | 0.9939 | 0.9780 | 0.9573 |
| 0.0338 | 25.0 | 6825 | 0.0381 | 0.9664 | 0.9955 | 0.9796 | 0.9959 | 0.9853 | 0.9721 |
| 0.0317 | 26.0 | 7098 | 0.1822 | 0.9519 | 0.9849 | 0.9764 | 0.9901 | 0.9670 | 0.9627 |
| 0.0317 | 27.0 | 7371 | 0.1652 | 0.9152 | 0.9891 | 0.9626 | 0.9906 | 0.9651 | 0.9287 |
| 0.0368 | 28.0 | 7644 | 0.1473 | 0.9196 | 0.9897 | 0.9643 | 0.9911 | 0.9670 | 0.9329 |
| 0.0368 | 29.0 | 7917 | 0.1372 | 0.9196 | 0.9897 | 0.9643 | 0.9911 | 0.9670 | 0.9329 |
| 0.0338 | 30.0 | 8190 | 0.2038 | 0.8929 | 0.9863 | 0.9545 | 0.9883 | 0.9560 | 0.9068 |
| 0.0338 | 31.0 | 8463 | 0.0637 | 0.9664 | 0.9955 | 0.9796 | 0.9959 | 0.9853 | 0.9721 |
| 0.0313 | 32.0 | 8736 | 0.0349 | 0.9907 | 0.9980 | 0.9833 | 0.9977 | 0.9927 | 0.9867 |
| 0.0225 | 33.0 | 9009 | 0.0229 | 0.9923 | 0.9983 | 0.9909 | 0.9980 | 0.9945 | 0.9916 |
| 0.0225 | 34.0 | 9282 | 0.1540 | 0.9232 | 0.9897 | 0.9643 | 0.9907 | 0.9670 | 0.9356 |
| 0.0422 | 35.0 | 9555 | 0.0289 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0422 | 36.0 | 9828 | 0.0500 | 0.9798 | 0.9972 | 0.9863 | 0.9974 | 0.9908 | 0.9828 |
| 0.0282 | 37.0 | 10101 | 0.1265 | 0.9509 | 0.9937 | 0.9769 | 0.9945 | 0.9798 | 0.9607 |
| 0.0282 | 38.0 | 10374 | 0.0644 | 0.9684 | 0.9943 | 0.9775 | 0.9952 | 0.9835 | 0.9727 |
| 0.0197 | 39.0 | 10647 | 0.0722 | 0.9686 | 0.9955 | 0.9770 | 0.9958 | 0.9853 | 0.9724 |
| 0.0197 | 40.0 | 10920 | 0.0548 | 0.9709 | 0.9961 | 0.9818 | 0.9964 | 0.9872 | 0.9757 |
| 0.0232 | 41.0 | 11193 | 0.1177 | 0.9552 | 0.9938 | 0.9702 | 0.9943 | 0.9798 | 0.9615 |
| 0.0232 | 42.0 | 11466 | 0.1641 | 0.9396 | 0.9921 | 0.9672 | 0.9929 | 0.9743 | 0.9496 |
| 0.0199 | 43.0 | 11739 | 0.0517 | 0.9754 | 0.9966 | 0.9840 | 0.9969 | 0.9890 | 0.9793 |
| 0.0211 | 44.0 | 12012 | 0.0479 | 0.9863 | 0.9974 | 0.9807 | 0.9972 | 0.9908 | 0.9833 |
| 0.0211 | 45.0 | 12285 | 0.0361 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0299 | 46.0 | 12558 | 0.0395 | 0.9864 | 0.9978 | 0.9864 | 0.9978 | 0.9927 | 0.9864 |
| 0.0299 | 47.0 | 12831 | 0.1815 | 0.9352 | 0.9915 | 0.9653 | 0.9924 | 0.9725 | 0.9457 |
| 0.0175 | 48.0 | 13104 | 0.0167 | 0.9932 | 0.9989 | 0.9932 | 0.9989 | 0.9963 | 0.9932 |
| 0.0175 | 49.0 | 13377 | 0.1156 | 0.9463 | 0.9923 | 0.9703 | 0.9933 | 0.9761 | 0.9557 |
| 0.0179 | 50.0 | 13650 | 0.0377 | 0.9843 | 0.9978 | 0.9886 | 0.9979 | 0.9927 | 0.9863 |
| 0.0179 | 51.0 | 13923 | 0.1576 | 0.9330 | 0.9914 | 0.9695 | 0.9926 | 0.9725 | 0.9451 |
| 0.0109 | 52.0 | 14196 | 0.0649 | 0.9709 | 0.9961 | 0.9818 | 0.9964 | 0.9872 | 0.9757 |
| 0.0109 | 53.0 | 14469 | 0.0735 | 0.9664 | 0.9955 | 0.9796 | 0.9959 | 0.9853 | 0.9721 |
| 0.001 | 54.0 | 14742 | 0.1071 | 0.9620 | 0.9949 | 0.9775 | 0.9954 | 0.9835 | 0.9685 |
| 0.0104 | 55.0 | 15015 | 0.0700 | 0.9620 | 0.9945 | 0.9818 | 0.9954 | 0.9835 | 0.9702 |
| 0.0104 | 56.0 | 15288 | 0.0196 | 0.9909 | 0.9984 | 0.9889 | 0.9983 | 0.9945 | 0.9899 |
| 0.0238 | 57.0 | 15561 | 0.0495 | 0.9754 | 0.9966 | 0.9840 | 0.9969 | 0.9890 | 0.9793 |
| 0.0238 | 58.0 | 15834 | 0.1053 | 0.9575 | 0.9943 | 0.9754 | 0.9949 | 0.9817 | 0.9648 |
| 0.0251 | 59.0 | 16107 | 0.0445 | 0.9798 | 0.9972 | 0.9863 | 0.9974 | 0.9908 | 0.9828 |
| 0.0251 | 60.0 | 16380 | 0.1500 | 0.9352 | 0.9915 | 0.9653 | 0.9924 | 0.9725 | 0.9457 |
| 0.0151 | 61.0 | 16653 | 0.0162 | 0.9977 | 0.9995 | 0.9956 | 0.9994 | 0.9982 | 0.9966 |
| 0.0151 | 62.0 | 16926 | 0.0597 | 0.9754 | 0.9966 | 0.9840 | 0.9969 | 0.9890 | 0.9793 |
| 0.0086 | 63.0 | 17199 | 0.0648 | 0.9709 | 0.9961 | 0.9818 | 0.9964 | 0.9872 | 0.9757 |
| 0.0086 | 64.0 | 17472 | 0.1027 | 0.9620 | 0.9949 | 0.9775 | 0.9954 | 0.9835 | 0.9685 |
| 0.0097 | 65.0 | 17745 | 0.1261 | 0.9396 | 0.9921 | 0.9672 | 0.9929 | 0.9743 | 0.9496 |
| 0.0148 | 66.0 | 18018 | 0.0229 | 0.9954 | 0.9990 | 0.9914 | 0.9989 | 0.9963 | 0.9933 |
| 0.0148 | 67.0 | 18291 | 0.0653 | 0.9709 | 0.9961 | 0.9818 | 0.9964 | 0.9872 | 0.9757 |
| 0.0096 | 68.0 | 18564 | 0.0581 | 0.9775 | 0.9963 | 0.9854 | 0.9968 | 0.9890 | 0.9812 |
| 0.0096 | 69.0 | 18837 | 0.0404 | 0.9931 | 0.9985 | 0.9873 | 0.9983 | 0.9945 | 0.9900 |
| 0.0118 | 70.0 | 19110 | 0.0674 | 0.9709 | 0.9961 | 0.9818 | 0.9964 | 0.9872 | 0.9757 |
| 0.0118 | 71.0 | 19383 | 0.0816 | 0.9709 | 0.9961 | 0.9818 | 0.9964 | 0.9872 | 0.9757 |
| 0.0027 | 72.0 | 19656 | 0.0293 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0027 | 73.0 | 19929 | 0.0349 | 0.9843 | 0.9978 | 0.9886 | 0.9979 | 0.9927 | 0.9863 |
| 0.0036 | 74.0 | 20202 | 0.0291 | 0.9932 | 0.9989 | 0.9932 | 0.9989 | 0.9963 | 0.9932 |
| 0.0036 | 75.0 | 20475 | 0.0370 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0051 | 76.0 | 20748 | 0.0260 | 0.9932 | 0.9989 | 0.9932 | 0.9989 | 0.9963 | 0.9932 |
| 0.0002 | 77.0 | 21021 | 0.1284 | 0.9552 | 0.9934 | 0.9745 | 0.9943 | 0.9798 | 0.9632 |
| 0.0002 | 78.0 | 21294 | 0.0747 | 0.9754 | 0.9966 | 0.9840 | 0.9969 | 0.9890 | 0.9793 |
| 0.0022 | 79.0 | 21567 | 0.0745 | 0.9754 | 0.9966 | 0.9840 | 0.9969 | 0.9890 | 0.9793 |
| 0.0022 | 80.0 | 21840 | 0.0344 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0027 | 81.0 | 22113 | 0.0199 | 0.9932 | 0.9989 | 0.9932 | 0.9989 | 0.9963 | 0.9932 |
| 0.0027 | 82.0 | 22386 | 0.0236 | 0.9932 | 0.9989 | 0.9932 | 0.9989 | 0.9963 | 0.9932 |
| 0.0 | 83.0 | 22659 | 0.0245 | 0.9932 | 0.9989 | 0.9932 | 0.9989 | 0.9963 | 0.9932 |
| 0.0 | 84.0 | 22932 | 0.1361 | 0.9486 | 0.9932 | 0.9713 | 0.9939 | 0.9780 | 0.9573 |
| 0.0043 | 85.0 | 23205 | 0.0325 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0043 | 86.0 | 23478 | 0.0275 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0023 | 87.0 | 23751 | 0.0412 | 0.9843 | 0.9978 | 0.9886 | 0.9979 | 0.9927 | 0.9863 |
| 0.0 | 88.0 | 24024 | 0.0361 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0 | 89.0 | 24297 | 0.0254 | 0.9932 | 0.9989 | 0.9932 | 0.9989 | 0.9963 | 0.9932 |
| 0.0 | 90.0 | 24570 | 0.0318 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0 | 91.0 | 24843 | 0.0263 | 0.9954 | 0.9990 | 0.9914 | 0.9989 | 0.9963 | 0.9933 |
| 0.0025 | 92.0 | 25116 | 0.0610 | 0.9798 | 0.9972 | 0.9863 | 0.9974 | 0.9908 | 0.9828 |
| 0.0025 | 93.0 | 25389 | 0.0661 | 0.9798 | 0.9972 | 0.9863 | 0.9974 | 0.9908 | 0.9828 |
| 0.0003 | 94.0 | 25662 | 0.0356 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0003 | 95.0 | 25935 | 0.0380 | 0.9888 | 0.9983 | 0.9909 | 0.9984 | 0.9945 | 0.9898 |
| 0.0 | 96.0 | 26208 | 0.0429 | 0.9843 | 0.9978 | 0.9886 | 0.9979 | 0.9927 | 0.9863 |
| 0.0 | 97.0 | 26481 | 0.0548 | 0.9798 | 0.9972 | 0.9863 | 0.9974 | 0.9908 | 0.9828 |
| 0.0031 | 98.0 | 26754 | 0.0561 | 0.9798 | 0.9972 | 0.9863 | 0.9974 | 0.9908 | 0.9828 |
| 0.0 | 99.0 | 27027 | 0.0490 | 0.9798 | 0.9972 | 0.9863 | 0.9974 | 0.9908 | 0.9828 |
| 0.0 | 100.0 | 27300 | 0.0483 | 0.9843 | 0.9978 | 0.9886 | 0.9979 | 0.9927 | 0.9863 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Davlan/xlm-roberta-base-ner-hrl
|
[
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 760 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Dawn576/Dawn
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Daymarebait/Discord_BOT_RICK
|
[
"conversational"
] |
conversational
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="cmpatino/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
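Once loaded, the model is essentially a Q-table, and the greedy policy picks the action with the highest Q-value in each state. A minimal evaluation sketch that reuses `model` and `env` from the snippet above; it assumes the saved dictionary stores the table under the key `"qtable"` (as in the course utilities) and a recent Gym/Gymnasium step API, so check both against your setup:
```python
import numpy as np

qtable = model["qtable"]  # assumed key; inspect model.keys() if it differs

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for the current state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```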
|
Dayout/test
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="cmpatino/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Dazai/Ko
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-17T02:50:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased_emotion_ft_0416
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_0416
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.11.0
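A minimal inference sketch for an emotion classifier fine-tuned this way; the repository id below is a placeholder for wherever this checkpoint is actually hosted:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-account/distilbert-base-uncased_emotion_ft_0416",  # placeholder repo id
)
print(classifier("I can't believe how well this worked, I'm thrilled!"))
```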
|
Dbluciferm3737/Idk
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- tasksource/mmlu
- openai/summarize_from_feedback
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Manticore 13B - (previously Wizard Mega)

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/EqrvvehG) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)
Manticore 13B is a Llama 13B model fine-tuned on the following datasets:
- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses
- mmlu: instruct augmented for detailed responses subset including
- abstract_algebra
- conceptual_physics
- formal_logic
- high_school_physics
- logical_fallacies
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
# Demo
Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.
- https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml
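To run the full-precision model locally instead of the hosted demo, a minimal sketch (the repo id comes from the configs link below; the `torch_dtype`/`device_map` settings are assumptions to adjust for your hardware):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/manticore-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Prompt format mirrors the Examples section further down.
prompt = "### Instruction: Describe how a tsunami would affect the residents of Denver Colorado\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```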
## Release Notes
- https://wandb.ai/wing-lian/manticore-13b/runs/nq3u3uoh/workspace
## Build
Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
- Preview Release: 3 epochs taking approximately 24 hours.
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).
## Bias, Risks, and Limitations
Manticore has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Manticore was fine-tuned from the base model LLaMA 13B; please refer to its model card's Limitations section for relevant information.
## Examples
````
### Instruction: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.
### Assistant:
Here’s an example of how you can use memoization to efficiently calculate the first n numbers of the Fibonacci sequence in Python:
```
def fib(n):
# Memoize the last calculated Fibonacci number for each index
cache = {}
def fib_memoized(n):
if n not in cache:
cache[n] = [0, 1]
else:
return cache[n][-1], cache[n][-2]
# Recursive helper function to calculate the Fibonacci sequence up to a given index
def fib_helper(n):
if n == 0 or n == 1:
return 0, 1
else:
return fib_helper(n - 1) + fib_helper(n - 2)
# Calculate the first n numbers of the Fibonacci sequence using memoization
def fib_memoized_seq(n):
if n == 0 or n == 1:
return 0, 1
else:
result = fib_helper(n - 1) + fib_helper(n - 2)
cache[n] = [result]
return result, cache[n][-1], cache[n][-2]
# Example usage
for i in range(len(fib_memoized_seq(3))):
print(fib_memoized_seq(3)[i])
```
In this implementation, we use a cache dictionary to memoize the last calculated Fibonacci number for each index. We define two helper functions: fib_helper, which recursively calculates the Fibonacci sequence up to a given index, and fib_memoized, which uses memoization to efficiently calculate the first n numbers of the Fibonacci sequence. Finally, we define the fib_memoized_seq function, which is our main function for calculating the first n numbers of the Fibonacci sequence using memoization.
````
```
USER: Finish the joke, a mechanic and a car salesman walk into a bar...
ASSISTANT: … and when they get up to leave, they realize that they’ve left their keys in their cars. The mechanic says “I know how to fix this” and opens his toolbox to grab something. Then he stops and looks at the salesman. He says “You drive a Toyota, right?” The salesman nods. The mechanic then takes out a tiny screwdriver from his toolbox and starts working on the lock of the car door. The salesman is amazed and asks how he knew that. The mechanic replies “I can tell by the way you parked your car.”
```
```
USER: Describe how a tsunami would affect the residents of Denver Colorado
ASSISTANT: The impact of a tsunami in Denver, Colorado would likely be minimal. As there are no coastal areas or bodies of water nearby, a tsunami generated from an earthquake or volcanic activity elsewhere in the world would have little effect on the city and its residents. Even if a large tsunami were to hit the coastline of California or Mexico, the distance between Denver and these locations means that any impacts on the city itself would be minimal.
```
```
USER: If you could do one thing right now, what would it be?
ASSISTANT: I’d hug my loved ones a little tighter today and tell them how much they mean to me.
USER: And who are your loved ones?
ASSISTANT: My closest friends and family members.
USER: What are their names?
ASSISTANT: My two best friends are named Emily and Jessica, my mom is Mary, and my dad is David. Answering the question with a bit more information about who they are or what they mean to me would help provide a more personalized response.
```
|
Dbluciferm3737/U
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
DeBERTa/deberta-v2-xxlarge
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-folk-mythology-tales
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-folk-mythology-tales
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0895
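A generation sketch assuming the standard `transformers` pipeline API; the hub path is a placeholder since the card does not list the full repo id:
```python
from transformers import pipeline

# Placeholder repo id for this fine-tuned checkpoint.
generator = pipeline("text-generation", model="<user>/gpt2-finetuned-folk-mythology-tales")

story = generator(
    "Long ago, in a village at the edge of the forest,",
    max_new_tokens=120,
    do_sample=True,
    top_p=0.95,
)
print(story[0]["generated_text"])
```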
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3083 | 1.0 | 2844 | 3.1913 |
| 3.1865 | 2.0 | 5688 | 3.1404 |
| 3.1166 | 3.0 | 8532 | 3.1063 |
| 3.0783 | 4.0 | 11376 | 3.0944 |
| 3.0474 | 5.0 | 14220 | 3.0895 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DeadBeast/marathi-roberta-base
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: openrail
datasets:
- QingyiSi/Alpaca-CoT
language:
- aa
- ar
metrics:
- bleu
- character
library_name: open_clip
pipeline_tag: text-classification
tags:
- finance
- medical
- text-generation-inference
---
|
Declan/Breitbart_model_v4
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
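The listed hyperparameters map directly onto `TrainingArguments`; a rough reconstruction is sketched below, where the `wikitext-2-raw-v1` dataset and the tokenization choices are assumptions inferred from the model name rather than details recorded in this card:
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 style tokenizers have no pad token by default

# Dataset choice is an assumption based on the model name; the card itself only says "None dataset".
raw = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
).filter(lambda example: len(example["input_ids"]) > 0)

args = TrainingArguments(
    output_dir="distilgpt2-finetuned-wikitext2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```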
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Declan/Breitbart_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-misogyny-sexism-plus-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-sexism-plus-20
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4217
- Accuracy: 0.7225
- F1: 0.7324
- Precision: 0.7318
- Recall: 0.7331
- Mae: 0.2775
- Tn: 1839
- Fp: 747
- Fn: 742
- Tp: 2038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|:----:|:---:|:----:|:----:|
| 0.1956 | 1.0 | 3770 | 1.2666 | 0.7171 | 0.7252 | 0.7300 | 0.7205 | 0.2829 | 1845 | 741 | 777 | 2003 |
| 0.2072 | 2.0 | 7540 | 1.3804 | 0.6996 | 0.6642 | 0.7891 | 0.5734 | 0.3004 | 2160 | 426 | 1186 | 1594 |
| 0.1839 | 3.0 | 11310 | 1.4879 | 0.6964 | 0.6526 | 0.8015 | 0.5504 | 0.3036 | 2207 | 379 | 1250 | 1530 |
| 0.167 | 4.0 | 15080 | 1.4170 | 0.7352 | 0.7443 | 0.7447 | 0.7439 | 0.2648 | 1877 | 709 | 712 | 2068 |
| 0.1986 | 5.0 | 18850 | 1.5159 | 0.7151 | 0.7030 | 0.7640 | 0.6511 | 0.2849 | 2027 | 559 | 970 | 1810 |
| 0.1762 | 6.0 | 22620 | 1.2861 | 0.7333 | 0.7321 | 0.7634 | 0.7032 | 0.2667 | 1980 | 606 | 825 | 1955 |
| 0.1381 | 7.0 | 26390 | 1.4217 | 0.7225 | 0.7324 | 0.7318 | 0.7331 | 0.2775 | 1839 | 747 | 742 | 2038 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Declan/Breitbart_model_v7
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Declan/Breitbart_model_v8
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Declan/Breitbart_modelv7
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Declan/CNN_model_v1
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Declan/CNN_model_v2
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Declan/CNN_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Declan/CNN_model_v4
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: jason-expert-uspto
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jason-expert-uspto
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+rocm5.4.2
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Declan/CNN_model_v7
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
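A minimal synthesis sketch, assuming the standard SpeechT5 API; the checkpoint path is a placeholder and the zero speaker embedding is only a stand-in for a real x-vector:
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("<user>/speecht5-tts-dutch-neunit")  # placeholder path
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="hallo, dit is een korte test", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # stand-in; normally a 512-dim x-vector of the target speaker
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)  # waveform at 16 kHz
```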
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
|
Declan/ChicagoTribune_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0371
- Wer Score: 2.4731
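A captioning sketch using the standard GIT API; the local path is a placeholder for wherever this fine-tuned checkpoint lives:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained("./git-base-pokemon")  # placeholder path to this checkpoint

image = Image.open("pokemon.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```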
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.3268 | 2.13 | 50 | 4.4847 | 21.4974 |
| 2.2464 | 4.26 | 100 | 0.3519 | 11.4118 |
| 0.1049 | 6.38 | 150 | 0.0302 | 0.7468 |
| 0.0223 | 8.51 | 200 | 0.0270 | 0.4668 |
| 0.0137 | 10.64 | 250 | 0.0280 | 3.5742 |
| 0.0073 | 12.77 | 300 | 0.0304 | 7.1240 |
| 0.0034 | 14.89 | 350 | 0.0309 | 6.4885 |
| 0.0018 | 17.02 | 400 | 0.0326 | 5.0499 |
| 0.0011 | 19.15 | 450 | 0.0335 | 5.2302 |
| 0.0009 | 21.28 | 500 | 0.0342 | 4.3645 |
| 0.0007 | 23.4 | 550 | 0.0346 | 5.1445 |
| 0.0006 | 25.53 | 600 | 0.0351 | 4.0639 |
| 0.0006 | 27.66 | 650 | 0.0355 | 3.8862 |
| 0.0006 | 29.79 | 700 | 0.0359 | 3.4514 |
| 0.0006 | 31.91 | 750 | 0.0363 | 3.0486 |
| 0.0006 | 34.04 | 800 | 0.0363 | 2.8645 |
| 0.0006 | 36.17 | 850 | 0.0366 | 2.7199 |
| 0.0006 | 38.3 | 900 | 0.0369 | 2.6675 |
| 0.0006 | 40.43 | 950 | 0.0369 | 2.6304 |
| 0.0006 | 42.55 | 1000 | 0.0370 | 2.4910 |
| 0.0006 | 44.68 | 1050 | 0.0370 | 2.4834 |
| 0.0006 | 46.81 | 1100 | 0.0371 | 2.4629 |
| 0.0006 | 48.94 | 1150 | 0.0371 | 2.4731 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.12.1
- Datasets 2.11.0
- Tokenizers 0.11.0
|
Declan/FoxNews_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: ((Best quality)), ((masterpiece)), ((realistic)) human upper body photo of a person for an ID or passport. The photo will be taken with soft, even lighting in an indoor setting. The style will focus on realism, using digital photography as the medium. The artwork will depict an upper body portrait, capturing the subject in a professional attire. The color scheme will consist of neutral tones, creating a refined and polished appearance. The final result will be a high definition, precise depiction suitable for an ID or passport.
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MayIBorn/ft-sd15-human
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on ((Best quality)), ((masterpiece)), ((realistic)) human upper body photo of a person for an ID or passport. The photo will be taken with soft, even lighting in an indoor setting. The style will focus on realism, using digital photography as the medium. The artwork will depict an upper body portrait, capturing the subject in a professional attire. The color scheme will consist of neutral tones, creating a refined and polished appearance. The final result will be a high definition, precise depiction suitable for an ID or passport. using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
|
Declan/FoxNews_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
Access to model 1qazxsw/1 is restricted and you are not in the authorized list. Visit https://huggingface.co/1qazxsw/1 to ask for access.
|
Declan/HuffPost_model_v1
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | 2023-05-17T04:31:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: stance_class_l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stance_class_l
This model is a fine-tuned version of vinai/bertweet-base on a dataset of 804 labeled tweets about the Roundup weedkiller cancer-risk controversy.
It classifies the stance of an individual's tweet toward Bayer, Monsanto, or other relevant organizations in the crisis.
Two stances are classified: (0) Aggressive, (1) Non-Aggressive (neutral and accommodative).
It achieves the following results on the evaluation set:
- Loss: 0.6084
- Accuracy: 0.8447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.924e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3566 | 1.0 | 17 | 0.4855 | 0.7578 |
| 0.2532 | 2.0 | 34 | 0.3632 | 0.8509 |
| 0.2351 | 3.0 | 51 | 0.3773 | 0.8509 |
| 0.043 | 4.0 | 68 | 0.3553 | 0.8571 |
| 0.08 | 5.0 | 85 | 0.4682 | 0.8447 |
| 0.3089 | 6.0 | 102 | 0.4686 | 0.8509 |
| 0.035 | 7.0 | 119 | 0.5876 | 0.8323 |
| 0.0188 | 8.0 | 136 | 0.5469 | 0.8571 |
| 0.021 | 9.0 | 153 | 0.5022 | 0.8447 |
| 0.0533 | 10.0 | 170 | 0.5240 | 0.8385 |
| 0.0175 | 11.0 | 187 | 0.6352 | 0.8447 |
| 0.0106 | 12.0 | 204 | 0.5856 | 0.8447 |
| 1.9534 | 13.0 | 221 | 0.5938 | 0.8509 |
| 0.0143 | 14.0 | 238 | 0.6074 | 0.8447 |
| 0.0079 | 15.0 | 255 | 0.6084 | 0.8447 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Declan/HuffPost_model_v8
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) trained on a [Fake News Dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.1862
- Accuracy: 0.9696
- LABEL_0 = Fake news
- LABEL_1 = Real News
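A scoring sketch that maps the raw labels above to readable names; the checkpoint path is a placeholder (the card does not give the full hub id) and the sample headline is purely illustrative:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="<user>/BERT_model")  # placeholder repo id
label_names = {"LABEL_0": "fake news", "LABEL_1": "real news"}

pred = clf("Scientists announce a miracle cure that doctors don't want you to know about.")[0]
print(label_names[pred["label"]], round(pred["score"], 3))
```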
## Model description
More information needed
## Intended uses & limitations
This model was created for the purposes of a UW IMT 575 course project.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1916 | 1.0 | 4490 | 0.1435 | 0.9622 |
| 0.0668 | 2.0 | 8980 | 0.1761 | 0.9650 |
| 0.0713 | 3.0 | 13470 | 0.1862 | 0.9696 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Declan/NPR_model_v2
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
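A document-parsing sketch with the usual Donut pipeline; the checkpoint path and the `<s_sroie>` task prompt are assumptions (the SROIE-style task token is implied only by the model name):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("./donut-base-sroie")         # placeholder path
model = VisionEncoderDecoderModel.from_pretrained("./donut-base-sroie")  # placeholder path

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_sroie>"  # assumed task start token
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=512,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
)
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
print(processor.token2json(sequence))  # convert the generated tag sequence into a dict
```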
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Declan/NPR_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter_policy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 68.00 +/- 56.45
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Declan/NewYorkPost_model_v1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
language:
- en
pipeline_tag: text2text-generation
tags:
- legal
---
# flan-t5-cbp-lkg-corpus-w-context-small-finetuned
[flan-t5-cbp-lkg-corpus-mlm-small](https://huggingface.co/kinshuk-h/flan-t5-cbp-lkg-corpus-mlm-small) finetuned over a cleaned version of the Legal Knowledge Graph using the training method for [KGT-5](https://github.com/apoorvumang/kgt5/) with additional context supplied alongside the prompts.
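A generation sketch in the KGT-5 style with context prepended; the repo id is assumed to match the card title, and the exact prompt format ("predict tail: ..." plus a context field) is an illustration rather than the documented training format:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kinshuk-h/flan-t5-cbp-lkg-corpus-w-context-small-finetuned"  # assumed from the card title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative prompt: a triple-completion query with supporting context appended.
prompt = (
    "predict tail: Section 300 | is part of | ? "
    "context: Section 300 appears in the Indian Penal Code, which defines offences relating to murder."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```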
|
Declan/NewYorkTimes_model_v1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromP_BFall_10KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromP_BFall_10KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1046
- Accuracy: 0.9898
- F1: 0.8806
- Precision: 0.9952
- Recall: 0.7896
- Roc Auc Score: 0.8947
- Tpr At Fpr 0.01: 0.7606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0104 | 1.0 | 13125 | 0.0568 | 0.9869 | 0.8415 | 0.9964 | 0.7282 | 0.8640 | 0.7054 |
| 0.0078 | 2.0 | 26250 | 0.0722 | 0.9871 | 0.8440 | 0.9932 | 0.7338 | 0.8668 | 0.6516 |
| 0.0047 | 3.0 | 39375 | 0.0675 | 0.9900 | 0.8833 | 0.9913 | 0.7966 | 0.8981 | 0.7312 |
| 0.0011 | 4.0 | 52500 | 0.0811 | 0.9904 | 0.8888 | 0.9936 | 0.804 | 0.9019 | 0.7698 |
| 0.0 | 5.0 | 65625 | 0.1046 | 0.9898 | 0.8806 | 0.9952 | 0.7896 | 0.8947 | 0.7606 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Declan/NewYorkTimes_model_v2
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
datasets:
- shrusti333/konkani_translation
language:
- en
metrics:
- accuracy
pipeline_tag: translation
library_name: keras
---
|
Declan/NewYorkTimes_model_v4
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
Detail Tweaker LoRA (detail-adjustment LoRA)
https://civitai.com/models/58390/detail-tweaker-lora-lora
<lora:add_detail:1>
After sex / Broken / Defeated / Female focus
https://civitai.com/models/52204/after-sex-broken-defeated-female-focus
https://civitai.com/api/download/models/56650
Accidental exposure (exposure while resting)
https://civitai.com/models/61246/accidental-exposure
https://civitai.com/api/download/models/66099
trigger word:accidental exposure panties
Giantess | Concept (gigantification)
https://civitai.com/models/25306/giantess-or-concept
https://civitai.com/api/download/models/63765
trigger words:GTS,GTSCityGTS,GTSCITY,GTSGIGA,GTSPLANET,GTSSPACE,LOOMING,STEPPING,RAMPAGE,SITTING ON BUILDING
Lactation (milking)
https://civitai.com/models/9652/lactation
https://civitai.com/api/download/models/60642
trigger:lactation
bukkake(颜色)
https://civitai.com/models/31040/bukkake
https://civitai.com/api/download/models/56764
T:bukkake
<lora:qqq-bukkake-v2-000007:0.9> 0.8-1
Mtu Virus (front view)
https://civitai.com/models/57573/mtu-virus
https://civitai.com/api/download/models/61999
T:mtu virus,multiple views
<lora:Mtu Virus:0.9>
Sitting split [Pose] (sitting in a full split)
https://civitai.com/models/57861/sitting-split-pose
https://civitai.com/api/download/models/62306
T:split,spread legs,sitting
<lora:splitSitting:1>
Looking Disgusted (Facial Expression)
https://civitai.com/models/53448/looking-disgusted-facial-expression
https://civitai.com/api/download/models/57812
T:((looking disgusted)),very angry,disappointed
<lora:LookingDisgusted_V1:0.5>
Clothes Handpull Flashing Concept
https://civitai.com/models/54718/clothes-handpull-flashing-concept
https://civitai.com/api/download/models/59098
T:handpull
<lora:handpull-12:0.9>
[Character] Kafka (Honkai: Star Rail)
https://civitai.com/models/52336/characterkafka-honkai-star-rail
https://civitai.com/api/download/models/60417
T:star-kafka,eyewear on head,purple hair,sunglasses,red gloves,hair between eyes
<lora:kafka-v1.5-lora-novaeany-64dim:0.7>
Ultimate Ahegao
https://civitai.com/models/45566/ultimate-ahegao
https://civitai.com/api/download/models/50192
T:naughty face, rolling_eyes, cross-eyed, ahegao, open mouth, tongue out,
<lora:ultimateAhegao:>
Cereza Bayonetta
https://civitai.com/models/19293/cereza-bayonetta
https://civitai.com/api/download/models/64443
T:cereza
<lora:cerezaV3:0.6>
Tennis Outfit
https://civitai.com/models/61194/tennis-outfit
https://civitai.com/api/download/models/65659
T:wearing tennis_outfit,wearing tennis_hat,tennis field
<lora:tennis_outfit:0.5>
pov bulge concepts 隔着裤子抹茶
https://civitai.com/models/57205/pov-bulge-concepts
https://civitai.com/api/download/models/61613
T:bulge,pov
<lora:concept-pov-bulge-v1:0.7>
POV missionary
https://civitai.com/models/56111/pov-missionary
https://civitai.com/api/download/models/60512
T:missionary,penis,pov
<lora:qqq-pov-missionary-v1-000018:0.8> 0.8-1
|
Declan/Politico_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake-1v-ver1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # gym (or gymnasium) is assumed to be installed; load_from_hub is the helper defined in the course notebook
model = load_from_hub(repo_id="SergeyKazulin/FrozenLake-1v-ver1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Declan/Reuters_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: llm_week01
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# llm_week01
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Declan/WallStreetJournal_model_v1
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
Access to model MonkeyDKing/shen-zhou is restricted and you are not in the authorized list. Visit https://huggingface.co/MonkeyDKing/shen-zhou to ask for access.
|
Declan/WallStreetJournal_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: bigscience-bloom-rail-1.0
datasets:
- BelleGroup/train_1M_CN
language:
- zh
library_name: transformers
---
## Introduction
1. ✅ SFT was performed on the `bloom-560m` model; at this data scale and model size, the results are very good!
2. 🚀 The training and inference code is fully shared; see [https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom](https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom)
## Personal impressions
1. 🎯 The `bloom` family of models has enormous potential for Chinese; after supervised fine-tuning, the results are striking!
2. 🔄 The `bloom` models cover Chinese, English, code, French, Spanish, and more. They also work fine for translation and code generation! (Related tutorials will be shared later.)
3. 😛 The current `bloom-560m` model is mainly meant to validate the whole training pipeline; you can seamlessly switch to other sizes such as `3b` or `7b`!
## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "yuanzhoulvpi/chinese_bloom_560m"  # alternatives: "bigscience/bloomz-3b", "bigscience/bloom-7b1", "output_dir/checkpoint-8260"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
from typing import Optional
def generate_input(instruction:Optional[str]= None, input_str:Optional[str] = None) -> str:
if input_str is None:
return PROMPT_DICT['prompt_no_input'].format_map({'instruction':instruction})
else:
return PROMPT_DICT['prompt_input'].format_map({'instruction':instruction, 'input':input_str})
for i in range(5):
print("*"*80)
inputs = tokenizer.encode(generate_input(instruction="你是谁"), return_tensors="pt")
outputs = model.generate(inputs,num_beams=3,
max_new_tokens=512,
do_sample=False,
top_k=10,
penalty_alpha=0.6,
temperature=0.8,
repetition_penalty=1.2)
print(tokenizer.decode(outputs[0]))
```
### Example 1
1. Output before fine-tuning
```bash
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
写一篇关于水杯的文章
### Response: 杯子是用来喝水用的</s>
```
2. Output after fine-tuning
```bash
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
写一篇关于水杯的文章
### Response:
水杯是人们日常生活中必不可少的物品之一,它能够帮助人们盛放、储存和运输水。以下是一些有关水杯的详细信息:
1. 形状:水杯通常有圆形、方形、长方形等不同形状。
2. 容量:水杯通常有10毫升、20毫升、30毫升、40毫升等不同容量。
3. 材质:水杯通常有玻璃、陶瓷、金属等不同材质。
4. 颜色:水杯通常有红色、蓝色、黄色、绿色等不同颜色。
5. 用途:水杯通常用于盛放、储存和运输水,如饮料、果汁、茶、咖啡等。
6. 功能:水杯可以用来盛放、储存和运输各种液体,如饮料、果汁、茶、咖啡等。
7. 用途广泛:水杯不仅用于盛放、储存和运输水,还可以用于制作各种饮料、果汁、茶、咖啡等。
总之,水杯是一个不可或缺的物品,它能够帮助人们盛放、储存和运输水,同时还可以用于制作各种饮料、果汁、茶、咖啡等。</s>
```
### Example 2
1. Output before fine-tuning
```bash
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
你是谁
### Response: I am a student.</s>
```
2. Output after fine-tuning
```bash
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
你是谁
### Response:我是一个AI语言模型,没有个人身份。</s>
```
|
Declan/test_push
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-17T05:44:54Z |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This repo contains the models from [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper accepted at **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
DeepBasak/Slack
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This repo contains the models from [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper accepted at **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
DeepChem/ChemBERTa-10M-MLM
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 90 | null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 8.1288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 63 | 8.6141 |
| No log | 2.0 | 126 | 8.2254 |
| No log | 3.0 | 189 | 8.1288 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 2.0.0+cpu
- Datasets 2.12.0
- Tokenizers 0.12.1
|
DeepChem/ChemBERTa-10M-MTR
|
[
"pytorch",
"roberta",
"arxiv:1910.09700",
"transformers"
] | null |
{
"architectures": [
"RobertaForRegression"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 708 | null |
---
datasets:
- tatsu-lab/alpaca
- wbbbbb/pclue
language:
- en
widget:
- text: |-
### Instruction:
我們如何減少空氣污染
### Response:
tags:
- text-generation-inference
---
A bloomz model with Traditional Chinese instruction-following capability.
Please use it with the Alpaca prompt template below:
```python
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
```
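A minimal generation sketch using the template above (the checkpoint path is a placeholder rather than this repository's actual id, and the generation settings are only illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint -- replace with this repository's model id.
checkpoint = "path/to/this-bloomz-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# PROMPT_DICT is the Alpaca template dict defined above.
prompt = PROMPT_DICT["prompt_no_input"].format_map({"instruction": "我們如何減少空氣污染"})
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```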
|
DeepChem/SmilesTokenizer_PubChem_1M
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 227 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
|
DeepPavlov/bert-base-bg-cs-pl-ru-cased
|
[
"pytorch",
"jax",
"bert",
"feature-extraction",
"bg",
"cs",
"pl",
"ru",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,614 | null |
# AutoML Leaderboard
| Best model | name | model_type | metric_type | metric_value | train_time |
|:-------------|:-------------------------------------------------------------------------------------------------------------------------------|:---------------|:--------------|---------------:|-------------:|
| | [1_Baseline](1_Baseline/README.md) | Baseline | rmse | 22.3189 | 27.94 |
| | [2_DecisionTree](2_DecisionTree/README.md) | Decision Tree | rmse | 12.8338 | 3494.55 |
| | [3_DecisionTree](3_DecisionTree/README.md) | Decision Tree | rmse | 10.8007 | 3542.61 |
| | [4_DecisionTree](4_DecisionTree/README.md) | Decision Tree | rmse | 10.8007 | 3568.62 |
| | [5_Default_LightGBM](5_Default_LightGBM/README.md) | LightGBM | rmse | 0.611488 | 5923.24 |
| | [6_Default_Xgboost](6_Default_Xgboost/README.md) | Xgboost | rmse | 0.633721 | 5824.88 |
| | [7_Default_CatBoost](7_Default_CatBoost/README.md) | CatBoost | rmse | 0.346605 | 41347 |
| | [8_Default_NeuralNetwork](8_Default_NeuralNetwork/README.md) | Neural Network | rmse | 1.50702 | 4638.86 |
| | [9_Default_RandomForest](9_Default_RandomForest/README.md) | Random Forest | rmse | 9.83145 | 4801.5 |
| | [10_Default_ExtraTrees](10_Default_ExtraTrees/README.md) | Extra Trees | rmse | 10.4325 | 4744.49 |
| | [20_LightGBM](20_LightGBM/README.md) | LightGBM | rmse | 0.588545 | 6264.59 |
| | [11_Xgboost](11_Xgboost/README.md) | Xgboost | rmse | 0.720779 | 5940.07 |
| | [29_CatBoost](29_CatBoost/README.md) | CatBoost | rmse | 0.358175 | 60123.8 |
| | [38_RandomForest](38_RandomForest/README.md) | Random Forest | rmse | 9.28304 | 5999.16 |
| | [47_ExtraTrees](47_ExtraTrees/README.md) | Extra Trees | rmse | 11.5642 | 5532.58 |
| | [56_NeuralNetwork](56_NeuralNetwork/README.md) | Neural Network | rmse | 1.76989 | 5756.18 |
| | [6_Default_Xgboost_categorical_mix](6_Default_Xgboost_categorical_mix/README.md) | Xgboost | rmse | 0.625617 | 7511.86 |
| | [20_LightGBM_GoldenFeatures](20_LightGBM_GoldenFeatures/README.md) | LightGBM | rmse | 0.521862 | 6676.63 |
| | [5_Default_LightGBM_GoldenFeatures](5_Default_LightGBM_GoldenFeatures/README.md) | LightGBM | rmse | 0.56261 | 7515.12 |
| | [6_Default_Xgboost_categorical_mix_GoldenFeatures](6_Default_Xgboost_categorical_mix_GoldenFeatures/README.md) | Xgboost | rmse | 0.574645 | 8348.49 |
| | [20_LightGBM_KMeansFeatures](20_LightGBM_KMeansFeatures/README.md) | LightGBM | rmse | 0.611778 | 6887.65 |
| | [57_CatBoost](57_CatBoost/README.md) | CatBoost | rmse | 0.509914 | 30362.9 |
| | [58_CatBoost](58_CatBoost/README.md) | CatBoost | rmse | 0.515477 | 47150.4 |
| | [57_CatBoost_BoostOnErrors](57_CatBoost_BoostOnErrors/README.md) | CatBoost | rmse | 0.516569 | 31113.8 |
| | [Ensemble](Ensemble/README.md) | Ensemble | rmse | 0.324031 | 2.36 |
| | [7_Default_CatBoost_Stacked](7_Default_CatBoost_Stacked/README.md) | CatBoost | rmse | 0.444808 | 20187.4 |
| | [20_LightGBM_GoldenFeatures_Stacked](20_LightGBM_GoldenFeatures_Stacked/README.md) | LightGBM | rmse | 0.482133 | 6414.12 |
| | [6_Default_Xgboost_categorical_mix_GoldenFeatures_Stacked](6_Default_Xgboost_categorical_mix_GoldenFeatures_Stacked/README.md) | Xgboost | rmse | 0.473802 | 6722.89 |
| **the best** | [Ensemble_Stacked](Ensemble_Stacked/README.md) | Ensemble | rmse | 0.321204 | 2.98 |
### AutoML Performance

### AutoML Performance Boxplot

### Features Importance

### Spearman Correlation of Models

|
DeepPavlov/distilrubert-base-cased-conversational
|
[
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null |
{
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6,324 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-var3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper from the accompanying training notebook (not a pip import).
model = load_from_hub(repo_id="SergeyKazulin/Taxi-v3-var3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
DeepPavlov/distilrubert-tiny-cased-conversational-v1
|
[
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null |
{
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9,141 | null |
---
license: mit
language:
- en
pipeline_tag: text2text-generation
tags:
- legal
---
# flan-t5-cbp-lkg-corpus-small-finetuned
[flan-t5-cbp-lkg-corpus-mlm-small](https://huggingface.co/kinshuk-h/flan-t5-cbp-lkg-corpus-mlm-small) finetuned over a cleaned version of the Legal Knowledge Graph using the training method for [KGT-5](https://github.com/apoorvumang/kgt5/).
|
Deniskin/emailer_medium_300
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: fine_tune_bert_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tune_bert_output
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Deniskin/gpt3_medium
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 52 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.63 +/- 14.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
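A minimal loading sketch; the repository id and checkpoint filename below are assumptions and should be replaced with this repo's actual values:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical repo id / filename -- replace with this repository's values.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy on a fresh environment.
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```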
|
Denny29/DialoGPT-medium-asunayuuki
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: fine_tune_bert_output_te_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tune_bert_output_te_ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DeskDown/MarianMixFT_en-hi
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | 2023-05-17T06:56:58Z |
---
language:
- mn
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mn-twhin-bert-named-entity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mn-twhin-bert-named-entity
This model is a fine-tuned version of [Twitter/twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1591
- Precision: 0.9068
- Recall: 0.9199
- F1: 0.9133
- Accuracy: 0.9728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1901 | 1.0 | 477 | 0.1052 | 0.8528 | 0.8872 | 0.8697 | 0.9666 |
| 0.0853 | 2.0 | 954 | 0.1220 | 0.8731 | 0.8963 | 0.8845 | 0.9666 |
| 0.0577 | 3.0 | 1431 | 0.1109 | 0.8889 | 0.9082 | 0.8984 | 0.9696 |
| 0.0396 | 4.0 | 1908 | 0.1172 | 0.9006 | 0.9175 | 0.9090 | 0.9724 |
| 0.0287 | 5.0 | 2385 | 0.1314 | 0.9002 | 0.9169 | 0.9085 | 0.9720 |
| 0.0213 | 6.0 | 2862 | 0.1363 | 0.9051 | 0.9181 | 0.9116 | 0.9720 |
| 0.0158 | 7.0 | 3339 | 0.1437 | 0.9114 | 0.9221 | 0.9167 | 0.9732 |
| 0.011 | 8.0 | 3816 | 0.1517 | 0.9091 | 0.9202 | 0.9146 | 0.9726 |
| 0.0077 | 9.0 | 4293 | 0.1570 | 0.9070 | 0.9199 | 0.9134 | 0.9728 |
| 0.0059 | 10.0 | 4770 | 0.1591 | 0.9068 | 0.9199 | 0.9133 | 0.9728 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DeskDown/MarianMixFT_en-th
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cuad_v1
model-index:
- name: distilbert-base-uncased-finetuned-cuad_smaller_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cuad_smaller_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the cuad_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 100 | 0.0827 |
| No log | 2.0 | 200 | 0.0897 |
| No log | 3.0 | 300 | 0.0862 |
| No log | 4.0 | 400 | 0.0929 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DeskDown/MarianMixFT_en-vi
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: creativeml-openrail-m
language:
- en
tags:
- LLM
- BELLE
---
## Model Card for lyraBELLE
lyraBELLE is currently the **fastest BELLE model** available. To the best of our knowledge, it is the **first accelerated version of BELLE**.
The inference speed of lyraBELLE achieves a **3.3x+** speedup over the original version.
Among its main features are:
- weights: the original BELLE-7B-2M weights released by BelleGroup.
- device: Nvidia Ampere architecture or newer (e.g., A100)
Note that:
**Some interfaces/code were set up for future use (see the demo below).**
- **int8 mode**: not supported yet, please always set it at 0
- **data type**: only `fp16` available.
## Speed
### test environment
- device: Nvidia A100 40G
- warmup: 10 rounds
- precision: fp16
- batch size: 64
- language: Chinese, kept the same within a batch.
- do_sample: True; the model will generate slightly different answers to the same question.
|version|speed|
|:-:|:-:|
|original|826.34 tokens/sec|
|lyraBELLE|2701.71 tokens/sec|
## Model Sources
- **Repository:** [https://huggingface.co/BelleGroup/BELLE-7B-2M?clone=true]
## Environment
- **Docker image available** at [https://hub.docker.com/repository/docker/bigmoyan/lyrallm/general]; pull the image with:
```
docker pull bigmoyan/lyrallm:v0.1
```
## Uses
```python
from lyraBelle import LyraBelle
data_type = "fp16"
prompts = "今天天气大概 25度,有点小雨,吹着风,我想去户外散步,应该穿什么样的衣服裤子鞋子搭配。"
model_dir = "./model"
model_name = "1-gpu-fp16.h5"
max_output_length = 512
# int8 mode not supported, data_type only support fp16
model = LyraBelle(model_dir, model_name, data_type, 0)
output_texts = model.generate(prompts, output_length=max_output_length,top_k=30, top_p=0.85, temperature=0.35, repetition_penalty=1.2, do_sample=True)
print(output_texts)
```
## Demo output
### input
今天天气大概 25度,有点小雨,吹着风,我想去户外散步,应该穿什么样的衣服裤子鞋子搭配。
### output
建议穿着一件轻便的衬衫或T恤、一条牛仔裤和一双运动鞋或休闲鞋。如果下雨了可以带上一把伞。
## Citation
``` bibtex
@Misc{lyraBELLE2023,
author = {Kangjian Wu, Zhengtao Wang, Bin Wu},
title = {lyraBELLE: Accelerating BELLE by 3x+},
howpublished = {\url{https://huggingface.co/TMElyralab/lyraBELLE}},
year = {2023}
}
```
## Report bugs
- Start a discussion to report any bugs: https://huggingface.co/TMElyralab/lyraBELLE/discussions
- Report bugs with a `[bug]` mark in the title.
|
DeskDown/MarianMix_en-ja-10
|
[
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: other
---
This model is based on T5 and is intended for question generation. It was trained on a dataset of roughly 80,000 examples, which helps it produce better results for question-generation use cases.
If you need any help with this model, feel free to ask.
Thanks for choosing this T5 model.
|
DeskDown/MarianMix_en-zh_to_vi-ms-hi-ja
|
[
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
---
This project contains the ONNX and TensorRT model files converted from the chatglm-6b model.
The inference scripts for ONNX and TensorRT will be refined later.
`onnx2engine.py` is used to convert the ONNX model into a TensorRT engine. The batch size is currently fixed at 1; it can be changed to a dynamic batch according to your available GPU memory.
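As a rough illustration only (this is not the repository's `onnx2engine.py`; the ONNX filename, input name, and shape ranges below are assumptions), a dynamic-batch engine build with the TensorRT Python API might look like this:
```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Hypothetical ONNX file name -- replace with the exported chatglm-6b model.
with open("chatglm6b.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
# Hypothetical input name and (min, opt, max) shapes -- adjust to the real model inputs.
profile.set_shape("input_ids", (1, 1), (4, 256), (8, 512))
config.add_optimization_profile(profile)

# Serialize the engine to disk.
engine_bytes = builder.build_serialized_network(network, config)
with open("chatglm6b.engine", "wb") as f:
    f.write(engine_bytes)
```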
|
Devid/DialoGPT-small-Miku
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: bastienm/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Dimedrolza/DialoGPT-small-cyberpunk
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: creativeml-openrail-m
datasets:
- jwdaddy/ddpm-butterflies-128
language:
- en
---
|
DivyanshuSheth/T5-Seq2Seq-Final
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things."
example_title: "Question Answering Example 1"
- text: "question: who created the post as we know it today?, context: 'So much of The Post is Ben,' Mrs. Graham said in 1994, three years after Bradlee retired as editor. 'He created it as we know it today.'— Ed O'Keefe (@edatpost) October 21, 2014"
example_title: "Question Answering Example 2"
model-index:
- name: lmqg/t5-small-squad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 59.66
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 79.65
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 48.74
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 93.34
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 86.84
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 81.84
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 69.95
---
# Model Card of `lmqg/t5-small-squad-qa`
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question answering task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-small-squad-qa")
# model prediction
answers = model.answer_q(list_question="What is a person called is practicing heresy?", list_context=" Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qa")
output = pipe("question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 69.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 81.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 93.34 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 71.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 67.5 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 63.3 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 59.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 48.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 86.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 79.65 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 13
- batch: 32
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Dkwkk/Da
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-ear_1_migrants-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-ear_1_migrants-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 69.995 | 0.03 | 10 | 64.5661 |
| 28.3476 | 0.06 | 20 | 18.0685 |
| 5.2324 | 0.08 | 30 | 6.3912 |
| 0.7104 | 0.11 | 40 | 2.7627 |
| -0.8669 | 0.14 | 50 | 1.2027 |
| -1.7912 | 0.17 | 60 | 0.8591 |
| -1.7957 | 0.2 | 70 | 0.7706 |
| -2.1587 | 0.23 | 80 | 0.7197 |
| -2.1961 | 0.25 | 90 | 0.6167 |
| -2.1901 | 0.28 | 100 | 0.6059 |
| -2.0514 | 0.31 | 110 | 0.5845 |
| -2.0839 | 0.34 | 120 | 0.5717 |
| -2.1579 | 0.37 | 130 | 0.5667 |
| -2.0266 | 0.4 | 140 | 0.5601 |
| -2.2698 | 0.42 | 150 | 0.5582 |
| -2.1635 | 0.45 | 160 | 0.5694 |
| -2.1359 | 0.48 | 170 | 0.5584 |
| -2.1628 | 0.51 | 180 | 0.5510 |
| -2.0485 | 0.54 | 190 | 0.5520 |
| -2.1333 | 0.57 | 200 | 0.5431 |
| -2.2908 | 0.59 | 210 | 0.5438 |
| -2.1131 | 0.62 | 220 | 0.5545 |
| -2.1988 | 0.65 | 230 | 0.5371 |
| -2.187 | 0.68 | 240 | 0.5349 |
| -2.0381 | 0.71 | 250 | 0.5504 |
| -2.1413 | 0.74 | 260 | 0.5293 |
| -2.0951 | 0.76 | 270 | 0.5257 |
| -2.2314 | 0.79 | 280 | 0.5263 |
| -2.1851 | 0.82 | 290 | 0.5291 |
| -2.2318 | 0.85 | 300 | 0.5331 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
albert-base-v1
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 38,156 | 2023-05-17T08:28:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: blurred_faces
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: faces_resnet
split: validation
args: faces_resnet
metrics:
- name: Accuracy
type: accuracy
value: 0.9964317573595004
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blurred_faces
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0414
- Accuracy: 0.9964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.514 | 1.0 | 144 | 0.4884 | 0.9358 |
| 0.242 | 2.0 | 288 | 0.1377 | 0.9893 |
| 0.1592 | 2.99 | 432 | 0.0736 | 0.9902 |
| 0.0956 | 4.0 | 577 | 0.0488 | 0.9955 |
| 0.1734 | 4.99 | 720 | 0.0414 | 0.9964 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.13.0
- Datasets 2.10.1
- Tokenizers 0.11.0
|
albert-base-v2
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4,785,283 | 2023-05-17T08:31:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromP_BFall_30KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromP_BFall_30KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0981
- Accuracy: 0.9876
- F1: 0.8504
- Precision: 0.9938
- Recall: 0.7432
- Roc Auc Score: 0.8715
- Tpr At Fpr 0.01: 0.6914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0097 | 1.0 | 26250 | 0.0808 | 0.9840 | 0.8004 | 0.9874 | 0.673 | 0.8363 | 0.6018 |
| 0.011 | 2.0 | 52500 | 0.0652 | 0.9867 | 0.8389 | 0.9881 | 0.7288 | 0.8642 | 0.6536 |
| 0.0025 | 3.0 | 78750 | 0.0730 | 0.9868 | 0.8401 | 0.9889 | 0.7302 | 0.8649 | 0.649 |
| 0.0023 | 4.0 | 105000 | 0.1064 | 0.9866 | 0.8367 | 0.9937 | 0.7226 | 0.8612 | 0.6878 |
| 0.0011 | 5.0 | 131250 | 0.0981 | 0.9876 | 0.8504 | 0.9938 | 0.7432 | 0.8715 | 0.6914 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
albert-large-v1
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 687 | 2023-05-17T08:32:48Z |
---
license: apache-2.0
language:
- zh
tags:
- legal
---
```
> pretrained: Llama7B
> instruction & conversation finetuned: Chinese-Vicuna-chatv1 (Lora)
> domain finetuned: Lora
```
This is the [Chinese-Vicuna](https://github.com/Facico/Chinese-Vicuna) LoRA model fine-tuned on the legal domain; it can be used directly together with Llama7B.
For the legal data we use three sources: [ChatGPT's answers to the JEC-QA Chinese bar-exam dataset](https://raw.githubusercontent.com/AndrewZhe/lawyer-llama/main/data/judical_examination.json), [ChatGPT answering questions in the role of a lawyer](https://raw.githubusercontent.com/AndrewZhe/lawyer-llama/main/data/legal_advice.json), and [legal knowledge Q&A](https://github.com/thunlp/CAIL), 23,209 examples in total. Although we could find some real-world legal Q&A data, such data is often noisy (e.g., impatient answers such as "go ask yourself"), so it was not used.
We format the data in the [chat](https://github.com/Facico/Chinese-Vicuna/blob/master/sample/chat/data_sample.jsonl) format and, starting from [chatv1](https://huggingface.co/Chinese-Vicuna/Chinese-Vicuna-lora-7b-chatv1), continue training for nearly 6 epochs with [continue-training](https://github.com/Facico/Chinese-Vicuna/blob/master/scripts/finetune_chat_continue.sh); testing shows this not only improves legal question answering but also preserves a fair amount of general question-answering ability.
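A minimal sketch of applying this LoRA on top of a Llama-7B base with PEFT (the two repository paths below are placeholders, not actual ids, and the prompt may also need to follow the Chinese-Vicuna chat template):
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholders -- point these at the actual Llama-7B base weights and this LoRA repository.
base_model_id = "path/to/llama-7b"
lora_repo_id = "path/to/this-legal-lora"

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
base_model = LlamaForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, lora_repo_id)

prompt = "合同成立需要满足哪些法定条件?"  # example legal question
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```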
|
albert-xlarge-v1
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 341 | 2023-05-17T08:34:19Z |
---
language:
- zh
- en
tags:
- glm
- visualglm
- chatglm
- thudm
---
# VisualGLM-6B
<p align="center">
💻 <a href="https://github.com/THUDM/VisualGLM-6B" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-1th2q5u69-7tURzFuOPanmuHy9hsZnKA" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM-6B/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
## Introduction
VisualGLM-6B is an open-source multimodal dialogue language model that supports **images, Chinese, and English**. Its language model is based on [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) with 6.2 billion parameters; the image part bridges the visual model and the language model by training [BLIP2-Qformer](https://arxiv.org/abs/2301.12597), giving the full model 7.8 billion parameters in total.
VisualGLM-6B is pre-trained on 30M high-quality Chinese image-text pairs from the [CogView](https://arxiv.org/abs/2105.13290) dataset and 300M filtered English image-text pairs, with Chinese and English weighted equally. This pre-training aligns visual information well with ChatGLM's semantic space; in the subsequent fine-tuning stage, the model is trained on long visual question answering data to generate answers that match human preferences.
## Software Dependencies
```shell
pip install SwissArmyTransformer>=0.3.6 torch>1.10.0 torchvision transformers>=4.27.1 cpm_kernels
```
## Code Usage
You can generate a dialogue with the VisualGLM-6B model using the following code:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).half().cuda()
>>> image_path = "your image path"
>>> response, history = model.chat(tokenizer, image_path, "描述这张图片。", history=[])
>>> print(response)
>>> response, history = model.chat(tokenizer, image_path, "这张图片可能是在什么场所拍摄的?", history=history)
>>> print(response)
```
For more instructions, including how to run the command-line and web demos and how to use model quantization to save GPU memory, please refer to our [Github Repo](https://github.com/THUDM/VisualGLM-6B).
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the VisualGLM-6B model weights must follow the [Model License](MODEL_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers:
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
```
```
@article{ding2021cogview,
title={Cogview: Mastering text-to-image generation via transformers},
author={Ding, Ming and Yang, Zhuoyi and Hong, Wenyi and Zheng, Wendi and Zhou, Chang and Yin, Da and Lin, Junyang and Zou, Xu and Shao, Zhou and Yang, Hongxia and others},
journal={Advances in Neural Information Processing Systems},
volume={34},
pages={19822--19835},
year={2021}
}
```
|
albert-xlarge-v2
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,973 | 2023-05-17T08:35:13Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_wikilingua_model_mBart50_trans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_wikilingua_model_mBart50_trans
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5850
- Rouge1: 0.3544
- Rouge2: 0.1276
- Rougel: 0.278
- Rougelsum: 0.2781
- Gen Len: 44.4625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
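
As an illustration only, the hyperparameters listed above roughly correspond to the following `Seq2SeqTrainingArguments`; dataset loading, tokenization, and the trainer itself are omitted, and the output directory name is an assumption rather than the actual script used.

```python
# Hedged sketch of the training configuration listed above (not the exact training script).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_wikilingua_model_mBart50_trans",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumption: metrics above are reported once per epoch
    predict_with_generate=True,   # required to compute ROUGE / Gen Len from generated text
)
```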
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.2609 | 1.0 | 1600 | 2.1254 | 0.2502 | 0.0868 | 0.2162 | 0.2157 | 61.8925 |
| 1.5106 | 2.0 | 3200 | 2.1392 | 0.3321 | 0.1205 | 0.27 | 0.2696 | 42.615 |
| 1.2338 | 3.0 | 4800 | 2.3200 | 0.3496 | 0.128 | 0.2754 | 0.2753 | 42.3075 |
| 0.7157 | 4.0 | 6400 | 2.5850 | 0.3544 | 0.1276 | 0.278 | 0.2781 | 44.4625 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bert-base-german-cased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 175,983 | null |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - wujia/output
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained with the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training of the text encoder was not enabled.
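
A hedged inference sketch with Diffusers is shown below; it assumes this repository contains the full pipeline weights and that a CUDA GPU is available.

```python
# Hedged sketch: generate an image with the fine-tuned DreamBooth weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("wujia/output", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of sks dog", num_inference_steps=50).images[0]  # instance prompt
image.save("sks_dog.png")
```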
|
bert-base-german-dbmdz-cased
|
[
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,814 | 2023-05-17T08:47:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-test1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2837
- Rouge1: 22.7012
- Rouge2: 0.0
- Rougel: 22.7156
- Rougelsum: 22.7348
- Gen Len: 2.2686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.5353 | 1.0 | 2601 | 2.3131 | 22.0732 | 0.0 | 22.1069 | 22.1229 | 2.2647 |
| 2.4728 | 2.0 | 5202 | 2.2838 | 22.7012 | 0.0 | 22.7156 | 22.7348 | 2.2686 |
| 2.4819 | 3.0 | 7803 | 2.2837 | 22.7012 | 0.0 | 22.7156 | 22.7348 | 2.2686 |
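
For reference, ROUGE numbers like those in the table above are typically computed with the Hugging Face `evaluate` library; the sketch below uses placeholder predictions and references and is not the exact evaluation code used here.

```python
# Hedged sketch of a ROUGE computation; inputs are placeholders.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["a generated summary"]   # decoded model outputs
references = ["the reference summary"]  # gold summaries
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```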
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bert-base-german-dbmdz-uncased
|
[
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 68,305 | 2023-05-17T08:47:45Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TestZee/t5-base-finetuned-short-news-t5-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TestZee/t5-base-finetuned-short-news-t5-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3869
- Validation Loss: 2.3736
- Train Rouge1: 30.0355
- Train Rouge2: 13.2593
- Train Rougel: 26.0004
- Train Rougelsum: 25.9971
- Train Gen Len: 19.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
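
As an illustration, the optimizer dictionary above can be reconstructed with the `AdamWeightDecay` implementation shipped in Transformers; this is a hedged sketch, not the exact training script.

```python
# Hedged sketch of the AdamWeightDecay optimizer described above (TensorFlow/Keras).
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
# The TF model would then be compiled and fit as usual, e.g. model.compile(optimizer=optimizer)
```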
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.3869 | 2.3736 | 30.0355 | 13.2593 | 26.0004 | 25.9971 | 19.0 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bert-base-multilingual-uncased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 328,585 | 2023-05-17T08:51:54Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|