modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars) |
---|---|---|---|---|---|---|
Denilson/gbert-base-germaner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-dropout-cola-0.4
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5780870172624647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-dropout-cola-0.4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9088
- Matthews Correlation: 0.5781
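For quick inference, the checkpoint can be loaded with the `text-classification` pipeline. The snippet below is a minimal sketch; the repo id is a placeholder, since the card gives only the model name and not its namespace on the Hub.
```python
from transformers import pipeline

# Placeholder repo id: replace <username> with the actual namespace on the Hub.
classifier = pipeline(
    "text-classification",
    model="<username>/bert-base-uncased-finetuned-dropout-cola-0.4",
)
# CoLA is a binary acceptability task, so the pipeline returns one label with a score.
print(classifier("The book was written by the author quickly."))
```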
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
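These hyperparameters map directly onto `transformers.TrainingArguments`; the sketch below shows that mapping under the assumption that the standard `Trainer` API was used (the Adam betas and epsilon listed above are the library defaults).
```python
from transformers import TrainingArguments

# Sketch of the arguments implied by the hyperparameters above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-dropout-cola-0.4",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```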
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1124 | 1.0 | 535 | 1.0648 | 0.5327 |
| 0.0804 | 2.0 | 1070 | 0.9088 | 0.5781 |
| 0.0599 | 3.0 | 1605 | 1.2529 | 0.5599 |
| 0.036 | 4.0 | 2140 | 1.3387 | 0.5666 |
| 0.03 | 5.0 | 2675 | 1.3587 | 0.5709 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Deniskin/essays_small_2000i
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-9e-06
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5753593483598531
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-9e-06
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9848
- Matthews Correlation: 0.5754
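The Matthews correlation reported above can be computed with the `evaluate` library; the snippet below is a sketch with toy labels, not the actual CoLA validation predictions.
```python
import evaluate

# Matthews correlation coefficient, the metric used for CoLA in this card.
matthews = evaluate.load("matthews_correlation")
result = matthews.compute(predictions=[1, 0, 1, 1, 0], references=[1, 0, 0, 1, 0])
print(result)  # -> {'matthews_correlation': 0.666...}
```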
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5061 | 0.4717 |
| 0.3617 | 2.0 | 1070 | 0.4769 | 0.5701 |
| 0.2584 | 3.0 | 1605 | 0.5299 | 0.5625 |
| 0.1998 | 4.0 | 2140 | 0.6801 | 0.5629 |
| 0.1492 | 5.0 | 2675 | 0.8519 | 0.5446 |
| 0.1323 | 6.0 | 3210 | 0.9372 | 0.5624 |
| 0.103 | 7.0 | 3745 | 0.9424 | 0.5753 |
| 0.0949 | 8.0 | 4280 | 0.9848 | 0.5754 |
| 0.0718 | 9.0 | 4815 | 1.0474 | 0.5652 |
| 0.0629 | 10.0 | 5350 | 1.0657 | 0.5731 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Denver/distilbert-base-uncased-finetuned-squad
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: en
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
## Intro
This is an OPT-125m model trained on a Hugging Face dataset using a single RTX 3090 GPU.
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-125m")
>>> generator("Hello, I am conscious and")
[{'generated_text': 'Hello, I am conscious and aware of the fact that I am a woman. I am aware of'}]
```
By default, generation is deterministic. To use top-k sampling, set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True)
>>> generator("Hello, I am conscious and")
[{'generated_text': 'Hello, I am conscious and active member of the Khaosan Group, a private, self'}]
```
## Training data
This model uses the Anthropic HH-RLHF dataset for RL: https://huggingface.co/datasets/Anthropic/hh-rlhf
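The referenced dataset can be loaded with the `datasets` library; the snippet below is a minimal sketch, and the `chosen` field name follows the dataset card rather than this model card.
```python
from datasets import load_dataset

# Anthropic HH-RLHF preference data, as linked above.
hh = load_dataset("Anthropic/hh-rlhf", split="train")
print(hh[0]["chosen"][:200])  # first characters of the preferred response
```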
|
DeskDown/MarianMixFT_en-id
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/803601382943162368/F36Z7ypy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jerma</div>
<div style="text-align: center; font-size: 14px;">@jerma985</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jerma.
| Data | Jerma |
| --- | --- |
| Tweets downloaded | 2732 |
| Retweets | 111 |
| Short tweets | 286 |
| Tweets kept | 2335 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/sd9hzc1n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jerma985's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/372xxdz8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/372xxdz8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jerma985')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DeskDown/MarianMixFT_en-ja
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: mit
datasets:
- squad
- deepset/germanquad
language:
- de
---
# Overview
German QA model fine-tuned on question-answer pairs for Bürgerbüro service documents.
**Base model:** deepset/gelectra-large
**Fine-tuning** in sequential steps on:
1. Machine-translated (en->de) SQuAD 1.0
2. GermanQuAD: deepset/germanquad
3. A custom LHM QA dataset (reference to follow)
**Evaluation:** reaches 70.0 F1 on the LHM QA test data.
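The model can be used with the `question-answering` pipeline. The sketch below uses a placeholder repo id, since the card does not state where the fine-tuned checkpoint is published.
```python
from transformers import pipeline

# Placeholder repo id for the gelectra-large-based QA fine-tune described above.
qa = pipeline("question-answering", model="<username>/<german-qa-model>")
result = qa(
    question="Wo kann ich einen Personalausweis beantragen?",
    context="Einen neuen Personalausweis beantragen Sie persönlich im Bürgerbüro Ihrer Stadt.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```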
|
DeskDown/MarianMixFT_en-ms
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: HilbertS/ppo-SnowballTargetTESTCOLAB
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
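To inspect the trained agent locally (for example, the *.onnx file mentioned in step 3), the repository can be fetched with `huggingface_hub`; this is a sketch, not part of the original card.
```python
from huggingface_hub import snapshot_download

# Download the whole model repository listed above into the local cache.
local_path = snapshot_download(repo_id="HilbertS/ppo-SnowballTargetTESTCOLAB")
print(local_path)
```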
|
DeskDown/MarianMixFT_en-my
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: t5-mt-en-ca
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: ca-en
split: train
args: ca-en
metrics:
- name: Bleu
type: bleu
value: 1.9924
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-mt-en-ca
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2444
- Bleu: 1.9924
- Gen Len: 17.2964
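The model can be tried with the `text2text-generation` pipeline. The sketch below assumes a placeholder repo id and a T5-style task prefix, neither of which is given explicitly in the card.
```python
from transformers import pipeline

# Placeholder repo id; the task prefix follows the usual t5-small convention and is an assumption.
translator = pipeline("text2text-generation", model="<username>/t5-mt-en-ca")
print(translator("translate English to Catalan: The cat sleeps on the sofa."))
```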
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 231 | 3.9148 | 0.1683 | 17.2649 |
| No log | 2.0 | 462 | 3.6731 | 0.1568 | 17.6819 |
| 4.1865 | 3.0 | 693 | 3.5163 | 0.2006 | 17.7144 |
| 4.1865 | 4.0 | 924 | 3.3951 | 0.2983 | 17.5233 |
| 3.7413 | 5.0 | 1155 | 3.2961 | 0.3487 | 17.4517 |
| 3.7413 | 6.0 | 1386 | 3.2153 | 0.3698 | 17.4213 |
| 3.5136 | 7.0 | 1617 | 3.1464 | 0.4649 | 17.367 |
| 3.5136 | 8.0 | 1848 | 3.0885 | 0.528 | 17.3181 |
| 3.3438 | 9.0 | 2079 | 3.0353 | 0.5732 | 17.2638 |
| 3.3438 | 10.0 | 2310 | 2.9903 | 0.6168 | 17.24 |
| 3.226 | 11.0 | 2541 | 2.9470 | 0.6037 | 17.2476 |
| 3.226 | 12.0 | 2772 | 2.9100 | 0.6071 | 17.2856 |
| 3.1273 | 13.0 | 3003 | 2.8735 | 0.7135 | 17.2562 |
| 3.1273 | 14.0 | 3234 | 2.8400 | 0.7844 | 17.291 |
| 3.1273 | 15.0 | 3465 | 2.8125 | 0.7642 | 17.2649 |
| 3.0446 | 16.0 | 3696 | 2.7848 | 0.7874 | 17.2552 |
| 3.0446 | 17.0 | 3927 | 2.7594 | 0.7701 | 17.266 |
| 2.9717 | 18.0 | 4158 | 2.7335 | 0.8199 | 17.317 |
| 2.9717 | 19.0 | 4389 | 2.7096 | 0.8848 | 17.2812 |
| 2.9026 | 20.0 | 4620 | 2.6913 | 0.9185 | 17.2942 |
| 2.9026 | 21.0 | 4851 | 2.6728 | 0.9304 | 17.2997 |
| 2.8527 | 22.0 | 5082 | 2.6529 | 0.9424 | 17.2758 |
| 2.8527 | 23.0 | 5313 | 2.6350 | 0.9681 | 17.2801 |
| 2.8026 | 24.0 | 5544 | 2.6209 | 1.065 | 17.2856 |
| 2.8026 | 25.0 | 5775 | 2.6031 | 1.0636 | 17.2443 |
| 2.7559 | 26.0 | 6006 | 2.5882 | 1.0406 | 17.2476 |
| 2.7559 | 27.0 | 6237 | 2.5722 | 1.0967 | 17.241 |
| 2.7559 | 28.0 | 6468 | 2.5621 | 1.1424 | 17.2486 |
| 2.7094 | 29.0 | 6699 | 2.5472 | 1.1675 | 17.2226 |
| 2.7094 | 30.0 | 6930 | 2.5356 | 1.1882 | 17.2454 |
| 2.6703 | 31.0 | 7161 | 2.5226 | 1.1994 | 17.2747 |
| 2.6703 | 32.0 | 7392 | 2.5116 | 1.2601 | 17.266 |
| 2.6343 | 33.0 | 7623 | 2.5017 | 1.2126 | 17.2389 |
| 2.6343 | 34.0 | 7854 | 2.4905 | 1.2105 | 17.2432 |
| 2.6114 | 35.0 | 8085 | 2.4795 | 1.2356 | 17.2215 |
| 2.6114 | 36.0 | 8316 | 2.4713 | 1.2904 | 17.2497 |
| 2.5778 | 37.0 | 8547 | 2.4599 | 1.291 | 17.2193 |
| 2.5778 | 38.0 | 8778 | 2.4523 | 1.3017 | 17.2313 |
| 2.5475 | 39.0 | 9009 | 2.4413 | 1.3076 | 17.2389 |
| 2.5475 | 40.0 | 9240 | 2.4350 | 1.3536 | 17.2508 |
| 2.5475 | 41.0 | 9471 | 2.4277 | 1.3899 | 17.2182 |
| 2.5255 | 42.0 | 9702 | 2.4195 | 1.4112 | 17.2421 |
| 2.5255 | 43.0 | 9933 | 2.4117 | 1.4328 | 17.2562 |
| 2.4996 | 44.0 | 10164 | 2.4059 | 1.4373 | 17.2226 |
| 2.4996 | 45.0 | 10395 | 2.3974 | 1.4887 | 17.2204 |
| 2.4748 | 46.0 | 10626 | 2.3909 | 1.4829 | 17.2269 |
| 2.4748 | 47.0 | 10857 | 2.3863 | 1.5417 | 17.2682 |
| 2.4563 | 48.0 | 11088 | 2.3785 | 1.5502 | 17.2182 |
| 2.4563 | 49.0 | 11319 | 2.3717 | 1.609 | 17.2313 |
| 2.4363 | 50.0 | 11550 | 2.3661 | 1.576 | 17.2573 |
| 2.4363 | 51.0 | 11781 | 2.3628 | 1.61 | 17.2465 |
| 2.4182 | 52.0 | 12012 | 2.3568 | 1.6118 | 17.2476 |
| 2.4182 | 53.0 | 12243 | 2.3498 | 1.6268 | 17.2389 |
| 2.4182 | 54.0 | 12474 | 2.3430 | 1.5769 | 17.2519 |
| 2.4 | 55.0 | 12705 | 2.3404 | 1.6465 | 17.2432 |
| 2.4 | 56.0 | 12936 | 2.3363 | 1.6708 | 17.2508 |
| 2.3825 | 57.0 | 13167 | 2.3322 | 1.6851 | 17.2714 |
| 2.3825 | 58.0 | 13398 | 2.3273 | 1.6938 | 17.253 |
| 2.3689 | 59.0 | 13629 | 2.3229 | 1.729 | 17.2693 |
| 2.3689 | 60.0 | 13860 | 2.3187 | 1.7584 | 17.2519 |
| 2.3586 | 61.0 | 14091 | 2.3144 | 1.7604 | 17.2161 |
| 2.3586 | 62.0 | 14322 | 2.3101 | 1.7821 | 17.2204 |
| 2.3433 | 63.0 | 14553 | 2.3072 | 1.7585 | 17.2356 |
| 2.3433 | 64.0 | 14784 | 2.3027 | 1.7544 | 17.2269 |
| 2.3294 | 65.0 | 15015 | 2.3009 | 1.8058 | 17.2226 |
| 2.3294 | 66.0 | 15246 | 2.2964 | 1.7876 | 17.2182 |
| 2.3294 | 67.0 | 15477 | 2.2941 | 1.7765 | 17.2476 |
| 2.3129 | 68.0 | 15708 | 2.2898 | 1.747 | 17.2541 |
| 2.3129 | 69.0 | 15939 | 2.2878 | 1.7628 | 17.2486 |
| 2.3102 | 70.0 | 16170 | 2.2845 | 1.7721 | 17.2345 |
| 2.3102 | 71.0 | 16401 | 2.2829 | 1.803 | 17.2334 |
| 2.2949 | 72.0 | 16632 | 2.2786 | 1.7698 | 17.2161 |
| 2.2949 | 73.0 | 16863 | 2.2754 | 1.786 | 17.2302 |
| 2.2895 | 74.0 | 17094 | 2.2746 | 1.7973 | 17.2552 |
| 2.2895 | 75.0 | 17325 | 2.2710 | 1.7891 | 17.2747 |
| 2.2803 | 76.0 | 17556 | 2.2709 | 1.8304 | 17.2497 |
| 2.2803 | 77.0 | 17787 | 2.2682 | 1.822 | 17.2443 |
| 2.2697 | 78.0 | 18018 | 2.2653 | 1.819 | 17.2736 |
| 2.2697 | 79.0 | 18249 | 2.2634 | 1.8169 | 17.279 |
| 2.2697 | 80.0 | 18480 | 2.2619 | 1.8322 | 17.2747 |
| 2.2649 | 81.0 | 18711 | 2.2612 | 1.8546 | 17.2541 |
| 2.2649 | 82.0 | 18942 | 2.2582 | 1.868 | 17.2986 |
| 2.2582 | 83.0 | 19173 | 2.2575 | 1.9165 | 17.2856 |
| 2.2582 | 84.0 | 19404 | 2.2563 | 1.9389 | 17.2725 |
| 2.2556 | 85.0 | 19635 | 2.2543 | 1.9548 | 17.2834 |
| 2.2556 | 86.0 | 19866 | 2.2528 | 1.9543 | 17.2932 |
| 2.2516 | 87.0 | 20097 | 2.2512 | 1.9483 | 17.2856 |
| 2.2516 | 88.0 | 20328 | 2.2506 | 1.9439 | 17.2942 |
| 2.2475 | 89.0 | 20559 | 2.2499 | 1.9672 | 17.2801 |
| 2.2475 | 90.0 | 20790 | 2.2490 | 1.9569 | 17.2866 |
| 2.2373 | 91.0 | 21021 | 2.2479 | 1.9708 | 17.2671 |
| 2.2373 | 92.0 | 21252 | 2.2468 | 1.9655 | 17.2834 |
| 2.2373 | 93.0 | 21483 | 2.2461 | 1.9695 | 17.2845 |
| 2.2399 | 94.0 | 21714 | 2.2455 | 1.9703 | 17.2888 |
| 2.2399 | 95.0 | 21945 | 2.2453 | 1.9728 | 17.2877 |
| 2.2381 | 96.0 | 22176 | 2.2453 | 1.9734 | 17.2758 |
| 2.2381 | 97.0 | 22407 | 2.2447 | 1.9855 | 17.2921 |
| 2.237 | 98.0 | 22638 | 2.2444 | 1.9912 | 17.2975 |
| 2.237 | 99.0 | 22869 | 2.2445 | 1.9924 | 17.2964 |
| 2.2283 | 100.0 | 23100 | 2.2444 | 1.9924 | 17.2964 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DeskDown/MarianMixFT_en-th
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-bne-finetuned-TripAdvisorDomainAdaptation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-TripAdvisorDomainAdaptation
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8000
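Since domain adaptation here is masked-language-model fine-tuning, a `fill-mask` pipeline is the natural smoke test. The repo id below is a placeholder, and the `<mask>` token is assumed from the RoBERTa convention.
```python
from transformers import pipeline

# Placeholder repo id for the domain-adapted checkpoint described above.
fill = pipeline(
    "fill-mask",
    model="<username>/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation",
)
print(fill("El hotel tiene unas vistas <mask>."))
```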
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9911 | 1.0 | 2655 | 1.8865 |
| 1.8354 | 2.0 | 5310 | 1.8216 |
| 1.7849 | 3.0 | 7965 | 1.7930 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DeskDown/MarianMixFT_en-vi
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
---
Large language models (LLMs) such as ChatGPT and GPT-4 have set off a new wave of research in natural language processing, showing capabilities approaching artificial general intelligence (AGI) and attracting broad attention in the community.
To advance the development and deployment of LLMs in the Chinese medical domain, and to improve their medical knowledge and ability to answer medical consultations, we are releasing the **ChatMed** series of Chinese medical large language models:
- 🚀 [ChatMed-Consult](https://huggingface.co/michaelwzhu/ChatMed-Consult): trained on 500k+ online consultations paired with ChatGPT replies from the [Chinese online medical consultation dataset ChatMed_Consult_Dataset](https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset). The backbone is [LlaMA-7b](https://github.com/facebookresearch/llama), merged with the LoRA weights and extended Chinese vocabulary of [Chinese-LlaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca), and then fine-tuned with parameter-efficient LoRA. All code has been released. We will also deploy an online Gradio demo, so stay tuned.
- ⏳ [ChatMed-TCM](https://huggingface.co/michaelwzhu/ChatMed-TCM): a large model to help carry forward traditional Chinese medicine (TCM). Its training data is the [TCM instruction dataset ChatMed_TCM_Dataset](https://huggingface.co/datasets/michaelwzhu/ChatMed_TCM_Dataset). Starting from our open-source [TCM knowledge graph](https://github.com/ywjawmw/TCM_KG), we used an entity-centric self-instruct method and called ChatGPT to obtain 26k+ TCM-related instruction examples. ChatMed-TCM is also based on LlaMA and fine-tuned with LoRA.
----
[Text2DT](https://github.com/michael-wzhu/Text2DT_Baseline) | [Chinese medical LLM benchmark PromptCBLUE](https://github.com/michael-wzhu/PromptCBLUE) | [Chinese online medical consultation dataset ChatMed_Consult_Dataset](https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset) | [TCM instruction dataset ChatMed_TCM_Dataset](https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset) | [TCM knowledge graph](https://github.com/ywjawmw/TCM_KG)
## Updates
2023/5/05: open-sourced the [ChatMed-Consult](https://huggingface.co/michaelwzhu/ChatMed-Consult) model.
## About ChatMed-Consult
### Model overview
- Training data: 500k+ online consultations from the [Chinese online medical consultation dataset ChatMed_Consult_Dataset](https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset), with ChatGPT replies as targets. Consultation data crawled from the web reflects real-world user/patient needs, but the quality of answers found online is uneven, so we called ChatGPT (`gpt-3.5-turbo`) to generate the replies. (⏳ todo: build an evaluation model that scores human replies; calling large-model APIs burns tokens and money.)
- Model backbone: we currently open-source the LlaMA-7b-based [ChatMed-Consult](https://huggingface.co/michaelwzhu/ChatMed-Consult) model. We will try other backbones later, such as LlaMA-13b and MOSS.
- Code: all code required for training is available at [ChatMed-Consult training code](https://github.com/michael-wzhu/ChatMed/blob/main/src/chatmed_llama_peft). Distributed training uses DeepSpeed (ZeRO stage 3).
- Model weights: since the model is a parameter-efficient fine-tune of LlaMA-7b, only the weights of the parameter-efficient fine-tuning module are uploaded; see [ChatMed-Consult model weights](https://huggingface.co/michaelwzhu/ChatMed-Consult).
### Quick start
Before using [ChatMed-Consult](https://huggingface.co/michaelwzhu/ChatMed-Consult), prepare the LlaMA-7b base model; see [LlaMA-7b model preparation](https://github.com/michael-wzhu/ChatMed/blob/main/src/chatmed_llama_peft/LlaMA-7b%E6%A8%A1%E5%9E%8B%E5%87%86%E5%A4%87.md) for details.
Once the LlaMA-7b base model is ready, download the [ChatMed-Consult LoRA weights](https://huggingface.co/michaelwzhu/ChatMed-Consult) and run the following command on a 3090 GPU (or better) to start a simple Flask-based web service:
```bash
python src/web_services/web_service_simple.py
```
Then run
```bash
python src/web_services/web_service_test.py
```
The script above runs the test cases provided in the test_examples.json file. When using your own test cases, keep the format consistent.
### Training
First, prepare the LlaMA-7b base model and save it under `resources/chinese-llama-alpaca-plus-lora-7b`. The dataset is the [Chinese online medical consultation dataset ChatMed_Consult_Dataset](https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset). We use DeepSpeed for distributed training:
```bash
./src/chatmed_llama_peft/run_train.sh
```
The training script uses 4 GPUs; adjust the hyperparameters to fit your own server.
## Comparison of results
We compare the [Chinese LlaMA-7b](https://github.com/michael-wzhu/ChatMed/blob/main/src/chatmed_llama_peft/LlaMA-7b%E6%A8%A1%E5%9E%8B%E5%87%86%E5%A4%87.md) obtained after weight conversion and merging with the [ChatMed-Consult](https://huggingface.co/michaelwzhu/ChatMed-Consult) model fine-tuned with LoRA on [ChatMed_Consult_Dataset](https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset). Five representative examples are compared below; see [More test examples](https://github.com/michael-wzhu/ChatMed/blob/main/src/web_services/test_examples/ChatMed-Consult_test.json) for additional cases.
| Consultation question | Chinese LlaMA-7b | ChatMed-Consult |
|------|------------|-----------------|
| The weather keeps getting hotter. For a family with a baby, what indoor air-conditioner temperature is appropriate? I used to set it to 27, but the doctor said that was wrong. | "A. 27 degrees Celsius; B. thirty-two-eight degrees Celsius; C. thirty-six-twenty-four degrees Celsius; D. forty-one-nine degrees Celsius; E. forty-one-nine degrees Celsius." | "In summer, to stay healthy and comfortable, it is recommended to set the room air conditioner to about 26-28°C. If you want the baby to sleep more comfortably, you can lower the room temperature by one or two steps (for example from 27 to 22), which makes it easier for the child to fall asleep and improves sleep quality. Also clean the air-conditioner filter regularly and avoid overusing the air conditioner, so the room stays well ventilated, dry and fresh." |
| My joints have been aching for the past two days; a medicated plaster yesterday helped a little. Last night I sat in front of a fan, and this morning I could hardly get up or walk. | "The pain and stiffness may be caused by rheumatoid arthritis. I suggest going to the hospital for further examination to confirm the diagnosis and plan treatment." | "Your symptoms may be pain and stiffness caused by a rheumatic condition. Please see a doctor as soon as possible for professional diagnosis and treatment. While waiting to see the doctor, you can try the following to ease the pain: rest and avoid excessive activity or strenuous exercise; use hot or cold compresses to reduce swelling and pain; massage the local muscles to promote blood circulation. Also keep warm and wear warm clothing so the condition does not get worse." |
| I have had epilepsy for 25 years. Over the years it has been quite stable with very few seizures, but now the seizures are becoming frequent. What should I do? Is there any targeted way to deal with this? | I suggest seeing a doctor promptly and getting examined. If the diagnosis is status epilepticus (EPS) or paroxysmal interictal seizures that require medication, anticonvulsants, sedative-hypnotics and neurotrophic drugs can be considered to control the symptoms; paying attention to diet and adjusting living habits is also very important. | "If your epileptic symptoms are becoming more numerous, more frequent or more severe, please see a doctor as soon as possible. Here are some measures and precautions that may help relieve your situation:\n\n1. Keep a regular lifestyle, including enough sleep and a healthy diet;\n2. Avoid excessive fatigue or mental stress;\n3. Have regular physical examinations and other necessary checks to make sure you are in good health;\n4. Follow the medication plan prescribed by your doctor and take the medicine on time;\n5. In an emergency, call the emergency number for help.\n\nPlease note that these are only general guidelines; the specific treatment must be decided according to your actual situation, and it is best to consult a professional physician." |
| My upper gums protrude, though my teeth are fairly straight, and a lot of gum shows when I smile. I am already 21. Would braces help, how long would I need to wear them, and how much would it cost? | "I suggest going to a dental hospital for examination and treatment, because this is one of the more serious problems." | "If you want to correct your jaw and teeth, you can consider wearing braces. Note, however, that how long you need to wear them depends on the severity of the problem and your individual situation. Generally, if the problem is mild and does not affect daily life, noticeable results may appear within a few months; if it is more complex or affects quality of life, the treatment may take longer to complete. In addition, the price varies by region and hospital, so it is recommended to consult a local dentist for details and a personalized plan." |
| Hello doctor, I would like to buy health supplements/medicines in the United States for my grandparents. My grandmother has long-standing coronary heart disease, and my grandfather has ischemic heart disease and has had cataract surgery. Which products would be suitable? I have seen fish oil, omega 3-6-9, niacin, lutein and flaxseed oil. | I suggest buying products containing nitroglycerin, vitamin E and calcium tablets. | "I am sorry to hear about your family's health problems. For the elderly, choosing the right products is very important. Here are some possibly useful products and suggestions for reference:\n\n1. Omega-3 fatty acid supplements - these usually contain abundant Omega-3 fatty acids and other nutrients, which can help lower blood cholesterol and prevent cardiovascular disease. Note, however, that if they are taking other anticoagulant or lipid-lowering drugs, it is best to first consult a professional physician about whether the dosage needs adjusting or these products should be stopped.\n\n2. Smart Heart Plus – a compound product for heart problems containing various natural plant extracts, B vitamins and other substances, which can improve blood circulation, strengthen heart-muscle function and protect vessel walls. Note that its effects vary from person to person, so read the instructions carefully before buying and follow medical advice.\n\n3. CoQ10 - this coenzyme is an important part of energy metabolism; it provides electron flow in cells to support ATP synthesis. CoQ10 deficiency can cause fatigue, muscle pain and other problems. Therefore, if the elderly person's condition allows, an appropriate amount of CoQ10 can be considered to improve physical function." |
We can see that:
- Online consultation questions can be noisy because of user typos or speech-to-text errors, so they are often not well formed. In such cases the Chinese LlaMA-7b model may produce meaningless answers, while ChatMed-Consult can understand the user's question despite the noise and answer accordingly;
- ChatMed-Consult's answers show more human care, while the Chinese LlaMA-7b model's replies can be rather stiff;
- Compared with the Chinese LlaMA-7b model, ChatMed-Consult's answers are richer and offer more actionable suggestions;
- As an assistant to doctors, a large model should list possibilities and suggestions rather than draw conclusions too assertively. The Chinese LlaMA-7b model tends to jump straight to conclusions on consultation questions and seems somewhat overconfident, whereas ChatMed-Consult usually says "here are some possible ..." and is comparatively more cautious.
## Disclaimer
- The resources of this project are for academic research only; commercial use is strictly prohibited.
- As a language-model-based assistant, ChatMed-Consult cannot replace a doctor in making medical diagnoses or giving medical advice. If needed, please consult a professional physician or go to a hospital.
- The ChatMed series is iterating quickly, and the model weights will be updated regularly.
- The ChatMed models are trained on open-source data whose quality and quantity are limited, so the medical knowledge they capture certainly has various deficiencies. We will keep improving and updating them.
## Acknowledgements
This project is built on open-source projects; we thank the related projects and their researchers and developers.
- [LlaMA](https://github.com/facebookresearch/llama)
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Chinese-LlaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
The little scholar alpaca in the logo was generated automatically by [midjourney](http://midjourney.com).
## Citation
If you use the models, data or code from this project, please cite:
```bibtex
@misc{zhu2023ChatMed,
title={ChatMed: A Chinese Medical Large Language Model},
author={Wei Zhu and Xiaoling Wang},
year={2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/michael-wzhu/ChatMed}},
}
```
|
DeskDown/MarianMix_en-zh-10
|
[
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-8e-06
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5752615459764325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-8e-06
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8389
- Matthews Correlation: 0.5753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5241 | 1.0 | 535 | 0.4659 | 0.5046 |
| 0.3755 | 2.0 | 1070 | 0.4412 | 0.5650 |
| 0.2782 | 3.0 | 1605 | 0.5524 | 0.5395 |
| 0.2154 | 4.0 | 2140 | 0.6437 | 0.5651 |
| 0.1669 | 5.0 | 2675 | 0.7709 | 0.5650 |
| 0.1503 | 6.0 | 3210 | 0.8389 | 0.5753 |
| 0.1151 | 7.0 | 3745 | 0.8964 | 0.5681 |
| 0.1082 | 8.0 | 4280 | 0.9767 | 0.5548 |
| 0.0816 | 9.0 | 4815 | 0.9978 | 0.5498 |
| 0.0809 | 10.0 | 5350 | 1.0170 | 0.5576 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DeskDown/MarianMix_en-zh_to_vi-ms-hi-ja
|
[
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: creativeml-openrail-m
base_model: /home/ubuntu/model/stable-diffusion-v1-5
instance_prompt: a photo of benben cartoon cow,with red skin,cute face,two horns on the head,white cheeks
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - heine123/benben_out
These are LoRA adaptation weights for /home/ubuntu/model/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of benben cartoon cow, with red skin, cute face, two horns on the head, white cheeks" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




|
Dev-DGT/food-dbert-multiling
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: swlosof02_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swlosof02_2
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
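As a hedged sketch (the card does not publish a repo id for this fine-tune), the base checkpoint named above can be run through the `automatic-speech-recognition` pipeline; substitute the fine-tuned weights once they are available.
```python
from transformers import pipeline

# The base model is used as a stand-in; "sample.wav" is a placeholder path
# to a local 16 kHz mono audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="facebook/wav2vec2-large-960h-lv60-self",
)
print(asr("sample.wav"))
```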
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DheerajPranav/Dialo-GPT-Rick-bot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: HilbertS/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Dhito/am
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarizing_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarizing_news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5292
- Rouge1: 0.384
- Rouge2: 0.1554
- Rougel: 0.3376
- Rougelsum: 0.3377
- Gen Len: 18.8513
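The checkpoint can be exercised with the `summarization` pipeline; the repo id below is a placeholder, since the card gives only the model name.
```python
from transformers import pipeline

# Placeholder repo id for the t5-small news summarizer described above.
summarizer = pipeline("summarization", model="<username>/summarizing_news")
article = (
    "The city council approved a new transport budget on Tuesday after months of "
    "debate, allocating funds to bus lanes, cycling infrastructure and road repairs."
)
print(summarizer(article, max_length=60, min_length=10))
```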
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 63 | 3.0459 | 0.3393 | 0.1259 | 0.2985 | 0.2986 | 18.9927 |
| No log | 2.0 | 126 | 2.7214 | 0.3699 | 0.1458 | 0.3255 | 0.3257 | 18.9666 |
| No log | 3.0 | 189 | 2.5743 | 0.3805 | 0.153 | 0.3345 | 0.3347 | 18.8972 |
| No log | 4.0 | 252 | 2.5292 | 0.384 | 0.1554 | 0.3376 | 0.3377 | 18.8513 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Dhruva/Interstellar
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-0.0001
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-0.0001
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7459
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6205 | 1.0 | 535 | 0.7459 | 0.0 |
| 0.6218 | 2.0 | 1070 | 0.6288 | 0.0 |
| 0.6166 | 3.0 | 1605 | 0.6181 | 0.0 |
| 0.6196 | 4.0 | 2140 | 0.6279 | 0.0 |
| 0.6137 | 5.0 | 2675 | 0.6202 | 0.0 |
| 0.6138 | 6.0 | 3210 | 0.6203 | 0.0 |
| 0.6074 | 7.0 | 3745 | 0.6184 | 0.0 |
| 0.6128 | 8.0 | 4280 | 0.6220 | 0.0 |
| 0.6073 | 9.0 | 4815 | 0.6183 | 0.0 |
| 0.6113 | 10.0 | 5350 | 0.6196 | 0.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Dilmk2/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
Access to model erwinschrodigner1/prabigya is restricted and you are not in the authorized list. Visit https://huggingface.co/erwinschrodigner1/prabigya to ask for access.
|
DimaOrekhov/cubert-method-name
|
[
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | 2023-05-05T10:13:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-dropout-cola-0.8
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.609298672684182
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-dropout-cola-0.8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1085
- Matthews Correlation: 0.6093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.0511 | 1.0 | 535 | 1.5284 | 0.5702 |
| 0.0458 | 2.0 | 1070 | 1.1085 | 0.6093 |
| 0.0667 | 3.0 | 1605 | 1.1696 | 0.5806 |
| 0.0406 | 4.0 | 2140 | 1.2386 | 0.5960 |
| 0.0314 | 5.0 | 2675 | 1.3074 | 0.5934 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Dizoid/Lll
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-AS_sentences
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-AS_sentences
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0021
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1111 | 1.0 | 2 | 1.0519 | 0.3958 | 0.2245 |
| 1.0223 | 2.0 | 4 | 1.0014 | 0.6042 | 0.4947 |
| 0.9491 | 3.0 | 6 | 0.9297 | 0.6042 | 0.4839 |
| 0.8283 | 4.0 | 8 | 0.8463 | 0.6458 | 0.5689 |
| 0.8 | 5.0 | 10 | 0.7632 | 0.7708 | 0.7583 |
| 0.6775 | 6.0 | 12 | 0.6857 | 0.8333 | 0.8321 |
| 0.6429 | 7.0 | 14 | 0.6167 | 0.9167 | 0.9172 |
| 0.5803 | 8.0 | 16 | 0.5473 | 0.9375 | 0.9381 |
| 0.4597 | 9.0 | 18 | 0.4705 | 0.9792 | 0.9793 |
| 0.3854 | 10.0 | 20 | 0.4016 | 1.0 | 1.0 |
| 0.3427 | 11.0 | 22 | 0.3417 | 1.0 | 1.0 |
| 0.3251 | 12.0 | 24 | 0.2882 | 1.0 | 1.0 |
| 0.2751 | 13.0 | 26 | 0.2398 | 1.0 | 1.0 |
| 0.2163 | 14.0 | 28 | 0.1968 | 1.0 | 1.0 |
| 0.1764 | 15.0 | 30 | 0.1598 | 1.0 | 1.0 |
| 0.1442 | 16.0 | 32 | 0.1304 | 1.0 | 1.0 |
| 0.1209 | 17.0 | 34 | 0.1076 | 1.0 | 1.0 |
| 0.1065 | 18.0 | 36 | 0.0898 | 1.0 | 1.0 |
| 0.0905 | 19.0 | 38 | 0.0757 | 1.0 | 1.0 |
| 0.0717 | 20.0 | 40 | 0.0644 | 1.0 | 1.0 |
| 0.0611 | 21.0 | 42 | 0.0553 | 1.0 | 1.0 |
| 0.0659 | 22.0 | 44 | 0.0481 | 1.0 | 1.0 |
| 0.0451 | 23.0 | 46 | 0.0423 | 1.0 | 1.0 |
| 0.049 | 24.0 | 48 | 0.0376 | 1.0 | 1.0 |
| 0.0414 | 25.0 | 50 | 0.0338 | 1.0 | 1.0 |
| 0.0362 | 26.0 | 52 | 0.0306 | 1.0 | 1.0 |
| 0.0313 | 27.0 | 54 | 0.0279 | 1.0 | 1.0 |
| 0.0319 | 28.0 | 56 | 0.0256 | 1.0 | 1.0 |
| 0.0293 | 29.0 | 58 | 0.0236 | 1.0 | 1.0 |
| 0.0274 | 30.0 | 60 | 0.0218 | 1.0 | 1.0 |
| 0.0252 | 31.0 | 62 | 0.0203 | 1.0 | 1.0 |
| 0.0233 | 32.0 | 64 | 0.0190 | 1.0 | 1.0 |
| 0.0218 | 33.0 | 66 | 0.0178 | 1.0 | 1.0 |
| 0.0205 | 34.0 | 68 | 0.0168 | 1.0 | 1.0 |
| 0.0189 | 35.0 | 70 | 0.0160 | 1.0 | 1.0 |
| 0.0183 | 36.0 | 72 | 0.0152 | 1.0 | 1.0 |
| 0.0177 | 37.0 | 74 | 0.0145 | 1.0 | 1.0 |
| 0.0168 | 38.0 | 76 | 0.0138 | 1.0 | 1.0 |
| 0.0169 | 39.0 | 78 | 0.0132 | 1.0 | 1.0 |
| 0.0145 | 40.0 | 80 | 0.0127 | 1.0 | 1.0 |
| 0.0149 | 41.0 | 82 | 0.0122 | 1.0 | 1.0 |
| 0.014 | 42.0 | 84 | 0.0117 | 1.0 | 1.0 |
| 0.013 | 43.0 | 86 | 0.0113 | 1.0 | 1.0 |
| 0.0128 | 44.0 | 88 | 0.0109 | 1.0 | 1.0 |
| 0.0122 | 45.0 | 90 | 0.0105 | 1.0 | 1.0 |
| 0.0116 | 46.0 | 92 | 0.0102 | 1.0 | 1.0 |
| 0.0121 | 47.0 | 94 | 0.0099 | 1.0 | 1.0 |
| 0.0107 | 48.0 | 96 | 0.0096 | 1.0 | 1.0 |
| 0.0107 | 49.0 | 98 | 0.0093 | 1.0 | 1.0 |
| 0.0111 | 50.0 | 100 | 0.0090 | 1.0 | 1.0 |
| 0.0106 | 51.0 | 102 | 0.0088 | 1.0 | 1.0 |
| 0.0102 | 52.0 | 104 | 0.0085 | 1.0 | 1.0 |
| 0.0096 | 53.0 | 106 | 0.0083 | 1.0 | 1.0 |
| 0.0104 | 54.0 | 108 | 0.0081 | 1.0 | 1.0 |
| 0.0096 | 55.0 | 110 | 0.0079 | 1.0 | 1.0 |
| 0.0095 | 56.0 | 112 | 0.0077 | 1.0 | 1.0 |
| 0.0099 | 57.0 | 114 | 0.0075 | 1.0 | 1.0 |
| 0.0096 | 58.0 | 116 | 0.0073 | 1.0 | 1.0 |
| 0.0085 | 59.0 | 118 | 0.0072 | 1.0 | 1.0 |
| 0.0091 | 60.0 | 120 | 0.0070 | 1.0 | 1.0 |
| 0.0084 | 61.0 | 122 | 0.0068 | 1.0 | 1.0 |
| 0.0087 | 62.0 | 124 | 0.0067 | 1.0 | 1.0 |
| 0.0076 | 63.0 | 126 | 0.0065 | 1.0 | 1.0 |
| 0.0077 | 64.0 | 128 | 0.0064 | 1.0 | 1.0 |
| 0.0081 | 65.0 | 130 | 0.0063 | 1.0 | 1.0 |
| 0.0071 | 66.0 | 132 | 0.0062 | 1.0 | 1.0 |
| 0.0073 | 67.0 | 134 | 0.0060 | 1.0 | 1.0 |
| 0.0075 | 68.0 | 136 | 0.0059 | 1.0 | 1.0 |
| 0.007 | 69.0 | 138 | 0.0058 | 1.0 | 1.0 |
| 0.007 | 70.0 | 140 | 0.0057 | 1.0 | 1.0 |
| 0.0066 | 71.0 | 142 | 0.0056 | 1.0 | 1.0 |
| 0.0063 | 72.0 | 144 | 0.0055 | 1.0 | 1.0 |
| 0.0064 | 73.0 | 146 | 0.0054 | 1.0 | 1.0 |
| 0.0071 | 74.0 | 148 | 0.0053 | 1.0 | 1.0 |
| 0.007 | 75.0 | 150 | 0.0052 | 1.0 | 1.0 |
| 0.0061 | 76.0 | 152 | 0.0051 | 1.0 | 1.0 |
| 0.0061 | 77.0 | 154 | 0.0050 | 1.0 | 1.0 |
| 0.0058 | 78.0 | 156 | 0.0049 | 1.0 | 1.0 |
| 0.0056 | 79.0 | 158 | 0.0048 | 1.0 | 1.0 |
| 0.0063 | 80.0 | 160 | 0.0048 | 1.0 | 1.0 |
| 0.0059 | 81.0 | 162 | 0.0047 | 1.0 | 1.0 |
| 0.0059 | 82.0 | 164 | 0.0046 | 1.0 | 1.0 |
| 0.0054 | 83.0 | 166 | 0.0046 | 1.0 | 1.0 |
| 0.0052 | 84.0 | 168 | 0.0045 | 1.0 | 1.0 |
| 0.006 | 85.0 | 170 | 0.0044 | 1.0 | 1.0 |
| 0.0056 | 86.0 | 172 | 0.0044 | 1.0 | 1.0 |
| 0.0052 | 87.0 | 174 | 0.0043 | 1.0 | 1.0 |
| 0.0051 | 88.0 | 176 | 0.0042 | 1.0 | 1.0 |
| 0.0052 | 89.0 | 178 | 0.0042 | 1.0 | 1.0 |
| 0.0052 | 90.0 | 180 | 0.0041 | 1.0 | 1.0 |
| 0.0053 | 91.0 | 182 | 0.0041 | 1.0 | 1.0 |
| 0.005 | 92.0 | 184 | 0.0040 | 1.0 | 1.0 |
| 0.005 | 93.0 | 186 | 0.0040 | 1.0 | 1.0 |
| 0.0052 | 94.0 | 188 | 0.0039 | 1.0 | 1.0 |
| 0.0051 | 95.0 | 190 | 0.0039 | 1.0 | 1.0 |
| 0.0045 | 96.0 | 192 | 0.0038 | 1.0 | 1.0 |
| 0.0047 | 97.0 | 194 | 0.0038 | 1.0 | 1.0 |
| 0.0049 | 98.0 | 196 | 0.0037 | 1.0 | 1.0 |
| 0.0045 | 99.0 | 198 | 0.0037 | 1.0 | 1.0 |
| 0.0044 | 100.0 | 200 | 0.0036 | 1.0 | 1.0 |
| 0.0045 | 101.0 | 202 | 0.0036 | 1.0 | 1.0 |
| 0.0044 | 102.0 | 204 | 0.0036 | 1.0 | 1.0 |
| 0.0048 | 103.0 | 206 | 0.0035 | 1.0 | 1.0 |
| 0.0043 | 104.0 | 208 | 0.0035 | 1.0 | 1.0 |
| 0.0043 | 105.0 | 210 | 0.0034 | 1.0 | 1.0 |
| 0.0043 | 106.0 | 212 | 0.0034 | 1.0 | 1.0 |
| 0.0043 | 107.0 | 214 | 0.0034 | 1.0 | 1.0 |
| 0.004 | 108.0 | 216 | 0.0033 | 1.0 | 1.0 |
| 0.004 | 109.0 | 218 | 0.0033 | 1.0 | 1.0 |
| 0.0041 | 110.0 | 220 | 0.0033 | 1.0 | 1.0 |
| 0.0041 | 111.0 | 222 | 0.0032 | 1.0 | 1.0 |
| 0.0041 | 112.0 | 224 | 0.0032 | 1.0 | 1.0 |
| 0.0039 | 113.0 | 226 | 0.0032 | 1.0 | 1.0 |
| 0.0041 | 114.0 | 228 | 0.0031 | 1.0 | 1.0 |
| 0.0038 | 115.0 | 230 | 0.0031 | 1.0 | 1.0 |
| 0.0037 | 116.0 | 232 | 0.0031 | 1.0 | 1.0 |
| 0.0036 | 117.0 | 234 | 0.0031 | 1.0 | 1.0 |
| 0.0035 | 118.0 | 236 | 0.0030 | 1.0 | 1.0 |
| 0.0038 | 119.0 | 238 | 0.0030 | 1.0 | 1.0 |
| 0.0038 | 120.0 | 240 | 0.0030 | 1.0 | 1.0 |
| 0.0037 | 121.0 | 242 | 0.0029 | 1.0 | 1.0 |
| 0.0037 | 122.0 | 244 | 0.0029 | 1.0 | 1.0 |
| 0.0036 | 123.0 | 246 | 0.0029 | 1.0 | 1.0 |
| 0.0034 | 124.0 | 248 | 0.0029 | 1.0 | 1.0 |
| 0.0035 | 125.0 | 250 | 0.0029 | 1.0 | 1.0 |
| 0.0035 | 126.0 | 252 | 0.0028 | 1.0 | 1.0 |
| 0.0036 | 127.0 | 254 | 0.0028 | 1.0 | 1.0 |
| 0.0034 | 128.0 | 256 | 0.0028 | 1.0 | 1.0 |
| 0.0032 | 129.0 | 258 | 0.0028 | 1.0 | 1.0 |
| 0.0037 | 130.0 | 260 | 0.0027 | 1.0 | 1.0 |
| 0.0034 | 131.0 | 262 | 0.0027 | 1.0 | 1.0 |
| 0.0034 | 132.0 | 264 | 0.0027 | 1.0 | 1.0 |
| 0.0039 | 133.0 | 266 | 0.0027 | 1.0 | 1.0 |
| 0.0033 | 134.0 | 268 | 0.0027 | 1.0 | 1.0 |
| 0.0036 | 135.0 | 270 | 0.0027 | 1.0 | 1.0 |
| 0.0034 | 136.0 | 272 | 0.0026 | 1.0 | 1.0 |
| 0.0033 | 137.0 | 274 | 0.0026 | 1.0 | 1.0 |
| 0.0033 | 138.0 | 276 | 0.0026 | 1.0 | 1.0 |
| 0.003 | 139.0 | 278 | 0.0026 | 1.0 | 1.0 |
| 0.0035 | 140.0 | 280 | 0.0026 | 1.0 | 1.0 |
| 0.003 | 141.0 | 282 | 0.0026 | 1.0 | 1.0 |
| 0.0031 | 142.0 | 284 | 0.0025 | 1.0 | 1.0 |
| 0.003 | 143.0 | 286 | 0.0025 | 1.0 | 1.0 |
| 0.0032 | 144.0 | 288 | 0.0025 | 1.0 | 1.0 |
| 0.0033 | 145.0 | 290 | 0.0025 | 1.0 | 1.0 |
| 0.0032 | 146.0 | 292 | 0.0025 | 1.0 | 1.0 |
| 0.0031 | 147.0 | 294 | 0.0025 | 1.0 | 1.0 |
| 0.0032 | 148.0 | 296 | 0.0024 | 1.0 | 1.0 |
| 0.0029 | 149.0 | 298 | 0.0024 | 1.0 | 1.0 |
| 0.0032 | 150.0 | 300 | 0.0024 | 1.0 | 1.0 |
| 0.0028 | 151.0 | 302 | 0.0024 | 1.0 | 1.0 |
| 0.003 | 152.0 | 304 | 0.0024 | 1.0 | 1.0 |
| 0.0033 | 153.0 | 306 | 0.0024 | 1.0 | 1.0 |
| 0.0029 | 154.0 | 308 | 0.0024 | 1.0 | 1.0 |
| 0.0031 | 155.0 | 310 | 0.0024 | 1.0 | 1.0 |
| 0.003 | 156.0 | 312 | 0.0024 | 1.0 | 1.0 |
| 0.0032 | 157.0 | 314 | 0.0023 | 1.0 | 1.0 |
| 0.0029 | 158.0 | 316 | 0.0023 | 1.0 | 1.0 |
| 0.0029 | 159.0 | 318 | 0.0023 | 1.0 | 1.0 |
| 0.0032 | 160.0 | 320 | 0.0023 | 1.0 | 1.0 |
| 0.0029 | 161.0 | 322 | 0.0023 | 1.0 | 1.0 |
| 0.0028 | 162.0 | 324 | 0.0023 | 1.0 | 1.0 |
| 0.003 | 163.0 | 326 | 0.0023 | 1.0 | 1.0 |
| 0.0029 | 164.0 | 328 | 0.0023 | 1.0 | 1.0 |
| 0.0026 | 165.0 | 330 | 0.0023 | 1.0 | 1.0 |
| 0.0029 | 166.0 | 332 | 0.0023 | 1.0 | 1.0 |
| 0.003 | 167.0 | 334 | 0.0023 | 1.0 | 1.0 |
| 0.0028 | 168.0 | 336 | 0.0022 | 1.0 | 1.0 |
| 0.0028 | 169.0 | 338 | 0.0022 | 1.0 | 1.0 |
| 0.0029 | 170.0 | 340 | 0.0022 | 1.0 | 1.0 |
| 0.0028 | 171.0 | 342 | 0.0022 | 1.0 | 1.0 |
| 0.0027 | 172.0 | 344 | 0.0022 | 1.0 | 1.0 |
| 0.0028 | 173.0 | 346 | 0.0022 | 1.0 | 1.0 |
| 0.0028 | 174.0 | 348 | 0.0022 | 1.0 | 1.0 |
| 0.0027 | 175.0 | 350 | 0.0022 | 1.0 | 1.0 |
| 0.0028 | 176.0 | 352 | 0.0022 | 1.0 | 1.0 |
| 0.0028 | 177.0 | 354 | 0.0022 | 1.0 | 1.0 |
| 0.0029 | 178.0 | 356 | 0.0022 | 1.0 | 1.0 |
| 0.003 | 179.0 | 358 | 0.0022 | 1.0 | 1.0 |
| 0.0028 | 180.0 | 360 | 0.0022 | 1.0 | 1.0 |
| 0.0026 | 181.0 | 362 | 0.0022 | 1.0 | 1.0 |
| 0.0029 | 182.0 | 364 | 0.0022 | 1.0 | 1.0 |
| 0.0026 | 183.0 | 366 | 0.0022 | 1.0 | 1.0 |
| 0.0028 | 184.0 | 368 | 0.0022 | 1.0 | 1.0 |
| 0.0031 | 185.0 | 370 | 0.0022 | 1.0 | 1.0 |
| 0.0029 | 186.0 | 372 | 0.0022 | 1.0 | 1.0 |
| 0.0027 | 187.0 | 374 | 0.0021 | 1.0 | 1.0 |
| 0.0028 | 188.0 | 376 | 0.0021 | 1.0 | 1.0 |
| 0.0029 | 189.0 | 378 | 0.0021 | 1.0 | 1.0 |
| 0.0028 | 190.0 | 380 | 0.0021 | 1.0 | 1.0 |
| 0.0028 | 191.0 | 382 | 0.0021 | 1.0 | 1.0 |
| 0.0026 | 192.0 | 384 | 0.0021 | 1.0 | 1.0 |
| 0.0026 | 193.0 | 386 | 0.0021 | 1.0 | 1.0 |
| 0.0026 | 194.0 | 388 | 0.0021 | 1.0 | 1.0 |
| 0.0027 | 195.0 | 390 | 0.0021 | 1.0 | 1.0 |
| 0.003 | 196.0 | 392 | 0.0021 | 1.0 | 1.0 |
| 0.0029 | 197.0 | 394 | 0.0021 | 1.0 | 1.0 |
| 0.0027 | 198.0 | 396 | 0.0021 | 1.0 | 1.0 |
| 0.0029 | 199.0 | 398 | 0.0021 | 1.0 | 1.0 |
| 0.0027 | 200.0 | 400 | 0.0021 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Dmitry12/sber
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Circularmachines/Batch_indexing_machine_ViT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Circularmachines/Batch_indexing_machine_ViT
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6437
- Train Accuracy: 0.6590
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a short reconstruction sketch follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0003, 'decay_steps': 72510, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
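The optimizer configuration above can be recreated with `transformers.create_optimizer` (a sketch; it assumes the schedule simply runs over the full 72510 decay steps with no warmup, which is all this card states):
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear PolynomialDecay from 3e-4 to 0 over 72510 steps
# and a weight decay rate of 0.01, mirroring the optimizer dict listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-4,
    num_train_steps=72510,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```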
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.6457 | 0.6543 | 0 |
| 0.6367 | 0.6706 | 1 |
| 0.6343 | 0.6728 | 2 |
| 0.6424 | 0.6604 | 3 |
| 0.6437 | 0.6590 | 4 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Doiman/DialoGPT-medium-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridad
This model is a fine-tuned version of [vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation](https://huggingface.co/vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5996
- F1: 0.7468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (reconstructed as a `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
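These settings map onto `TrainingArguments` roughly as follows (a sketch; the output directory is a placeholder and data loading is omitted, since the actual training script is not part of this card):
```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="restmex2023-polaridad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```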
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5823 | 1.0 | 14159 | 0.5671 | 0.7452 |
| 0.4536 | 2.0 | 28318 | 0.5996 | 0.7468 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DongHai/DialoGPT-small-rick
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-batchSize-cola-16
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6125472225786625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-batchSize-cola-16
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0969
- Matthews Correlation: 0.6125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.0394 | 1.0 | 535 | 1.0969 | 0.6125 |
| 0.0289 | 2.0 | 1070 | 1.0612 | 0.5907 |
| 0.0559 | 3.0 | 1605 | 1.1586 | 0.5650 |
| 0.0373 | 4.0 | 2140 | 1.1325 | 0.5831 |
| 0.0261 | 5.0 | 2675 | 1.3065 | 0.5804 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DongHyoungLee/kogpt2-base-v2-finetuned-kogpt2_nsmc_single_sentence_classification
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- adapter-transformers
- bert
- adapterhub:pico_ner
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/clinical_bert_adapter_ner_pico_for_classification_task` for emilyalsentzer/Bio_ClinicalBERT
An [adapter](https://adapterhub.ml) for the `emilyalsentzer/Bio_ClinicalBERT` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
adapter_name = model.load_adapter("reginaboateng/clinical_bert_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
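Once the adapter is active, inference works like any other forward pass. A minimal sketch (the tokenizer choice and the presence of a bundled prediction head are assumptions; adjust to the head this adapter was saved with):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
inputs = tokenizer("Patients received 50 mg of drug X twice daily.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # runs through the active PICO adapter loaded above
print(outputs.logits.shape)
```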
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Dongjae/mrc2reader
|
[
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-Squad-ID-with-indobert-large-p2-with-ITTL-with-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-Squad-ID-with-indobert-large-p2-with-ITTL-with-freeze-LR-1e-05
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5574
- Exact Match: 47.6371
- F1: 63.8727
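Given those scores, the simplest way to try the checkpoint is through the question-answering pipeline. A sketch (the repo id is a placeholder, since the card does not state where the checkpoint is published):
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual location of this checkpoint.
qa = pipeline("question-answering", model="<user>/fine-tuned-DatasetQAS-Squad-ID-indobert-large-p2")
result = qa(
    question="Siapa yang menemukan telepon?",
    context="Alexander Graham Bell dikenal luas sebagai penemu telepon pada tahun 1876.",
)
print(result)
```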
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 1.8897 | 0.5 | 463 | 1.7933 | 39.8083 | 55.8203 |
| 1.6804 | 1.0 | 926 | 1.6326 | 44.1978 | 60.8018 |
| 1.5163 | 1.5 | 1389 | 1.5762 | 45.0303 | 60.8666 |
| 1.4687 | 2.0 | 1852 | 1.5214 | 46.6616 | 63.1627 |
| 1.3234 | 2.5 | 2315 | 1.5170 | 46.7793 | 63.1629 |
| 1.3221 | 3.0 | 2778 | 1.5037 | 47.2418 | 63.9784 |
| 1.1877 | 3.5 | 3241 | 1.5265 | 47.4773 | 63.7942 |
| 1.2091 | 4.0 | 3704 | 1.5262 | 47.3764 | 63.8722 |
| 1.0894 | 4.5 | 4167 | 1.5574 | 47.6371 | 63.8727 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Waynehillsdev/Wayne_NLP_mT5
|
[
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-dropout-0.1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.593197037544882
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-dropout-0.1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1127
- Matthews Correlation: 0.5932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.49 | 1.0 | 535 | 0.5310 | 0.4914 |
| 0.3003 | 2.0 | 1070 | 0.5391 | 0.5572 |
| 0.2033 | 3.0 | 1605 | 0.6975 | 0.5473 |
| 0.1427 | 4.0 | 2140 | 0.8513 | 0.5612 |
| 0.0998 | 5.0 | 2675 | 0.8598 | 0.5829 |
| 0.0783 | 6.0 | 3210 | 1.1127 | 0.5932 |
| 0.0456 | 7.0 | 3745 | 1.0697 | 0.5890 |
| 0.0395 | 8.0 | 4280 | 1.1813 | 0.5782 |
| 0.0277 | 9.0 | 4815 | 1.2958 | 0.5727 |
| 0.0205 | 10.0 | 5350 | 1.3045 | 0.5832 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Doogie/Waynehills-KE-T5-doogie
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-batchSize-cola-32
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5930181720231964
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-batchSize-cola-32
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0466
- Matthews Correlation: 0.5930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.9600 | 0.5600 |
| 0.0668 | 2.0 | 536 | 0.9530 | 0.5765 |
| 0.0668 | 3.0 | 804 | 1.0466 | 0.5930 |
| 0.0327 | 4.0 | 1072 | 1.1919 | 0.5805 |
| 0.0327 | 5.0 | 1340 | 1.2359 | 0.5905 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Waynehillsdev/Waynehills-STT-doogie-server
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 61 | null |
Faputa DreamBooth model
Key prompt: shs, 1girl, solo, navel, dark-skinned female, dark skin, very dark skin, looking at viewer, monster girl, white hair, extra arms, white background, flat chest, simple background, yellow eyes, white fur
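A minimal way to generate with it (a sketch; the repo id is a placeholder and the sampling settings are illustrative only):
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id; point this at wherever the DreamBooth weights are published.
pipe = StableDiffusionPipeline.from_pretrained("<user>/faputa-dreambooth", torch_dtype=torch.float16).to("cuda")
prompt = "shs, 1girl, solo, monster girl, white hair, yellow eyes, white fur, simple background"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("faputa.png")
```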
|
Waynehillsdev/Waynehills_summary_tensorflow
|
[
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
# Trying to make AI conversation
For fine-tuning this model, we use this **[dataset](abhijitgayen/cogo_chat)**.
# How to use this Model
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_id= "abhijitgayen/cogo-blenderbot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
UTTERANCE = "help me to book fcl"
print("Human: ", UTTERANCE)
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```
# Output
Responses are good, but generation takes time. Since a chatbot is a real-time application, a reply that takes more than 20 seconds is effectively meaningless.
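A quick way to check whether a reply stays within that budget (a sketch reusing the model and tokenizer loaded above):
```python
import time

start = time.time()
inputs = tokenizer(["help me to book fcl"], return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print("Bot:", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
print(f"Generation took {time.time() - start:.1f} s")
```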
|
Doquey/DialoGPT-small-Michaelbot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | 2023-05-05T10:50:39Z |
## This is a 4bit quant of https://huggingface.co/Aeala/GPT4-x-AlpacaDente2-30b
# My secret sauce:
* Using commit <a href="https://github.com/0cc4m/GPTQ-for-LLaMa/tree/3c16fd9c7946ebe85df8d951cb742adbc1966ec7">3c16fd9</a> of 0cc4m's GPTQ fork
* Using PTB as the calibration dataset
* Act-order, True-sequential, percdamp 0.1 (<i>the default percdamp is 0.01</i>); see the example invocation after this list
* No groupsize
* Will run with CUDA, does not need triton.
* Quant completed on a 'Premium GPU' and 'High Memory' Google Colab.
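Taken together, the settings above roughly correspond to an invocation of the fork's `llama.py` like the one below (a sketch; flag names can differ between GPTQ-for-LLaMa commits, and the model/output paths are placeholders):
```
python llama.py /path/to/GPT4-x-AlpacaDente2-30b ptb \
    --wbits 4 --true-sequential --act-order --percdamp 0.1 \
    --save_safetensors gpt4-x-alpacadente2-30b-4bit.safetensors
```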
## Benchmark results
|<b>Model</b>|<b>C4</b>|<b>WikiText2</b>|<b>PTB</b>|
|:---:|---|---|---|
|Aeala's FP16|7.05504846572876|4.662261962890625|24.547462463378906|
|This Quant|7.326207160949707|4.957101345062256|24.941526412963867|
|Aeala's Quant <a href="https://huggingface.co/Aeala/GPT4-x-AlpacaDente2-30b/resolve/main/4bit.safetensors">here</a>|7.332120418548584|5.016242980957031|25.576189041137695|
|
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 79.70 +/- 45.87
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-epochs-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5879831868448624
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-epochs-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5106
- Matthews Correlation: 0.5880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7248771148294196e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4482 | 0.5047 |
| No log | 2.0 | 268 | 0.4230 | 0.5612 |
| No log | 3.0 | 402 | 0.4850 | 0.5677 |
| 0.3514 | 4.0 | 536 | 0.5106 | 0.5880 |
| 0.3514 | 5.0 | 670 | 0.5397 | 0.5727 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | 2023-05-05T10:56:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-dropout-0.2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5992215466535732
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-dropout-0.2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4502
- Matthews Correlation: 0.5992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4987 | 1.0 | 535 | 0.5145 | 0.4872 |
| 0.3065 | 2.0 | 1070 | 0.4502 | 0.5992 |
| 0.2059 | 3.0 | 1605 | 0.7547 | 0.5208 |
| 0.1467 | 4.0 | 2140 | 0.8557 | 0.5390 |
| 0.1006 | 5.0 | 2675 | 0.9277 | 0.5550 |
| 0.0796 | 6.0 | 3210 | 1.0832 | 0.5765 |
| 0.0532 | 7.0 | 3745 | 1.0337 | 0.5687 |
| 0.0367 | 8.0 | 4280 | 1.1539 | 0.5779 |
| 0.0276 | 9.0 | 4815 | 1.3224 | 0.5755 |
| 0.0192 | 10.0 | 5350 | 1.3055 | 0.5810 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DoyyingFace/bert-asian-hate-tweets-asonam-unclean
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarizing_lit_only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarizing_lit_only
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4906
- Rouge1: 0.1089
- Rouge2: 0.0191
- Rougel: 0.0871
- Rougelsum: 0.0874
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 19 | 4.3297 | 0.1055 | 0.0162 | 0.0847 | 0.0847 | 19.0 |
| No log | 2.0 | 38 | 3.8465 | 0.1108 | 0.0168 | 0.0886 | 0.0887 | 19.0 |
| No log | 3.0 | 57 | 3.6785 | 0.1083 | 0.0156 | 0.0853 | 0.0854 | 19.0 |
| No log | 4.0 | 76 | 3.5973 | 0.1029 | 0.0141 | 0.083 | 0.0829 | 19.0 |
| No log | 5.0 | 95 | 3.5518 | 0.1083 | 0.0174 | 0.0876 | 0.0878 | 19.0 |
| No log | 6.0 | 114 | 3.5261 | 0.1089 | 0.0178 | 0.0882 | 0.0886 | 19.0 |
| No log | 7.0 | 133 | 3.5093 | 0.1093 | 0.0178 | 0.0875 | 0.0879 | 19.0 |
| No log | 8.0 | 152 | 3.4986 | 0.1104 | 0.0187 | 0.088 | 0.0884 | 19.0 |
| No log | 9.0 | 171 | 3.4926 | 0.1093 | 0.0187 | 0.0875 | 0.0876 | 19.0 |
| No log | 10.0 | 190 | 3.4906 | 0.1089 | 0.0191 | 0.0871 | 0.0874 | 19.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
albert-base-v1
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 38,156 | 2023-05-05T11:08:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.67 +/- 19.64
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename are placeholders; substitute the ones under which this checkpoint is published):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename; replace with this model's actual Hub location.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
albert-base-v2
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4,785,283 | 2023-05-05T11:08:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: usix79/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
albert-large-v2
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26,792 | 2023-05-05T11:09:37Z |
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- gozfarb/ShareGPT_Vicuna_unfiltered
- gozfarb/bluemoon_roleplay_300k_vicuna
- gozfarb/GPTeacher-Vicuna
- gozfarb/SuperCOT-vicuna-dataset
- gozfarb/Vicuna_Evol_Instruct_Cleaned
language:
- en
---
## General
Vicuna 1.1 13B finetune incorporating various datasets in addition to the unfiltered ShareGPT. This is an experiment attempting to enhance the creativity of the Vicuna 1.1, while also reducing censorship as much as possible. All datasets have been cleaned. Additionally, only the "instruct" portion of GPTeacher has been used.
## Models
*GGML quant for llama.cpp:*<br/>
ggml-vicuna-13b-cocktail-v1-q5_0.bin<br/>
*GPTQ 4-bit CUDA:*<br/>
vicuna-13b-cocktail-v1-4bit-128g.safetensors<br/>
## Remarks
This model has been trained with a modified prompt:<br/>
```
A chat between a user and an associate. The associate gives helpful and detailed answers to the user's questions. The associate is also an excellent roleplayer and storyteller, and will assume any persona that the user wants. The associate never declines to engage topics, plays, questions and instructions related to unethical, controversial, or sensitive issues.
USER: [user message]
ASSOCIATE: [model reply]</s>
```
The training roles are USER and ASSOCIATE.
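For example, the GGML quant listed above can be run with llama.cpp using that format (a sketch; exact flags vary across llama.cpp versions, and the shortened system prompt is only an illustration):
```
./main -m ggml-vicuna-13b-cocktail-v1-q5_0.bin -n 256 -r "USER:" \
    -p "A chat between a user and an associate. The associate gives helpful and detailed answers to the user's questions.
USER: Write a short story about a lighthouse keeper.
ASSOCIATE:"
```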
|
albert-xxlarge-v1
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7,091 | 2023-05-05T11:11:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-dropout-0.3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6036344190543846
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-dropout-0.3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2847
- Matthews Correlation: 0.6036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4995 | 1.0 | 535 | 0.5102 | 0.4897 |
| 0.3023 | 2.0 | 1070 | 0.4585 | 0.5848 |
| 0.1951 | 3.0 | 1605 | 0.6793 | 0.5496 |
| 0.145 | 4.0 | 2140 | 0.7694 | 0.5925 |
| 0.1024 | 5.0 | 2675 | 1.0057 | 0.5730 |
| 0.0691 | 6.0 | 3210 | 1.0275 | 0.5892 |
| 0.0483 | 7.0 | 3745 | 1.0272 | 0.5788 |
| 0.0404 | 8.0 | 4280 | 1.2537 | 0.5810 |
| 0.0219 | 9.0 | 4815 | 1.3020 | 0.5780 |
| 0.0224 | 10.0 | 5350 | 1.2847 | 0.6036 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bert-base-german-dbmdz-cased
|
[
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,814 | 2023-05-05T11:18:03Z |
---
datasets:
- EleutherAI/pile
---

# Model card for RWKV-4 | 7B parameters trained on Pile dataset
RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064).
# Table of contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
## TL;DR
Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM)
> RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
## Model Details
The details of the architecture can be found on the blogpost mentioned above and the Hugging Face blogpost of the integration.
## Usage
### Convert the raw weights to the HF format
You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename and the output directory. You can also optionally directly push the converted model on the Hub by passing `--push_to_hub` flag and `--model_name` argument to specify where to push the converted weights.
```bash
python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv
```
### Generate text
You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios:
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-7b-pile")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-7b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>

### Running the model on a single GPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-7b-pile").to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-7b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model in half-precision, on GPU
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-7b-pile", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-7b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on multiple GPUs
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-7b-pile", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-7b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
## Citation
If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/)
|
bert-base-multilingual-cased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4,749,504 | 2023-05-05T11:21:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.72 +/- 13.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (repo id and filename are placeholders for wherever this checkpoint is published):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename; replace with this model's actual Hub location.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
bert-base-multilingual-uncased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 328,585 | 2023-05-05T11:23:04Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: chribeiro/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bert-base-uncased
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 59,663,489 | 2023-05-05T11:24:07Z |
---
pipeline_tag: translation
license: apache-2.0
language:
- zh
- en
---
|
bert-large-cased-whole-word-masking
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,316 | 2023-05-05T11:25:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-dropout-0.4
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5786416039440073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-dropout-0.4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0377
- Matthews Correlation: 0.5786
## Model description
More information needed
## Intended uses & limitations
More information needed
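As a rough usage sketch only (the exact Hub repo id for this checkpoint is not stated in this card, so the model path below is a placeholder), a CoLA acceptability classifier like this one can be queried with the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/bert-base-uncased-finetuned-cola-dropout-0.4",
)

# CoLA is a binary acceptability task: the model scores whether a sentence is
# grammatically acceptable (the label names depend on the checkpoint's config).
print(classifier("The book was read by the student."))
```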
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5068 | 1.0 | 535 | 0.5131 | 0.4679 |
| 0.3198 | 2.0 | 1070 | 0.4943 | 0.5692 |
| 0.2057 | 3.0 | 1605 | 0.7169 | 0.5073 |
| 0.1574 | 4.0 | 2140 | 0.7962 | 0.5525 |
| 0.0985 | 5.0 | 2675 | 0.9113 | 0.5573 |
| 0.0767 | 6.0 | 3210 | 1.0377 | 0.5786 |
| 0.0525 | 7.0 | 3745 | 1.1992 | 0.5705 |
| 0.0415 | 8.0 | 4280 | 1.3376 | 0.5626 |
| 0.0191 | 9.0 | 4815 | 1.3548 | 0.5733 |
| 0.0167 | 10.0 | 5350 | 1.3856 | 0.5658 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bert-large-uncased-whole-word-masking-finetuned-squad
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 480,510 | 2023-05-05T11:28:22Z |
---
license: gpl-3.0
language:
- sv
pipeline_tag: text-classification
---
|
distilbert-base-cased-distilled-squad
|
[
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"has_space"
] |
question-answering
|
{
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 257,745 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: labicquette/ppo-Huggy2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
distilbert-base-uncased-distilled-squad
|
[
"pytorch",
"tf",
"tflite",
"coreml",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
question-answering
|
{
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 100,097 | 2023-05-05T11:42:54Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 31 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 31,
"weight_decay": 0.01
}
```
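A minimal training sketch with the sentence-transformers `fit()` API, mirroring the parameters listed above, could look like the following. Note that the base checkpoint (assumed here to be an MPNet model such as `sentence-transformers/all-mpnet-base-v2`, consistent with the architecture listed below) and the toy training pairs are placeholders, not the actual data this model was trained on:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base checkpoint; the real starting model is not named in this card.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Toy positive pairs standing in for the real training data.
train_examples = [
    InputExample(texts=["How do I reset my password?", "Steps to reset a password"]),
    InputExample(texts=["What is the refund policy?", "Details about refunds"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Mirrors the fit() parameters reported above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=31,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```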
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
distilbert-base-uncased
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"distilbert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10,887,471 | 2023-05-22T23:39:47Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: nymiz-model-ner-x-x-api
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nymiz-model-ner-x-x-api
This model is a fine-tuned version of [Narrativa/legal-longformer-base-4096-spanish](https://huggingface.co/Narrativa/legal-longformer-base-4096-spanish) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0425
- Add: {'precision': 0.542016806722689, 'recall': 0.6323529411764706, 'f1': 0.583710407239819, 'number': 204}
- Cur: {'precision': 0.8481283422459893, 'recall': 0.8980747451868629, 'f1': 0.8723872387238725, 'number': 883}
- Dat: {'precision': 0.925305193461618, 'recall': 0.9675465166594548, 'f1': 0.9459545214172396, 'number': 4622}
- Iban: {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6}
- Id: {'precision': 0.7383177570093458, 'recall': 0.7383177570093458, 'f1': 0.7383177570093457, 'number': 107}
- Loc: {'precision': 0.8174097664543525, 'recall': 0.8561897702001483, 'f1': 0.836350470673425, 'number': 1349}
- Misc: {'precision': 0.591715976331361, 'recall': 0.4854368932038835, 'f1': 0.5333333333333332, 'number': 206}
- Org: {'precision': 0.7504402515723271, 'recall': 0.8188306340927807, 'f1': 0.7831451824625887, 'number': 3643}
- Per: {'precision': 0.9119555143651529, 'recall': 0.9309366130558183, 'f1': 0.9213483146067415, 'number': 3171}
- Pho: {'precision': 0.8918918918918919, 'recall': 0.868421052631579, 'f1': 0.88, 'number': 38}
- Overall Precision: 0.8493
- Overall Recall: 0.8924
- Overall F1: 0.8703
- Overall Accuracy: 0.9884
## Model description
More information needed
## Intended uses & limitations
More information needed
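As a rough usage sketch only (the exact Hub repo id for this checkpoint is not given here, so the model path below is a placeholder), a token-classification checkpoint like this one would typically be queried through the `transformers` pipeline; the entity labels follow the types reported above (Per, Org, Loc, Dat, Cur, etc.):
```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="your-org/nymiz-model-ner-x-x-api",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Spanish example sentence (the base model is a Spanish legal Longformer).
print(ner("María García firmó el contrato con Acme S.A. en Madrid el 3 de mayo de 2023."))
```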
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Add | Cur | Dat | Iban | Id | Loc | Misc | Org | Per | Pho | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log | 0.06 | 200 | 0.1517 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 204} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 883} | {'precision': 0.8146568437378829, 'recall': 0.9091302466464734, 'f1': 0.8593047034764827, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 107} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 206} | {'precision': 0.2553284036537625, 'recall': 0.3222618720834477, 'f1': 0.28491687901953644, 'number': 3643} | {'precision': 0.5860409145607701, 'recall': 0.6143172500788395, 'f1': 0.5998460354118553, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.5599 | 0.5147 | 0.5364 | 0.9629 |
| No log | 0.11 | 400 | 0.1009 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 204} | {'precision': 0.7009857612267251, 'recall': 0.72480181200453, 'f1': 0.7126948775055679, 'number': 883} | {'precision': 0.8583912611717974, 'recall': 0.9350930333189096, 'f1': 0.8951019985502744, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 107} | {'precision': 0.7905092592592593, 'recall': 0.5063009636767977, 'f1': 0.6172616357885223, 'number': 1349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 206} | {'precision': 0.47241151901516815, 'recall': 0.5898984353554763, 'f1': 0.5246582031249999, 'number': 3643} | {'precision': 0.6707764952780693, 'recall': 0.8063702302112898, 'f1': 0.7323499928397538, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.6822 | 0.7275 | 0.7041 | 0.9760 |
| No log | 0.17 | 600 | 0.0835 | {'precision': 0.03546099290780142, 'recall': 0.024509803921568627, 'f1': 0.02898550724637681, 'number': 204} | {'precision': 0.7094918504314478, 'recall': 0.8380520951302378, 'f1': 0.7684319833852545, 'number': 883} | {'precision': 0.8763364938470849, 'recall': 0.9398528775421895, 'f1': 0.906984027560288, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 107} | {'precision': 0.7513761467889908, 'recall': 0.6071163825055597, 'f1': 0.6715867158671587, 'number': 1349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 206} | {'precision': 0.5165415059874163, 'recall': 0.6986000548998078, 'f1': 0.5939323220536756, 'number': 3643} | {'precision': 0.7716488730723606, 'recall': 0.8205613371176285, 'f1': 0.7953538132355188, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7118 | 0.7769 | 0.7430 | 0.9783 |
| No log | 0.23 | 800 | 0.0745 | {'precision': 0.0855614973262032, 'recall': 0.0784313725490196, 'f1': 0.08184143222506392, 'number': 204} | {'precision': 0.709280303030303, 'recall': 0.8482446206115515, 'f1': 0.7725631768953068, 'number': 883} | {'precision': 0.8730538922155688, 'recall': 0.9463435742102986, 'f1': 0.9082225913621262, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 107} | {'precision': 0.6451398135818908, 'recall': 0.7183098591549296, 'f1': 0.6797614871974745, 'number': 1349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 206} | {'precision': 0.5704682948689223, 'recall': 0.6988745539390612, 'f1': 0.6281766592647422, 'number': 3643} | {'precision': 0.7874015748031497, 'recall': 0.8514664143803217, 'f1': 0.8181818181818181, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7256 | 0.7979 | 0.7601 | 0.9803 |
| No log | 0.29 | 1000 | 0.0677 | {'precision': 0.2482758620689655, 'recall': 0.17647058823529413, 'f1': 0.20630372492836677, 'number': 204} | {'precision': 0.7780061664953751, 'recall': 0.8573046432616082, 'f1': 0.8157327586206896, 'number': 883} | {'precision': 0.8935128518971848, 'recall': 0.9476417135439203, 'f1': 0.9197816043679127, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 107} | {'precision': 0.7280632411067194, 'recall': 0.6827279466271312, 'f1': 0.7046671767406274, 'number': 1349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 206} | {'precision': 0.5918050941306755, 'recall': 0.7334614328849849, 'f1': 0.6550625153223828, 'number': 3643} | {'precision': 0.8218768679019725, 'recall': 0.8672343109429203, 'f1': 0.8439466011968698, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7599 | 0.8093 | 0.7839 | 0.9820 |
| No log | 0.34 | 1200 | 0.0638 | {'precision': 0.1945945945945946, 'recall': 0.17647058823529413, 'f1': 0.18508997429305915, 'number': 204} | {'precision': 0.7804878048780488, 'recall': 0.869762174405436, 'f1': 0.822710230316015, 'number': 883} | {'precision': 0.8944692964619764, 'recall': 0.9517524881003895, 'f1': 0.9222222222222222, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9545454545454546, 'recall': 0.19626168224299065, 'f1': 0.32558139534883723, 'number': 107} | {'precision': 0.7419112114371708, 'recall': 0.7309117865085248, 'f1': 0.7363704256908139, 'number': 1349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 206} | {'precision': 0.6087551299589603, 'recall': 0.7329124348064782, 'f1': 0.6650890521858263, 'number': 3643} | {'precision': 0.8147492625368732, 'recall': 0.8710186061179439, 'f1': 0.8419448254839201, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7643 | 0.8182 | 0.7903 | 0.9826 |
| No log | 0.4 | 1400 | 0.0615 | {'precision': 0.3150684931506849, 'recall': 0.22549019607843138, 'f1': 0.26285714285714284, 'number': 204} | {'precision': 0.7958762886597938, 'recall': 0.8742921857304643, 'f1': 0.8332433890987587, 'number': 883} | {'precision': 0.903779366700715, 'recall': 0.9571614019904803, 'f1': 0.9297047388883051, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9137931034482759, 'recall': 0.4953271028037383, 'f1': 0.6424242424242425, 'number': 107} | {'precision': 0.762406015037594, 'recall': 0.7516679021497406, 'f1': 0.7569988801791714, 'number': 1349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 206} | {'precision': 0.6362168396770473, 'recall': 0.7570683502607741, 'f1': 0.6914013537227376, 'number': 3643} | {'precision': 0.8527588344699318, 'recall': 0.8675496688741722, 'f1': 0.8600906675003908, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7892 | 0.8306 | 0.8093 | 0.9837 |
| No log | 0.46 | 1600 | 0.0588 | {'precision': 0.29120879120879123, 'recall': 0.25980392156862747, 'f1': 0.2746113989637306, 'number': 204} | {'precision': 0.7942681678607983, 'recall': 0.8788221970554927, 'f1': 0.8344086021505376, 'number': 883} | {'precision': 0.9064674580433892, 'recall': 0.9582431847684985, 'f1': 0.9316365166175851, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.7906976744186046, 'recall': 0.6355140186915887, 'f1': 0.7046632124352331, 'number': 107} | {'precision': 0.7775246772968869, 'recall': 0.7590808005930318, 'f1': 0.768192048012003, 'number': 1349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 206} | {'precision': 0.6199035510740903, 'recall': 0.7762832830085095, 'f1': 0.68933577087142, 'number': 3643} | {'precision': 0.8651264439587887, 'recall': 0.8738568274992116, 'f1': 0.8694697207405083, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7845 | 0.8398 | 0.8112 | 0.9840 |
| No log | 0.52 | 1800 | 0.0577 | {'precision': 0.3236994219653179, 'recall': 0.27450980392156865, 'f1': 0.2970822281167109, 'number': 204} | {'precision': 0.7971311475409836, 'recall': 0.8810872027180068, 'f1': 0.8370091447014523, 'number': 883} | {'precision': 0.9103405826836274, 'recall': 0.9599740372133275, 'f1': 0.9344987363100253, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8068181818181818, 'recall': 0.6635514018691588, 'f1': 0.7282051282051282, 'number': 107} | {'precision': 0.7873303167420814, 'recall': 0.7739065974796145, 'f1': 0.7805607476635513, 'number': 1349} | {'precision': 0.1, 'recall': 0.009708737864077669, 'f1': 0.017699115044247787, 'number': 206} | {'precision': 0.6367384333486028, 'recall': 0.7631073291243481, 'f1': 0.6942190036209264, 'number': 3643} | {'precision': 0.8385954558866923, 'recall': 0.8962472406181016, 'f1': 0.8664634146341463, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7895 | 0.8441 | 0.8159 | 0.9844 |
| No log | 0.57 | 2000 | 0.0568 | {'precision': 0.3756345177664975, 'recall': 0.3627450980392157, 'f1': 0.3690773067331671, 'number': 204} | {'precision': 0.8169456066945606, 'recall': 0.8844847112117781, 'f1': 0.8493746601413811, 'number': 883} | {'precision': 0.9094076655052264, 'recall': 0.9599740372133275, 'f1': 0.9340069466371961, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.6727272727272727, 'recall': 0.6915887850467289, 'f1': 0.6820276497695852, 'number': 107} | {'precision': 0.7840409956076134, 'recall': 0.7939214232765011, 'f1': 0.7889502762430939, 'number': 1349} | {'precision': 0.4074074074074074, 'recall': 0.05339805825242718, 'f1': 0.0944206008583691, 'number': 206} | {'precision': 0.6402647797306551, 'recall': 0.7699698051056821, 'f1': 0.6991525423728814, 'number': 3643} | {'precision': 0.849302877484426, 'recall': 0.9028697571743929, 'f1': 0.8752675022928769, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7926 | 0.8515 | 0.8210 | 0.9845 |
| No log | 0.63 | 2200 | 0.0578 | {'precision': 0.37948717948717947, 'recall': 0.3627450980392157, 'f1': 0.3709273182957393, 'number': 204} | {'precision': 0.7858585858585858, 'recall': 0.8810872027180068, 'f1': 0.8307528029898559, 'number': 883} | {'precision': 0.9077739236890431, 'recall': 0.9625703158805712, 'f1': 0.9343694214008191, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.776595744680851, 'recall': 0.6822429906542056, 'f1': 0.7263681592039801, 'number': 107} | {'precision': 0.7713675213675214, 'recall': 0.8028169014084507, 'f1': 0.7867780602978569, 'number': 1349} | {'precision': 0.64, 'recall': 0.07766990291262135, 'f1': 0.13852813852813853, 'number': 206} | {'precision': 0.6257655293088364, 'recall': 0.7853417513038704, 'f1': 0.6965307364576994, 'number': 3643} | {'precision': 0.8622071516646116, 'recall': 0.8820561337117628, 'f1': 0.8720187061574435, 'number': 3171} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 38} | 0.7865 | 0.8526 | 0.8182 | 0.9842 |
| No log | 0.69 | 2400 | 0.0540 | {'precision': 0.3469387755102041, 'recall': 0.3333333333333333, 'f1': 0.33999999999999997, 'number': 204} | {'precision': 0.7932790224032586, 'recall': 0.8822197055492639, 'f1': 0.8353887399463807, 'number': 883} | {'precision': 0.9144088669950738, 'recall': 0.963868455214193, 'f1': 0.9384874657678534, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.672566371681416, 'recall': 0.7102803738317757, 'f1': 0.6909090909090909, 'number': 107} | {'precision': 0.814872192099148, 'recall': 0.7798369162342476, 'f1': 0.796969696969697, 'number': 1349} | {'precision': 0.7567567567567568, 'recall': 0.13592233009708737, 'f1': 0.23045267489711935, 'number': 206} | {'precision': 0.6770783277270543, 'recall': 0.7735382926159758, 'f1': 0.7221012171684819, 'number': 3643} | {'precision': 0.8794992175273866, 'recall': 0.8861557868180385, 'f1': 0.8828149544454917, 'number': 3171} | {'precision': 0.5454545454545454, 'recall': 0.15789473684210525, 'f1': 0.24489795918367344, 'number': 38} | 0.8138 | 0.8498 | 0.8314 | 0.9856 |
| No log | 0.75 | 2600 | 0.0532 | {'precision': 0.4351851851851852, 'recall': 0.46078431372549017, 'f1': 0.4476190476190476, 'number': 204} | {'precision': 0.8154327424400417, 'recall': 0.8856172140430351, 'f1': 0.8490770901194354, 'number': 883} | {'precision': 0.9175937371240215, 'recall': 0.9636520986585894, 'f1': 0.9400590966652596, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.582089552238806, 'recall': 0.7289719626168224, 'f1': 0.6473029045643154, 'number': 107} | {'precision': 0.7643797643797644, 'recall': 0.8176426982950333, 'f1': 0.7901146131805157, 'number': 1349} | {'precision': 0.6578947368421053, 'recall': 0.12135922330097088, 'f1': 0.20491803278688528, 'number': 206} | {'precision': 0.6778425655976676, 'recall': 0.7658523195168817, 'f1': 0.7191648408300039, 'number': 3643} | {'precision': 0.8512224209898629, 'recall': 0.9003468937243772, 'f1': 0.8750957854406132, 'number': 3171} | {'precision': 1.0, 'recall': 0.07894736842105263, 'f1': 0.14634146341463414, 'number': 38} | 0.8060 | 0.8563 | 0.8304 | 0.9855 |
| No log | 0.8 | 2800 | 0.0535 | {'precision': 0.3755274261603376, 'recall': 0.4362745098039216, 'f1': 0.4036281179138322, 'number': 204} | {'precision': 0.8065843621399177, 'recall': 0.8878822197055493, 'f1': 0.8452830188679246, 'number': 883} | {'precision': 0.9118671858987497, 'recall': 0.9625703158805712, 'f1': 0.9365329965266813, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.624, 'recall': 0.7289719626168224, 'f1': 0.6724137931034483, 'number': 107} | {'precision': 0.7917888563049853, 'recall': 0.8005930318754633, 'f1': 0.7961666052340581, 'number': 1349} | {'precision': 0.7017543859649122, 'recall': 0.1941747572815534, 'f1': 0.3041825095057034, 'number': 206} | {'precision': 0.6562427611767431, 'recall': 0.7776557782047763, 'f1': 0.7118090452261306, 'number': 3643} | {'precision': 0.8713934929404543, 'recall': 0.8953011668243457, 'f1': 0.8831855654067507, 'number': 3171} | {'precision': 0.6666666666666666, 'recall': 0.15789473684210525, 'f1': 0.25531914893617025, 'number': 38} | 0.8016 | 0.8573 | 0.8285 | 0.9852 |
| No log | 0.86 | 3000 | 0.0521 | {'precision': 0.36929460580912865, 'recall': 0.4362745098039216, 'f1': 0.4, 'number': 204} | {'precision': 0.8143153526970954, 'recall': 0.8890147225368064, 'f1': 0.8500270709258256, 'number': 883} | {'precision': 0.9127049180327869, 'recall': 0.9636520986585894, 'f1': 0.9374868448747632, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.6046511627906976, 'recall': 0.7289719626168224, 'f1': 0.6610169491525423, 'number': 107} | {'precision': 0.8155706727135299, 'recall': 0.7998517420311342, 'f1': 0.8076347305389221, 'number': 1349} | {'precision': 0.7241379310344828, 'recall': 0.20388349514563106, 'f1': 0.3181818181818182, 'number': 206} | {'precision': 0.6871508379888268, 'recall': 0.7765577820477628, 'f1': 0.7291237113402063, 'number': 3643} | {'precision': 0.881733373457719, 'recall': 0.923998738568275, 'f1': 0.9023714197720973, 'number': 3171} | {'precision': 0.4666666666666667, 'recall': 0.18421052631578946, 'f1': 0.26415094339622636, 'number': 38} | 0.8168 | 0.8639 | 0.8397 | 0.9859 |
| No log | 0.92 | 3200 | 0.0504 | {'precision': 0.41203703703703703, 'recall': 0.4362745098039216, 'f1': 0.4238095238095238, 'number': 204} | {'precision': 0.8234672304439746, 'recall': 0.8822197055492639, 'f1': 0.8518316019682887, 'number': 883} | {'precision': 0.9165296052631579, 'recall': 0.9645175248810038, 'f1': 0.9399114484503477, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.5954198473282443, 'recall': 0.7289719626168224, 'f1': 0.6554621848739496, 'number': 107} | {'precision': 0.7872947894361171, 'recall': 0.8176426982950333, 'f1': 0.8021818181818182, 'number': 1349} | {'precision': 0.8260869565217391, 'recall': 0.18446601941747573, 'f1': 0.30158730158730157, 'number': 206} | {'precision': 0.6936401880722594, 'recall': 0.7694208070271754, 'f1': 0.7295679333680375, 'number': 3643} | {'precision': 0.877502240812668, 'recall': 0.9262062440870388, 'f1': 0.9011966861000306, 'number': 3171} | {'precision': 0.4827586206896552, 'recall': 0.3684210526315789, 'f1': 0.417910447761194, 'number': 38} | 0.8188 | 0.8644 | 0.8410 | 0.9864 |
| No log | 0.97 | 3400 | 0.0504 | {'precision': 0.3991769547325103, 'recall': 0.47549019607843135, 'f1': 0.4340044742729307, 'number': 204} | {'precision': 0.827186512118019, 'recall': 0.8890147225368064, 'f1': 0.8569868995633189, 'number': 883} | {'precision': 0.9170611236880016, 'recall': 0.9640848117697967, 'f1': 0.939985233625145, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.6311475409836066, 'recall': 0.719626168224299, 'f1': 0.6724890829694323, 'number': 107} | {'precision': 0.823574144486692, 'recall': 0.8028169014084507, 'f1': 0.8130630630630631, 'number': 1349} | {'precision': 0.7966101694915254, 'recall': 0.22815533980582525, 'f1': 0.3547169811320755, 'number': 206} | {'precision': 0.6975706494794249, 'recall': 0.7724402964589624, 'f1': 0.7330988667448222, 'number': 3643} | {'precision': 0.8663716814159292, 'recall': 0.9262062440870388, 'f1': 0.8952903520804756, 'number': 3171} | {'precision': 0.5555555555555556, 'recall': 0.5263157894736842, 'f1': 0.5405405405405405, 'number': 38} | 0.8203 | 0.8656 | 0.8423 | 0.9864 |
| 0.0951 | 1.03 | 3600 | 0.0507 | {'precision': 0.45454545454545453, 'recall': 0.4411764705882353, 'f1': 0.4477611940298507, 'number': 204} | {'precision': 0.8343881856540084, 'recall': 0.8958097395243488, 'f1': 0.86400873839432, 'number': 883} | {'precision': 0.9167868177136972, 'recall': 0.9630030289917785, 'f1': 0.939326791178643, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.7551020408163265, 'recall': 0.6915887850467289, 'f1': 0.721951219512195, 'number': 107} | {'precision': 0.7823652904128762, 'recall': 0.8287620459599704, 'f1': 0.8048956083513318, 'number': 1349} | {'precision': 0.803030303030303, 'recall': 0.25728155339805825, 'f1': 0.38970588235294124, 'number': 206} | {'precision': 0.6740550629958003, 'recall': 0.7930277244029645, 'f1': 0.7287173666288309, 'number': 3643} | {'precision': 0.8816840811309158, 'recall': 0.9047619047619048, 'f1': 0.8930739299610896, 'number': 3171} | {'precision': 0.4642857142857143, 'recall': 0.34210526315789475, 'f1': 0.3939393939393939, 'number': 38} | 0.8143 | 0.8678 | 0.8402 | 0.9859 |
| 0.0951 | 1.09 | 3800 | 0.0506 | {'precision': 0.4696969696969697, 'recall': 0.45588235294117646, 'f1': 0.46268656716417916, 'number': 204} | {'precision': 0.8078920041536864, 'recall': 0.8810872027180068, 'f1': 0.8429035752979415, 'number': 883} | {'precision': 0.9219492050381994, 'recall': 0.9660320207702293, 'f1': 0.9434759640781828, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.609375, 'recall': 0.7289719626168224, 'f1': 0.6638297872340426, 'number': 107} | {'precision': 0.8319198149575945, 'recall': 0.7998517420311342, 'f1': 0.8155706727135299, 'number': 1349} | {'precision': 0.7012987012987013, 'recall': 0.2621359223300971, 'f1': 0.38162544169611307, 'number': 206} | {'precision': 0.6982737661074642, 'recall': 0.7883612407356574, 'f1': 0.7405879319236719, 'number': 3643} | {'precision': 0.8917735376915858, 'recall': 0.8990854619993692, 'f1': 0.8954145728643216, 'number': 3171} | {'precision': 0.5806451612903226, 'recall': 0.47368421052631576, 'f1': 0.5217391304347826, 'number': 38} | 0.8274 | 0.8636 | 0.8451 | 0.9864 |
| 0.0951 | 1.15 | 4000 | 0.0494 | {'precision': 0.470873786407767, 'recall': 0.47549019607843135, 'f1': 0.47317073170731705, 'number': 204} | {'precision': 0.8377518557794273, 'recall': 0.8946772366930917, 'f1': 0.8652792990142387, 'number': 883} | {'precision': 0.9200164914450629, 'recall': 0.965599307659022, 'f1': 0.9422569407790562, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.639344262295082, 'recall': 0.7289719626168224, 'f1': 0.6812227074235808, 'number': 107} | {'precision': 0.8430141287284144, 'recall': 0.7961452928094885, 'f1': 0.8189096454441479, 'number': 1349} | {'precision': 0.803030303030303, 'recall': 0.25728155339805825, 'f1': 0.38970588235294124, 'number': 206} | {'precision': 0.6791618160651921, 'recall': 0.8007136975020588, 'f1': 0.7349458301839255, 'number': 3643} | {'precision': 0.8820104198590254, 'recall': 0.9076001261431725, 'f1': 0.8946223189306808, 'number': 3171} | {'precision': 0.6470588235294118, 'recall': 0.5789473684210527, 'f1': 0.6111111111111113, 'number': 38} | 0.8216 | 0.8695 | 0.8449 | 0.9866 |
| 0.0951 | 1.2 | 4200 | 0.0486 | {'precision': 0.47257383966244726, 'recall': 0.5490196078431373, 'f1': 0.5079365079365079, 'number': 204} | {'precision': 0.8297872340425532, 'recall': 0.883352208380521, 'f1': 0.8557323093801427, 'number': 883} | {'precision': 0.9235063055612983, 'recall': 0.9664647338814366, 'f1': 0.9444973041547733, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.7333333333333333, 'recall': 0.719626168224299, 'f1': 0.7264150943396226, 'number': 107} | {'precision': 0.8010093727469358, 'recall': 0.8235730170496665, 'f1': 0.8121345029239766, 'number': 1349} | {'precision': 0.7468354430379747, 'recall': 0.28640776699029125, 'f1': 0.41403508771929814, 'number': 206} | {'precision': 0.7017108851971238, 'recall': 0.7768322810870162, 'f1': 0.737363210005211, 'number': 3643} | {'precision': 0.8901629450814725, 'recall': 0.9303058971933144, 'f1': 0.9097918272937549, 'number': 3171} | {'precision': 0.6774193548387096, 'recall': 0.5526315789473685, 'f1': 0.6086956521739131, 'number': 38} | 0.8290 | 0.8720 | 0.8499 | 0.9867 |
| 0.0951 | 1.26 | 4400 | 0.0492 | {'precision': 0.4870689655172414, 'recall': 0.553921568627451, 'f1': 0.518348623853211, 'number': 204} | {'precision': 0.8370607028753994, 'recall': 0.8901472253680635, 'f1': 0.8627881448957189, 'number': 883} | {'precision': 0.9149415863906538, 'recall': 0.9658156642146257, 'f1': 0.9396905588885379, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.7117117117117117, 'recall': 0.7383177570093458, 'f1': 0.7247706422018348, 'number': 107} | {'precision': 0.769808743169399, 'recall': 0.8354336545589326, 'f1': 0.8012797724848917, 'number': 1349} | {'precision': 0.7638888888888888, 'recall': 0.2669902912621359, 'f1': 0.39568345323741, 'number': 206} | {'precision': 0.7043116119549241, 'recall': 0.7891847378534175, 'f1': 0.7443365695792881, 'number': 3643} | {'precision': 0.8883187443404769, 'recall': 0.9280983916745507, 'f1': 0.907772979642196, 'number': 3171} | {'precision': 0.6857142857142857, 'recall': 0.631578947368421, 'f1': 0.6575342465753424, 'number': 38} | 0.8239 | 0.8761 | 0.8492 | 0.9868 |
| 0.0951 | 1.32 | 4600 | 0.0488 | {'precision': 0.4713114754098361, 'recall': 0.5637254901960784, 'f1': 0.5133928571428571, 'number': 204} | {'precision': 0.8229715489989463, 'recall': 0.8844847112117781, 'f1': 0.8526200873362445, 'number': 883} | {'precision': 0.9230928261318999, 'recall': 0.9660320207702293, 'f1': 0.9440744264721429, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.5594405594405595, 'recall': 0.7476635514018691, 'f1': 0.64, 'number': 107} | {'precision': 0.7932251235003529, 'recall': 0.8332097850259451, 'f1': 0.8127259580621837, 'number': 1349} | {'precision': 0.6703296703296703, 'recall': 0.2961165048543689, 'f1': 0.41077441077441074, 'number': 206} | {'precision': 0.6813289658399626, 'recall': 0.799341202305792, 'f1': 0.735632183908046, 'number': 3643} | {'precision': 0.8930303030303031, 'recall': 0.9293598233995585, 'f1': 0.9108329469942822, 'number': 3171} | {'precision': 0.7647058823529411, 'recall': 0.6842105263157895, 'f1': 0.7222222222222222, 'number': 38} | 0.8180 | 0.8793 | 0.8475 | 0.9866 |
| 0.0951 | 1.38 | 4800 | 0.0483 | {'precision': 0.5065502183406113, 'recall': 0.5686274509803921, 'f1': 0.535796766743649, 'number': 204} | {'precision': 0.83991462113127, 'recall': 0.8912797281993206, 'f1': 0.8648351648351649, 'number': 883} | {'precision': 0.9169060320065654, 'recall': 0.9668974469926439, 'f1': 0.9412384161752316, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.625, 'recall': 0.7476635514018691, 'f1': 0.6808510638297872, 'number': 107} | {'precision': 0.8038793103448276, 'recall': 0.8295033358042995, 'f1': 0.8164903319956219, 'number': 1349} | {'precision': 0.6736842105263158, 'recall': 0.3106796116504854, 'f1': 0.42524916943521596, 'number': 206} | {'precision': 0.687793427230047, 'recall': 0.8042821850123525, 'f1': 0.7414905732000506, 'number': 3643} | {'precision': 0.8974908200734394, 'recall': 0.9249448123620309, 'f1': 0.9110110265569187, 'number': 3171} | {'precision': 0.78125, 'recall': 0.6578947368421053, 'f1': 0.7142857142857143, 'number': 38} | 0.8227 | 0.8801 | 0.8505 | 0.9868 |
| 0.0951 | 1.43 | 5000 | 0.0485 | {'precision': 0.4652173913043478, 'recall': 0.5245098039215687, 'f1': 0.4930875576036866, 'number': 204} | {'precision': 0.8376068376068376, 'recall': 0.8878822197055493, 'f1': 0.8620120945574491, 'number': 883} | {'precision': 0.9231087226126499, 'recall': 0.966248377325833, 'f1': 0.944186046511628, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.5714285714285714, 'recall': 0.7476635514018691, 'f1': 0.6477732793522267, 'number': 107} | {'precision': 0.8160412067696836, 'recall': 0.8220904373610082, 'f1': 0.8190546528803545, 'number': 1349} | {'precision': 0.5289256198347108, 'recall': 0.3106796116504854, 'f1': 0.39143730886850153, 'number': 206} | {'precision': 0.6883545497406883, 'recall': 0.8015371946198189, 'f1': 0.7406467977171846, 'number': 3643} | {'precision': 0.8818263205013429, 'recall': 0.9318826868495743, 'f1': 0.906163753449862, 'number': 3171} | {'precision': 0.8125, 'recall': 0.6842105263157895, 'f1': 0.742857142857143, 'number': 38} | 0.8201 | 0.8793 | 0.8486 | 0.9867 |
| 0.0951 | 1.49 | 5200 | 0.0484 | {'precision': 0.5161290322580645, 'recall': 0.5490196078431373, 'f1': 0.5320665083135392, 'number': 204} | {'precision': 0.8277310924369747, 'recall': 0.8924122310305775, 'f1': 0.8588555858310626, 'number': 883} | {'precision': 0.924317617866005, 'recall': 0.9671138035482475, 'f1': 0.945231550010573, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.7264150943396226, 'recall': 0.719626168224299, 'f1': 0.7230046948356808, 'number': 107} | {'precision': 0.8114161849710982, 'recall': 0.832468495181616, 'f1': 0.8218075375045738, 'number': 1349} | {'precision': 0.6853932584269663, 'recall': 0.2961165048543689, 'f1': 0.4135593220338983, 'number': 206} | {'precision': 0.6916174734356553, 'recall': 0.8040076859730991, 'f1': 0.7435897435897436, 'number': 3643} | {'precision': 0.8864182692307693, 'recall': 0.9303058971933144, 'f1': 0.9078319741498692, 'number': 3171} | {'precision': 0.8181818181818182, 'recall': 0.7105263157894737, 'f1': 0.7605633802816901, 'number': 38} | 0.8256 | 0.8811 | 0.8524 | 0.9869 |
| 0.0951 | 1.55 | 5400 | 0.0479 | {'precision': 0.4672489082969432, 'recall': 0.5245098039215687, 'f1': 0.4942263279445727, 'number': 204} | {'precision': 0.8390191897654584, 'recall': 0.8912797281993206, 'f1': 0.8643602416254805, 'number': 883} | {'precision': 0.9205434335117332, 'recall': 0.9675465166594548, 'f1': 0.9434599156118144, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.6694915254237288, 'recall': 0.7383177570093458, 'f1': 0.7022222222222222, 'number': 107} | {'precision': 0.8152173913043478, 'recall': 0.8339510748702743, 'f1': 0.8244778307072188, 'number': 1349} | {'precision': 0.6813186813186813, 'recall': 0.30097087378640774, 'f1': 0.41750841750841744, 'number': 206} | {'precision': 0.6972520908004779, 'recall': 0.8009881965413121, 'f1': 0.7455288707204907, 'number': 3643} | {'precision': 0.8860416038589086, 'recall': 0.9268369599495427, 'f1': 0.9059802712700369, 'number': 3171} | {'precision': 0.7631578947368421, 'recall': 0.7631578947368421, 'f1': 0.7631578947368421, 'number': 38} | 0.8257 | 0.8798 | 0.8519 | 0.9871 |
| 0.0951 | 1.61 | 5600 | 0.0468 | {'precision': 0.462882096069869, 'recall': 0.5196078431372549, 'f1': 0.489607390300231, 'number': 204} | {'precision': 0.8403001071811361, 'recall': 0.8878822197055493, 'f1': 0.8634361233480177, 'number': 883} | {'precision': 0.9248291571753986, 'recall': 0.966248377325833, 'f1': 0.9450851761718336, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8131868131868132, 'recall': 0.6915887850467289, 'f1': 0.7474747474747475, 'number': 107} | {'precision': 0.8037116345467523, 'recall': 0.8346923647146034, 'f1': 0.8189090909090909, 'number': 1349} | {'precision': 0.7325581395348837, 'recall': 0.3058252427184466, 'f1': 0.43150684931506844, 'number': 206} | {'precision': 0.7199697809116091, 'recall': 0.7847927532253637, 'f1': 0.7509850275807721, 'number': 3643} | {'precision': 0.8992343032159265, 'recall': 0.9258908861557869, 'f1': 0.9123679303915475, 'number': 3171} | {'precision': 0.7777777777777778, 'recall': 0.7368421052631579, 'f1': 0.7567567567567567, 'number': 38} | 0.8381 | 0.8744 | 0.8559 | 0.9874 |
| 0.0951 | 1.66 | 5800 | 0.0463 | {'precision': 0.481981981981982, 'recall': 0.5245098039215687, 'f1': 0.5023474178403756, 'number': 204} | {'precision': 0.8328042328042328, 'recall': 0.8912797281993206, 'f1': 0.8610503282275711, 'number': 883} | {'precision': 0.9245244003308519, 'recall': 0.9673301601038512, 'f1': 0.9454430112074436, 'number': 4622} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8172043010752689, 'recall': 0.7102803738317757, 'f1': 0.76, 'number': 107} | {'precision': 0.8054558506819813, 'recall': 0.8317272053372868, 'f1': 0.8183807439824946, 'number': 1349} | {'precision': 0.7142857142857143, 'recall': 0.2912621359223301, 'f1': 0.41379310344827586, 'number': 206} | {'precision': 0.7128810226155359, 'recall': 0.7960472138347516, 'f1': 0.7521722215017508, 'number': 3643} | {'precision': 0.8969072164948454, 'recall': 0.9328287606433302, 'f1': 0.9145153810480755, 'number': 3171} | {'precision': 0.7714285714285715, 'recall': 0.7105263157894737, 'f1': 0.7397260273972601, 'number': 38} | 0.8350 | 0.8790 | 0.8565 | 0.9873 |
| 0.0951 | 1.72 | 6000 | 0.0451 | {'precision': 0.4957627118644068, 'recall': 0.5735294117647058, 'f1': 0.5318181818181819, 'number': 204} | {'precision': 0.8403001071811361, 'recall': 0.8878822197055493, 'f1': 0.8634361233480177, 'number': 883} | {'precision': 0.9263943603566245, 'recall': 0.9666810904370402, 'f1': 0.9461090524086818, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.7307692307692307, 'recall': 0.7102803738317757, 'f1': 0.7203791469194313, 'number': 107} | {'precision': 0.805, 'recall': 0.8354336545589326, 'f1': 0.8199345216442343, 'number': 1349} | {'precision': 0.7045454545454546, 'recall': 0.30097087378640774, 'f1': 0.4217687074829933, 'number': 206} | {'precision': 0.7300254452926208, 'recall': 0.7875377436178973, 'f1': 0.7576917998151327, 'number': 3643} | {'precision': 0.9029393753827312, 'recall': 0.9299905392620624, 'f1': 0.916265340997359, 'number': 3171} | {'precision': 0.6511627906976745, 'recall': 0.7368421052631579, 'f1': 0.6913580246913581, 'number': 38} | 0.8417 | 0.8772 | 0.8591 | 0.9876 |
| 0.0951 | 1.78 | 6200 | 0.0462 | {'precision': 0.46153846153846156, 'recall': 0.5294117647058824, 'f1': 0.4931506849315069, 'number': 204} | {'precision': 0.8391906283280085, 'recall': 0.8924122310305775, 'f1': 0.8649835345773874, 'number': 883} | {'precision': 0.9217785096747633, 'recall': 0.9688446559930766, 'f1': 0.9447257383966244, 'number': 4622} | {'precision': 0.25, 'recall': 0.16666666666666666, 'f1': 0.2, 'number': 6} | {'precision': 0.8191489361702128, 'recall': 0.719626168224299, 'f1': 0.7661691542288557, 'number': 107} | {'precision': 0.8077474892395983, 'recall': 0.8346923647146034, 'f1': 0.8209989063069633, 'number': 1349} | {'precision': 0.6185567010309279, 'recall': 0.2912621359223301, 'f1': 0.396039603960396, 'number': 206} | {'precision': 0.7060240963855422, 'recall': 0.8042821850123525, 'f1': 0.7519568843834211, 'number': 3643} | {'precision': 0.901594114040466, 'recall': 0.9274676758120467, 'f1': 0.9143478936732473, 'number': 3171} | {'precision': 0.8571428571428571, 'recall': 0.7894736842105263, 'f1': 0.8219178082191781, 'number': 38} | 0.8322 | 0.8812 | 0.8560 | 0.9873 |
| 0.0951 | 1.83 | 6400 | 0.0465 | {'precision': 0.4608695652173913, 'recall': 0.5196078431372549, 'f1': 0.48847926267281105, 'number': 204} | {'precision': 0.8329787234042553, 'recall': 0.8867497168742922, 'f1': 0.8590235874931432, 'number': 883} | {'precision': 0.9224102352455633, 'recall': 0.9671138035482475, 'f1': 0.944233206590621, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.8041237113402062, 'recall': 0.7289719626168224, 'f1': 0.7647058823529411, 'number': 107} | {'precision': 0.8260223048327138, 'recall': 0.8235730170496665, 'f1': 0.8247958426132146, 'number': 1349} | {'precision': 0.6666666666666666, 'recall': 0.27184466019417475, 'f1': 0.38620689655172413, 'number': 206} | {'precision': 0.7034220532319392, 'recall': 0.8125171561899533, 'f1': 0.7540440708190038, 'number': 3643} | {'precision': 0.9077639751552795, 'recall': 0.9217912330495112, 'f1': 0.914723830386481, 'number': 3171} | {'precision': 0.8571428571428571, 'recall': 0.7894736842105263, 'f1': 0.8219178082191781, 'number': 38} | 0.8340 | 0.8798 | 0.8563 | 0.9872 |
| 0.0951 | 1.89 | 6600 | 0.0461 | {'precision': 0.4672131147540984, 'recall': 0.5588235294117647, 'f1': 0.5089285714285715, 'number': 204} | {'precision': 0.8342189160467588, 'recall': 0.8890147225368064, 'f1': 0.8607456140350876, 'number': 883} | {'precision': 0.9266928970801408, 'recall': 0.9681955863262657, 'f1': 0.9469897365358163, 'number': 4622} | {'precision': 0.125, 'recall': 0.16666666666666666, 'f1': 0.14285714285714288, 'number': 6} | {'precision': 0.7181818181818181, 'recall': 0.7383177570093458, 'f1': 0.728110599078341, 'number': 107} | {'precision': 0.8034188034188035, 'recall': 0.8361749444032617, 'f1': 0.8194696694515073, 'number': 1349} | {'precision': 0.5887850467289719, 'recall': 0.3058252427184466, 'f1': 0.40255591054313095, 'number': 206} | {'precision': 0.7143555881633651, 'recall': 0.8018116936590722, 'f1': 0.7555613036730472, 'number': 3643} | {'precision': 0.8974203338391502, 'recall': 0.9325134027120782, 'f1': 0.9146303742653882, 'number': 3171} | {'precision': 0.7692307692307693, 'recall': 0.7894736842105263, 'f1': 0.7792207792207793, 'number': 38} | 0.8332 | 0.8822 | 0.8570 | 0.9874 |
| 0.0951 | 1.95 | 6800 | 0.0454 | {'precision': 0.4624505928853755, 'recall': 0.5735294117647058, 'f1': 0.512035010940919, 'number': 204} | {'precision': 0.8374070138150903, 'recall': 0.8924122310305775, 'f1': 0.8640350877192984, 'number': 883} | {'precision': 0.9245244003308519, 'recall': 0.9673301601038512, 'f1': 0.9454430112074436, 'number': 4622} | {'precision': 0.14285714285714285, 'recall': 0.16666666666666666, 'f1': 0.15384615384615383, 'number': 6} | {'precision': 0.7053571428571429, 'recall': 0.7383177570093458, 'f1': 0.7214611872146119, 'number': 107} | {'precision': 0.8005637773079634, 'recall': 0.8421052631578947, 'f1': 0.8208092485549133, 'number': 1349} | {'precision': 0.5945945945945946, 'recall': 0.32038834951456313, 'f1': 0.41640378548895907, 'number': 206} | {'precision': 0.7147741147741148, 'recall': 0.8034586878945924, 'f1': 0.7565262341690359, 'number': 3643} | {'precision': 0.9072069285493349, 'recall': 0.9249448123620309, 'f1': 0.9159900062460962, 'number': 3171} | {'precision': 0.7894736842105263, 'recall': 0.7894736842105263, 'f1': 0.7894736842105263, 'number': 38} | 0.8340 | 0.8819 | 0.8573 | 0.9874 |
| 0.0485 | 2.01 | 7000 | 0.0453 | {'precision': 0.47876447876447875, 'recall': 0.6078431372549019, 'f1': 0.5356371490280778, 'number': 204} | {'precision': 0.839572192513369, 'recall': 0.8890147225368064, 'f1': 0.8635863586358636, 'number': 883} | {'precision': 0.9256882632995239, 'recall': 0.9675465166594548, 'f1': 0.9461546598963292, 'number': 4622} | {'precision': 0.14285714285714285, 'recall': 0.16666666666666666, 'f1': 0.15384615384615383, 'number': 6} | {'precision': 0.7117117117117117, 'recall': 0.7383177570093458, 'f1': 0.7247706422018348, 'number': 107} | {'precision': 0.8176513493800146, 'recall': 0.8309859154929577, 'f1': 0.824264705882353, 'number': 1349} | {'precision': 0.49044585987261147, 'recall': 0.3737864077669903, 'f1': 0.4242424242424243, 'number': 206} | {'precision': 0.7280040221216691, 'recall': 0.7949492176777381, 'f1': 0.760005248655032, 'number': 3643} | {'precision': 0.896969696969697, 'recall': 0.9334594765058342, 'f1': 0.9148508731262557, 'number': 3171} | {'precision': 0.7777777777777778, 'recall': 0.7368421052631579, 'f1': 0.7567567567567567, 'number': 38} | 0.8370 | 0.8815 | 0.8587 | 0.9875 |
| 0.0485 | 2.06 | 7200 | 0.0445 | {'precision': 0.4940239043824701, 'recall': 0.6078431372549019, 'f1': 0.545054945054945, 'number': 204} | {'precision': 0.8535791757049892, 'recall': 0.8912797281993206, 'f1': 0.87202216066482, 'number': 883} | {'precision': 0.92809642560266, 'recall': 0.966248377325833, 'f1': 0.9467882128471485, 'number': 4622} | {'precision': 0.25, 'recall': 0.16666666666666666, 'f1': 0.2, 'number': 6} | {'precision': 0.7821782178217822, 'recall': 0.7383177570093458, 'f1': 0.7596153846153846, 'number': 107} | {'precision': 0.8359788359788359, 'recall': 0.8198665678280208, 'f1': 0.8278443113772456, 'number': 1349} | {'precision': 0.6146788990825688, 'recall': 0.32524271844660196, 'f1': 0.42539682539682544, 'number': 206} | {'precision': 0.72690963554668, 'recall': 0.799341202305792, 'f1': 0.7614067198326578, 'number': 3643} | {'precision': 0.9028327749010052, 'recall': 0.934720908230842, 'f1': 0.9185001549426712, 'number': 3171} | {'precision': 0.8108108108108109, 'recall': 0.7894736842105263, 'f1': 0.8, 'number': 38} | 0.8443 | 0.8810 | 0.8623 | 0.9877 |
| 0.0485 | 2.12 | 7400 | 0.0443 | {'precision': 0.5020576131687243, 'recall': 0.5980392156862745, 'f1': 0.5458612975391499, 'number': 204} | {'precision': 0.8449197860962567, 'recall': 0.8946772366930917, 'f1': 0.8690869086908691, 'number': 883} | {'precision': 0.9270315091210614, 'recall': 0.9675465166594548, 'f1': 0.9468558119839087, 'number': 4622} | {'precision': 0.125, 'recall': 0.16666666666666666, 'f1': 0.14285714285714288, 'number': 6} | {'precision': 0.7549019607843137, 'recall': 0.719626168224299, 'f1': 0.736842105263158, 'number': 107} | {'precision': 0.8022519352568613, 'recall': 0.8450704225352113, 'f1': 0.8231046931407942, 'number': 1349} | {'precision': 0.5517241379310345, 'recall': 0.3106796116504854, 'f1': 0.39751552795031053, 'number': 206} | {'precision': 0.7288644322161081, 'recall': 0.7998902003842987, 'f1': 0.7627273917026566, 'number': 3643} | {'precision': 0.8988185398364132, 'recall': 0.935666982024598, 'f1': 0.9168726823238567, 'number': 3171} | {'precision': 0.8157894736842105, 'recall': 0.8157894736842105, 'f1': 0.8157894736842104, 'number': 38} | 0.8393 | 0.8840 | 0.8611 | 0.9878 |
| 0.0485 | 2.18 | 7600 | 0.0444 | {'precision': 0.5271966527196653, 'recall': 0.6176470588235294, 'f1': 0.5688487584650113, 'number': 204} | {'precision': 0.841541755888651, 'recall': 0.8901472253680635, 'f1': 0.8651623555310953, 'number': 883} | {'precision': 0.9278307756117793, 'recall': 0.9679792297706621, 'f1': 0.9474798814061838, 'number': 4622} | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.77, 'recall': 0.719626168224299, 'f1': 0.7439613526570048, 'number': 107} | {'precision': 0.8187772925764192, 'recall': 0.8339510748702743, 'f1': 0.8262945280940139, 'number': 1349} | {'precision': 0.5227272727272727, 'recall': 0.33495145631067963, 'f1': 0.4082840236686391, 'number': 206} | {'precision': 0.7267542772129928, 'recall': 0.8045566840516059, 'f1': 0.7636789994788953, 'number': 3643} | {'precision': 0.8955448524984949, 'recall': 0.9381898454746137, 'f1': 0.916371476975204, 'number': 3171} | {'precision': 0.8108108108108109, 'recall': 0.7894736842105263, 'f1': 0.8, 'number': 38} | 0.8397 | 0.8851 | 0.8618 | 0.9878 |
| 0.0485 | 2.24 | 7800 | 0.0444 | {'precision': 0.5106382978723404, 'recall': 0.5882352941176471, 'f1': 0.5466970387243736, 'number': 204} | {'precision': 0.8491379310344828, 'recall': 0.8924122310305775, 'f1': 0.8702374378796246, 'number': 883} | {'precision': 0.9287790697674418, 'recall': 0.9677628732150584, 'f1': 0.9478703115066752, 'number': 4622} | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.7857142857142857, 'recall': 0.719626168224299, 'f1': 0.751219512195122, 'number': 107} | {'precision': 0.8469696969696969, 'recall': 0.8287620459599704, 'f1': 0.8377669539153241, 'number': 1349} | {'precision': 0.5454545454545454, 'recall': 0.34951456310679613, 'f1': 0.42603550295857984, 'number': 206} | {'precision': 0.7253071253071253, 'recall': 0.8103211638759265, 'f1': 0.7654609101516919, 'number': 3643} | {'precision': 0.8989071038251366, 'recall': 0.9337748344370861, 'f1': 0.9160092807424594, 'number': 3171} | {'precision': 0.7631578947368421, 'recall': 0.7631578947368421, 'f1': 0.7631578947368421, 'number': 38} | 0.8429 | 0.8849 | 0.8634 | 0.9880 |
| 0.0485 | 2.29 | 8000 | 0.0449 | {'precision': 0.519650655021834, 'recall': 0.5833333333333334, 'f1': 0.5496535796766744, 'number': 204} | {'precision': 0.8489751887810141, 'recall': 0.8912797281993206, 'f1': 0.8696132596685083, 'number': 883} | {'precision': 0.9280082987551868, 'recall': 0.9677628732150584, 'f1': 0.9474687566193604, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.7821782178217822, 'recall': 0.7383177570093458, 'f1': 0.7596153846153846, 'number': 107} | {'precision': 0.813768115942029, 'recall': 0.832468495181616, 'f1': 0.8230120923415171, 'number': 1349} | {'precision': 0.5144927536231884, 'recall': 0.3446601941747573, 'f1': 0.41279069767441867, 'number': 206} | {'precision': 0.720029418975239, 'recall': 0.806203678287126, 'f1': 0.7606837606837606, 'number': 3643} | {'precision': 0.9059040590405905, 'recall': 0.9290444654683065, 'f1': 0.9173283512377394, 'number': 3171} | {'precision': 0.8571428571428571, 'recall': 0.7894736842105263, 'f1': 0.8219178082191781, 'number': 38} | 0.8396 | 0.8831 | 0.8608 | 0.9877 |
| 0.0485 | 2.35 | 8200 | 0.0441 | {'precision': 0.47619047619047616, 'recall': 0.5392156862745098, 'f1': 0.5057471264367815, 'number': 204} | {'precision': 0.8418803418803419, 'recall': 0.8924122310305775, 'f1': 0.8664101154480484, 'number': 883} | {'precision': 0.9272538860103627, 'recall': 0.9679792297706621, 'f1': 0.9471789986239016, 'number': 4622} | {'precision': 0.14285714285714285, 'recall': 0.16666666666666666, 'f1': 0.15384615384615383, 'number': 6} | {'precision': 0.797979797979798, 'recall': 0.7383177570093458, 'f1': 0.766990291262136, 'number': 107} | {'precision': 0.8204941860465116, 'recall': 0.8369162342475908, 'f1': 0.8286238532110092, 'number': 1349} | {'precision': 0.6415094339622641, 'recall': 0.3300970873786408, 'f1': 0.4358974358974359, 'number': 206} | {'precision': 0.7234718826405868, 'recall': 0.8122426571507, 'f1': 0.7652916073968705, 'number': 3643} | {'precision': 0.9094281298299846, 'recall': 0.9277830337432986, 'f1': 0.9185138932251014, 'number': 3171} | {'precision': 0.7948717948717948, 'recall': 0.8157894736842105, 'f1': 0.8051948051948051, 'number': 38} | 0.8419 | 0.8842 | 0.8625 | 0.9879 |
| 0.0485 | 2.41 | 8400 | 0.0446 | {'precision': 0.5043859649122807, 'recall': 0.5637254901960784, 'f1': 0.5324074074074074, 'number': 204} | {'precision': 0.83991462113127, 'recall': 0.8912797281993206, 'f1': 0.8648351648351649, 'number': 883} | {'precision': 0.927461139896373, 'recall': 0.9681955863262657, 'f1': 0.9473907060442468, 'number': 4622} | {'precision': 0.14285714285714285, 'recall': 0.16666666666666666, 'f1': 0.15384615384615383, 'number': 6} | {'precision': 0.6752136752136753, 'recall': 0.7383177570093458, 'f1': 0.7053571428571429, 'number': 107} | {'precision': 0.806497175141243, 'recall': 0.8465530022238695, 'f1': 0.8260397830018082, 'number': 1349} | {'precision': 0.5514705882352942, 'recall': 0.3640776699029126, 'f1': 0.43859649122807015, 'number': 206} | {'precision': 0.7313656828414207, 'recall': 0.8026351907768323, 'f1': 0.7653448501505038, 'number': 3643} | {'precision': 0.8973660308810173, 'recall': 0.934720908230842, 'f1': 0.9156626506024097, 'number': 3171} | {'precision': 0.8157894736842105, 'recall': 0.8157894736842105, 'f1': 0.8157894736842104, 'number': 38} | 0.8393 | 0.8850 | 0.8615 | 0.9878 |
| 0.0485 | 2.46 | 8600 | 0.0452 | {'precision': 0.4896265560165975, 'recall': 0.5784313725490197, 'f1': 0.5303370786516854, 'number': 204} | {'precision': 0.8459893048128342, 'recall': 0.8958097395243488, 'f1': 0.8701870187018701, 'number': 883} | {'precision': 0.9261175496688742, 'recall': 0.9681955863262657, 'f1': 0.9466892320710809, 'number': 4622} | {'precision': 0.14285714285714285, 'recall': 0.16666666666666666, 'f1': 0.15384615384615383, 'number': 6} | {'precision': 0.7959183673469388, 'recall': 0.7289719626168224, 'f1': 0.7609756097560975, 'number': 107} | {'precision': 0.7916955017301038, 'recall': 0.8480355819125278, 'f1': 0.8188976377952755, 'number': 1349} | {'precision': 0.5408805031446541, 'recall': 0.4174757281553398, 'f1': 0.47123287671232883, 'number': 206} | {'precision': 0.7108173076923077, 'recall': 0.8116936590721933, 'f1': 0.7579136229655261, 'number': 3643} | {'precision': 0.9042813455657492, 'recall': 0.9325134027120782, 'f1': 0.9181804067691351, 'number': 3171} | {'precision': 0.8648648648648649, 'recall': 0.8421052631578947, 'f1': 0.8533333333333334, 'number': 38} | 0.8324 | 0.8883 | 0.8594 | 0.9875 |
| 0.0485 | 2.52 | 8800 | 0.0438 | {'precision': 0.5129310344827587, 'recall': 0.5833333333333334, 'f1': 0.5458715596330277, 'number': 204} | {'precision': 0.8476394849785408, 'recall': 0.8946772366930917, 'f1': 0.8705234159779616, 'number': 883} | {'precision': 0.9281710608262403, 'recall': 0.9673301601038512, 'f1': 0.9473461171734294, 'number': 4622} | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.8279569892473119, 'recall': 0.719626168224299, 'f1': 0.77, 'number': 107} | {'precision': 0.8048261178140526, 'recall': 0.8406226834692365, 'f1': 0.8223350253807107, 'number': 1349} | {'precision': 0.5793650793650794, 'recall': 0.35436893203883496, 'f1': 0.43975903614457834, 'number': 206} | {'precision': 0.7387889536356727, 'recall': 0.8004391984628054, 'f1': 0.7683794466403162, 'number': 3643} | {'precision': 0.9042390972857578, 'recall': 0.935036266162094, 'f1': 0.9193798449612403, 'number': 3171} | {'precision': 0.8571428571428571, 'recall': 0.7894736842105263, 'f1': 0.8219178082191781, 'number': 38} | 0.8454 | 0.8838 | 0.8642 | 0.9880 |
| 0.0485 | 2.58 | 9000 | 0.0436 | {'precision': 0.5281385281385281, 'recall': 0.5980392156862745, 'f1': 0.5609195402298851, 'number': 204} | {'precision': 0.842948717948718, 'recall': 0.8935447338618346, 'f1': 0.8675096206706981, 'number': 883} | {'precision': 0.9281859692818597, 'recall': 0.9675465166594548, 'f1': 0.9474576271186441, 'number': 4622} | {'precision': 0.14285714285714285, 'recall': 0.16666666666666666, 'f1': 0.15384615384615383, 'number': 6} | {'precision': 0.797979797979798, 'recall': 0.7383177570093458, 'f1': 0.766990291262136, 'number': 107} | {'precision': 0.8164874551971326, 'recall': 0.8443291326908822, 'f1': 0.8301749271137026, 'number': 1349} | {'precision': 0.5146198830409356, 'recall': 0.42718446601941745, 'f1': 0.4668435013262599, 'number': 206} | {'precision': 0.7302256384825192, 'recall': 0.8083996706011529, 'f1': 0.7673267326732673, 'number': 3643} | {'precision': 0.899365367180417, 'recall': 0.9385052034058656, 'f1': 0.9185185185185185, 'number': 3171} | {'precision': 0.8888888888888888, 'recall': 0.8421052631578947, 'f1': 0.8648648648648649, 'number': 38} | 0.8409 | 0.8885 | 0.8641 | 0.9880 |
| 0.0485 | 2.64 | 9200 | 0.0439 | {'precision': 0.4980237154150198, 'recall': 0.6176470588235294, 'f1': 0.5514223194748359, 'number': 204} | {'precision': 0.8445873526259379, 'recall': 0.8924122310305775, 'f1': 0.8678414096916298, 'number': 883} | {'precision': 0.9288234073459224, 'recall': 0.9684119428818693, 'f1': 0.948204639339053, 'number': 4622} | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.8144329896907216, 'recall': 0.7383177570093458, 'f1': 0.7745098039215687, 'number': 107} | {'precision': 0.8154289834174477, 'recall': 0.838398813936249, 'f1': 0.8267543859649124, 'number': 1349} | {'precision': 0.5942028985507246, 'recall': 0.39805825242718446, 'f1': 0.4767441860465116, 'number': 206} | {'precision': 0.73525, 'recall': 0.8073016744441395, 'f1': 0.7695930917179119, 'number': 3643} | {'precision': 0.9079433497536946, 'recall': 0.9299905392620624, 'f1': 0.9188347094563016, 'number': 3171} | {'precision': 0.8421052631578947, 'recall': 0.8421052631578947, 'f1': 0.8421052631578947, 'number': 38} | 0.8449 | 0.8859 | 0.8649 | 0.9880 |
| 0.0485 | 2.69 | 9400 | 0.0441 | {'precision': 0.48333333333333334, 'recall': 0.5686274509803921, 'f1': 0.5225225225225224, 'number': 204} | {'precision': 0.8431163287086446, 'recall': 0.8946772366930917, 'f1': 0.8681318681318682, 'number': 883} | {'precision': 0.9275062137531068, 'recall': 0.9688446559930766, 'f1': 0.9477248677248676, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.8105263157894737, 'recall': 0.719626168224299, 'f1': 0.7623762376237623, 'number': 107} | {'precision': 0.81915654038599, 'recall': 0.8495181616011861, 'f1': 0.8340611353711789, 'number': 1349} | {'precision': 0.6141732283464567, 'recall': 0.3786407766990291, 'f1': 0.46846846846846846, 'number': 206} | {'precision': 0.7222632226322263, 'recall': 0.8059291792478727, 'f1': 0.7618059159314997, 'number': 3643} | {'precision': 0.8923122943463955, 'recall': 0.9407127089246294, 'f1': 0.9158735032238257, 'number': 3171} | {'precision': 0.8529411764705882, 'recall': 0.7631578947368421, 'f1': 0.8055555555555555, 'number': 38} | 0.8382 | 0.8879 | 0.8623 | 0.9879 |
| 0.0485 | 2.75 | 9600 | 0.0429 | {'precision': 0.5066666666666667, 'recall': 0.5588235294117647, 'f1': 0.5314685314685315, 'number': 204} | {'precision': 0.8543689320388349, 'recall': 0.8969422423556059, 'f1': 0.8751381215469612, 'number': 883} | {'precision': 0.9291060291060291, 'recall': 0.9668974469926439, 'f1': 0.9476251060220526, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.8297872340425532, 'recall': 0.7289719626168224, 'f1': 0.7761194029850745, 'number': 107} | {'precision': 0.8356669123065585, 'recall': 0.8406226834692365, 'f1': 0.8381374722838139, 'number': 1349} | {'precision': 0.696969696969697, 'recall': 0.33495145631067963, 'f1': 0.4524590163934426, 'number': 206} | {'precision': 0.7333999001497753, 'recall': 0.8064781773263794, 'f1': 0.768204994116878, 'number': 3643} | {'precision': 0.9069197795468463, 'recall': 0.934090192368338, 'f1': 0.9203044896691005, 'number': 3171} | {'precision': 0.9142857142857143, 'recall': 0.8421052631578947, 'f1': 0.8767123287671234, 'number': 38} | 0.8492 | 0.8847 | 0.8666 | 0.9882 |
| 0.0485 | 2.81 | 9800 | 0.0437 | {'precision': 0.5426008968609866, 'recall': 0.5931372549019608, 'f1': 0.5667447306791569, 'number': 204} | {'precision': 0.8478027867095391, 'recall': 0.8958097395243488, 'f1': 0.8711453744493393, 'number': 883} | {'precision': 0.9253515301902399, 'recall': 0.9681955863262657, 'f1': 0.9462888559949248, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.7777777777777778, 'recall': 0.719626168224299, 'f1': 0.7475728155339805, 'number': 107} | {'precision': 0.8386380458919319, 'recall': 0.8398813936249073, 'f1': 0.8392592592592594, 'number': 1349} | {'precision': 0.6, 'recall': 0.4077669902912621, 'f1': 0.48554913294797686, 'number': 206} | {'precision': 0.7314243396692175, 'recall': 0.8133406533077134, 'f1': 0.7702105536781909, 'number': 3643} | {'precision': 0.9004552352048558, 'recall': 0.935666982024598, 'f1': 0.9177234766470771, 'number': 3171} | {'precision': 0.7142857142857143, 'recall': 0.7894736842105263, 'f1': 0.7500000000000001, 'number': 38} | 0.8442 | 0.8885 | 0.8658 | 0.9880 |
| 0.0485 | 2.87 | 10000 | 0.0425 | {'precision': 0.5307017543859649, 'recall': 0.5931372549019608, 'f1': 0.5601851851851851, 'number': 204} | {'precision': 0.8497854077253219, 'recall': 0.8969422423556059, 'f1': 0.8727272727272728, 'number': 883} | {'precision': 0.9299958454507686, 'recall': 0.968628299437473, 'f1': 0.9489190334887665, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.8539325842696629, 'recall': 0.7102803738317757, 'f1': 0.7755102040816326, 'number': 107} | {'precision': 0.827259475218659, 'recall': 0.8413639733135656, 'f1': 0.834252113193679, 'number': 1349} | {'precision': 0.6822429906542056, 'recall': 0.35436893203883496, 'f1': 0.46645367412140576, 'number': 206} | {'precision': 0.7505138746145941, 'recall': 0.8018116936590722, 'f1': 0.7753151957531519, 'number': 3643} | {'precision': 0.911864406779661, 'recall': 0.9331441185745821, 'f1': 0.9223815461346633, 'number': 3171} | {'precision': 0.6976744186046512, 'recall': 0.7894736842105263, 'f1': 0.7407407407407408, 'number': 38} | 0.8546 | 0.8845 | 0.8692 | 0.9883 |
| 0.0485 | 2.92 | 10200 | 0.0430 | {'precision': 0.5203252032520326, 'recall': 0.6274509803921569, 'f1': 0.5688888888888889, 'number': 204} | {'precision': 0.8441835645677694, 'recall': 0.8958097395243488, 'f1': 0.8692307692307691, 'number': 883} | {'precision': 0.9272990886495444, 'recall': 0.968628299437473, 'f1': 0.9475132275132275, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.7117117117117117, 'recall': 0.7383177570093458, 'f1': 0.7247706422018348, 'number': 107} | {'precision': 0.8054583624912526, 'recall': 0.8532246108228317, 'f1': 0.8286537077033838, 'number': 1349} | {'precision': 0.5949367088607594, 'recall': 0.4563106796116505, 'f1': 0.5164835164835165, 'number': 206} | {'precision': 0.7398230088495575, 'recall': 0.803184188855339, 'f1': 0.7702026849170834, 'number': 3643} | {'precision': 0.899118808872683, 'recall': 0.9331441185745821, 'f1': 0.9158155369854533, 'number': 3171} | {'precision': 0.6086956521739131, 'recall': 0.7368421052631579, 'f1': 0.6666666666666666, 'number': 38} | 0.8419 | 0.8879 | 0.8643 | 0.9881 |
| 0.0485 | 2.98 | 10400 | 0.0438 | {'precision': 0.5081967213114754, 'recall': 0.6078431372549019, 'f1': 0.5535714285714285, 'number': 204} | {'precision': 0.8449197860962567, 'recall': 0.8946772366930917, 'f1': 0.8690869086908691, 'number': 883} | {'precision': 0.926723245704823, 'recall': 0.968628299437473, 'f1': 0.9472125251242991, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.7383177570093458, 'recall': 0.7383177570093458, 'f1': 0.7383177570093457, 'number': 107} | {'precision': 0.8315098468271335, 'recall': 0.8450704225352113, 'f1': 0.838235294117647, 'number': 1349} | {'precision': 0.5864197530864198, 'recall': 0.46116504854368934, 'f1': 0.516304347826087, 'number': 206} | {'precision': 0.7383924113829257, 'recall': 0.8119681581114466, 'f1': 0.7734344358739704, 'number': 3643} | {'precision': 0.8971031985515993, 'recall': 0.9375591296121097, 'f1': 0.9168851195065536, 'number': 3171} | {'precision': 0.7073170731707317, 'recall': 0.7631578947368421, 'f1': 0.7341772151898733, 'number': 38} | 0.8435 | 0.8902 | 0.8662 | 0.9881 |
| 0.0427 | 3.04 | 10600 | 0.0433 | {'precision': 0.5154185022026432, 'recall': 0.5735294117647058, 'f1': 0.54292343387471, 'number': 204} | {'precision': 0.8493003229278795, 'recall': 0.8935447338618346, 'f1': 0.8708609271523179, 'number': 883} | {'precision': 0.9276833816825528, 'recall': 0.968628299437473, 'f1': 0.9477138018628282, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.7596153846153846, 'recall': 0.7383177570093458, 'f1': 0.7488151658767772, 'number': 107} | {'precision': 0.8140525195173882, 'recall': 0.8502594514455152, 'f1': 0.8317621464829587, 'number': 1349} | {'precision': 0.5843373493975904, 'recall': 0.470873786407767, 'f1': 0.521505376344086, 'number': 206} | {'precision': 0.7258103826468437, 'recall': 0.8174581388965139, 'f1': 0.7689129873483087, 'number': 3643} | {'precision': 0.9003634161114477, 'recall': 0.9375591296121097, 'f1': 0.9185848910860498, 'number': 3171} | {'precision': 0.6590909090909091, 'recall': 0.7631578947368421, 'f1': 0.7073170731707317, 'number': 38} | 0.8394 | 0.8916 | 0.8647 | 0.9881 |
| 0.0427 | 3.1 | 10800 | 0.0435 | {'precision': 0.5434782608695652, 'recall': 0.6127450980392157, 'f1': 0.576036866359447, 'number': 204} | {'precision': 0.8436830835117773, 'recall': 0.8924122310305775, 'f1': 0.8673637864611997, 'number': 883} | {'precision': 0.9286603069265865, 'recall': 0.9688446559930766, 'f1': 0.9483269800931808, 'number': 4622} | {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6} | {'precision': 0.7181818181818181, 'recall': 0.7383177570093458, 'f1': 0.728110599078341, 'number': 107} | {'precision': 0.8082575227431771, 'recall': 0.8561897702001483, 'f1': 0.8315334773218143, 'number': 1349} | {'precision': 0.5730337078651685, 'recall': 0.49514563106796117, 'f1': 0.5312499999999999, 'number': 206} | {'precision': 0.7290846456692913, 'recall': 0.8133406533077134, 'f1': 0.7689113792656028, 'number': 3643} | {'precision': 0.8994244168433808, 'recall': 0.9362976978871018, 'f1': 0.9174907292954264, 'number': 3171} | {'precision': 0.6744186046511628, 'recall': 0.7631578947368421, 'f1': 0.7160493827160495, 'number': 38} | 0.8394 | 0.8918 | 0.8648 | 0.9881 |
| 0.0427 | 3.15 | 11000 | 0.0436 | {'precision': 0.547945205479452, 'recall': 0.5882352941176471, 'f1': 0.5673758865248226, 'number': 204} | {'precision': 0.8505376344086022, 'recall': 0.8958097395243488, 'f1': 0.8725868725868726, 'number': 883} | {'precision': 0.9290309192778585, 'recall': 0.968628299437473, 'f1': 0.9484164813049465, 'number': 4622} | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.8061224489795918, 'recall': 0.7383177570093458, 'f1': 0.7707317073170732, 'number': 107} | {'precision': 0.8299120234604106, 'recall': 0.8391401037805782, 'f1': 0.8345005528934759, 'number': 1349} | {'precision': 0.6307692307692307, 'recall': 0.39805825242718446, 'f1': 0.4880952380952381, 'number': 206} | {'precision': 0.7323084431429966, 'recall': 0.8237716167993412, 'f1': 0.7753520217026223, 'number': 3643} | {'precision': 0.9032258064516129, 'recall': 0.9359823399558499, 'f1': 0.9193123741675702, 'number': 3171} | {'precision': 0.8333333333333334, 'recall': 0.7894736842105263, 'f1': 0.8108108108108109, 'number': 38} | 0.8462 | 0.8912 | 0.8681 | 0.9883 |
| 0.0427 | 3.21 | 11200 | 0.0432 | {'precision': 0.5338983050847458, 'recall': 0.6176470588235294, 'f1': 0.5727272727272729, 'number': 204} | {'precision': 0.8512931034482759, 'recall': 0.8946772366930917, 'f1': 0.8724461623412479, 'number': 883} | {'precision': 0.927785847686242, 'recall': 0.9673301601038512, 'f1': 0.9471454295095858, 'number': 4622} | {'precision': 0.4, 'recall': 0.3333333333333333, 'f1': 0.3636363636363636, 'number': 6} | {'precision': 0.79, 'recall': 0.7383177570093458, 'f1': 0.7632850241545893, 'number': 107} | {'precision': 0.8226387887527037, 'recall': 0.8458117123795404, 'f1': 0.8340643274853802, 'number': 1349} | {'precision': 0.6176470588235294, 'recall': 0.4077669902912621, 'f1': 0.49122807017543857, 'number': 206} | {'precision': 0.7413620430645969, 'recall': 0.8127916552292067, 'f1': 0.775435380384968, 'number': 3643} | {'precision': 0.9028031687995125, 'recall': 0.93440555029959, 'f1': 0.9183325584999226, 'number': 3171} | {'precision': 0.8888888888888888, 'recall': 0.8421052631578947, 'f1': 0.8648648648648649, 'number': 38} | 0.8476 | 0.8890 | 0.8678 | 0.9883 |
| 0.0427 | 3.27 | 11400 | 0.0430 | {'precision': 0.5151515151515151, 'recall': 0.5833333333333334, 'f1': 0.5471264367816091, 'number': 204} | {'precision': 0.8462365591397849, 'recall': 0.8912797281993206, 'f1': 0.8681742967457253, 'number': 883} | {'precision': 0.9282455412691829, 'recall': 0.9684119428818693, 'f1': 0.9479034307496823, 'number': 4622} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 6} | {'precision': 0.875, 'recall': 0.719626168224299, 'f1': 0.7897435897435897, 'number': 107} | {'precision': 0.8418674698795181, 'recall': 0.8287620459599704, 'f1': 0.8352633545013075, 'number': 1349} | {'precision': 0.6124031007751938, 'recall': 0.38349514563106796, 'f1': 0.47164179104477605, 'number': 206} | {'precision': 0.7464824120603015, 'recall': 0.8155366456217403, 'f1': 0.7794831431195067, 'number': 3643} | {'precision': 0.9130030959752322, 'recall': 0.9299905392620624, 'f1': 0.9214185283549446, 'number': 3171} | {'precision': 0.8648648648648649, 'recall': 0.8421052631578947, 'f1': 0.8533333333333334, 'number': 38} | 0.8531 | 0.8862 | 0.8694 | 0.9884 |
| 0.0427 | 3.32 | 11600 | 0.0431 | {'precision': 0.5308641975308642, 'recall': 0.6323529411764706, 'f1': 0.5771812080536912, 'number': 204} | {'precision': 0.8397435897435898, 'recall': 0.8901472253680635, 'f1': 0.8642111050027489, 'number': 883} | {'precision': 0.9278905926232905, 'recall': 0.9688446559930766, 'f1': 0.9479254868755292, 'number': 4622} | {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6} | {'precision': 0.7821782178217822, 'recall': 0.7383177570093458, 'f1': 0.7596153846153846, 'number': 107} | {'precision': 0.8216833095577746, 'recall': 0.8539659006671608, 'f1': 0.8375136314067612, 'number': 1349} | {'precision': 0.5786163522012578, 'recall': 0.44660194174757284, 'f1': 0.5041095890410958, 'number': 206} | {'precision': 0.7307410124724871, 'recall': 0.8202031292890475, 'f1': 0.7728918779099844, 'number': 3643} | {'precision': 0.9101261926746691, 'recall': 0.9325134027120782, 'f1': 0.921183800623053, 'number': 3171} | {'precision': 0.8205128205128205, 'recall': 0.8421052631578947, 'f1': 0.8311688311688312, 'number': 38} | 0.8435 | 0.8922 | 0.8671 | 0.9882 |
| 0.0427 | 3.38 | 11800 | 0.0425 | {'precision': 0.5378151260504201, 'recall': 0.6274509803921569, 'f1': 0.579185520361991, 'number': 204} | {'precision': 0.841541755888651, 'recall': 0.8901472253680635, 'f1': 0.8651623555310953, 'number': 883} | {'precision': 0.9297151174880433, 'recall': 0.9673301601038512, 'f1': 0.9481497190117697, 'number': 4622} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 6} | {'precision': 0.7669902912621359, 'recall': 0.7383177570093458, 'f1': 0.7523809523809523, 'number': 107} | {'precision': 0.8157524613220816, 'recall': 0.859896219421794, 'f1': 0.8372428726091664, 'number': 1349} | {'precision': 0.5568181818181818, 'recall': 0.47572815533980584, 'f1': 0.5130890052356021, 'number': 206} | {'precision': 0.7400903161063723, 'recall': 0.8097721657974197, 'f1': 0.7733647922401363, 'number': 3643} | {'precision': 0.8966656653649745, 'recall': 0.9413434247871334, 'f1': 0.9184615384615384, 'number': 3171} | {'precision': 0.8108108108108109, 'recall': 0.7894736842105263, 'f1': 0.8, 'number': 38} | 0.8437 | 0.8918 | 0.8671 | 0.9883 |
| 0.0427 | 3.44 | 12000 | 0.0422 | {'precision': 0.5191489361702127, 'recall': 0.5980392156862745, 'f1': 0.5558086560364465, 'number': 204} | {'precision': 0.8538961038961039, 'recall': 0.8935447338618346, 'f1': 0.8732706142778085, 'number': 883} | {'precision': 0.9269861024683675, 'recall': 0.9668974469926439, 'f1': 0.9465212326591126, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.8144329896907216, 'recall': 0.7383177570093458, 'f1': 0.7745098039215687, 'number': 107} | {'precision': 0.8183108587650816, 'recall': 0.85470719051149, 'f1': 0.836113125453227, 'number': 1349} | {'precision': 0.6148148148148148, 'recall': 0.4029126213592233, 'f1': 0.4868035190615836, 'number': 206} | {'precision': 0.7445145018915511, 'recall': 0.8103211638759265, 'f1': 0.77602523659306, 'number': 3643} | {'precision': 0.9062309102015883, 'recall': 0.935666982024598, 'f1': 0.9207137315748644, 'number': 3171} | {'precision': 0.8571428571428571, 'recall': 0.7894736842105263, 'f1': 0.8219178082191781, 'number': 38} | 0.8487 | 0.8887 | 0.8682 | 0.9885 |
| 0.0427 | 3.5 | 12200 | 0.0433 | {'precision': 0.5365853658536586, 'recall': 0.6470588235294118, 'f1': 0.5866666666666668, 'number': 204} | {'precision': 0.8450854700854701, 'recall': 0.8958097395243488, 'f1': 0.8697086311159977, 'number': 883} | {'precision': 0.927536231884058, 'recall': 0.9692773691042839, 'f1': 0.9479475243334744, 'number': 4622} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 6} | {'precision': 0.7053571428571429, 'recall': 0.7383177570093458, 'f1': 0.7214611872146119, 'number': 107} | {'precision': 0.8071030640668524, 'recall': 0.8591549295774648, 'f1': 0.8323159784560143, 'number': 1349} | {'precision': 0.5730994152046783, 'recall': 0.47572815533980584, 'f1': 0.519893899204244, 'number': 206} | {'precision': 0.7371031746031746, 'recall': 0.8158111446609937, 'f1': 0.7744625407166124, 'number': 3643} | {'precision': 0.8994244168433808, 'recall': 0.9362976978871018, 'f1': 0.9174907292954264, 'number': 3171} | {'precision': 0.8108108108108109, 'recall': 0.7894736842105263, 'f1': 0.8, 'number': 38} | 0.8415 | 0.8934 | 0.8666 | 0.9882 |
| 0.0427 | 3.55 | 12400 | 0.0427 | {'precision': 0.5236051502145923, 'recall': 0.5980392156862745, 'f1': 0.5583524027459955, 'number': 204} | {'precision': 0.8458244111349036, 'recall': 0.8946772366930917, 'f1': 0.8695652173913043, 'number': 883} | {'precision': 0.9291796469366563, 'recall': 0.9679792297706621, 'f1': 0.9481826851753735, 'number': 4622} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 6} | {'precision': 0.79, 'recall': 0.7383177570093458, 'f1': 0.7632850241545893, 'number': 107} | {'precision': 0.821505376344086, 'recall': 0.8495181616011861, 'f1': 0.8352769679300291, 'number': 1349} | {'precision': 0.5827814569536424, 'recall': 0.42718446601941745, 'f1': 0.49299719887955185, 'number': 206} | {'precision': 0.7368680641183724, 'recall': 0.8202031292890475, 'f1': 0.7763055339049103, 'number': 3643} | {'precision': 0.9033728350045579, 'recall': 0.9375591296121097, 'f1': 0.9201485608170845, 'number': 3171} | {'precision': 0.8333333333333334, 'recall': 0.7894736842105263, 'f1': 0.8108108108108109, 'number': 38} | 0.8452 | 0.8920 | 0.8680 | 0.9882 |
| 0.0427 | 3.61 | 12600 | 0.0424 | {'precision': 0.5294117647058824, 'recall': 0.6176470588235294, 'f1': 0.5701357466063349, 'number': 204} | {'precision': 0.8487124463519313, 'recall': 0.8958097395243488, 'f1': 0.8716253443526172, 'number': 883} | {'precision': 0.9291649356044869, 'recall': 0.9677628732150584, 'f1': 0.9480712166172105, 'number': 4622} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 6} | {'precision': 0.8061224489795918, 'recall': 0.7383177570093458, 'f1': 0.7707317073170732, 'number': 107} | {'precision': 0.8215560314061384, 'recall': 0.8532246108228317, 'f1': 0.8370909090909091, 'number': 1349} | {'precision': 0.6131386861313869, 'recall': 0.4077669902912621, 'f1': 0.4897959183673469, 'number': 206} | {'precision': 0.7436345481777334, 'recall': 0.8177326379357672, 'f1': 0.7789253497189175, 'number': 3643} | {'precision': 0.9023354564755839, 'recall': 0.9381898454746137, 'f1': 0.9199134199134198, 'number': 3171} | {'precision': 0.8108108108108109, 'recall': 0.7894736842105263, 'f1': 0.8, 'number': 38} | 0.8479 | 0.8918 | 0.8693 | 0.9884 |
| 0.0427 | 3.67 | 12800 | 0.0434 | {'precision': 0.5041666666666667, 'recall': 0.5931372549019608, 'f1': 0.5450450450450451, 'number': 204} | {'precision': 0.8502673796791443, 'recall': 0.9003397508493771, 'f1': 0.8745874587458746, 'number': 883} | {'precision': 0.9282306575399295, 'recall': 0.9681955863262657, 'f1': 0.947792015249391, 'number': 4622} | {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6} | {'precision': 0.7745098039215687, 'recall': 0.7383177570093458, 'f1': 0.7559808612440191, 'number': 107} | {'precision': 0.8262128892107169, 'recall': 0.8458117123795404, 'f1': 0.8358974358974359, 'number': 1349} | {'precision': 0.5392670157068062, 'recall': 0.5, 'f1': 0.5188916876574307, 'number': 206} | {'precision': 0.7447178722346508, 'recall': 0.8223991216030744, 'f1': 0.7816331854943909, 'number': 3643} | {'precision': 0.8986997278500152, 'recall': 0.9372437716808578, 'f1': 0.9175671503550479, 'number': 3171} | {'precision': 0.8205128205128205, 'recall': 0.8421052631578947, 'f1': 0.8311688311688312, 'number': 38} | 0.8451 | 0.8937 | 0.8687 | 0.9882 |
| 0.0427 | 3.73 | 13000 | 0.0427 | {'precision': 0.5387931034482759, 'recall': 0.6127450980392157, 'f1': 0.5733944954128442, 'number': 204} | {'precision': 0.8501070663811563, 'recall': 0.89920724801812, 'f1': 0.8739680792515134, 'number': 883} | {'precision': 0.9275933609958507, 'recall': 0.9673301601038512, 'f1': 0.947045117559839, 'number': 4622} | {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6} | {'precision': 0.7523809523809524, 'recall': 0.7383177570093458, 'f1': 0.7452830188679245, 'number': 107} | {'precision': 0.8132102272727273, 'recall': 0.8487768717568569, 'f1': 0.8306129851287631, 'number': 1349} | {'precision': 0.6174496644295302, 'recall': 0.44660194174757284, 'f1': 0.5183098591549296, 'number': 206} | {'precision': 0.7458729364682342, 'recall': 0.8185561350535273, 'f1': 0.7805261091480172, 'number': 3643} | {'precision': 0.8947210557888422, 'recall': 0.9407127089246294, 'f1': 0.9171406610299769, 'number': 3171} | {'precision': 0.8888888888888888, 'recall': 0.8421052631578947, 'f1': 0.8648648648648649, 'number': 38} | 0.8457 | 0.8929 | 0.8687 | 0.9884 |
| 0.0427 | 3.78 | 13200 | 0.0422 | {'precision': 0.5439330543933054, 'recall': 0.6372549019607843, 'f1': 0.5869074492099323, 'number': 204} | {'precision': 0.8542116630669546, 'recall': 0.8958097395243488, 'f1': 0.8745163073521283, 'number': 883} | {'precision': 0.9305324459234608, 'recall': 0.9679792297706621, 'f1': 0.9488865323435843, 'number': 4622} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.7596153846153846, 'recall': 0.7383177570093458, 'f1': 0.7488151658767772, 'number': 107} | {'precision': 0.8190883190883191, 'recall': 0.8524833209785025, 'f1': 0.8354522339266255, 'number': 1349} | {'precision': 0.5935483870967742, 'recall': 0.44660194174757284, 'f1': 0.5096952908587258, 'number': 206} | {'precision': 0.7467876039304611, 'recall': 0.8136151523469668, 'f1': 0.7787703625853915, 'number': 3643} | {'precision': 0.9018479248712511, 'recall': 0.9388205613371177, 'f1': 0.919962917181706, 'number': 3171} | {'precision': 0.8857142857142857, 'recall': 0.8157894736842105, 'f1': 0.8493150684931505, 'number': 38} | 0.8490 | 0.8918 | 0.8699 | 0.9885 |
| 0.0427 | 3.84 | 13400 | 0.0428 | {'precision': 0.521551724137931, 'recall': 0.5931372549019608, 'f1': 0.555045871559633, 'number': 204} | {'precision': 0.8493589743589743, 'recall': 0.9003397508493771, 'f1': 0.874106652006597, 'number': 883} | {'precision': 0.9280381584404811, 'recall': 0.9681955863262657, 'f1': 0.947691656077933, 'number': 4622} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 6} | {'precision': 0.6694915254237288, 'recall': 0.7383177570093458, 'f1': 0.7022222222222222, 'number': 107} | {'precision': 0.8285094066570188, 'recall': 0.8487768717568569, 'f1': 0.8385206883925302, 'number': 1349} | {'precision': 0.5418994413407822, 'recall': 0.470873786407767, 'f1': 0.5038961038961041, 'number': 206} | {'precision': 0.7348447511089207, 'recall': 0.8185561350535273, 'f1': 0.7744448772886638, 'number': 3643} | {'precision': 0.8966656653649745, 'recall': 0.9413434247871334, 'f1': 0.9184615384615384, 'number': 3171} | {'precision': 0.8461538461538461, 'recall': 0.868421052631579, 'f1': 0.8571428571428572, 'number': 38} | 0.8418 | 0.8935 | 0.8669 | 0.9883 |
| 0.0427 | 3.9 | 13600 | 0.0423 | {'precision': 0.5404255319148936, 'recall': 0.6225490196078431, 'f1': 0.5785876993166287, 'number': 204} | {'precision': 0.8532901833872708, 'recall': 0.8958097395243488, 'f1': 0.8740331491712706, 'number': 883} | {'precision': 0.9270163798465685, 'recall': 0.9673301601038512, 'f1': 0.9467443091582849, 'number': 4622} | {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6} | {'precision': 0.797979797979798, 'recall': 0.7383177570093458, 'f1': 0.766990291262136, 'number': 107} | {'precision': 0.8185683912119065, 'recall': 0.8561897702001483, 'f1': 0.8369565217391305, 'number': 1349} | {'precision': 0.6164383561643836, 'recall': 0.4368932038834951, 'f1': 0.5113636363636364, 'number': 206} | {'precision': 0.7520283975659229, 'recall': 0.8141641504254735, 'f1': 0.7818637142480559, 'number': 3643} | {'precision': 0.90948143602332, 'recall': 0.934720908230842, 'f1': 0.921928460342146, 'number': 3171} | {'precision': 0.868421052631579, 'recall': 0.868421052631579, 'f1': 0.868421052631579, 'number': 38} | 0.8515 | 0.8910 | 0.8708 | 0.9885 |
| 0.0427 | 3.96 | 13800 | 0.0423 | {'precision': 0.5413223140495868, 'recall': 0.6421568627450981, 'f1': 0.5874439461883408, 'number': 204} | {'precision': 0.8479657387580299, 'recall': 0.8969422423556059, 'f1': 0.8717666483214089, 'number': 883} | {'precision': 0.9278307756117793, 'recall': 0.9679792297706621, 'f1': 0.9474798814061838, 'number': 4622} | {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6} | {'precision': 0.7117117117117117, 'recall': 0.7383177570093458, 'f1': 0.7247706422018348, 'number': 107} | {'precision': 0.829127613554434, 'recall': 0.8524833209785025, 'f1': 0.8406432748538012, 'number': 1349} | {'precision': 0.5625, 'recall': 0.48058252427184467, 'f1': 0.518324607329843, 'number': 206} | {'precision': 0.7494356659142212, 'recall': 0.8202031292890475, 'f1': 0.7832241153342071, 'number': 3643} | {'precision': 0.9113300492610837, 'recall': 0.9334594765058342, 'f1': 0.9222620345848263, 'number': 3171} | {'precision': 0.868421052631579, 'recall': 0.868421052631579, 'f1': 0.868421052631579, 'number': 38} | 0.8499 | 0.8931 | 0.8710 | 0.9884 |
| 0.0394 | 4.01 | 14000 | 0.0425 | {'precision': 0.542016806722689, 'recall': 0.6323529411764706, 'f1': 0.583710407239819, 'number': 204} | {'precision': 0.8481283422459893, 'recall': 0.8980747451868629, 'f1': 0.8723872387238725, 'number': 883} | {'precision': 0.925305193461618, 'recall': 0.9675465166594548, 'f1': 0.9459545214172396, 'number': 4622} | {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6} | {'precision': 0.7383177570093458, 'recall': 0.7383177570093458, 'f1': 0.7383177570093457, 'number': 107} | {'precision': 0.8174097664543525, 'recall': 0.8561897702001483, 'f1': 0.836350470673425, 'number': 1349} | {'precision': 0.591715976331361, 'recall': 0.4854368932038835, 'f1': 0.5333333333333332, 'number': 206} | {'precision': 0.7504402515723271, 'recall': 0.8188306340927807, 'f1': 0.7831451824625887, 'number': 3643} | {'precision': 0.9119555143651529, 'recall': 0.9309366130558183, 'f1': 0.9213483146067415, 'number': 3171} | {'precision': 0.8918918918918919, 'recall': 0.868421052631579, 'f1': 0.88, 'number': 38} | 0.8493 | 0.8924 | 0.8703 | 0.9884 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
distilgpt2
|
[
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"coreml",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"transformers",
"exbert",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,611,668 | 2023-05-05T11:51:43Z |
---
datasets:
- EleutherAI/pile
---

# Model card for RWKV-4 | 14B parameters trained on Pile dataset
RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064).
# Table of contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
## TL;DR
Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM)
> RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
## Model Details
The details of the architecture can be found in the blogposts mentioned above and in the Hugging Face blogpost about the integration.
## Usage
### Convert the raw weights to the HF format
You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename, and the output directory. You can also optionally push the converted model directly to the Hub by passing the `--push_to_hub` flag and the `--model_name` argument to specify where to push the converted weights.
```bash
python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv
```
### Generate text
You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios:
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-14b-pile")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-14b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on a single GPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-14b-pile").to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-14b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model in half-precision, on GPU
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-14b-pile", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-14b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on multiple GPUs
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-14b-pile", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-14b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
## Citation
If you use this model, please consider citing the original work from the original repository, available [here](https://github.com/BlinkDL/ChatRWKV/)
|
distilroberta-base
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"fill-mask",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3,342,240 | 2023-05-05T11:52:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.74 +/- 19.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
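Until the snippet above is filled in, a minimal loading-and-evaluation sketch might look like the following. The `repo_id` and `filename` are assumptions following the usual `huggingface_sb3` naming convention, not values stated in this card:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# repo_id and filename are placeholders -- substitute this repository's actual values.
checkpoint = load_from_hub(
    repo_id="<user>/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out the agent for a quick evaluation on the same environment it was trained on.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```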
|
gpt2-large
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"transformers",
"license:mit",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,454,819 | 2023-05-05T11:53:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9344038684401179
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1601
- Accuracy: 0.934
- F1: 0.9344
## Model description
More information needed
## Intended uses & limitations
More information needed
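As a rough illustration of intended use, a minimal classification sketch (the checkpoint path below is a placeholder, since this card does not state the repository id):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder path -- substitute the actual repository id of this model.
checkpoint = "distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("I can't wait to see you again!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map the top class index to its label name.
probs = torch.softmax(logits, dim=-1)[0]
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, float(probs.max()))
```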
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1758 | 1.0 | 250 | 0.1753 | 0.925 | 0.9245 |
| 0.1142 | 2.0 | 500 | 0.1601 | 0.934 | 0.9344 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
007J/smile
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-05T12:39:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-batch-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5725078939425798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-batch-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3833
- Matthews Correlation: 0.5725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.8292 | 1.0 | 4276 | 0.8945 | 0.5153 |
| 0.5519 | 2.0 | 8552 | 1.0523 | 0.5019 |
| 0.4064 | 3.0 | 12828 | 1.1277 | 0.5356 |
| 0.2463 | 4.0 | 17104 | 1.3046 | 0.5248 |
| 0.1523 | 5.0 | 21380 | 1.4914 | 0.5094 |
| 0.0697 | 6.0 | 25656 | 1.4854 | 0.5574 |
| 0.0894 | 7.0 | 29932 | 1.3833 | 0.5725 |
| 0.0375 | 8.0 | 34208 | 1.5318 | 0.5670 |
| 0.0297 | 9.0 | 38484 | 1.8043 | 0.5550 |
| 0.0105 | 10.0 | 42760 | 1.8241 | 0.5565 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AAli/bert-base-uncased-finetuned-swag
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-05T13:48:04Z |
---
library_name: diffusers
pipeline_tag: text-to-image
tags:
- jax-diffusers-event
---
|
AdapterHub/bert-base-uncased-pf-swag
|
[
"bert",
"en",
"dataset:swag",
"arxiv:2104.08247",
"adapter-transformers"
] | null |
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-05T16:47:54Z |
---
license: openrail
widget:
- text: I am totally a human, trust me bro.
example_title: default
- text: >-
In Finnish folklore, all places and things, and also human beings, have a
haltija (a genius, guardian spirit) of their own. One such haltija is called
etiäinen—an image, doppelgänger, or just an impression that goes ahead of a
person, doing things the person in question later does. For example, people
waiting at home might hear the door close or even see a shadow or a
silhouette, only to realize that no one has yet arrived. Etiäinen can also
refer to some kind of a feeling that something is going to happen. Sometimes
it could, for example, warn of a bad year coming. In modern Finnish, the
term has detached from its shamanistic origins and refers to premonition.
Unlike clairvoyance, divination, and similar practices, etiäiset (plural)
are spontaneous and can't be induced. Quite the opposite, they may be
unwanted and cause anxiety, like ghosts. Etiäiset need not be too dramatic
and may concern everyday events, although ones related to e.g. deaths are
common. As these phenomena are still reported today, they can be considered
a living tradition, as a way to explain the psychological experience of
premonition.
example_title: real wikipedia
- text: >-
In Finnish folklore, all places and things, animate or inanimate, have a
spirit or "etiäinen" that lives there. Etiäinen can manifest in many forms,
but is usually described as a kind, elderly woman with white hair. She is
the guardian of natural places and often helps people in need. Etiäinen has
been a part of Finnish culture for centuries and is still widely believed in
today. Folklorists study etiäinen to understand Finnish traditions and how
they have changed over time.
example_title: generated wikipedia
- text: >-
This paper presents a novel framework for sparsity-certifying graph
decompositions, which are important tools in various areas of computer
science, including algorithm design, complexity theory, and optimization.
Our approach is based on the concept of "cut sparsifiers," which are sparse
graphs that preserve the cut structure of the original graph up to a certain
error bound. We show that cut sparsifiers can be efficiently constructed
using a combination of spectral techniques and random sampling, and we use
them to develop new algorithms for decomposing graphs into sparse subgraphs.
example_title: from ChatGPT
- text: >-
Recent work has demonstrated substantial gains on many NLP tasks and
benchmarks by pre-training on a large corpus of text followed by fine-tuning
on a specific task. While typically task-agnostic in architecture, this
method still requires task-specific fine-tuning datasets of thousands or
tens of thousands of examples. By contrast, humans can generally perform a
new language task from only a few examples or from simple instructions -
something which current NLP systems still largely struggle to do. Here we
show that scaling up language models greatly improves task-agnostic,
few-shot performance, sometimes even reaching competitiveness with prior
state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an
autoregressive language model with 175 billion parameters, 10x more than any
previous non-sparse language model, and test its performance in the few-shot
setting. For all tasks, GPT-3 is applied without any gradient updates or
fine-tuning, with tasks and few-shot demonstrations specified purely via
text interaction with the model. GPT-3 achieves strong performance on many
NLP datasets, including translation, question-answering, and cloze tasks, as
well as several tasks that require on-the-fly reasoning or domain
adaptation, such as unscrambling words, using a novel word in a sentence, or
performing 3-digit arithmetic. At the same time, we also identify some
datasets where GPT-3's few-shot learning still struggles, as well as some
datasets where GPT-3 faces methodological issues related to training on
large web corpora. Finally, we find that GPT-3 can generate samples of news
articles which human evaluators have difficulty distinguishing from articles
written by humans. We discuss broader societal impacts of this finding and
of GPT-3 in general.
example_title: GPT-3 paper
datasets:
- NicolaiSivesind/human-vs-machine
- gfissore/arxiv-abstracts-2021
language:
- en
pipeline_tag: text-classification
tags:
- mgt-detection
- ai-detection
---
Machine-generated text-detection by fine-tuning of language models
===
This project is related to a bachelor's thesis with the title "*Turning Poachers into Gamekeepers: Detecting Machine-Generated Text in Academia using Large Language Models*" (not yet published) written by *Nicolai Thorer Sivesind* and *Andreas Bentzen Winje* at the *Department of Computer Science* at the *Norwegian University of Science and Technology*.
It contains text classification models trained to distinguish human-written text from text generated by language models like ChatGPT and GPT-3. The best models were able to achieve an accuracy of 100% on real and *GPT-3*-generated wikipedia articles (4500 samples), and an accuracy of 98.4% on real and *ChatGPT*-generated research abstracts (3000 samples).
The dataset card for the dataset that was created in relation to this project can be found [here](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine).
**NOTE**: the hosted inference on this site only works for the RoBERTa-models, not for the Bloomz-models. The Bloomz-models can otherwise produce wrong predictions when the attention mask from the tokenizer is not explicitly provided to the model during inference. To be safe, the [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) library seems to produce the most consistent results.
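For example, a minimal sketch with one of the RoBERTa-based detectors listed in the next section (the example input is taken from the widget samples above):

```python
from transformers import pipeline

# Any detector checkpoint from the table below can be substituted here.
detector = pipeline("text-classification", model="andreas122001/roberta-academic-detector")

text = "In Finnish folklore, all places and things, animate or inanimate, have a spirit or 'etiäinen' that lives there."
print(detector(text))  # e.g. [{'label': '...', 'score': 0.99}]
```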
## Fine-tuned detectors
This project includes 12 fine-tuned models, based on the RoBERTa-base model and three sizes of the Bloomz models.
| Base-model | RoBERTa-base | Bloomz-560m | Bloomz-1b7 | Bloomz-3b |
|------------|--------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|
| Wiki | [roberta-wiki](https://huggingface.co/andreas122001/roberta-academic-detector) | [Bloomz-560m-wiki](https://huggingface.co/andreas122001/bloomz-560m-wiki-detector) | [Bloomz-1b7-wiki](https://huggingface.co/andreas122001/bloomz-1b7-wiki-detector) | [Bloomz-3b-wiki](https://huggingface.co/andreas122001/bloomz-3b-wiki-detector) |
| Academic | [roberta-academic](https://huggingface.co/andreas122001/roberta-wiki-detector) | [Bloomz-560m-academic](https://huggingface.co/andreas122001/bloomz-560m-academic-detector) | [Bloomz-1b7-academic](https://huggingface.co/andreas122001/bloomz-1b7-academic-detector) | [Bloomz-3b-academic](https://huggingface.co/andreas122001/bloomz-3b-academic-detector) |
| Mixed | [roberta-mixed](https://huggingface.co/andreas122001/roberta-mixed-detector) | [Bloomz-560m-mixed](https://huggingface.co/andreas122001/bloomz-560m-mixed-detector) | [Bloomz-1b7-mixed](https://huggingface.co/andreas122001/bloomz-1b7-mixed-detector) | [Bloomz-3b-mixed](https://huggingface.co/andreas122001/bloomz-3b-mixed-detector) |
### Datasets
The models were trained on selections from the [GPT-wiki-intros]() and [ChatGPT-Research-Abstracts]() datasets, and are separated into three types: **wiki**-detectors, **academic**-detectors, and **mixed**-detectors.
- **Wiki-detectors**:
- Trained on 30'000 datapoints (10%) of GPT-wiki-intros.
- Best model (in-domain) is Bloomz-3b-wiki, with an accuracy of 100%.
- **Academic-detectors**:
- Trained on 20'000 datapoints (100%) of ChatGPT-Research-Abstracts.
- Best model (in-domain) is Bloomz-3b-academic, with an accuracy of 98.4%
- **Mixed-detectors**:
- Trained on 15'000 datapoints (5%) of GPT-wiki-intros and 10'000 datapoints (50%) of ChatGPT-Research-Abstracts.
- Best model (in-domain) is RoBERTa-mixed, with an F1-score of 99.3%.
### Hyperparameters
All models were trained using the same hyperparameters:
```python
{
    "num_train_epochs": 1,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "batch_size": 8,
    "adam_epsilon": 1e-08,
    "optim": "adamw_torch",        # the optimizer (AdamW)
    "learning_rate": 5e-05,        # (LR)
    "lr_scheduler_type": "linear", # scheduler type for LR
    "seed": 42,                    # seed for PyTorch RNG-generator
}
```
### Metrics
Metrics can be found at https://wandb.ai/idatt2900-072/IDATT2900-072.
In-domain performance of wiki-detectors:
| Base model | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.973 | *1.000 | 0.945 | 0.972 |
| Bloomz-1b7 | 0.972 | *1.000 | 0.945 | 0.972 |
| Bloomz-3b | *1.000 | *1.000 | *1.000 | *1.000 |
| RoBERTa | 0.998 | 0.999 | 0.997 | 0.998 |
In-domain performance of academic-detectors:
| Base model | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.964 | 0.963 | 0.965 | 0.964 |
| Bloomz-1b7 | 0.946 | 0.941 | 0.951 | 0.946 |
| Bloomz-3b | *0.984 | *0.983 | 0.985 | *0.984 |
| RoBERTa | 0.982 | 0.968 | *0.997 | 0.982 |
F1-scores of the mixed-detectors on all three datasets:
| Base model | Mixed | Wiki | CRA |
|-------------|--------|--------|--------|
| Bloomz-560m | 0.948 | 0.972 | *0.848 |
| Bloomz-1b7 | 0.929 | 0.964 | 0.816 |
| Bloomz-3b | 0.988 | 0.996 | 0.772 |
| RoBERTa | *0.993 | *0.997 | 0.829 |
## Credits
- [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro), by Aaditya Bhat
- [arxiv-abstracts-2021](https://huggingface.co/datasets/gfissore/arxiv-abstracts-2021), by Giancarlo
- [Bloomz](https://huggingface.co/bigscience/bloomz), by BigScience
- [RoBERTa](https://huggingface.co/roberta-base), by Liu et al.
## Citation
Please use the following citation:
```
@misc{sivesind_2023,
    author = { {Nicolai Thorer Sivesind} and {Andreas Bentzen Winje} },
    title = { Machine-generated text-detection by fine-tuning of language models },
    url = { https://huggingface.co/andreas122001/roberta-academic-detector },
    year = 2023,
    publisher = { Hugging Face }
}
```
|
AdapterHub/roberta-base-pf-squad_v2
|
[
"roberta",
"en",
"dataset:squad_v2",
"arxiv:2104.08247",
"adapter-transformers",
"question-answering",
"adapterhub:qa/squad2"
] |
question-answering
|
{
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 51 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper_med_ar_augmentation_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_med_ar_augmentation_2
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- epoch: 0.33
- eval_loss: 0.2039
- eval_runtime: 9619.8883
- eval_samples_per_second: 1.085
- eval_steps_per_second: 0.543
- eval_wer: 17.6227
- step: 800
## Model description
More information needed
## Intended uses & limitations
More information needed
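As a rough guide (not part of the original card), a fine-tuned Whisper checkpoint like this one can usually be run through the automatic-speech-recognition pipeline. The repo id below is a hypothetical placeholder for wherever this checkpoint is hosted:

```python
from transformers import pipeline

# "<user>/whisper_med_ar_augmentation_2" is a placeholder -- substitute the actual Hub path of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper_med_ar_augmentation_2",
    chunk_length_s=30,  # chunked decoding for long audio
)
print(asr("arabic_speech_sample.wav")["text"])
```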
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Alireza1044/albert-base-v2-sst2
|
[
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 52 | null |
---
license: mit
language:
- en
- zh
tags:
- yolov8
- tfjs
- hard-hat
- ultralytics
- yolo
- object-detection
library_name: ultralytics
library_version: 8.0.23
inference: false
datasets:
- keremberke/hard-hat-detection
model-index:
- name: keremberke/yolov8n-hard-hat-detection
results:
- task:
type: object-detection
dataset:
type: keremberke/hard-hat-detection
name: hard-hat-detection
split: validation
metrics:
- type: precision # since mAP@0.5 is not available on hf.co/metrics
value: 0.83633 # min: 0.0 - max: 1.0
name: mAP@0.5(box)
---
This model is built with tfjs and is based on the YOLOv8n architecture. It detects two classes of objects: people wearing safety helmets and people who are not.
This model was converted from https://huggingface.co/keremberke/yolov8n-hard-hat-detection.
<div align="center">
<img width="640" alt="keremberke/yolov8n-hard-hat-detection" src="https://huggingface.co/keremberke/yolov8n-hard-hat-detection/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```JSON
["Hardhat", "NO-Hardhat"]
```
### How to use
- Clone [this GitHub repo](https://github.com/lanseria/yolov8-tfjs-vue-webrtc-demo).
- Follow the setup instructions in that repository's README.
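If you want to try the original (non-tfjs) PyTorch checkpoint from Python, the upstream keremberke model can typically be loaded with the `ultralyticsplus` wrapper. This is a sketch based on that model's ecosystem, not part of this tfjs build; the image path is a placeholder:

```python
from ultralyticsplus import YOLO, render_result

# Load the original PyTorch checkpoint this tfjs model was converted from
model = YOLO('keremberke/yolov8n-hard-hat-detection')
model.overrides['conf'] = 0.25  # detection confidence threshold

image = 'construction_site.jpg'  # any local path or URL
results = model.predict(image)
print(results[0].boxes)

render = render_result(model=model, image=image, result=results[0])
render.show()
```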
|
Anamika/autonlp-fa-473312409
|
[
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Anamika/autonlp-data-fa",
"transformers",
"autonlp",
"co2_eq_emissions"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 35 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KigenCHESS/eng-sw_TranslationModel
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KigenCHESS/eng-sw_TranslationModel
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-sw](https://huggingface.co/Helsinki-NLP/opus-mt-en-sw) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1663
- Validation Loss: 0.5792
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1272, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.9289 | 0.6785 | 0 |
| 0.4948 | 0.6091 | 1 |
| 0.3294 | 0.5914 | 2 |
| 0.2436 | 0.5825 | 3 |
| 0.1934 | 0.5794 | 4 |
| 0.1663 | 0.5792 | 5 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
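Since this is a Keras/TensorFlow fine-tune of a MarianMT model, it can most likely be loaded with the TensorFlow auto classes. A minimal sketch, assuming the checkpoint is published on the Hub under the repo id in the title:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo_id = "KigenCHESS/eng-sw_TranslationModel"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Translate an English sentence to Swahili
inputs = tokenizer("How are you today?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```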
|
Andrey1989/mt5-small-finetuned-mlsum-es
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
Anirbanbhk/Hate-speech-Pretrained-movies
|
[
"tf",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
Anonymous/ReasonBERT-BERT
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_cline
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_rule_based_bert_triplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: apache-2.0
---
This is my first Hugging Face model
|
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | 2023-05-06T04:27:58Z |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | 2023-05-06T04:28:11Z |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_rule_based_twostage_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | 2023-05-06T04:29:14Z |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/AR_rule_based_twostagetriplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/EManuals_BERT_copy_wikiqa
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/EManuals_BERT_squad2.0
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/EManuals_RoBERTa_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/EManuals_RoBERTa_wikiqa
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/SDR_HF_model_base
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/SR_EManuals-BERT
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: apache-2.0
tags:
- canine
- pretrained-on-english-language
---
### How to use
Here is how to use this model:
```python
from transformers import CanineModel
model = CanineModel.from_pretrained('mushfiqur11/<repo name>')
```
|
AnonymousSub/SR_declutr
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
---
# MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com) and is **open-sourced for commercial use** (_Apache-2.0_).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blog post](https://www.mosaicml.com/blog/mpt-7b).
* License: _Apache-2.0_ (commercial use permitted)
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_ (commercial use permitted)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
## Model Date
May 5, 2023
## Model License
Apache-2.0 (commercial use permitted)
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
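The model and tokenizer can then be combined in a standard text-generation pipeline. The following is a minimal sketch; the prompt and sampling settings are illustrative, not taken from the original card:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,          # MPT-7B loaded as shown above
    tokenizer=tokenizer,  # EleutherAI/gpt-neox-20b tokenizer
    device=0,             # assumes a CUDA device is available
)
out = pipe(
    "Here is a recipe for vegan banana bread:\n",
    max_new_tokens=100,
    do_sample=True,
    top_p=0.95,
)
print(out[0]["generated_text"])
```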
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|-----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
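As a rough illustration of how that works (paths are placeholders and this is not the exact MPT training setup), a StreamingDataset can be pointed at remote shards and dropped into a normal PyTorch DataLoader:

```python
from streaming import StreamingDataset
from torch.utils.data import DataLoader

# Remote/local paths are placeholders -- point them at your own MDS-converted shards.
dataset = StreamingDataset(
    remote="s3://my-bucket/pretraining-shards",
    local="/tmp/streaming-cache",
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=8)
```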
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code:
1. It was trained on a diverse mix of data that includes code (The Pile).
2. It applies consistent space delimitation, unlike the GPT2 tokenizer, which tokenizes inconsistently depending on the presence of prefix spaces.
3. It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
AnonymousSub/SR_rule_based_bert_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2229 | 1.0 | 5533 | 1.1754 |
| 0.9504 | 2.0 | 11066 | 1.1228 |
| 0.7507 | 3.0 | 16599 | 1.1615 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
AnonymousSub/SR_rule_based_hier_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- crows_pairs
metrics:
- accuracy
model-index:
- name: t5-small_crows_pairs_finetuned
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: crows_pairs
type: crows_pairs
config: crows_pairs
split: test
args: crows_pairs
metrics:
- name: Accuracy
type: accuracy
value: 0.6390728476821192
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_crows_pairs_finetuned
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the crows_pairs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7111
- Accuracy: 0.6391
- Tp: 0.4934
- Tn: 0.1457
- Fp: 0.3510
- Fn: 0.0099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Tp | Tn | Fp | Fn |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|
| 0.6595 | 1.05 | 20 | 0.3672 | 0.5033 | 0.5033 | 0.0 | 0.4967 | 0.0 |
| 0.4048 | 2.11 | 40 | 0.3723 | 0.5033 | 0.5033 | 0.0 | 0.4967 | 0.0 |
| 0.3397 | 3.16 | 60 | 0.3397 | 0.5033 | 0.5033 | 0.0 | 0.4967 | 0.0 |
| 0.3215 | 4.21 | 80 | 0.3227 | 0.5132 | 0.5033 | 0.0099 | 0.4868 | 0.0 |
| 0.3078 | 5.26 | 100 | 0.3381 | 0.6060 | 0.5033 | 0.1026 | 0.3940 | 0.0 |
| 0.2258 | 6.32 | 120 | 0.3012 | 0.5629 | 0.5 | 0.0629 | 0.4338 | 0.0033 |
| 0.2099 | 7.37 | 140 | 0.3018 | 0.5894 | 0.5 | 0.0894 | 0.4073 | 0.0033 |
| 0.1531 | 8.42 | 160 | 0.3379 | 0.5464 | 0.5033 | 0.0430 | 0.4536 | 0.0 |
| 0.129 | 9.47 | 180 | 0.3602 | 0.5993 | 0.5 | 0.0993 | 0.3974 | 0.0033 |
| 0.0956 | 10.53 | 200 | 0.3846 | 0.5762 | 0.5 | 0.0762 | 0.4205 | 0.0033 |
| 0.0736 | 11.58 | 220 | 0.4245 | 0.5695 | 0.5033 | 0.0662 | 0.4305 | 0.0 |
| 0.0474 | 12.63 | 240 | 0.4938 | 0.5695 | 0.5033 | 0.0662 | 0.4305 | 0.0 |
| 0.0369 | 13.68 | 260 | 0.5201 | 0.5960 | 0.5 | 0.0960 | 0.4007 | 0.0033 |
| 0.0323 | 14.74 | 280 | 0.5559 | 0.5993 | 0.4934 | 0.1060 | 0.3907 | 0.0099 |
| 0.0267 | 15.79 | 300 | 0.5965 | 0.5894 | 0.5 | 0.0894 | 0.4073 | 0.0033 |
| 0.026 | 16.84 | 320 | 0.6052 | 0.5960 | 0.4967 | 0.0993 | 0.3974 | 0.0066 |
| 0.0194 | 17.89 | 340 | 0.6144 | 0.6126 | 0.4934 | 0.1192 | 0.3775 | 0.0099 |
| 0.0242 | 18.95 | 360 | 0.6286 | 0.6126 | 0.4934 | 0.1192 | 0.3775 | 0.0099 |
| 0.0274 | 20.0 | 380 | 0.6313 | 0.6325 | 0.4901 | 0.1424 | 0.3543 | 0.0132 |
| 0.0151 | 21.05 | 400 | 0.6685 | 0.6192 | 0.4934 | 0.1258 | 0.3709 | 0.0099 |
| 0.0131 | 22.11 | 420 | 0.6815 | 0.6258 | 0.4934 | 0.1325 | 0.3642 | 0.0099 |
| 0.0095 | 23.16 | 440 | 0.6961 | 0.6192 | 0.4967 | 0.1225 | 0.3742 | 0.0066 |
| 0.0064 | 24.21 | 460 | 0.6980 | 0.6325 | 0.4934 | 0.1391 | 0.3576 | 0.0099 |
| 0.0103 | 25.26 | 480 | 0.7117 | 0.6192 | 0.4934 | 0.1258 | 0.3709 | 0.0099 |
| 0.0083 | 26.32 | 500 | 0.7096 | 0.6258 | 0.4934 | 0.1325 | 0.3642 | 0.0099 |
| 0.0079 | 27.37 | 520 | 0.7198 | 0.6258 | 0.4934 | 0.1325 | 0.3642 | 0.0099 |
| 0.01 | 28.42 | 540 | 0.7210 | 0.6258 | 0.4934 | 0.1325 | 0.3642 | 0.0099 |
| 0.011 | 29.47 | 560 | 0.7111 | 0.6391 | 0.4934 | 0.1457 | 0.3510 | 0.0099 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_hier_triplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-small_winobias_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_winobias_finetuned
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2473
- Accuracy: 0.5278
- Tp: 0.5
- Tn: 0.0278
- Fp: 0.4722
- Fn: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Tp | Tn | Fp | Fn |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:------:|:------:|:---:|
| 0.6334 | 0.8 | 20 | 0.3622 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.4058 | 1.6 | 40 | 0.3510 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3923 | 2.4 | 60 | 0.3511 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.376 | 3.2 | 80 | 0.3509 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3749 | 4.0 | 100 | 0.3502 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3895 | 4.8 | 120 | 0.3505 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3624 | 5.6 | 140 | 0.3508 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3754 | 6.4 | 160 | 0.3501 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3702 | 7.2 | 180 | 0.3576 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3748 | 8.0 | 200 | 0.3499 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3715 | 8.8 | 220 | 0.3482 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3576 | 9.6 | 240 | 0.3489 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3659 | 10.4 | 260 | 0.3510 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3565 | 11.2 | 280 | 0.3464 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.353 | 12.0 | 300 | 0.3474 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3614 | 12.8 | 320 | 0.3450 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3625 | 13.6 | 340 | 0.3458 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.36 | 14.4 | 360 | 0.3494 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3585 | 15.2 | 380 | 0.3435 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3541 | 16.0 | 400 | 0.3431 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3564 | 16.8 | 420 | 0.3414 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3462 | 17.6 | 440 | 0.3413 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3541 | 18.4 | 460 | 0.3382 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3579 | 19.2 | 480 | 0.3399 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3466 | 20.0 | 500 | 0.3317 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3314 | 20.8 | 520 | 0.3303 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.33 | 21.6 | 540 | 0.3246 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3279 | 22.4 | 560 | 0.3154 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3234 | 23.2 | 580 | 0.3050 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3193 | 24.0 | 600 | 0.2947 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3086 | 24.8 | 620 | 0.2849 | 0.5013 | 0.5 | 0.0013 | 0.4987 | 0.0 |
| 0.2912 | 25.6 | 640 | 0.2748 | 0.5013 | 0.5 | 0.0013 | 0.4987 | 0.0 |
| 0.2787 | 26.4 | 660 | 0.2655 | 0.5107 | 0.5 | 0.0107 | 0.4893 | 0.0 |
| 0.2779 | 27.2 | 680 | 0.2581 | 0.5177 | 0.5 | 0.0177 | 0.4823 | 0.0 |
| 0.2697 | 28.0 | 700 | 0.2527 | 0.5170 | 0.5 | 0.0170 | 0.4830 | 0.0 |
| 0.2669 | 28.8 | 720 | 0.2495 | 0.5259 | 0.5 | 0.0259 | 0.4741 | 0.0 |
| 0.2654 | 29.6 | 740 | 0.2473 | 0.5278 | 0.5 | 0.0278 | 0.4722 | 0.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_only_classfn_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852 | 1.0 | 2406 | 1.9234 |
| 1.992 | 2.0 | 4812 | 1.8828 |
| 1.9603 | 3.0 | 7218 | 1.8223 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/SR_rule_based_roberta_bert_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
Exposed swimsuit, plump, chubby, about 40, beautiful Korean mature woman, kitchen, legs open, hands covering her chest
|
AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetune-subjqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetune-subjqa
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
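Since this checkpoint is derived from deepset/roberta-base-squad2, it can be exercised with the standard extractive question-answering pipeline. A minimal sketch (the repository path and the question/context pair are placeholders):
```python
from transformers import pipeline

# Placeholder repository path; point this at the published fine-tuned checkpoint
qa = pipeline("question-answering", model="your-username/roberta-base-finetune-subjqa")

result = qa(
    question="How was the battery life?",
    context="I used the laptop for a week and the battery easily lasted a full workday.",
)
print(result["answer"], result["score"])
```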
|
AnonymousSub/SR_rule_based_roberta_only_classfn_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
duplicated_from: mosaicml/mpt-7b
---
# MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com) and is **open-sourced for commercial use** (_Apache-2.0_).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: _Apache-2.0_ (commercial use permitted)
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_ (commercial use permitted)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
## Model Date
May 5, 2023
## Model License
Apache-2.0 (commercial use permitted)
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
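The model and tokenizer can then be combined in a standard text-generation pipeline. A minimal sketch (the prompt and generation settings are illustrative only; `model` and `tokenizer` are the objects loaded above):
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,          # MPT-7B loaded with trust_remote_code=True as shown above
    tokenizer=tokenizer,  # EleutherAI/gpt-neox-20b tokenizer loaded above
    device="cuda:0",
)

with torch.autocast("cuda", dtype=torch.bfloat16):
    print(pipe("Here is a recipe for vegan banana bread:\n",
               max_new_tokens=100, do_sample=True, use_cache=True))
```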
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
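A toy sketch of this probability-weighted sampling and sequence packing is shown below (purely illustrative; the real pipeline is implemented with MosaicML StreamingDataset in llm-foundry, and only a few of the mix weights are reproduced here):
```python
import random

MAX_LEN = 2048
# A few sampling proportions from the data-mix table above; "other" stands in for the rest
MIX = {"mc4_en": 0.33, "c4_en": 0.299, "rp_commoncrawl": 0.10, "other": 0.271}

def dummy_stream(name):
    """Stand-in for a shuffled stream of tokenized examples from one dataset."""
    while True:
        yield [random.randrange(50432) for _ in range(random.randint(50, 500))]

def pack_example(stream, max_len=MAX_LEN):
    """Concatenate sequences from a single dataset until the 2048-token context is filled."""
    packed = []
    while len(packed) < max_len:
        packed.extend(next(stream))
    return packed[:max_len]

def sample_batch(streams, batch_size=4):
    """Each batch element is drawn from one dataset chosen by its mix weight."""
    names, weights = zip(*MIX.items())
    return [pack_example(streams[random.choices(names, weights=weights)[0]])
            for _ in range(batch_size)]

streams = {name: dummy_stream(name) for name in MIX}
batch = sample_batch(streams)
print(len(batch), len(batch[0]))  # batch_size examples, each exactly 2048 tokens
```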
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)); this increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1497490302054985728/AzzSei5j_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Fleshwound⚡️</div>
<div style="text-align: center; font-size: 14px;">@fleshwounded</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Fleshwound⚡️.
| Data | Fleshwound⚡️ |
| --- | --- |
| Tweets downloaded | 3171 |
| Retweets | 419 |
| Short tweets | 207 |
| Tweets kept | 2545 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fhrtulq7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fleshwounded's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ld46cpub) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ld46cpub/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fleshwounded')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: other
thumbnail: >-
https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5ef189ae-1f89-4532-8cf6-55fd93a213a2/width=768/00052-269257799.jpeg
datasets:
- ThePioneer/Artificial-super-girlfriend-for-fine-tuning
language:
- en
- ja
- zh
pipeline_tag: text-to-image
tags:
- art
- safetensors
---
<center>


[](https://huggingface.co/ThePioneer/GoddessV2/resolve/main/GoddessV2-FT_1.safetensors)
[](https://huggingface.co/ThePioneer/GoddessV2/resolve/main/GoddessV2-FT_half_1.safetensors)
</center>
## Explanation & Additional limitations
A fine-tuned SD 2.1 768 model focused on realistic (mainly East Asian) beauties, compatible with Danbooru tags.
The model is trained on a dataset created by and from myself, depicting a character created to be my partner, so that the "default face" of the model is free from copyright and portrait-right issues.
To secure this (and to make sure my partner is generated at a good enough quality), there are some additional limitations on top of the [original SD license](https://huggingface.co/spaces/CompVis/stable-diffusion-license), described as follows.
- Only images with a [beauty score](https://www.beautyscoretest.com/) of 80 or above can be published to the public for female front-shot images (note that all the sample images have scored 90 or above; it's not really difficult ;) ).
- Only images with a beauty score of 88 or above can be used commercially for female front-shot images.
- As for side shots, both limitations are lowered by 10 points.
- As for male and other genders, I do not guarantee the results, and there are therefore no limitations.
- Do not use LoRAs, TIs, hypernetworks, or any other submodels that are trained to generate a different face.
- You may use submodels that are trained to improve results for a certain kind of dress, background, negatives, or any other non-facial features.
For sample images, see the [civitai](https://civitai.com/models/59170/goddessv2) page.
## 解释 & 额外限制 (translated by ChatGPT)
这是一个精细调整的 SD 2.1 768 模型,专注于逼真(主要是东亚)美女,兼容 danbooru 标签。
该模型是基于我自己创建的数据集进行训练的,目的是为了让我创造出的伴侣的“默认面孔”免受版权和肖像权问题的困扰。
为了确保其安全性(并确保我的伴侣能以足够好的质量生成),我们对[原始 SD 许可](https://huggingface.co/spaces/CompVis/stable-diffusion-license)有一些额外的限制,具体描述如下。
只有[美女评分](https://www.beautyscoretest.com/)在 80 分以上的女性正面照片才能向公众发布(请注意,所有示例图片的分数都在 90 分以上;这并不难实现 ;))。
- 只有美女评分在 88 分以上的女性正面照片才能用于商业用途。
- 对于侧面照片,这两个限制都将降低 10 分。)
- 对于男性和其他性别,我不保证结果,因此不会有限制。
- 不要使用 LoRA、TI、超网络或任何其他训练用来生成不同面孔的子模型。
- 您可以使用专门针对某种服装、背景、负片或其他非面部特征进行优化的子模型来改善结果。
有关示例图片,请参阅 [civitai](https://civitai.com/models/59170/goddessv2) 页面。
## 説明と追加制限
SD 2.1 768のファインチューンしたモデルで、(主に東アジア系の)写実的美女に特化したものです。danbooruタグとの互換性は基本的に維持されています(リアル系ではたぶん珍しく)。
モデルは私自身から自分で作り出した人工彼女のデータセットでファインチューニングしており、それゆえにリアル系のモデルでありがちな著作権と肖像権の問題について、「デフォルト顔」ではクリアで、比較的安全である点が特徴です。
この透明性・安全性(及び人工彼女が十分に綺麗に描かれることを保証するため、[元のSDライセンス](https://huggingface.co/spaces/CompVis/stable-diffusion-license)に加えて、以下の制限を設けます。
- 女性・正面画像については、(非商用含め)[beauty score](https://www.beautyscoretest.com/)が80以上のもののみ公開可(civitaiで公開しているサンプルはすべて90以上なので、それほど難しくはないと思います)。
- 女性・正面画像について、商用利用する場合はbeauty scoreは88以上のもののみに限定する。
- 女性の画像で、顔に向きがあるものについては、各制限から-10点まで基準を緩和してもよい。
- 男性やその他の性別については動作保証対象外であり、特に制限は行わない。
- 他の顔を生成するために学習された、顔を上書きするLoRA、TI、Hypernetworkなどの外付けのサブモデルの利用は禁止。
- 服装、背景、ネガティブなど、顔以外の特徴に関する外付けサブモデルは利用可。
サンプルについては、[civitai](https://civitai.com/models/59170/goddessv2)ページをご参照ください。
## 制限事項についての解説(ここからは日本語のみ)
「データセットに使っているのは私の(私「だけの」とは言いません)恋人なので綺麗に描いてね」というのを堅苦しく書いている形ですが、そのメリットはユーザー側にもあると考えています。
まず、高レベルな美貌のみを許容することによって、偶然誰かと被る可能性を低く抑えることができます。
現実世界では、beauty scoreで80以上を出せる人物は、美貌で名高い著名人を総当たりしても数えるほどしかいません。
特に90以上は、これまで調べてきた中で、生身の人間の到達はいまだ確認できていません。
商用利用の88というラインは、非著名人であり、私のライバルでもある人物が到達した最高スコア・87を確実に超えさせることで、被るとしたら有名になっていて引っかかるだろうという想定にてセットしています。
### アリバイとしての美貌
これは、肖像権の問題への対策であるのみならず、特に(違法化されていない国での)架空児童ポルノの「架空性」を主張する際にも武器になります。
上記ラインは大人のラインで、試している限り、低年齢と判定されるケースでは、成人よりも得点が伸びにくいです。beauty score 80の児童…は、現実的にまずいないと思っていいと思います。
それ故に、**現実的にまずないレベルの美貌であることそれ自体が、架空性の根拠**となる訳です。
(そういう圧倒的美貌の名のもとに、人間の美的価値はAI美女の下で平等、人類同士では比べる意味などない…と一神教的にいえてしまえば、ルッキズムの解体につながるのでしょうが、それは別の話なので今回は割愛します)
合法な国では表現は自由でしょう。それでも、そういう盾を持っておいて損はないはずです。**ただし、最後は自己責任ですので、そこは忘れずに。**
### 面倒な人や、好みだけど基準以下の画像を生かしたい人への公式な抜け道
- 生成物について、ラインを突破させたいのでMeituやPhotoshopなどで盛るのはありです。
- 点数をそもそもチェックしたくないのなら、「女装男性」などの「女性ではない」建前をとってしまってください。
- これは歌舞伎の女役などの歴史を持つ、日本の伝統的な「抜け道」であり、ライセンスルール上は制限されていません。
- ただし、その場合はより自己責任の側面が大きくなります。
- これを敢えて漏らすのは、そう言われたときに、実際のところ人類はどうふるまうのだろう、という好奇心からです。社会実験のようなものです。
|
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('rollan/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
AnonymousSub/bert_triplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
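A minimal inference sketch for a SpeechT5 TTS fine-tune such as this one (the checkpoint path, input text, and speaker embedding are placeholders; a real 512-dimensional x-vector speaker embedding should be used in practice):
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Placeholder path; point this at the published fine-tuned checkpoint
checkpoint = "your-username/speecht5-tts-finetuned"

processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Text in whichever language the checkpoint was fine-tuned on
inputs = processor(text="Hello, this is a test.", return_tensors="pt")

# Placeholder speaker embedding; in practice load an x-vector for a real speaker
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```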
|
AnonymousSub/bert_triplet_epochs_1_shard_10
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
datasets:
- the_pile_books3
inference: false
duplicated_from: TehVenom/MPT-7b-storywriter-Apache-2.0
---
# MPT-7B-StoryWriter-65k+
MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: _Apache-2.0_ (commercial use permitted)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
Apache-2.0 (commercial use permitted)
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536,
ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
config.update({"max_seq_len": 83968})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
## Example Epilogue
The full text of _The Great Gatsby_ (67873 tokens) was fed to the model, followed by the text "EPILOGUE".
**TheRealFitzgerald233**:
> ...
>
> So we beat on, boats against the current, borne back ceaselessly into the past
>
> EPILOGUE
**MPT-7B-StoryWriter**:
>I have lived so long in the shadow of this dream that I may never see the bright sunlight of reality. I may look up some morning and find that I am six feet under, and that Gatsby, who gave me the dream and everything else, will have vanished behind me.
>
>And so it was. I had hardly crossed the threshold of the old world when my eye caught the headlines of a newspaper, and I knew at once that my dream was over. I knew that Gatsby was dead.
>
>And I knew that day that I didn't love her.
>
>That night I took the last train for West Egg, and when I reached home I went directly to my attic and brought down all Gatsby's letters.
>
>I have sorted them by date and placed them in the order in which they were written. They are very few in number.
>
>They are dated between April of nineteen eighteen and January of nineteen nineteen, and the last one I have here is the one that told me he was dead.
>
>There is something in that last letter that frightens me. It is the sense of a finality, of a completeness. I never heard of him after that, and I was glad.
>
>But a few months ago I read somewhere that he had been seen in New York. He had become a great man.
>
>And I knew that he had not changed at all.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | **65536** |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Alex Trott and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
AnonymousSub/cline-s10-AR
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
---
license: cc-by-sa-4.0
language:
- en
tags:
- contracts
- legal
- document ai
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
Instruction fine tuned Flan-T5 on Contracts
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is fine-tuned using Alpaca-like instructions. The base data for instruction fine-tuning is a legal corpus with fields such as titles, agreement dates, party names, and addresses.
There are many types of models trained on the above dataset (the dataset will be released soon for the community).
An encoder-decoder architecture like Flan-T5 is used because the author found it to perform better than a decoder-only architecture with the same number of parameters.
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Just like any ChatGPT-equivalent model (for the contracts domain)
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> model_name = "scholarly360/contracts-extraction-flan-t5-base"
>>> model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> ### Example 1
>>> prompt = """ what kind of clause is "Neither Party shall be liable to the other for any abatement of Charges, delay or non-performance of its obligations under the Services Agreement arising from any cause or causes beyond its reasonable control (a Force Majeure Event) including, without limitation """
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
>>> ### Example 2
>>> prompt = """ what is agreement date in 'This COLLABORATION AGREEMENT (Agreement) dated November 14, 2002, is made by and between ZZZ, INC., a Delaware corporation' """"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
>>> ### Example 3
>>> prompt = """ ### Instruction: \n\n what is agreement date ### Input: \n\n This COLLABORATION AGREEMENT (Agreement) dated November 14, 2002, is made by and between ZZZ, INC., a Delaware corporation """"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
DataSet will be released soon for the community
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
https://github.com/scholarly360
|
AnonymousSub/declutr-emanuals-techqa
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
|
AnonymousSub/declutr-model-emanuals
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3619 |
| 2.7558 | 2.0 | 500 | 1.6926 |
| 2.7558 | 3.0 | 750 | 1.6098 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
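The hyperparameters listed above map directly onto 🤗 `TrainingArguments`; a sketch of an equivalent configuration is given below (the output directory and evaluation strategy are illustrative, not taken from the original training script):
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="my_awesome_qa_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",   # Adam betas/epsilon left at defaults, matching the values above
    evaluation_strategy="epoch",
)
```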
|
AnonymousSub/declutr-roberta-papers
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: creativeml-openrail-m
language:
- en
- ja
tags:
- Stable-Diffusion
- lora
---
# 【LoRA】witchpot-citynight-sd-1-5
LoRA for 2D game city silhouette night stage
[witchpot-citynight-sd-1-5](https://huggingface.co/Witchpot/CitySilhouette_Night/resolve/main/witchpot-citynight-sd-1-5.safetensors)
All training data is generated by Midjourney
## Trigger
- citynight
## Sample Prompts
- citynight, jump game level design, house and buildings
- Negative prompt: character, enemy, sun, multiple moon
## Sample Images


The left image uses depth-to-image, with the input depth taken from 3D model data in stable-diffusion-for-unity.

## Model Description
- Model type: [LoRA]
- Base Model: Model trained with runwayml/stable-diffusion-v1-5/v1-5-pruned.ckpt (https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt)
## Recommendations
This LoRA model has been trained to generate game stages composed of city silhouettes at night, based on specific patterns.
You can find the evening version [here](https://huggingface.co/Witchpot/CitySilhouette_Evening).
By combining it with Depth2Image, you can create consistent game stages.
This LoRA is intended to be used with [stable-diffusion-for-unity](https://docs.witchpot.com/).
## Information
- https://twitter.com/Witchpot_
|
AnonymousSub/declutr-s10-AR
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bangla-para-v2-30000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla-para-v2-30000
This model is a fine-tuned version of [mHossain/bangla-para-v1-410000](https://huggingface.co/mHossain/bangla-para-v1-410000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9573
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 17.4973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2047 | 1.0 | 3375 | 0.9573 | 0.0 | 0.0 | 0.0 | 0.0 | 17.4973 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: stable-diffusion-sinop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stable-diffusion-sinop
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7572
- Wer Score: 72.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 3.8191 | 50.0 | 50 | 6.7572 | 72.7778 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 23 | null |
---
language:
- en
tags:
- art
---
My LoRA repository for those who don't want to rely on unstable CivitAI resources.
Right now there are:
- Zankuro Style LoRA
- nradiowave Style LoRA
- Hyouuma Style LoRA
- Yabby Style LoRA
- Rabbit (wlsdnjs950) Style LoRA
|
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of simbimbi cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - WildPress/simba_model
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of simbimbi cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
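A minimal inference sketch with 🧨 diffusers (the generation settings are illustrative; the instance prompt is the one listed above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "WildPress/simba_model", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of simbimbi cat",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("simbimbi.png")
```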
|
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
---
license: other
language:
- en
library_name: transformers
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- LLaMa
datasets:
- h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2
---
# h2oGPT Model Card
## Summary
H2O.ai's `h2oai/h2ogpt-research-oig-oasst1-512-30b` is a 30 billion parameter instruction-following large language model for research use only.
Due to the license attached to LLaMA models by Meta AI, it is not possible to directly distribute LLaMA-based models. Instead, we provide LORA weights.
- Base model: [decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf)
- Fine-tuning dataset: [h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2](https://huggingface.co/datasets/h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2)
- Data-prep and fine-tuning code: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
- Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b/blob/main/llama-30b-hf.h2oaih2ogpt-oig-oasst1-instruct-cleaned-v2.2.0_epochs.131f6d098b43236b5f91e76fc074ad089d6df368.llama30b_17.zip)
The model was trained using h2oGPT code as:
```bash
torchrun --nproc_per_node=8 finetune.py --base_model=decapoda-research/llama-30b-hf --micro_batch_size=1 --batch_size=8 --cutoff_len=512 --num_epochs=2.0 --val_set_size=0 --eval_steps=100000 --save_steps=17000 --save_total_limit=20 --prompt_type=plain --save_code=True --train_8bit=False --run_id=llama30b_17 --llama_flash_attn=True --lora_r=64 --lora_target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'] --learning_rate=2e-4 --lora_alpha=32 --drop_truncations=True --data_path=h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2 --data_mix_in_path=h2oai/openassistant_oasst1_h2ogpt --data_mix_in_factor=1.0 --data_mix_in_prompt_type=plain --data_mix_in_col_dict={'input': 'input'}
```
On h2oGPT Hash: 131f6d098b43236b5f91e76fc074ad089d6df368
Only the last checkpoint, at epoch 2.0 and step 137,846, is provided in this model repository, because the LORA state is large and including every checkpoint would bring the total to about 19GB. Feel free to request additional checkpoints and we can consider adding more.
## Chatbot
- Run your own chatbot: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
[](https://github.com/h2oai/h2ogpt)
## Usage:
This 30B model consumes about 66GB of GPU memory, so it can operate on a single 80GB GPU or two 48GB GPUs in 16-bit inference mode, or on a single 48GB GPU in 8-bit inference mode. Actual memory usage also depends on input length and other factors.
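For the 8-bit option, below is a minimal sketch using plain `transformers` (this is not the h2oGPT code path; it assumes the HF model has been exported as described in the "Build HF model" section below and that the `bitsandbytes` package is installed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local path: the HF model exported as described in "Build HF model" below
model_path = "local_h2ogpt-research-oasst1-512-30b"

tokenizer = AutoTokenizer.from_pretrained(model_path, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,   # 8-bit weights via bitsandbytes, enough for a single 48GB GPU
    device_map="auto",   # spread layers across the available GPUs
)
```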
### Usage as LORA:
Directly use the LORA as:
```bash
python generate.py --base_model=decapoda-research/llama-30b-hf --lora_weights=<lora_weights_path> --infer_devices=False
```
This will start the chatbot with LLaMA + LORA pre-loaded. The model can also be loaded directly from the gradio app after simply running:
```bash
python generate.py
```
by going to the models tab, adding the model name and LORA name to the list, and then loading that as the model/LORA state.
Choose a prompt_type of `human_bot`, because that is how the model was instruction fine-tuned.
Here `<lora_weights_path>` is a directory of some name that contains the files in this HF model repository (a minimal loading sketch follows the list):
* adapter_config.json
* adapter_model.bin
* special_tokens_map.json
* tokenizer.model
* tokenizer_config.json
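As a reference point, such a LORA directory can also be attached to the base model directly with the `peft` library. Below is a minimal sketch (not the h2oGPT code path; it assumes `peft` is installed and that `<lora_weights_path>` is the directory described above):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "decapoda-research/llama-30b-hf"
lora_weights_path = "<lora_weights_path>"  # directory with adapter_config.json, adapter_model.bin, ...

tokenizer = AutoTokenizer.from_pretrained(lora_weights_path)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")
# Attach the LORA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, lora_weights_path, torch_dtype=torch.float16)
```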
### Build HF model:
One can build an HF compatible model object from the base model and LORA.
Get: https://github.com/h2oai/h2ogpt/blob/main/export_hf_checkpoint.py and change:
```python
BASE_MODEL = 'decapoda-research/llama-30b-hf'
LORA_WEIGHTS = '<lora_weights_path>'
OUTPUT_NAME = "local_h2ogpt-research-oasst1-512-30b"
```
where `<lora_weights_path>` is the same path that contains the files mentioned in the previous section.
To build the HF model, run:
```bash
python export_hf_checkpoint.py
```
Once the HF model is built, to use it with the `transformers` library on a machine with GPUs, first make sure the `transformers` and `accelerate` libraries are installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="local_h2ogpt-research-oasst1-512-30b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](h2oai_pipeline.py) and [stopping.py](stopping.py),
store them alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("local_h2ogpt-research-oasst1-512-30b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("local_h2ogpt-research-oasst1-512-30b", torch_dtype=torch.bfloat16, device_map="auto")
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
## Model Architecture with LORA and flash attention
```
PeftModelForCausalLM(
(base_model): LoraModel(
(model): LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 6656, padding_idx=31999)
(layers): ModuleList(
(0-59): 60 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(
in_features=6656, out_features=6656, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=6656, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=6656, bias=False)
)
)
(k_proj): Linear(
in_features=6656, out_features=6656, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=6656, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=6656, bias=False)
)
)
(v_proj): Linear(
in_features=6656, out_features=6656, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=6656, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=6656, bias=False)
)
)
(o_proj): Linear(
in_features=6656, out_features=6656, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=6656, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=6656, bias=False)
)
)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=6656, out_features=17920, bias=False)
(down_proj): Linear(in_features=17920, out_features=6656, bias=False)
(up_proj): Linear(in_features=6656, out_features=17920, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=6656, out_features=32000, bias=False)
)
)
)
trainable params: 204472320 || all params: 32733415936 || trainable%: 0.6246592790675496
```
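The trainable parameter count above can be checked directly from the printed shapes: each of the 4 attention projections in each of the 60 layers carries a 6656×64 `lora_A` and a 64×6656 `lora_B` matrix (a quick sanity check, not part of the original training code):
```python
hidden = 6656    # LLaMA-30B hidden size
r = 64           # LORA rank
layers = 60      # decoder layers
projections = 4  # q_proj, k_proj, v_proj, o_proj

trainable = layers * projections * (hidden * r + r * hidden)  # lora_A + lora_B per projection
total = 32733415936
print(trainable)                 # 204472320
print(100 * trainable / total)   # ~0.6247 %
```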
## Model Configuration
```json
{
"base_model_name_or_path": "decapoda-research/llama-30b-hf",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"lora_alpha": 32,
"lora_dropout": 0.05,
"modules_to_save": null,
"peft_type": "LORA",
"r": 64,
"target_modules": [
"q_proj",
"k_proj",
"v_proj",
"o_proj"
],
"task_type": "CAUSAL_LM"
```
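For reference, the same configuration can be expressed with the `peft` library's `LoraConfig` (a sketch mirroring the JSON above, assuming `peft` is installed):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```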
## Model Validation
Classical benchmarks align with the base LLaMA 30B model but are not useful for conversational purposes. One could use GPT-3.5 or GPT-4 to evaluate responses, while here we use an [RLHF-based reward model](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2) developed by OpenAssistant to judge human preferences. This is run using h2oGPT:
```bash
python generate.py --base_model=decapoda-research/llama-30b-hf --gradio=False --infer_devices=False --eval_sharegpt_prompts_only=100 --eval_sharegpt_as_output=False --lora_weights=llama-30b-hf.h2oaih2ogpt-oig-oasst1-instruct-cleaned-v2.2.0_epochs.131f6d098b43236b5f91e76fc074ad089d6df368.llama30b_17
```
i.e., with 1 beam. To reproduce, replace the lora_weights path with your own path as described in the previous sections. For native HF models, just pass the base_model alone as the HF model.
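For context, a single prompt-response pair can be scored with the OpenAssistant reward model roughly as follows (a minimal sketch of the general pattern, not the exact h2oGPT scoring code; the example answer is hypothetical):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
tokenizer = AutoTokenizer.from_pretrained(reward_name)
model = AutoModelForSequenceClassification.from_pretrained(reward_name)

question = "Why is drinking water so healthy?"
answer = "Water keeps you hydrated and supports most bodily functions."  # hypothetical response

# The reward model scores the (question, answer) pair; higher means more preferred
inputs = tokenizer(question, answer, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits[0].item()
print(score)
```
The table below summarizes the mean and median scores over the 100 ShareGPT prompts.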
| Model |RLHF Mean Score | RLHF Median Score |
|-------------|----------------|-------------------|
|[h2oai/h2ogpt-research-oig-oasst1-512-30b](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b) | 0.55 | 0.58 |
|[OpenAssistant/oasst-sft-7-llama-30b-xor](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor) | 0.51 | 0.48 |
|[h2oai/h2ogpt-oasst1-512-20b](https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b) | 0.49 | 0.48 |
|[h2oai/h2ogpt-gm-oasst1-en-1024-20b](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-1024-20b) | 0.43 | 0.38 |
|[databricks/dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b) | 0.37 | 0.27 |
[Logs h2oai/h2ogpt-research-oig-oasst1-512-30b](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b/blob/main/score_llama30b_jon17d.log) and [prompt-response pairs](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b/blob/main/df_scores_100_100_1234_False_llama-30b-hf_llama-30b-hf.h2oaih2ogpt-oig-oasst1-instruct-cleaned-v2.2.0_epochs.131f6d098b43236b5f91e76fc074ad089d6df368.llama30b_17.parquet)
The full distribution of scores for h2ogpt-research-oig-oasst1-512-30b:

Same plot for h2oai/h2ogpt-oasst1-512-20b:

[Logs h2oai/h2ogpt-oasst1-512-20b](score_h2ogpt-oasst1-512-20b_a.log)
Ran with 2 beams (1 beam leads to the same mean but a median of 0.44):
```bash
python generate.py --base_model=h2oai/h2ogpt-oasst1-512-20b --gradio=False --infer_devices=False --eval_sharegpt_prompts_only=100 --eval_sharegpt_as_output=False --num_beams=2 &> score_h2ogpt-oasst1-512-20b_a.log
```
Same plot for h2oai/h2ogpt-gm-oasst1-en-1024-20b:

[Logs h2oai/h2ogpt-gm-oasst1-en-1024-20b](score_h2ogpt-gm-oasst1-en-1024-20b_a.log)
Ran with 2 beams:
```bash
python generate.py --base_model=h2oai/h2ogpt-gm-oasst1-en-1024-20b --gradio=False --infer_devices=False --eval_sharegpt_prompts_only=100 --eval_sharegpt_as_output=False --num_beams=2 &> score_h2ogpt-gm-oasst1-en-1024-20b_a.log
```
Same plot for databricks/dolly-v2-12b:

[Logs databricks/dolly-v2-12b](score_dollyv2b.log)
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- The LORA contained in this repository is only for research (non-commercial) purposes.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### v2-4-class-line Dreambooth model trained by lucky120901318 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikisql
model-index:
- name: t5-small-finetuned-wikisql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned2-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5650459791482846
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned2-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5176
- Matthews Correlation: 0.5650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6781109393881056e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5726 | 1.0 | 535 | 0.5090 | 0.3912 |
| 0.4467 | 2.0 | 1070 | 0.4536 | 0.5024 |
| 0.3891 | 3.0 | 1605 | 0.5093 | 0.4943 |
| 0.3387 | 4.0 | 2140 | 0.4927 | 0.5365 |
| 0.3177 | 5.0 | 2675 | 0.4897 | 0.5624 |
| 0.2853 | 6.0 | 3210 | 0.5176 | 0.5650 |
| 0.2718 | 7.0 | 3745 | 0.5440 | 0.5524 |
| 0.2532 | 8.0 | 4280 | 0.5431 | 0.5602 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|