| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-08 06:28:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 546 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-08 06:27:40) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
lyhhhhhh/mt5-small-finetuned-test-class3
|
lyhhhhhh
| 2022-11-29T23:53:45Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T23:52:43Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mt5-small-finetuned-test-class3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-test-class3
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.2262
- Validation Loss: 1.8557
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
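The card does not include a usage example; a minimal sketch (assuming the TensorFlow weights load through the standard `text2text-generation` pipeline, with a placeholder input because the training task is undocumented) could look like this:
```python
from transformers import pipeline

# Minimal usage sketch; the input string is a placeholder since the
# fine-tuning task/dataset is not documented in this card.
generator = pipeline(
    "text2text-generation",
    model="lyhhhhhh/mt5-small-finetuned-test-class3",
    framework="tf",  # the repo ships TensorFlow weights (per the "tf" tag)
)
print(generator("your input text here", max_length=64))
```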
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 64112, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.0384 | 2.3228 | 0 |
| 2.7913 | 2.1021 | 1 |
| 2.5264 | 1.9837 | 2 |
| 2.4013 | 1.9247 | 3 |
| 2.3268 | 1.8783 | 4 |
| 2.2781 | 1.8712 | 5 |
| 2.2462 | 1.8563 | 6 |
| 2.2262 | 1.8557 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ririying/my-finetuned-mt5-class0
|
ririying
| 2022-11-29T23:52:59Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T23:52:19Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-finetuned-mt5-class0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-finetuned-mt5-class0
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0505
- Validation Loss: 1.7733
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 107192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5536 | 2.1181 | 0 |
| 2.4769 | 1.9296 | 1 |
| 2.2865 | 1.8569 | 2 |
| 2.1928 | 1.8241 | 3 |
| 2.1344 | 1.8022 | 4 |
| 2.0953 | 1.7880 | 5 |
| 2.0671 | 1.7811 | 6 |
| 2.0505 | 1.7733 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/billym2k-elonmusk-lexfridman
|
huggingtweets
| 2022-11-29T23:52:17Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-29T23:29:00Z |
---
language: en
thumbnail: http://www.huggingtweets.com/billym2k-elonmusk-lexfridman/1669765849257/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/956331551435960322/OaqR8pAB_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521369379941715968/bg0KgPWm_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Lex Fridman & Shibetoshi Nakamoto</div>
<div style="text-align: center; font-size: 14px;">@billym2k-elonmusk-lexfridman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Lex Fridman & Shibetoshi Nakamoto.
| Data | Elon Musk | Lex Fridman | Shibetoshi Nakamoto |
| --- | --- | --- | --- |
| Tweets downloaded | 3198 | 2411 | 341 |
| Retweets | 127 | 253 | 1 |
| Short tweets | 965 | 49 | 49 |
| Tweets kept | 2106 | 2109 | 291 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nokzkg2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @billym2k-elonmusk-lexfridman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1cnzg4dt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1cnzg4dt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/billym2k-elonmusk-lexfridman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lyhhhhhh/mt5-small-finetuned-test-class2
|
lyhhhhhh
| 2022-11-29T23:51:06Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T23:50:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mt5-small-finetuned-test-class2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-test-class2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3668
- Validation Loss: 1.9101
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 44464, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4693 | 2.3874 | 0 |
| 3.0670 | 2.1557 | 1 |
| 2.7416 | 2.0547 | 2 |
| 2.5824 | 2.0089 | 3 |
| 2.4922 | 1.9654 | 4 |
| 2.4299 | 1.9344 | 5 |
| 2.3906 | 1.9255 | 6 |
| 2.3668 | 1.9101 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SirVeggie/cutesexyrobutts
|
SirVeggie
| 2022-11-29T23:42:19Z | 0 | 16 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-23T21:37:56Z |
---
license: creativeml-openrail-m
---
# Cutesexyrobutts stable diffusion model
Original artist: Cutesexyrobutts\
Patreon: https://www.patreon.com/cutesexyrobutts
## Basic explanation
Token and Class words are what guide the AI to produce images similar to the trained style/object/character.
Include any mix of these words in the prompt to produce varying results, or exclude them to have a less pronounced effect.
There is usually at least a slight stylistic effect even without the words, but it is recommended to include at least one.
Adding the token word/phrase followed by the class word/phrase at the start of the prompt produces results most similar to the trained concept, but they can be included elsewhere as well. Some models produce better results when not including all token/class words.
3k models are more flexible, while 5k models produce images closer to the trained concept.
I recommend 2k/3k models for normal use, and 5k/6k models for model merging and use without token/class words.
However, it can also be very prompt-specific. I highly recommend self-experimentation.
These models are subject to the same legal concerns as their base models.
## Comparison
The epoch 5 version was from earlier in the waifu diffusion 1.3 training process, so it is easier to produce more varied, non-anime results.
Robutts-any is the newest and best model.
## robutts-any
```
token: m_robutts
class: illustration style
base: anything v3
```
## robutts
```
token: §
class: robutts
base: waifu diffusion 1.3
```
## robutts_e5
```
token: §
class: robutts
base: waifu diffusion 1.3-e5
```
|
lct-rug-2022/edos-2023-baseline-microsoft-deberta-v3-base-label_vector
|
lct-rug-2022
| 2022-11-29T23:41:42Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T11:22:40Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: edos-2023-baseline-microsoft-deberta-v3-base-label_vector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-microsoft-deberta-v3-base-label_vector
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5524
- F1: 0.3162
## Model description
More information needed
## Intended uses & limitations
More information needed
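No usage example is provided; a minimal sketch (assuming the standard `text-classification` pipeline, with a placeholder input sentence; the fine-grained EDOS label names come from the model's own config):
```python
from transformers import pipeline

# Minimal usage sketch; the example sentence is a placeholder and the
# predicted label names are whatever the fine-tuned config defines.
classifier = pipeline(
    "text-classification",
    model="lct-rug-2022/edos-2023-baseline-microsoft-deberta-v3-base-label_vector",
)
print(classifier("An example sentence to classify."))
```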
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1209 | 1.18 | 100 | 1.9990 | 0.0801 |
| 1.7997 | 2.35 | 200 | 1.7293 | 0.1349 |
| 1.5749 | 3.53 | 300 | 1.6080 | 0.2431 |
| 1.3674 | 4.71 | 400 | 1.5411 | 0.2793 |
| 1.2214 | 5.88 | 500 | 1.5285 | 0.2980 |
| 1.0752 | 7.06 | 600 | 1.5165 | 0.3054 |
| 0.9899 | 8.24 | 700 | 1.5210 | 0.3186 |
| 0.8733 | 9.41 | 800 | 1.5385 | 0.3134 |
| 0.8578 | 10.59 | 900 | 1.5524 | 0.3162 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
davidaponte/sd-class-butterflies-32
|
davidaponte
| 2022-11-29T23:24:52Z | 31 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T23:23:17Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋. It was trained using a Tesla T4 GPU on a
Google Colab Notebook.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("davidaponte/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
lct-rug-2022/edos-2023-baseline-albert-base-v2-label_vector
|
lct-rug-2022
| 2022-11-29T22:57:00Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T21:58:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: edos-2023-baseline-albert-base-v2-label_vector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-albert-base-v2-label_vector
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8762
- F1: 0.1946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1002 | 1.18 | 100 | 1.9982 | 0.1023 |
| 1.7832 | 2.35 | 200 | 1.8435 | 0.1310 |
| 1.57 | 3.53 | 300 | 1.8097 | 0.1552 |
| 1.3719 | 4.71 | 400 | 1.8216 | 0.1631 |
| 1.2072 | 5.88 | 500 | 1.8138 | 0.1811 |
| 1.0186 | 7.06 | 600 | 1.8762 | 0.1946 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-banking-18-16-5
|
fathyshalab
| 2022-11-29T22:47:43Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T22:20:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-18-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-18-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Euchale/ArcaneInkpunk2
|
Euchale
| 2022-11-29T22:28:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-29T21:19:34Z |
50/50 Merge of Arcane V3 and Inkpunk V2
|
dn-gh/ddpm-apes-128
|
dn-gh
| 2022-11-29T21:55:07Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-14T21:53:21Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
# ddpm-apes-128

## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
import torch
model_id = "dn-gh/ddpm-apes-128"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id).to(device)
# run pipeline in inference
image = ddpm().images[0]
# save image
image.save("generated_image.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
This model is trained on 4866 images generated with [ykilcher/apes](https://huggingface.co/ykilcher/apes) for 30 epochs.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/dn-gh/ddpm-apes-128/tensorboard?#scalars)
|
ririying/mt5-small-finetuned-test
|
ririying
| 2022-11-29T21:41:01Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T18:37:20Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ririying/mt5-small-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ririying/mt5-small-finetuned-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0505
- Validation Loss: 1.7733
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 107192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5536 | 2.1181 | 0 |
| 2.4769 | 1.9296 | 1 |
| 2.2865 | 1.8569 | 2 |
| 2.1928 | 1.8241 | 3 |
| 2.1344 | 1.8022 | 4 |
| 2.0953 | 1.7880 | 5 |
| 2.0671 | 1.7811 | 6 |
| 2.0505 | 1.7733 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
lct-rug-2022/edos-2023-baseline-roberta-base-label_category
|
lct-rug-2022
| 2022-11-29T20:46:52Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T20:24:29Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: edos-2023-baseline-roberta-base-label_category
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-roberta-base-label_category
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0133
- F1: 0.5792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.169 | 1.18 | 100 | 1.0580 | 0.2159 |
| 0.9143 | 2.35 | 200 | 0.9283 | 0.5405 |
| 0.7535 | 3.53 | 300 | 0.9387 | 0.5665 |
| 0.6085 | 4.71 | 400 | 0.9574 | 0.5664 |
| 0.53 | 5.88 | 500 | 1.0133 | 0.5792 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AndrewChar/model-QA-5-epoch-RU
|
AndrewChar
| 2022-11-29T19:36:19Z | 34 | 17 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"ru",
"dataset:sberquad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_keras_callback
language: ru
datasets:
- sberquad
model-index:
- name: model-QA-5-epoch-RU
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# model-QA-5-epoch-RU
This model is a fine-tuned version of [AndrewChar/diplom-prod-epoch-4-datast-sber-QA](https://huggingface.co/AndrewChar/diplom-prod-epoch-4-datast-sber-QA) on the sberquad dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1991
- Validation Loss: 0.0
- Epoch: 5
## Model description
A model that answers questions based on the given context.
This is a graduation (diploma) thesis project.
## Intended uses & limitations
The context must contain no more than 512 tokens.
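A minimal usage sketch (the question/context pair below is only a placeholder; the task follows the card's `question-answering` tag):
```python
from transformers import pipeline

# Minimal usage sketch; the Russian question/context pair is a placeholder.
qa = pipeline(
    "question-answering",
    model="AndrewChar/model-QA-5-epoch-RU",
    framework="tf",  # the repo ships TensorFlow weights (per the "tf" tag)
)
print(qa(
    question="Где находится Эйфелева башня?",
    context="Эйфелева башня находится в Париже.",
))
```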
## Training and evaluation data
DataSet SberSQuAD
{'exact_match': 54.586, 'f1': 73.644}
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 2986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1991 | | 5 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Alred/bart-base-finetuned-summarization-cnn-ver1.2
|
Alred
| 2022-11-29T19:26:01Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-11-29T18:10:27Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: bart-base-finetuned-summarization-cnn-ver1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-summarization-cnn-ver1.2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2476
- Bertscore-mean-precision: 0.8904
- Bertscore-mean-recall: 0.8611
- Bertscore-mean-f1: 0.8753
- Bertscore-median-precision: 0.8891
- Bertscore-median-recall: 0.8600
- Bertscore-median-f1: 0.8741
## Model description
More information needed
## Intended uses & limitations
More information needed
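No usage example is given; a minimal sketch (the article text and generation settings below are placeholders, not the settings used for the reported scores):
```python
from transformers import pipeline

# Minimal usage sketch; replace the placeholder with a real article
# (e.g. one from cnn_dailymail). Generation settings are illustrative.
summarizer = pipeline(
    "summarization",
    model="Alred/bart-base-finetuned-summarization-cnn-ver1.2",
)
article = "..."  # placeholder article text
print(summarizer(article, max_length=128, min_length=30)[0]["summary_text"])
```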
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|
| 2.3305 | 1.0 | 5742 | 2.2125 | 0.8845 | 0.8587 | 0.8713 | 0.8840 | 0.8577 | 0.8706 |
| 1.7751 | 2.0 | 11484 | 2.2028 | 0.8910 | 0.8616 | 0.8759 | 0.8903 | 0.8603 | 0.8744 |
| 1.4564 | 3.0 | 17226 | 2.2476 | 0.8904 | 0.8611 | 0.8753 | 0.8891 | 0.8600 | 0.8741 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
lct-rug-2022/edos-2023-baseline-bert-base-uncased-label_category
|
lct-rug-2022
| 2022-11-29T19:24:32Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T12:51:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: edos-2023-baseline-bert-base-uncased-label_category
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-bert-base-uncased-label_category
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0354
- F1: 0.5675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1743 | 1.18 | 100 | 1.1120 | 0.1949 |
| 1.0197 | 2.35 | 200 | 1.0548 | 0.3307 |
| 0.8872 | 3.53 | 300 | 0.9621 | 0.4795 |
| 0.7117 | 4.71 | 400 | 0.9876 | 0.4947 |
| 0.6173 | 5.88 | 500 | 0.9615 | 0.5447 |
| 0.5015 | 7.06 | 600 | 0.9973 | 0.5512 |
| 0.4076 | 8.24 | 700 | 1.0052 | 0.5620 |
| 0.3381 | 9.41 | 800 | 1.0354 | 0.5675 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Tirendaz/sentiment-model-on-imdb-dataset
|
Tirendaz
| 2022-11-29T19:16:53Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T17:43:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: sentiment-model-on-imdb-dataset
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.85
- name: F1
type: f1
value: 0.8543689320388349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-on-imdb-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3694
- Accuracy: 0.85
- F1: 0.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
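A minimal usage sketch (the review text is a placeholder; the label names returned depend on the fine-tuned config):
```python
from transformers import pipeline

# Minimal usage sketch; label names (e.g. LABEL_0 / LABEL_1 or
# NEGATIVE / POSITIVE) depend on the fine-tuned config.
classifier = pipeline(
    "text-classification",
    model="Tirendaz/sentiment-model-on-imdb-dataset",
)
print(classifier("This movie was surprisingly good."))
```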
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ikanher/sd-class-butterflies-32
|
ikanher
| 2022-11-29T19:07:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T19:07:21Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ikanher/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
renesteeman/whisper-base-dutch-25
|
renesteeman
| 2022-11-29T19:02:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"nl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-29T10:35:49Z |
---
language:
- nl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Base Dutch 25
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: nl, split: test'
metrics:
- name: Wer
type: wer
value: 29.948494805079477
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Dutch 25
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4919
- Wer: 29.9485
## Model description
More information needed
## Intended uses & limitations
More information needed
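A minimal usage sketch (assuming a local audio file of Dutch speech; `sample.wav` is a placeholder path):
```python
from transformers import pipeline

# Minimal usage sketch; "sample.wav" is a placeholder path to Dutch speech.
asr = pipeline(
    "automatic-speech-recognition",
    model="renesteeman/whisper-base-dutch-25",
)
print(asr("sample.wav")["text"])
```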
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3704 | 0.78 | 500 | 0.5438 | 33.9890 |
| 0.2356 | 1.56 | 1000 | 0.5059 | 31.3516 |
| 0.1335 | 2.34 | 1500 | 0.4953 | 30.5745 |
| 0.0998 | 3.12 | 2000 | 0.4919 | 29.9485 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pcuenq/Paella
|
pcuenq
| 2022-11-29T18:51:19Z | 0 | 6 | null |
[
"text-to-image",
"endpoints-template",
"en",
"arxiv:2211.07292",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-image
| 2022-11-29T15:36:58Z |
---
title: Paella
emoji: 🥘
language:
- en
tags:
- text-to-image
- endpoints-template
license: mit
---
Paella is a novel text-to-image model that uses a compressed quantized latent space, based on an f8 VQGAN, and a masked training objective to achieve fast generation in ~10 inference steps.
* [Paper](https://arxiv.org/abs/2211.07292)
* [Official implementation](https://github.com/dome272/Paella)
## Biases and content acknowledgment
Despite how impressive it is to be able to turn text into images, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on 600 million images from the improved [LAION-5B aesthetic](https://laion.ai/blog/laion-5b/) dataset, which scraped non-curated image-text pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes.
|
Guizmus/SD_PoW_Collection
|
Guizmus
| 2022-11-29T18:47:05Z | 0 | 13 |
EveryDream
|
[
"EveryDream",
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-09T22:34:09Z |
---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/showcase.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
library_name: "EveryDream"
inference: false
---

# Intro
This is a collection of models related to the "Picture of the Week" contest on Stable Diffusion discord.
I try to make a model out of all the submissions so that people can continue to enjoy the theme after the event, and see a little of their designs in other people's creations. The token stays "PoW Style" and I balance the learning on the low side, so that it doesn't just replicate creations.
I also make smaller quality models to help make pictures for the contest itself, based on the theme.
# 29 November 2022, "The Stable Kitchen"
## Theme : Burgers and Fries
Welcome to the VERY FIRST edition of the most Stable Kitchen in the universe!
On today’s menu will be Sandwiches & Fries. Since you’re here for the first time, I will explain how it works! You can generate your orders and we will make them for you. Take a seat, flip through the menu, bring all of your favorite ingredients~
* The sandwich with the most cheddar? 5 beef burgers? An infinite fries generator?
* Serve us your best sandwich and fries combo!
Not even the sky's the limit my friend,
You want it?
You have it!
As long as it's delicious, of course!
We’ll see you on the chopping block for this week’s Stable Kitchen!

## Models
### Burgy

* Burgers, burgers burgers
* training: 40 pictures, 6 epochs of 40 repeats, batch size 6, LR1e-6, EveryDream
* balance : Strong, burgers
* **Activation token :** `Burgy`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/ckpts/Burgy.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/dataset_Burgy.zip)
# 22 November 2022, "Imaginary Friend"
## Theme : Imaginary Friend
Do you remember putting your hands into what seemed as if it were just plain air and giggling like a child? Having conversations with someone who “wasn’t there”? Nowadays the term “Imaginary Friend” isn’t as frequently used as it used to be, right? Let’s bring it back.
* Can you build your Imaginary Friends actualized?
* What traits do you recall of them? Are they still young? Have they grown up now? Do they resemble you, or a creature that isn’t human?
* Where would you find this Imaginary Friend? Where do they reside? What do they stand for?
Our prompt for this event was created by @Andrekerygma
"a boy drinking tea with a cute monster on the bedroom, disney infinity character design, pixar, artstation, vinyl, toy, figurine, 3 d model, cinema 4 d, substance 3 d painter, vray, unreal engine 5, octane render, cinematic"

## Models
### PoW ArtStyle 22-11-22

* based on all the submissions to the PoW
* training: 73 pictures, 6000 steps on batch 6, 1e-6 polynomial LR.
* balance : a little lighter on the style than last week, still manages to reproduce most participants
* **Activation token :** `PoW ArtStyle`
* Other noticeable tokens : your Discord username, if you participated. Also TMNT, NikeAir Shoes, and Sid from the Ice Age movie
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/ckpts/PoWArtStyle_ImaginaryFriend.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/PoW_221122_dataset.zip)
### CharacterChan Style

* based on the "Character" dreamer community of the Stable Diffusion Discord
* training: 50 pictures, 160 total repeat, LR1e-6
* balance : correct, but some sub-concepts have overtrained a little, like the clown.
* **Activation token :** `CharacterChan Style`
* [CKPT](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CharacterChanStyle-v1.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CharacterChanStyle-v1.zip)
* [Model page](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection#characterchan-style)
### CreatureChan Style

* based on the "Creature" dreamer community of the Stable Diffusion Discord
* training: 50 pictures, 160 total repeat, LR1e-6
* balance : good
* **Activation token :** `CreatureChan Style`
* [CKPT](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CreatureChanStyle-v1.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CreatureChanStyle-v1.zip)
* [Model page](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection#creaturechan-style)
# 14 November 2022, "The Never-Ending Loop"
## Theme : The Never-Ending Loop
It is a passed-down proverb that lines represent the flow of time itself. They converge and take shape. They twist, tangle, sometimes unravel, break, and then connect again.
* Without words, how are we able to accurately represent this flow of time with only lines? geometrically, intricately, asymmetrically, seamlessly, ornately...
* Think of a never-ending pattern, texture, or shape– looping on and on for what feels infinite.
* Just how detailed are you able to get with your patterns?
Our prompt for this event was created by @Asukii !
"the fractal flow of time stretches towards the horizon, surreal fractal intertwined looping pathways, dramatic cinematic perspective, detailed delicate intricate ornate linework, geometric abstract masterwork digital art, quantum wavetracing, ink drawing, optical illusion"


## Models
### PoW Style 14-11-22

* based on all the submissions to the PoW
* training: 101 pictures, 9000 steps on batch 6, 1e-6 polynomial LR.
* balance : a little strong on the style, but it made it possible to differentiate each participant
* **Activation token :** `PoW Style`
* Other noticeable tokens : your Discord username, if you participated. Also Rick Roll and "fullbody shot"
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/ckpts/PoWStyle_NeverEndingLoop.ckpt)
* [Diffusers : Guizmus/SD_PoW_Collection/141122/diffusers](https://huggingface.co/Guizmus/SD_PoW_Collection/tree/main/141122/diffusers/)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/PoW_141122_2_dataset.zip)
### Fractime Style

* based on the suggested prompt and theme
* training: 50 pictures, 1750 steps on batch 6, 1e-6 polynomial LR.
* balance : correct, but the style doesn't apply to every subject
* **Activation token :** `Fractime Style`
* Other noticeable tokens : intricate, nebula, illusion, person, road, tree, boat
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/ckpts/FractimeStyle.ckpt)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/PoW_141122_1_dataset.zip)
# 09 November 2022, "Abstralities"
## Theme : Abstract Realities
Glitch, warp, static, shape, flicker, break, bend, mend
Have you ever felt your reality shift out from under your feet? Our perception falters and repairs itself in the blink of an eye. Just how much do our brains influence what we perceive? How much control do we have over molding these realities?
With the introduction of AI and its rapid pace taking the world by storm, we are seeing single-handedly just how these realities can bring worlds into fruition.
* Can you show us your altered reality?
* Are these realities truly broken, or only bent?
Our example prompt for this event was created by @Aether !
"household objects floating in space, bedroom, furniture, home living, warped reality, cosmic horror, nightmare, retrofuturism, surrealism, abstract, illustrations by alan nasmith"


## Models
### PoW Style 09-11-22

* Main model based on all the results from the PoW
* training: 51 pictures, 3000 steps on 1e-6 polynomial LR.
* balanced on the light side, add attention/weight on the activation token
* **Activation token :** `PoW Style`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/PoWStyle_Abstralities.ckpt)
* [Diffusers : Guizmus/SD_PoW_Collection/091122/diffusers](https://huggingface.co/Guizmus/SD_PoW_Collection/tree/main/091122/diffusers/)
* [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/dataset.zip)
### Bendstract Style

* based on the suggested prompt
* training: 100 pictures, 7500 steps on 1e-6 polynomial LR. overtrained
* **Activation token :** `Bendstract Style`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/Bendstract-v1.ckpt)
### BendingReality Style

* based on the suggested prompt
* training: 68 pictures, 6000 steps on 1e-6 polynomial LR. overtrained
* **Activation token :** `BendingReality Style`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/BendingReality_Style-v1.ckpt)
### PoW Style mid-submissions 09-11-22

* based on the first few submissions
* training: 24 pictures, 2400 steps on 1e-6 polynomial LR. a little too trained
* **Activation token :** `PoW Style`
* [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/PoWStyle_midrun.ckpt)
# License
These models are open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
epsil/sd-class-butterflies-64
|
epsil
| 2022-11-29T18:13:23Z | 5 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T18:13:12Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("epsil/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
epsil/sd-class-butterflies-32
|
epsil
| 2022-11-29T17:42:54Z | 6 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T17:42:32Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("epsil/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
ser-mei/borges-gpt-collab
|
ser-mei
| 2022-11-29T17:14:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-06T20:48:40Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: borges-gpt-collab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# borges-gpt-collab
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.3468
## Model description
More information needed
## Intended uses & limitations
More information needed
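A minimal usage sketch (the Spanish prompt is only a placeholder):
```python
from transformers import pipeline

# Minimal usage sketch; the Spanish prompt is a placeholder.
generator = pipeline(
    "text-generation",
    model="ser-mei/borges-gpt-collab",
)
print(generator("El jardín de senderos que se bifurcan", max_length=60, num_return_sequences=1))
```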
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.2135 | 0.96 | 7 | 10.2022 |
| 10.3195 | 1.96 | 14 | 9.6343 |
| 9.9127 | 2.96 | 21 | 9.4637 |
| 9.7295 | 3.96 | 28 | 9.2993 |
| 9.527 | 4.96 | 35 | 9.0962 |
| 9.2648 | 5.96 | 42 | 8.8294 |
| 8.9309 | 6.96 | 49 | 8.5103 |
| 8.5639 | 7.96 | 56 | 8.1858 |
| 8.2034 | 8.96 | 63 | 7.8816 |
| 7.8665 | 9.96 | 70 | 7.6303 |
| 7.5715 | 10.96 | 77 | 7.4307 |
| 7.3259 | 11.96 | 84 | 7.2632 |
| 7.136 | 12.96 | 91 | 7.1494 |
| 6.9558 | 13.96 | 98 | 7.0957 |
| 6.8068 | 14.96 | 105 | 7.0199 |
| 6.6656 | 15.96 | 112 | 6.9554 |
| 6.5264 | 16.96 | 119 | 6.9324 |
| 6.3843 | 17.96 | 126 | 6.8940 |
| 6.2204 | 18.96 | 133 | 6.8799 |
| 6.0915 | 19.96 | 140 | 6.8788 |
| 5.9532 | 20.96 | 147 | 6.8719 |
| 5.8169 | 21.96 | 154 | 6.8647 |
| 5.6531 | 22.96 | 161 | 6.8865 |
| 5.5125 | 23.96 | 168 | 6.8940 |
| 5.3666 | 24.96 | 175 | 6.9248 |
| 5.2377 | 25.96 | 182 | 6.9421 |
| 5.1115 | 26.96 | 189 | 6.9631 |
| 4.9639 | 27.96 | 196 | 7.0135 |
| 4.824 | 28.96 | 203 | 7.0352 |
| 4.6886 | 29.96 | 210 | 7.0729 |
| 4.5538 | 30.96 | 217 | 7.1385 |
| 4.4126 | 31.96 | 224 | 7.1561 |
| 4.2486 | 32.96 | 231 | 7.1792 |
| 4.0955 | 33.96 | 238 | 7.2767 |
| 3.9333 | 34.96 | 245 | 7.2815 |
| 3.7914 | 35.96 | 252 | 7.3463 |
| 3.618 | 36.96 | 259 | 7.3864 |
| 3.4453 | 37.96 | 266 | 7.4394 |
| 3.2795 | 38.96 | 273 | 7.4730 |
| 3.0994 | 39.96 | 280 | 7.4880 |
| 2.9143 | 40.96 | 287 | 7.5567 |
| 2.741 | 41.96 | 294 | 7.5451 |
| 2.5698 | 42.96 | 301 | 7.5966 |
| 2.3855 | 43.96 | 308 | 7.6898 |
| 2.2059 | 44.96 | 315 | 7.6957 |
| 2.0634 | 45.96 | 322 | 7.7503 |
| 1.8719 | 46.96 | 329 | 7.8369 |
| 1.7059 | 47.96 | 336 | 7.8411 |
| 1.54 | 48.96 | 343 | 7.8316 |
| 1.3768 | 49.96 | 350 | 7.8630 |
| 1.2177 | 50.96 | 357 | 7.9360 |
| 1.0663 | 51.96 | 364 | 7.9886 |
| 0.9569 | 52.96 | 371 | 8.0187 |
| 0.8281 | 53.96 | 378 | 8.0274 |
| 0.7074 | 54.96 | 385 | 8.1010 |
| 0.6095 | 55.96 | 392 | 8.1594 |
| 0.5262 | 56.96 | 399 | 8.1010 |
| 0.4678 | 57.96 | 406 | 8.1440 |
| 0.4105 | 58.96 | 413 | 8.1638 |
| 0.3766 | 59.96 | 420 | 8.1534 |
| 0.3425 | 60.96 | 427 | 8.1980 |
| 0.321 | 61.96 | 434 | 8.2184 |
| 0.3061 | 62.96 | 441 | 8.2499 |
| 0.2852 | 63.96 | 448 | 8.1690 |
| 0.2698 | 64.96 | 455 | 8.2160 |
| 0.2628 | 65.96 | 462 | 8.2616 |
| 0.2619 | 66.96 | 469 | 8.2948 |
| 0.2544 | 67.96 | 476 | 8.3553 |
| 0.2414 | 68.96 | 483 | 8.3712 |
| 0.2177 | 69.96 | 490 | 8.3468 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+rocm5.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
SALT-NLP/FLANG-DistilBERT
|
SALT-NLP
| 2022-11-29T17:07:13Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"Financial Language Modelling",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-24T05:43:00Z |
---
language: "en"
tags:
- Financial Language Modelling
widget:
- text: "Stocks rallied and the British pound [MASK]."
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-DistilBERT
FLANG-DistilBERT is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the DistilBERT language model in the finance domain with improved performance over previous models due to the use of domain knowledge and vocabulary.
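As a minimal usage sketch with the widget example from this card (mask filling through the standard `fill-mask` pipeline):
```python
from transformers import pipeline

# Minimal usage sketch using the card's widget example.
fill_mask = pipeline(
    "fill-mask",
    model="SALT-NLP/FLANG-DistilBERT",
)
print(fill_mask("Stocks rallied and the British pound [MASK]."))
```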
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://paperswithcode.com/dataset/fin)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the work with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-DistilBERT related issues and questions.
---
license: afl-3.0
---
|
SALT-NLP/FLANG-SpanBERT
|
SALT-NLP
| 2022-11-29T17:06:55Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"Financial Language Modelling",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-24T05:41:56Z |
---
language: "en"
tags:
- Financial Language Modelling
widget:
- text: "Stocks rallied and the British pound [MASK]."
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-SpanBERT
FLANG-SpanBERT is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the SpanBERT language model in the finance domain with improved performance over previous models due to the use of domain knowledge and vocabulary.
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://paperswithcode.com/dataset/fin)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the model with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-SpanBERT related issues and questions.
---
license: afl-3.0
---
|
SALT-NLP/FLANG-BERT
|
SALT-NLP
| 2022-11-29T17:06:37Z | 83 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"Financial Language Modelling",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-24T02:37:04Z |
---
language: "en"
tags:
- Financial Language Modelling
widget:
- text: "Stocks rallied and the British pound [MASK]."
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-BERT
FLANG-BERT is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the BERT language model in the finance domain with improved performance over previous models due to the use of domain knowledge and vocabulary.
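A minimal sketch of querying the masked-language-modelling head directly (equivalent to the widget example above, but without the pipeline helper):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("SALT-NLP/FLANG-BERT")
model = AutoModelForMaskedLM.from_pretrained("SALT-NLP/FLANG-BERT")

text = "Stocks rallied and the British pound [MASK]."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Top-5 candidate tokens for the masked position
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, mask_index].topk(5, dim=-1).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top5))
```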
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://paperswithcode.com/dataset/fin)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the model with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-BERT related issues and questions.
---
license: afl-3.0
---
|
BKick/whisper-small_test3
|
BKick
| 2022-11-29T16:20:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-28T20:38:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-small_test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small_test3
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2153
- eval_wer: 13.6949
- eval_runtime: 1589.8456
- eval_samples_per_second: 2.734
- eval_steps_per_second: 0.342
- epoch: 0.53
- step: 300
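A minimal transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder; decoding a local file requires ffmpeg):
```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="BKick/whisper-small_test3")
print(asr("sample.wav")["text"])
```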
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 1000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Alred/bart-base-finetuned-summarization-cnn-ver1.1
|
Alred
| 2022-11-29T16:17:56Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-11-29T15:02:40Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: bart-base-finetuned-summarization-cnn-ver1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-summarization-cnn-ver1.1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3824
- Bertscore-mean-precision: 0.8904
- Bertscore-mean-recall: 0.8610
- Bertscore-mean-f1: 0.8753
- Bertscore-median-precision: 0.8893
- Bertscore-median-recall: 0.8606
- Bertscore-median-f1: 0.8744
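A minimal summarization sketch with the `transformers` pipeline (the article text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Alred/bart-base-finetuned-summarization-cnn-ver1.1")

article = (
    "The city council approved a new transit plan on Tuesday, promising more frequent "
    "bus service and two additional light-rail lines over the next five years."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```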
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|
| 2.4217 | 1.0 | 5742 | 2.3095 | 0.8824 | 0.8582 | 0.8700 | 0.8822 | 0.8559 | 0.8696 |
| 1.7335 | 2.0 | 11484 | 2.2855 | 0.8907 | 0.8610 | 0.8754 | 0.8907 | 0.8600 | 0.8746 |
| 1.3013 | 3.0 | 17226 | 2.3824 | 0.8904 | 0.8610 | 0.8753 | 0.8893 | 0.8606 | 0.8744 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
dulleto/sfe
|
dulleto
| 2022-11-29T16:11:29Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-29T16:11:29Z |
---
license: bigscience-openrail-m
---
|
kejian/debug-pt-conditional
|
kejian
| 2022-11-29T15:03:05Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T14:52:56Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: debug-pt-conditional
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug-pt-conditional
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 128,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 128,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'debug-pt-conditional',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 8,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 10,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/3my099dp
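A minimal sampling sketch using the aligned control prefix from the generation config above; the generation arguments mirror the `generate_kwargs` entries, and the prompt itself is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kejian/debug-pt-conditional")
model = AutoModelForCausalLM.from_pretrained("kejian/debug-pt-conditional")

# Prepend the control token so generation is conditioned on "aligned" code
prompt = "<|aligned|>def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs, do_sample=True, temperature=0.7, top_p=0.9, max_length=128, eos_token_id=0
)
print(tokenizer.decode(outputs[0]))
```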
|
MatiasTamayo/videomae-base-finetuned-ucf101-subset
|
MatiasTamayo
| 2022-11-29T14:58:07Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2022-11-29T14:52:15Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
KPEKEP/rugpt_chitchat
|
KPEKEP
| 2022-11-29T14:48:36Z | 42 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"license:unlicense",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-29T14:48:34Z |
---
pipeline_tag: text-generation
tags:
- PyTorch
- Transformers
- gpt2
license: unlicense
language: ru
widget:
- text: >-
- У Джульетты было 7 пончиков, а потом она 3 съела. Сколько у нее осталось
пончиков? -
- text: >-
- Поглажено 4 манула. Осталось погладить 6. Сколько всего манулов надо
погладить? -
- text: '- Для начала скажи, чему равно пятью девять? -'
- text: '- ты чё такой борзый? -'
- text: '- Привет! Как ваше ничего? -'
duplicated_from: inkoziev/rugpt_chitchat
---
## Russian Chit-chat, Deductive and Common Sense reasoning model
The model is the core of a prototype [dialogue system](https://github.com/Koziev/chatbot) with two main functions.
The first function is **chit-chat reply generation**. The dialogue history (the preceding few turns, from 1 to 10) is fed in as the prompt.
```
- Привет, как дела?
- Привет, так себе.
- <<< this is the reply we expect from the model >>>
```
The second function of the model is producing an answer to a given question based on additional facts or on "common sense". Relevant facts are assumed to be retrieved
from an external store (a knowledge base) by another model, for example [sbert_pq](https://huggingface.co/inkoziev/sbert_pq).
Using the supplied fact(s) and the question text, the model builds a grammatical and maximally concise answer, the way
a person would in a similar communicative situation. The relevant facts should be placed before the question text as
if the interlocutor themselves had said them:
```
- Сегодня 15 сентября. Какой сейчас у нас месяц?
- Сентябрь
```
The model does not expect that all the facts retrieved and added to the dialogue context are actually relevant to the question. The retrieval
model that pulls information from the knowledge base may therefore trade precision for recall and add something superfluous. In that case the chit-chat model
itself picks the necessary facts among those added to the context and ignores the rest. The current version of the model
allows up to 5 facts before the question. For example:
```
- Стасу 16 лет. Стас живет в Подольске. У Стаса нет своей машины. Где живет Стас?
- в Подольске
```
In some cases the model can perform **syllogistic inference** of the answer, relying on 2 premises linked to each other. The conclusion that follows from the two premises does not appear explicitly, but is implicitly used to derive the answer:
```
- Смертен ли Аристофан, если он был греческим философом, а все философы смертны?
- Да
```
As the examples above show, the format of the factual information fed into the model for inference is as natural and free-form as possible.
Besides logical inference, the model can also solve simple arithmetic problems at the level of grades 1-2 of primary school, with two numeric arguments:
```
- Чему равно 2+8?
- 10
```
### Model variants and metrics
The model released so far has 760 million parameters, i.e. it is on the level of sberbank-ai/rugpt3large_based_on_gpt2. Below is the
measured accuracy of solving arithmetic problems on a held-out test set of samples:
| base model | arith. accuracy |
| --------------------------------------- | --------------- |
| sberbank-ai/rugpt3large_based_on_gpt2 | 0.91 |
| sberbank-ai/rugpt3medium_based_on_gpt2 | 0.70 |
| sberbank-ai/rugpt3small_based_on_gpt2 | 0.58 |
| tinkoff-ai/ruDialoGPT-small | 0.44 |
| tinkoff-ai/ruDialoGPT-medium | 0.69 |
The figure 0.91 in the "arith. accuracy" column means that 91% of the test problems were solved completely correctly.
Any deviation of the generated answer from the reference answer is counted
as an error. For example, outputting "120" instead of "119" is also recorded as an error.
### Usage example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/rugpt_chitchat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'})
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()
# Feed the model the last 2-3 turns of the dialogue. Each turn is on its own line and starts with the "-" character
input_text = """<s>- Привет! Что делаешь?
- Привет :) В такси еду
-"""
encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device)
output_sequences = model.generate(input_ids=encoded_prompt, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id)
text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:]
text = text[: text.find('</s>')]
print(text)
```
### Contact
If you have any questions about using this model, or suggestions for improving it, write to me at mentalcomputing@gmail.com
### Citation:
```
@MISC{rugpt_chitchat,
author = {Ilya Koziev},
    title = {Russian Chit-chat with Common Sense Reasoning},
url = {https://huggingface.co/inkoziev/rugpt_chitchat},
year = 2022
}
```
|
mzhou08/t5-base-finetuned-qg-medium-hard-qns
|
mzhou08
| 2022-11-29T14:09:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T13:37:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-qg-medium-hard-qns
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-qg-medium-hard-qns
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5919
- Rouge1: 38.6117
- Rouge2: 21.3082
- Rougel: 35.7294
- Rougelsum: 35.4192
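A minimal generation sketch with the `text2text-generation` pipeline; the exact input format expected by this checkpoint is not documented here, so the plain-context prompt below is an assumption:
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="mzhou08/t5-base-finetuned-qg-medium-hard-qns")

context = "The mitochondrion is the organelle that produces most of a cell's ATP."
print(qg(context, max_length=64)[0]["generated_text"])
```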
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 73 | 1.8640 | 31.2085 | 11.6418 | 26.1137 | 26.2911 |
| No log | 2.0 | 146 | 1.6488 | 29.6798 | 10.9223 | 26.7442 | 26.9736 |
| No log | 3.0 | 219 | 1.6045 | 33.6703 | 11.7038 | 30.167 | 29.9192 |
| No log | 4.0 | 292 | 1.5812 | 36.6758 | 17.748 | 33.739 | 33.4974 |
| No log | 5.0 | 365 | 1.5879 | 33.3704 | 16.4099 | 31.7658 | 31.3874 |
| No log | 6.0 | 438 | 1.5786 | 34.1216 | 14.9588 | 30.9584 | 30.9277 |
| 1.7533 | 7.0 | 511 | 1.5804 | 34.8267 | 15.7046 | 32.0877 | 31.9317 |
| 1.7533 | 8.0 | 584 | 1.5861 | 33.2539 | 12.728 | 30.551 | 30.2299 |
| 1.7533 | 9.0 | 657 | 1.5911 | 38.4406 | 20.5922 | 36.4267 | 36.0426 |
| 1.7533 | 10.0 | 730 | 1.5827 | 33.3421 | 16.0455 | 29.974 | 29.5357 |
| 1.7533 | 11.0 | 803 | 1.5834 | 42.3363 | 24.6712 | 40.4291 | 40.0842 |
| 1.7533 | 12.0 | 876 | 1.5889 | 33.268 | 15.5319 | 30.6942 | 30.4347 |
| 1.7533 | 13.0 | 949 | 1.5911 | 42.1265 | 23.1983 | 39.5768 | 39.2304 |
| 1.2341 | 14.0 | 1022 | 1.5926 | 35.0279 | 15.825 | 32.0736 | 32.0093 |
| 1.2341 | 15.0 | 1095 | 1.5912 | 38.362 | 17.6108 | 35.3148 | 35.0558 |
| 1.2341 | 16.0 | 1168 | 1.5919 | 38.6117 | 21.3082 | 35.7294 | 35.4192 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Samael98/roberta-base-bne-finetuned-amazon_reviews_multi
|
Samael98
| 2022-11-29T14:08:39Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T13:46:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: train
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2313
- Accuracy: 0.9337
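A minimal inference sketch with the `text-classification` pipeline (the label names returned depend on how the dataset labels were mapped during fine-tuning):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Samael98/roberta-base-bne-finetuned-amazon_reviews_multi")
print(classifier("El producto llegó a tiempo y funciona perfectamente."))
```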
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1864 | 1.0 | 1250 | 0.2209 | 0.9317 |
| 0.1063 | 2.0 | 2500 | 0.2313 | 0.9337 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
oskrmiguel/roberta-base-bne-clasificacion-de-texto-supervisado
|
oskrmiguel
| 2022-11-29T14:05:15Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T13:42:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-clasificacion-de-texto-supervisado
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: train
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-clasificacion-de-texto-supervisado
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2263
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1934 | 1.0 | 1250 | 0.1700 | 0.9327 |
| 0.1031 | 2.0 | 2500 | 0.2263 | 0.9335 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
yesyesjaewook/jets-jaewook-ukten-ko
|
yesyesjaewook
| 2022-11-29T14:02:17Z | 6 | 0 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"ko",
"dataset:Jaewook-Ukten",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2022-11-27T13:55:18Z |
---
tags:
- espnet
- audio
- text-to-speech
language: ko
datasets:
- Jaewook-Ukten
license: cc-by-4.0
---
## ESPnet2 TTS model
### yesyesjaewook/jets-jaewook-ukten-ko
This model was trained by yesyesjaewook using jaewook_ukten recipe in [espnet](https://github.com/espnet/espnet/).
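A minimal inference sketch using the standard ESPnet2 `Text2Speech` interface (assuming the checkpoint can be loaded from the Hub by its model id; `soundfile` is only used to save the waveform):
```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("yesyesjaewook/jets-jaewook-ukten-ko")
output = tts("안녕하세요, 만나서 반갑습니다.")
soundfile.write("out.wav", output["wav"].numpy(), tts.fs, "PCM_16")
```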
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
thliang01/sd-class-butterflies-32
|
thliang01
| 2022-11-29T13:42:36Z | 35 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T13:42:21Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("thliang01/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
kaizerkam/sd-class-comics-64
|
kaizerkam
| 2022-11-29T13:26:50Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T13:25:39Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of comic scenes.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("kaizerkam/sd-class-comics-64")
image = pipeline().images[0]
image
```
|
shivammehta25/sd-class-butterflies-32
|
shivammehta25
| 2022-11-29T12:46:27Z | 35 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T12:46:12Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("shivammehta25/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
pig4431/rtm_roBERTa_5E
|
pig4431
| 2022-11-29T12:34:52Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T11:02:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: rtm_roBERTa_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtm_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6545
- Accuracy: 0.8667
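A minimal inference sketch with the `text-classification` pipeline (rotten_tomatoes is a binary sentiment task, so expect two labels):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pig4431/rtm_roBERTa_5E")
print(classifier("A smart, funny and ultimately moving film."))
```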
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6955 | 0.09 | 50 | 0.6752 | 0.7867 |
| 0.5362 | 0.19 | 100 | 0.4314 | 0.8333 |
| 0.4065 | 0.28 | 150 | 0.4476 | 0.8533 |
| 0.3563 | 0.37 | 200 | 0.3454 | 0.8467 |
| 0.3729 | 0.47 | 250 | 0.3421 | 0.86 |
| 0.3355 | 0.56 | 300 | 0.3253 | 0.8467 |
| 0.338 | 0.66 | 350 | 0.3859 | 0.8733 |
| 0.2875 | 0.75 | 400 | 0.3537 | 0.8533 |
| 0.3477 | 0.84 | 450 | 0.3636 | 0.8467 |
| 0.3259 | 0.94 | 500 | 0.3115 | 0.88 |
| 0.3204 | 1.03 | 550 | 0.4295 | 0.8333 |
| 0.2673 | 1.12 | 600 | 0.3369 | 0.88 |
| 0.2479 | 1.22 | 650 | 0.3620 | 0.8667 |
| 0.2821 | 1.31 | 700 | 0.3582 | 0.8733 |
| 0.2355 | 1.4 | 750 | 0.3130 | 0.8867 |
| 0.2357 | 1.5 | 800 | 0.3229 | 0.86 |
| 0.2725 | 1.59 | 850 | 0.3035 | 0.88 |
| 0.2425 | 1.69 | 900 | 0.3146 | 0.8533 |
| 0.1977 | 1.78 | 950 | 0.4079 | 0.86 |
| 0.2557 | 1.87 | 1000 | 0.4132 | 0.8733 |
| 0.2395 | 1.97 | 1050 | 0.3336 | 0.86 |
| 0.1951 | 2.06 | 1100 | 0.5068 | 0.84 |
| 0.1631 | 2.15 | 1150 | 0.5209 | 0.8867 |
| 0.2192 | 2.25 | 1200 | 0.4766 | 0.8733 |
| 0.1725 | 2.34 | 1250 | 0.3962 | 0.8667 |
| 0.2215 | 2.43 | 1300 | 0.4133 | 0.8867 |
| 0.1602 | 2.53 | 1350 | 0.5564 | 0.8533 |
| 0.1986 | 2.62 | 1400 | 0.5826 | 0.86 |
| 0.1972 | 2.72 | 1450 | 0.5412 | 0.8667 |
| 0.2299 | 2.81 | 1500 | 0.4636 | 0.8733 |
| 0.2028 | 2.9 | 1550 | 0.5096 | 0.8667 |
| 0.2591 | 3.0 | 1600 | 0.3790 | 0.8467 |
| 0.1197 | 3.09 | 1650 | 0.5704 | 0.8467 |
| 0.174 | 3.18 | 1700 | 0.5904 | 0.8467 |
| 0.1499 | 3.28 | 1750 | 0.6066 | 0.86 |
| 0.1687 | 3.37 | 1800 | 0.6353 | 0.8533 |
| 0.1463 | 3.46 | 1850 | 0.6434 | 0.8467 |
| 0.1373 | 3.56 | 1900 | 0.6507 | 0.8533 |
| 0.1339 | 3.65 | 1950 | 0.6014 | 0.86 |
| 0.1488 | 3.75 | 2000 | 0.7245 | 0.84 |
| 0.1725 | 3.84 | 2050 | 0.6214 | 0.86 |
| 0.1443 | 3.93 | 2100 | 0.6446 | 0.8533 |
| 0.1619 | 4.03 | 2150 | 0.6223 | 0.8533 |
| 0.1153 | 4.12 | 2200 | 0.6579 | 0.8333 |
| 0.1159 | 4.21 | 2250 | 0.6760 | 0.8667 |
| 0.0948 | 4.31 | 2300 | 0.7172 | 0.8467 |
| 0.1373 | 4.4 | 2350 | 0.7346 | 0.8467 |
| 0.1463 | 4.49 | 2400 | 0.6453 | 0.8533 |
| 0.0758 | 4.59 | 2450 | 0.6579 | 0.86 |
| 0.16 | 4.68 | 2500 | 0.6556 | 0.8667 |
| 0.112 | 4.78 | 2550 | 0.6490 | 0.88 |
| 0.1151 | 4.87 | 2600 | 0.6525 | 0.8667 |
| 0.2152 | 4.96 | 2650 | 0.6545 | 0.8667 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AlekseyKorshuk/125m-dalio-book-handwritten-io-constant-1e-6-v2
|
AlekseyKorshuk
| 2022-11-29T12:29:49Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-29T10:31:18Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
metrics:
- accuracy
model-index:
- name: 125m-dalio-book-handwritten-io-constant-1e-6-v2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
type: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
metrics:
- name: Accuracy
type: accuracy
value: 0.23359387091781458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 125m-dalio-book-handwritten-io-constant-1e-6-v2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0859
- Accuracy: 0.2336
- Perplexity: 21.8880
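A minimal generation sketch with the `text-generation` pipeline (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="AlekseyKorshuk/125m-dalio-book-handwritten-io-constant-1e-6-v2")
print(generator("The most important principle is", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```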
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Perplexity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|
| 3.3352 | 0.01 | 1 | 3.1738 | 0.2305 | 23.8988 |
| 3.3091 | 0.03 | 2 | 3.1738 | 0.2305 | 23.8988 |
| 3.3347 | 0.04 | 3 | 3.1738 | 0.2305 | 23.8988 |
| 3.1445 | 0.05 | 4 | 3.1738 | 0.2305 | 23.8988 |
| 2.8918 | 0.07 | 5 | 3.1738 | 0.2305 | 23.8988 |
| 3.2068 | 0.08 | 6 | 3.1738 | 0.2305 | 23.8988 |
| 3.6245 | 0.09 | 7 | 3.1719 | 0.2305 | 23.8522 |
| 3.2256 | 0.11 | 8 | 3.1719 | 0.2305 | 23.8522 |
| 2.9991 | 0.12 | 9 | 3.1699 | 0.2305 | 23.8056 |
| 3.3257 | 0.13 | 10 | 3.1680 | 0.2306 | 23.7592 |
| 3.1199 | 0.15 | 11 | 3.1660 | 0.2306 | 23.7128 |
| 3.3735 | 0.16 | 12 | 3.1660 | 0.2306 | 23.7128 |
| 3.0051 | 0.17 | 13 | 3.1641 | 0.2307 | 23.6665 |
| 3.2695 | 0.19 | 14 | 3.1621 | 0.2308 | 23.6204 |
| 3.2004 | 0.2 | 15 | 3.1602 | 0.2309 | 23.5743 |
| 3.2075 | 0.21 | 16 | 3.1582 | 0.2308 | 23.5283 |
| 3.321 | 0.23 | 17 | 3.1562 | 0.2308 | 23.4824 |
| 3.4026 | 0.24 | 18 | 3.1543 | 0.2309 | 23.4366 |
| 3.0383 | 0.25 | 19 | 3.1523 | 0.2309 | 23.3908 |
| 3.166 | 0.27 | 20 | 3.1504 | 0.2309 | 23.3452 |
| 3.144 | 0.28 | 21 | 3.1484 | 0.2310 | 23.2996 |
| 3.1624 | 0.29 | 22 | 3.1484 | 0.2310 | 23.2996 |
| 3.0332 | 0.31 | 23 | 3.1465 | 0.2310 | 23.2542 |
| 3.3745 | 0.32 | 24 | 3.1445 | 0.2311 | 23.2088 |
| 3.0823 | 0.33 | 25 | 3.1426 | 0.2312 | 23.1635 |
| 3.6021 | 0.35 | 26 | 3.1406 | 0.2312 | 23.1183 |
| 3.1125 | 0.36 | 27 | 3.1387 | 0.2313 | 23.0732 |
| 3.1406 | 0.37 | 28 | 3.1387 | 0.2314 | 23.0732 |
| 3.1736 | 0.39 | 29 | 3.1367 | 0.2314 | 23.0282 |
| 3.1104 | 0.4 | 30 | 3.1348 | 0.2315 | 22.9832 |
| 3.1301 | 0.41 | 31 | 3.1328 | 0.2316 | 22.9384 |
| 3.3376 | 0.43 | 32 | 3.1309 | 0.2315 | 22.8936 |
| 3.218 | 0.44 | 33 | 3.1309 | 0.2316 | 22.8936 |
| 3.0786 | 0.45 | 34 | 3.1289 | 0.2316 | 22.8490 |
| 3.0125 | 0.47 | 35 | 3.1270 | 0.2317 | 22.8044 |
| 3.2634 | 0.48 | 36 | 3.1270 | 0.2317 | 22.8044 |
| 2.9888 | 0.49 | 37 | 3.125 | 0.2318 | 22.7599 |
| 3.1624 | 0.51 | 38 | 3.1230 | 0.2318 | 22.7155 |
| 2.9807 | 0.52 | 39 | 3.1211 | 0.2319 | 22.6712 |
| 3.446 | 0.53 | 40 | 3.1211 | 0.2319 | 22.6712 |
| 3.1338 | 0.55 | 41 | 3.1191 | 0.2320 | 22.6269 |
| 3.1841 | 0.56 | 42 | 3.1191 | 0.2320 | 22.6269 |
| 3.1079 | 0.57 | 43 | 3.1172 | 0.2320 | 22.5828 |
| 3.0918 | 0.59 | 44 | 3.1152 | 0.2321 | 22.5387 |
| 3.0302 | 0.6 | 45 | 3.1152 | 0.2322 | 22.5387 |
| 3.1123 | 0.61 | 46 | 3.1133 | 0.2323 | 22.4947 |
| 2.9985 | 0.63 | 47 | 3.1113 | 0.2324 | 22.4508 |
| 3.3816 | 0.64 | 48 | 3.1113 | 0.2324 | 22.4508 |
| 3.0813 | 0.65 | 49 | 3.1094 | 0.2324 | 22.4070 |
| 3.2024 | 0.67 | 50 | 3.1094 | 0.2325 | 22.4070 |
| 3.0178 | 0.68 | 51 | 3.1074 | 0.2325 | 22.3633 |
| 3.1646 | 0.69 | 52 | 3.1074 | 0.2326 | 22.3633 |
| 3.0046 | 0.71 | 53 | 3.1055 | 0.2327 | 22.3197 |
| 3.0266 | 0.72 | 54 | 3.1055 | 0.2327 | 22.3197 |
| 3.3857 | 0.73 | 55 | 3.1035 | 0.2327 | 22.2761 |
| 3.064 | 0.75 | 56 | 3.1035 | 0.2328 | 22.2761 |
| 3.176 | 0.76 | 57 | 3.1016 | 0.2328 | 22.2327 |
| 3.1851 | 0.77 | 58 | 3.1016 | 0.2329 | 22.2327 |
| 3.0811 | 0.79 | 59 | 3.0996 | 0.2329 | 22.1893 |
| 3.0205 | 0.8 | 60 | 3.0996 | 0.2330 | 22.1893 |
| 3.26 | 0.81 | 61 | 3.0977 | 0.2330 | 22.1460 |
| 3.2922 | 0.83 | 62 | 3.0977 | 0.2331 | 22.1460 |
| 3.5349 | 0.84 | 63 | 3.0957 | 0.2331 | 22.1028 |
| 3.3525 | 0.85 | 64 | 3.0957 | 0.2331 | 22.1028 |
| 3.135 | 0.87 | 65 | 3.0938 | 0.2331 | 22.0596 |
| 3.1707 | 0.88 | 66 | 3.0938 | 0.2332 | 22.0596 |
| 3.0127 | 0.89 | 67 | 3.0918 | 0.2332 | 22.0166 |
| 3.0952 | 0.91 | 68 | 3.0918 | 0.2332 | 22.0166 |
| 3.1023 | 0.92 | 69 | 3.0898 | 0.2334 | 21.9736 |
| 3.3821 | 0.93 | 70 | 3.0898 | 0.2334 | 21.9736 |
| 3.1118 | 0.95 | 71 | 3.0879 | 0.2334 | 21.9308 |
| 3.1143 | 0.96 | 72 | 3.0879 | 0.2335 | 21.9308 |
| 3.1118 | 0.97 | 73 | 3.0879 | 0.2335 | 21.9308 |
| 3.0596 | 0.99 | 74 | 3.0859 | 0.2336 | 21.8880 |
| 3.1033 | 1.0 | 75 | 3.0859 | 0.2336 | 21.8880 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nlp-tlp/mwo-ner
|
nlp-tlp
| 2022-11-29T12:00:39Z | 4 | 3 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:mwo_ner",
"region:us"
] |
token-classification
| 2022-11-29T11:58:19Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- mwo_ner
widget:
- text: "replace seal on pump"
---
## MWO NER Test
A flair-based NER model for MWOs (maintenance work orders). There are three classes: `Item`, `Activity`, and `Observation`.
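A minimal tagging sketch with flair (assuming the tagger can be loaded from the Hub by its model id, as with other flair sequence taggers):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("nlp-tlp/mwo-ner")

sentence = Sentence("replace seal on pump")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```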
|
clp/segformer-b0-scene-parse-150
|
clp
| 2022-11-29T11:47:29Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T15:53:44Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3118
- Mean Iou: 0.0859
- Mean Accuracy: 0.1493
- Overall Accuracy: 0.5430
- Per Category Iou: [0.4898642376841085, 0.502026813829342, 0.9487341030299479, 0.44331193050176815, 0.28502594514455154, 0.5132976114794296, 0.8390207156308851, 0.0, 0.30530825819472024, 0.0, 0.06594624784212842, 0.0, 0.03397963180571876, 0.0, 0.0007459827819109256, 0.0, nan, 0.04554975143210437, 0.0, 0.07792795056021705, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
- Per Category Accuracy: [0.8215553632658342, 0.819071257846768, 0.9731147245348802, 0.8672811704363634, 0.9004683840749415, 0.594073476114797, 0.9732440887086908, 0.0, 0.40956851614311834, 0.0, 0.5229850345614389, 0.0, 0.034648027958062905, nan, 0.0007464041475862904, 0.0, nan, 0.0476077438413251, 0.0, 0.5009150608246313, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
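A minimal inference sketch; the image path is a placeholder, and the preprocessing config is assumed to follow the `nvidia/mit-b0` base checkpoint:
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained("clp/segformer-b0-scene-parse-150")

image = Image.open("scene.jpg")
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class ids at reduced resolution
```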
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.2849 | 1.0 | 20 | 4.2070 | 0.0194 | 0.0679 | 0.3746 | [0.3949829725229674, 0.4135772915291814, 0.0, 0.26980840849544657, 0.1282559786684443, 0.15076540186066723, 0.00908901592761032, 0.0, 0.013565775517419566, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0003970617431010522, 0.008885447023579041, 0.0, 0.0, 0.0, 0.005040122024006897, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.002655312914892643, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9524952286989713, 0.755535418800725, 0.0, 0.48244304323326054, 0.9011709601873537, 0.16045676614279827, 0.011822582618269517, 0.0, 0.013613165579542043, 0.0, 0.0, 0.0, 0.0, nan, 0.0004034617013979948, 0.07362999240057418, nan, 0.0, 0.0, 0.04090860157175153, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0034295175023651846, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.7699 | 2.0 | 40 | 3.9727 | 0.0380 | 0.1002 | 0.4224 | [0.43442101571739283, 0.35923049538654755, 0.6190543160517142, 0.3355717837774341, 0.10588647687723779, 0.31387526278906797, 0.038367652125468235, 0.0, 0.04485789722234148, 0.0, 0.07154189015637985, 0.0, 0.0, 0.0, 0.004233122680308957, 0.0003664849512116909, 0.0, 0.0, 0.0, 0.0284352014981458, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0007874615794955165, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9416870188264217, 0.6175399889685604, 0.926106819752938, 0.704992945957396, 0.9903981264637002, 0.35779874372493126, 0.04573495744569302, 0.0, 0.045626483993340794, 0.0, 0.12710556338500772, 0.0, 0.0, nan, 0.004256520949748845, 0.0013510090348729208, nan, 0.0, 0.0, 0.24846592744105933, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0009165089877010407, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.7161 | 3.0 | 60 | 3.6079 | 0.0535 | 0.1182 | 0.4695 | [0.46147636301683304, 0.42190388170055454, 0.5905298672673311, 0.34255470866251286, 0.10127362853686882, 0.2992112324204293, 0.616176968407381, 0.0, 0.031915928772988225, 0.0, 0.05093061049274829, 0.0, 0.0, 0.0, 0.006854943777434808, 0.021351935880581825, nan, 0.0002887139107611549, 0.0, 0.052485352485352486, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9419053665006146, 0.7625719013474116, 0.9179916443749813, 0.6481040461001222, 1.0, 0.32631618778537375, 0.779355481379214, 0.0, 0.03222619469992631, 0.0, 0.11146902892423327, 0.0, 0.0, nan, 0.006899195093905711, 0.08502913113231444, nan, 0.00029013029487788154, 0.0, 0.2989557541177737, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.8074 | 4.0 | 80 | 3.4130 | 0.0568 | 0.1189 | 0.4832 | [0.475136642179407, 0.45819879963242616, 0.6245854607342572, 0.31596148184808626, 0.12156252316359054, 0.31925064356714705, 0.5921768947963801, 0.0, 0.052846247434062785, 0.0, 0.03856766297226889, 0.0, 0.0, 0.0, 0.005636392708666453, 0.0001625685836212152, nan, 0.0, 0.0, 0.06471839249046642, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9399179983179142, 0.8159062852940404, 0.9651498301824412, 0.6092060397412009, 0.9601873536299765, 0.35716808354986, 0.8782527393788994, 0.0, 0.05381949182609645, 0.0, 0.16790819408093416, 0.0, 0.0, nan, 0.005688809989711727, 0.0003377522587182302, nan, 0.0, 0.0, 0.19000968887931963, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.3666 | 5.0 | 100 | 3.4479 | 0.0696 | 0.1338 | 0.5077 | [0.46280186689480507, 0.47811968526761967, 0.6189516129032258, 0.39204509433188267, 0.12150226270680849, 0.48382140822535485, 0.7870309951060359, 0.0, 0.09098262661206216, 0.0, 0.02553216306059369, 0.0, 0.0, 0.0, 0.007265179034769071, 0.008072378426861131, nan, 0.0, 0.0, 0.140759523528718, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7732317881865821, 0.8144485593465185, 0.9596345165459409, 0.8648009443779486, 0.977751756440281, 0.6712873035493555, 0.8431345135527166, 0.0, 0.0976174231052646, 0.0, 0.1469028924233273, 0.0, 0.0, nan, 0.007343002965443505, 0.01747867938866841, nan, 0.0, 0.0, 0.5139412207987942, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.8715 | 6.0 | 120 | 3.2604 | 0.0728 | 0.1299 | 0.5149 | [0.50879365259482, 0.4533002292169846, 0.900409470239023, 0.35700422307361396, 0.15217811822965027, 0.47367637662639406, 0.6628619419365922, 0.0, 0.12773258835090318, 0.0, 0.05044635946127251, 0.0, 0.0, 0.0, 0.0007838250663236595, 0.025494428563094335, nan, 0.0005788864330070519, 0.0, 0.07236342520212623, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9246377045998577, 0.7931605074462217, 0.9550359171650987, 0.7270640786762256, 0.9211943793911007, 0.548577651085156, 0.9103650757588997, 0.0, 0.13551486040228158, 0.0, 0.3280316757264613, 0.0, 0.0, nan, 0.0007867503177260898, 0.0737988685299333, nan, 0.0005802605897557631, 0.0, 0.17439982775325655, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.4302 | 7.0 | 140 | 3.1432 | 0.0594 | 0.1213 | 0.4632 | [0.45221752264588605, 0.37219674969901695, 0.605783851771199, 0.3125698077576481, 0.12471020243040969, 0.42464017248412095, 0.6040006848533273, 0.0, 0.06926821236351662, 0.0, 0.0440532531292848, 0.0, 0.0, 0.0, 0.0, 0.0015676977784575383, nan, 0.0, 0.0, 0.07605052848672338, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8683060263958077, 0.618740314658682, 0.9698987105888011, 0.6848796699612953, 0.9385245901639344, 0.4902163584840611, 0.9247741213890005, 0.0, 0.07348052727818565, 0.0, 0.3342057580028186, 0.0, 0.0, nan, 0.0, 0.0034619606518618592, nan, 0.0, 0.0, 0.15878996662719347, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.4766 | 8.0 | 160 | 3.0017 | 0.0731 | 0.1267 | 0.5143 | [0.4688651010981108, 0.44508907466943504, 0.8341699394002445, 0.3667585998450989, 0.15656454924159374, 0.5731453244361243, 0.7303214047877747, 0.0, 0.10742113264918282, 0.0, 0.027347890608437567, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.008315844700944387, 0.0, 0.08233957978421351, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8567558387785469, 0.7443437606702913, 0.9639926662859547, 0.798571093643958, 0.9197892271662763, 0.7182714865921647, 0.9103650757588997, 0.0, 0.1110722960617887, 0.0, 0.20777129051741494, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.008361027588753494, 0.0, 0.09365916675637852, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.7613 | 9.0 | 180 | 2.9787 | 0.0675 | 0.1268 | 0.4974 | [0.4946951488018788, 0.43475820007042393, 0.8041630056802009, 0.27142083984459003, 0.13245957905008665, 0.563133587635599, 0.5246769633385517, 0.0, 0.17340080548145823, 0.0, 0.02205332499422154, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.014390357940672243, 0.0, 0.14129719051799824, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.872157436760044, 0.6875055813831324, 0.976556160019236, 0.6229562813884331, 0.9218969555035129, 0.6597924707583899, 0.981047167997763, 0.0, 0.21715018694904614, 0.0, 0.10885175491577746, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.014612016669304215, 0.0, 0.2772096027559479, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.423 | 10.0 | 200 | 2.8971 | 0.0705 | 0.1221 | 0.4786 | [0.45285400219846106, 0.292995708500935, 0.8659067385015526, 0.41603214969939956, 0.1778944797264289, 0.46283203885294827, 0.9158788235968208, 0.0, 0.11377199379602905, 0.0, 0.02756293067079173, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.006541312879973321, 0.0, 0.006434223111033823, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.916192906126674, 0.4576313923252699, 0.9806287758107661, 0.9056938257589779, 0.8528103044496487, 0.524885850508312, 0.9777179706751018, 0.0, 0.12091918888676619, 0.0, 0.35514395007046506, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.006725747744896344, 0.0, 0.0079664118850253, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.9947 | 11.0 | 220 | 2.7385 | 0.0643 | 0.1201 | 0.4760 | [0.4625717328755147, 0.37213716519109774, 0.8277628134602982, 0.2659240653368483, 0.15398480224484362, 0.527751424867652, 0.4902700616280028, 0.0, 0.17549662897009846, 0.0, 0.03796166432912576, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.02023508790019075, 0.0, 0.007948046912862267, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8824581904638675, 0.5921939432143514, 0.9836945087313276, 0.5800315066859162, 0.8803278688524591, 0.6127788569074106, 0.987792943150242, 0.0, 0.21227040746704512, 0.0, 0.24521844171532112, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.020704752861739728, 0.0, 0.00882764560232533, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.5021 | 12.0 | 240 | 2.7417 | 0.0718 | 0.1267 | 0.4906 | [0.45723768505572465, 0.38457056157271563, 0.9111909002931318, 0.36509902170864206, 0.1882208839440645, 0.5653003453339632, 0.7384534282231228, 0.0, 0.13432339471333826, 0.0, 0.03152366835052954, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.025990476413877226, 0.0, 0.002576370997423629, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8224813191434301, 0.6344469834265752, 0.9763307384809594, 0.7870090448044816, 0.8637002341920375, 0.6490165905670056, 0.979220915398193, 0.0, 0.1543298490761715, 0.0, 0.43386349909402055, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.02922403333860843, 0.0, 0.003014318010550113, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.1942 | 13.0 | 260 | 2.7584 | 0.0676 | 0.1218 | 0.4845 | [0.4504119105257257, 0.40036808030610466, 0.8516919118998241, 0.32959773438753465, 0.1681251367315686, 0.4346956386731514, 0.658089415906469, 0.0, 0.1361765416368657, 0.0, 0.04049711603766891, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.04199944918755164, 0.0, 0.004621848739495798, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8772097593323414, 0.657665537257374, 0.9751284902768177, 0.726665103671804, 0.8998829039812647, 0.5061762653145313, 0.9669789063455724, 0.0, 0.15692803144018996, 0.0, 0.26810281189181934, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.048267130875138474, 0.0, 0.007105178167725266, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.9015 | 14.0 | 280 | 2.7956 | 0.0691 | 0.1297 | 0.4652 | [0.46908042480668444, 0.3684401690587303, 0.9327478223310057, 0.2938101507593599, 0.13839169684473168, 0.5286547501595961, 0.573426750047988, 0.0, 0.23750374988851683, 0.0, 0.03078130611368424, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.020949198194378633, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6516060684479524, 0.6154624011766869, 0.9735805957139851, 0.6516989342842923, 0.9352459016393443, 0.7659788266357223, 0.9919347791894584, 0.0, 0.3197838486940859, 0.0, 0.55794913093081, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.022155404336129135, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.1112 | 15.0 | 300 | 2.5442 | 0.0756 | 0.1348 | 0.5229 | [0.49677059303217863, 0.4302482870914963, 0.8032426287041843, 0.4219673428099668, 0.16821276689509287, 0.500942291040595, 0.8748079268775643, 0.0, 0.2223202963773341, 0.0, 0.05334747004087555, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.03619629085567389, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8350302451963512, 0.7176240380322013, 0.9768266658651679, 0.8819116249799485, 0.8939110070257611, 0.6202542821825887, 0.9651002254417085, 0.0, 0.2928358942168609, 0.0, 0.5132541440171801, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.041594134092947196, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.3631 | 16.0 | 320 | 2.6116 | 0.0761 | 0.1291 | 0.4780 | [0.4773444435989143, 0.30408899790354993, 0.9460832250813714, 0.3578894375553185, 0.2547144028423066, 0.5841460034676791, 0.7330094544692665, 0.0, 0.23688384443079752, 0.0, 0.024772124309757743, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.03830117124478807, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7609153619719221, 0.47736453654821004, 0.9741065793032971, 0.8099562772752886, 0.8730679156908665, 0.7620771423526147, 0.9782597298194718, 0.0, 0.29266122649491005, 0.0, 0.477484732568284, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0482143799124334, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.0519 | 17.0 | 340 | 2.5895 | 0.0709 | 0.1269 | 0.4954 | [0.48482850272259875, 0.38933025954926664, 0.874342238072808, 0.34184587293666174, 0.181434395580537, 0.4821329973241969, 0.6515304786613667, 0.0, 0.21672806399768857, 0.0, 0.04150789065390832, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.025378816152870045, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8710737853399754, 0.6431828329787513, 0.9763457665835111, 0.7269365712005857, 0.880679156908665, 0.5727151181857169, 0.9567379109068349, 0.0, 0.2620397914904069, 0.0, 0.42909871820683176, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.026902990979585376, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.1177 | 18.0 | 360 | 2.5314 | 0.0793 | 0.1292 | 0.5039 | [0.4653830828650731, 0.3945759961048446, 0.9324341886759758, 0.4001519221337092, 0.29972535127174305, 0.4581277542714083, 0.8735896186653674, 0.0, 0.21039307302160773, 0.0, 0.03642280275104697, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.05377909589287319, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8311869217830109, 0.638557507945263, 0.9703946379730095, 0.9295336105592642, 0.8817330210772834, 0.526248076486466, 0.9870764229915591, 0.0, 0.26446330613247454, 0.0, 0.3507818267230387, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.07891544020678377, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.8557 | 19.0 | 380 | 2.5372 | 0.0830 | 0.1361 | 0.5226 | [0.5031437202609337, 0.43567175985678, 0.9378157906298971, 0.3999662626976136, 0.1953096101205889, 0.5999859316277106, 0.896640684170664, 0.0, 0.2707113789325403, 0.0, 0.02527710616384589, 0.0, 0.007388916625062406, 0.0, 0.0, 0.0, nan, 0.04184551718065553, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7330558161350844, 0.6871273605967484, 0.9623245469027081, 0.9362421490356733, 0.927400468384075, 0.7530965414595999, 0.9930882018839238, 0.0, 0.3776261564913621, 0.0, 0.3781625394268841, 0.0, 0.007388916625062406, nan, 0.0, 0.0, nan, 0.050139790051168434, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.7307 | 20.0 | 400 | 2.4994 | 0.0777 | 0.1317 | 0.5046 | [0.4800192398950199, 0.3449591461292408, 0.9396956273847639, 0.4272217614198487, 0.268896641357138, 0.4711749125273863, 0.8287388184697303, 0.0, 0.2517840493437567, 0.0, 0.04121483255095933, 0.0, 0.002596105841238143, 0.0, 0.0, 0.0, nan, 0.05892144437929143, 0.0, 0.0013361541031065582, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9018790030406936, 0.5466813752528038, 0.9659763758227886, 0.8792010628365067, 0.9206088992974238, 0.5452309477561111, 0.9900735743870257, 0.0, 0.30659643568679895, 0.0, 0.43896382793101135, 0.0, 0.002596105841238143, nan, 0.0, 0.0, nan, 0.08215962441314555, 0.0, 0.002583701151900097, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.7527 | 21.0 | 420 | 2.4304 | 0.0807 | 0.1338 | 0.5327 | [0.5015856972981559, 0.43082591968073747, 0.9418403911157535, 0.3963885377075013, 0.2506914330590364, 0.5804233460278098, 0.7429052091886804, 0.0, 0.2571854237623848, 0.0, 0.05346003315615867, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.042945515108256005, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.880929756744517, 0.7100044651065059, 0.9756544738661297, 0.8111696548660554, 0.8597189695550351, 0.6888448828233394, 0.9723877597385575, 0.0, 0.32602276138751674, 0.0, 0.4198375947922958, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0476077438413251, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.9816 | 22.0 | 440 | 2.4334 | 0.0824 | 0.1374 | 0.5165 | [0.4732090059520191, 0.48237301785754877, 0.9438233486493606, 0.36582122198257794, 0.33411702080468436, 0.5619725061466855, 0.700576076744301, 0.0, 0.28868093138237455, 0.0, 0.03287401235949254, 0.0, 0.043111067465383485, 0.0, 0.0, 0.0, nan, 0.05716521350324167, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6924512356860969, 0.7802931211094476, 0.9692975864867301, 0.7897319463810499, 0.8819672131147541, 0.7639859404824971, 0.990388144212789, 0.0, 0.3773423214431921, 0.0, 0.5126501577075364, 0.0, 0.04383424862705941, nan, 0.0, 0.0, nan, 0.06744210581843119, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.5813 | 23.0 | 460 | 2.4229 | 0.0811 | 0.1348 | 0.5258 | [0.4807211786494602, 0.49459326575865226, 0.938036321539686, 0.39272092584681345, 0.2735519637135966, 0.47704798693133726, 0.7735578328901824, 0.0, 0.2279823899472586, 0.0, 0.051354439182700266, 0.0, 0.05645479668377418, 0.0, 0.0, 0.0, nan, 0.05121797646688442, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8175887138513295, 0.7979302918078428, 0.9741666917135042, 0.8394063910037306, 0.8898126463700234, 0.575834783851736, 0.9939882211076353, 0.0, 0.30838132147048386, 0.0, 0.4162807865243943, 0.0, 0.05711432850723914, nan, 0.0, 0.0, nan, 0.06865537796064779, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.7373 | 24.0 | 480 | 2.4194 | 0.0833 | 0.1335 | 0.5209 | [0.47295856989088453, 0.4010758246373534, 0.9478356084385722, 0.4552020300605114, 0.3144253866576763, 0.47478980784006647, 0.8815378441540047, 0.0, 0.23782469737163758, 0.0, 0.05128840994656988, 0.0, 0.03667553978112984, 0.0, 0.0, 0.0, nan, 0.059630358900921866, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9061125218347674, 0.6521419378562235, 0.9756544738661297, 0.8920422995767574, 0.8736533957845434, 0.5429942063351917, 0.9789412977752923, 0.0, 0.28675527414644797, 0.0, 0.45802295147976646, 0.0, 0.037144283574638046, nan, 0.0, 0.0, nan, 0.07029065780450493, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.855 | 25.0 | 500 | 2.3374 | 0.0807 | 0.1361 | 0.5333 | [0.48604188185600444, 0.4597550350021532, 0.9293649430223901, 0.3959473663050158, 0.17281708714356747, 0.5975194939881001, 0.7701785811214243, 0.0, 0.22781434599156117, 0.0, 0.05470044278442122, 0.0, 0.07640671273445213, 0.0, 0.0, 0.0, nan, 0.02489414513067601, 0.0, 0.0003542151604088655, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8514507989907485, 0.7234575683555275, 0.9755943614559226, 0.8506558408706704, 0.9360655737704918, 0.7097575742287026, 0.9918736128344489, 0.0, 0.29470811386152124, 0.0, 0.3656130461042883, 0.0, 0.07728407388916625, nan, 0.0, 0.0, nan, 0.02698211742364298, 0.0, 0.0007535795026375283, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.7691 | 26.0 | 520 | 2.3883 | 0.0849 | 0.1364 | 0.5267 | [0.474469192247138, 0.45106000902680193, 0.9462431454906078, 0.4263770051529127, 0.38456560427115355, 0.4726483862293509, 0.7962471970327331, 0.0, 0.26753188481254314, 0.0, 0.059773452481696854, 0.0, 0.05490234185210378, 0.0, 0.0, 0.0, nan, 0.07469007808708696, 0.0, 0.006923837784371909, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8627361389661642, 0.7402227300186484, 0.9750533497640588, 0.8430259580541537, 0.8350117096018735, 0.5468664598101293, 0.9866919487600706, 0.0, 0.34497966758549165, 0.0, 0.5166767331051607, 0.0, 0.056415376934598103, nan, 0.0, 0.0, nan, 0.08930737985968244, 0.0, 0.024868123587038434, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.1586 | 27.0 | 540 | 2.3779 | 0.0854 | 0.1406 | 0.5312 | [0.4678501563643622, 0.5013837697819475, 0.9380973066898349, 0.41811471015546997, 0.27226958694693515, 0.578128105346164, 0.8283608980534332, 0.0, 0.27534003499046283, 0.0, 0.03832732681196728, 0.0, 0.07413157384048127, 0.0, 0.0, 0.0, nan, 0.03650256622707082, 0.0, 0.012569581612497755, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7107742446787864, 0.8174900848370236, 0.973595623816537, 0.8629253505427295, 0.8675644028103044, 0.7582805680986857, 0.9842802467625522, 0.0, 0.361654976665484, 0.0, 0.539561103281659, 0.0, 0.0762855716425362, nan, 0.0, 0.0, nan, 0.04183151342512001, 0.0, 0.037678975131876416, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.4882 | 28.0 | 560 | 2.3883 | 0.0807 | 0.1330 | 0.5090 | [0.44901952206920415, 0.39232311084385646, 0.9421616696316223, 0.42997326597550106, 0.2785992790406827, 0.44117270081577875, 0.7924737017866088, 0.0, 0.23357056159947687, 0.0, 0.07039609536324386, 0.0, 0.0980335173883561, 0.0, 0.0, 0.0, nan, 0.05874853925422288, 0.0, 0.008242107942973524, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9130976256712169, 0.6169096209912537, 0.9789606564275195, 0.8513920937138815, 0.8868852459016393, 0.4920368641894335, 0.9953251428671293, 0.0, 0.28465380311672717, 0.0, 0.42782363599758405, 0.0, 0.10104842735896155, nan, 0.0, 0.0, nan, 0.0729282059397584, 0.0, 0.027882441597588545, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.5824 | 29.0 | 580 | 2.3286 | 0.0870 | 0.1422 | 0.5448 | [0.49555070795060346, 0.47669886943444867, 0.9465301980853222, 0.429631907145671, 0.23039130181987325, 0.6139492231693006, 0.8638062252198879, 0.0, 0.2764489478441842, 0.0, 0.05525274862179587, 0.0, 0.08505129926167418, 0.0, 0.0, 0.0, nan, 0.04205075812106718, 0.0, 0.007407788466484901, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8170509316167432, 0.7459905970110051, 0.9672838207447928, 0.9195921406037273, 0.9279859484777517, 0.734681264347519, 0.9937522937383129, 0.0, 0.35889850167844767, 0.0, 0.4762096503590363, 0.0, 0.08856714927608587, nan, 0.0, 0.0, nan, 0.04776599672944031, 0.0, 0.031004413822801162, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.5912 | 30.0 | 600 | 2.2626 | 0.0859 | 0.1400 | 0.5506 | [0.49475158849729506, 0.5110290319658362, 0.9474920364634558, 0.43933854351977947, 0.264796217689892, 0.5247187330936232, 0.8601239748970084, 0.0, 0.2863141923823105, 0.0, 0.06945163324267606, 0.0, 0.021489403842345017, 0.0, 0.0, 0.0, nan, 0.03598436786276202, 0.0, 0.01033003818191328, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.851810668305622, 0.8445381241299609, 0.9700039073066634, 0.8837296347939109, 0.8984777517564403, 0.5994424964052371, 0.9760402649376977, 0.0, 0.37699298599929043, 0.0, 0.49526877390779145, 0.0, 0.021667498751872193, nan, 0.0, 0.0, nan, 0.03740043255789418, 0.0, 0.04252341479168909, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.2659 | 31.0 | 620 | 2.3314 | 0.0821 | 0.1372 | 0.5285 | [0.47959682999872194, 0.47351398814605233, 0.9477783135346806, 0.39772012056152584, 0.25233860342555997, 0.4977334076638695, 0.7849239425607517, 0.0, 0.2803309133884856, 0.0, 0.06148684892400287, 0.0, 0.026432587040142026, 0.0, 0.0, 0.0, nan, 0.04319110519813742, 0.0, 0.02313470205307962, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8421305880830692, 0.7648569852651486, 0.9747978720206787, 0.8369261649453158, 0.8970725995316159, 0.5705792823928088, 0.9824889463658447, 0.0, 0.36252285690892716, 0.0, 0.4538621569022213, 0.0, 0.02675986020968547, nan, 0.0, 0.0, nan, 0.04795062509890805, 0.0, 0.09947249434815374, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.1962 | 32.0 | 640 | 2.3454 | 0.0823 | 0.1363 | 0.5250 | [0.47474492138216845, 0.4474584311384102, 0.9481941804357362, 0.40227477381729926, 0.26351167843563283, 0.49494855699587714, 0.8014021173388096, 0.0, 0.28657288639780826, 0.0, 0.06005103430190925, 0.0, 0.032823829120125415, 0.0, 0.0, 0.0, nan, 0.04381559844824714, 0.0, 0.022763889610727093, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8467239761920166, 0.718480287868043, 0.9745273661747468, 0.8562661697988261, 0.9088992974238876, 0.5602574775274758, 0.9888851994896979, 0.0, 0.3859665402145138, 0.0, 0.3995704986242534, 0.0, 0.03344982526210684, nan, 0.0, 0.0, nan, 0.04855726117001635, 0.0, 0.09430509204435354, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.3758 | 33.0 | 660 | 2.2960 | 0.0858 | 0.1381 | 0.5521 | [0.5224921581254285, 0.5076405148598259, 0.9404880298640839, 0.40783730221277903, 0.3364309496595935, 0.5956182247041231, 0.7330045437207653, 0.0, 0.30502471233599465, 0.0, 0.054505055661064276, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.057240453824569965, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8136200426991007, 0.8610248732697712, 0.9806137477082144, 0.7981967975057892, 0.7927400468384075, 0.7251288648957729, 0.9571485992904702, 0.0, 0.429164051199476, 0.0, 0.48728273270250316, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.06187687925304637, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.1937 | 34.0 | 680 | 2.3717 | 0.0809 | 0.1382 | 0.5162 | [0.46788815264908856, 0.39713524992973664, 0.9420111667197038, 0.4486386060669676, 0.2132772688034307, 0.5013255182231519, 0.8557263450623428, 0.0, 0.27594406482049544, 0.0, 0.057764068612699365, 0.0, 0.06538386020027488, 0.0, 0.0, 0.0, nan, 0.03816454915505119, 0.0, 0.02353654837731271, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8601260755644692, 0.6272187639534579, 0.9787051786841393, 0.87901597133961, 0.8968384074941452, 0.5804049679204191, 0.9841054857482393, 0.0, 0.35598373406839334, 0.0, 0.5152674317159922, 0.0, 0.06650024962556166, nan, 0.0, 0.0, nan, 0.040802869652371156, 0.0, 0.1253095058671547, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.5121 | 35.0 | 700 | 2.3246 | 0.0880 | 0.1432 | 0.5554 | [0.5131815824963825, 0.5392289711975226, 0.9466915273149031, 0.4283665694793061, 0.32593553969284017, 0.5269490633089107, 0.8233152953838795, 0.0, 0.30573839098707556, 0.0, 0.07293464323245567, 0.0, 0.021460506706408346, 0.0, 0.000403437285673942, 0.0, nan, 0.04180831826401447, 0.0, 0.0274542272735826, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8223984278967458, 0.9082735797021512, 0.9690421087433501, 0.8514702434570156, 0.8822014051522248, 0.6059761358189754, 0.9625225004805927, 0.0, 0.4246827324581753, 0.0, 0.5076840480504664, 0.0, 0.021567648527209188, nan, 0.0004034617013979948, 0.0, nan, 0.04573508466529514, 0.0, 0.15706749919259338, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.8519 | 36.0 | 720 | 2.3376 | 0.0842 | 0.1419 | 0.5225 | [0.46978904879856803, 0.4126084563774106, 0.9491212653778559, 0.4236789043334405, 0.2236540258857515, 0.5790747855376033, 0.8381056863279266, 0.0, 0.26879812193451924, 0.0, 0.04799207015708248, 0.0, 0.07114199041002055, 0.0, 0.0, 0.0, nan, 0.043645116606489995, 0.0, 0.05208707360861759, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8405111761661384, 0.6478764478764478, 0.9739112139701241, 0.8616667283638323, 0.9247072599531616, 0.6939153906309125, 0.9946435749113088, 0.0, 0.3149914030730602, 0.0, 0.46788806120394605, 0.0, 0.072591113330005, nan, 0.0, 0.0, nan, 0.05069367515957166, 0.0, 0.24986543223167187, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.4294 | 37.0 | 740 | 2.2841 | 0.0839 | 0.1441 | 0.5415 | [0.4954846291829034, 0.4962055595459022, 0.9514013206162876, 0.42174959529692424, 0.2305590278787174, 0.5754330741060775, 0.7389902340151097, 0.0, 0.28239909124140666, 0.0, 0.0626513819303892, 0.0, 0.024224519940915804, 0.0, 0.0, 0.0, nan, 0.042740019564913585, 0.0, 0.04342717990344769, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8097565019085204, 0.8020171775273817, 0.9743921132517809, 0.8144149257783098, 0.9286885245901639, 0.7055699906662294, 0.9812306670627916, 0.0, 0.39216178597745693, 0.0, 0.5068787329709415, 0.0, 0.02456315526709935, nan, 0.0, 0.0, nan, 0.04839900828190114, 0.0, 0.2188610184088707, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.459 | 38.0 | 760 | 2.3317 | 0.0867 | 0.1431 | 0.5364 | [0.48030892247498846, 0.4902823326267501, 0.9503766450743031, 0.4450063131445763, 0.2943483275663206, 0.5136606089555695, 0.8482513022217497, 0.0, 0.27528882635446444, 0.0, 0.06326279885549861, 0.0, 0.047208220240403255, 0.0, 0.0, 0.0, nan, 0.060309323519102696, 0.0, 0.03801530978390878, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8180314744128874, 0.799466813752528, 0.9745423942772985, 0.8712421284699514, 0.8964871194379391, 0.5967979280711049, 0.9761538595970011, 0.0, 0.36950956578696, 0.0, 0.5252667606200926, 0.0, 0.048627059410883675, nan, 0.0, 0.0, nan, 0.07261170016352798, 0.0, 0.2052965873613952, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.034 | 39.0 | 780 | 2.3242 | 0.0843 | 0.1413 | 0.5302 | [0.4710699494256834, 0.4498673330662481, 0.9501882167556683, 0.4472577527454994, 0.2698644793152639, 0.5192405498658997, 0.887438278016587, 0.0, 0.2855318739559738, 0.0, 0.05453332753345803, 0.0, 0.04646858256210424, 0.0, 0.0, 0.0, nan, 0.04317150187487665, 0.0, 0.04550834280940336, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8402523937374652, 0.7196412155595829, 0.9786901505815876, 0.9050768541026558, 0.8860655737704918, 0.5966465696290877, 0.9705352929868405, 0.0, 0.3564094866406485, 0.0, 0.504798335682169, 0.0, 0.04762855716425362, nan, 0.0, 0.0, nan, 0.046157092366935694, 0.0, 0.2111099149531704, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.5324 | 40.0 | 800 | 2.2868 | 0.0883 | 0.1486 | 0.5472 | [0.49940400402662244, 0.5083495805389489, 0.9490884123181207, 0.4509435293966673, 0.2786019330186559, 0.5677884018378244, 0.8314473654804562, 0.0, 0.31029173032849505, 0.0, 0.05001421201913482, 0.0, 0.028731123749754853, 0.0, 0.0, 0.0, nan, 0.040415210998567375, 0.0, 0.07729100529100529, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7953698162644757, 0.8252252252252252, 0.9763307384809594, 0.8672153601263558, 0.8708430913348946, 0.672847136382365, 0.9752625784240052, 0.0, 0.4256706967604596, 0.0, 0.5549963089725521, 0.0, 0.029256115826260608, nan, 0.0, 0.0, nan, 0.042411774014875774, 0.0, 0.39315319194746473, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 0.9781 | 41.0 | 820 | 2.3017 | 0.0840 | 0.1446 | 0.5409 | [0.48699415491479947, 0.49647550426649484, 0.9501249433736172, 0.4438547243769686, 0.25326284487031225, 0.5011356078509064, 0.8192399537057384, 0.0, 0.2892233406202705, 0.0, 0.06541450159698718, 0.0, 0.03028817878847285, 0.0, 0.0010690872415532022, 0.0, nan, 0.04215851602023609, 0.0, 0.07201478580386807, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8351495277220676, 0.8013868095500749, 0.9770971717110998, 0.8630322922964919, 0.8975409836065574, 0.5955744473314666, 0.9772723300886038, 0.0, 0.39429600720504354, 0.0, 0.46043889671834104, 0.0, 0.030853719420868696, nan, 0.001069173508704686, 0.0, nan, 0.046157092366935694, 0.0, 0.35235224459037573, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.8622 | 42.0 | 840 | 2.2950 | 0.0857 | 0.1470 | 0.5461 | [0.4913800969050347, 0.4812997463766939, 0.9505381368926793, 0.4417351172128826, 0.27227900950187156, 0.5758524840465098, 0.8246946843046753, 0.0, 0.28934631315573683, 0.0, 0.06457821914751172, 0.0, 0.023750862323839557, 0.0, 0.0011094525356033405, 0.0, nan, 0.04595292627064958, 0.0, 0.08056667693255312, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8383843080804814, 0.7785569826386153, 0.9781791950948273, 0.8723979220394615, 0.8858313817330211, 0.6802679044423703, 0.9789150836231454, 0.0, 0.375579269124751, 0.0, 0.4660761022750151, 0.0, 0.024063904143784322, nan, 0.0011095196788444856, 0.0, nan, 0.04974415783088041, 0.0, 0.4224351383356658, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.2804 | 43.0 | 860 | 2.2951 | 0.0849 | 0.1478 | 0.5377 | [0.48451599766768166, 0.48109465504526955, 0.949467190948824, 0.45194919436160397, 0.2604415174221296, 0.49844030171455667, 0.8417267812231743, 0.0, 0.28899516544132386, 0.0, 0.07183873428904149, 0.0, 0.04780876494023904, 0.0, 0.0005042864346949068, 0.0, nan, 0.04418214972155143, 0.0, 0.07652985991644139, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8467118457656725, 0.7779371207942637, 0.9761353731477863, 0.866429749550639, 0.9076112412177986, 0.56566433742842, 0.9767655231470963, 0.0, 0.3934990857236429, 0.0, 0.48023622575666064, 0.0, 0.0491263105341987, nan, 0.0005043271267474935, 0.0, nan, 0.047291238065094686, 0.0, 0.5028528366885564, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 0.9332 | 44.0 | 880 | 2.2954 | 0.0855 | 0.1481 | 0.5415 | [0.48976540558400794, 0.4756783947143979, 0.9481545039411303, 0.4618954166858103, 0.27559142571544926, 0.5034935597610996, 0.8391252946181824, 0.0, 0.31266797442155014, 0.0, 0.06981689635173575, 0.0, 0.032793323514973, 0.0, 0.00171453929320639, 0.0, nan, 0.045068804344987855, 0.0, 0.07658063398579594, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8390959597593324, 0.7749323667691015, 0.979787202067867, 0.868362927407115, 0.8716627634660421, 0.5883681037309856, 0.9737246814980515, 0.0, 0.4419693785649955, 0.0, 0.5089591302597141, 0.0, 0.03334997503744384, nan, 0.001714712230941478, 0.0, nan, 0.045524080814474864, 0.0, 0.47593928302293037, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.324 | 45.0 | 900 | 2.2862 | 0.0861 | 0.1485 | 0.5382 | [0.4806012442350636, 0.46105107975440246, 0.9500672396655557, 0.4599660155102145, 0.2762265512265512, 0.5208319873658519, 0.865308936539415, 0.0, 0.3128679841831633, 0.0, 0.06213835289671949, 0.0, 0.046143250688705235, 0.0, 0.00020171050508310473, 0.0, nan, 0.04695333092149465, 0.0, 0.07882262154117517, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8332430290483276, 0.7474929740235863, 0.9767665534549608, 0.888488542836342, 0.8966042154566745, 0.6100964489627743, 0.9760490029884134, 0.0, 0.4115826533118638, 0.0, 0.5296959935574793, 0.0, 0.046829755366949576, nan, 0.0002017308506989974, 0.0, nan, 0.04792424961755552, 0.0, 0.4618365809021423, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.4788 | 46.0 | 920 | 2.2889 | 0.0847 | 0.1507 | 0.5451 | [0.4934466469350711, 0.49289445845878427, 0.9496988084245702, 0.44836450261034844, 0.23945292272076518, 0.5718315301391036, 0.852962747914311, 0.0, 0.3047983204022455, 0.0, 0.05879457369952657, 0.0, 0.03952530404080031, 0.0, 8.068908478405583e-05, 0.0, nan, 0.03758519476642953, 0.0, 0.08181665453323626, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8018737465226111, 0.7954692301631077, 0.9737759610471581, 0.8809943937842162, 0.9204918032786885, 0.6844302615978406, 0.9755684101990528, 0.0, 0.4033459785486204, 0.0, 0.5450640896584121, 0.0, 0.04023964053919121, nan, 8.069234027959896e-05, 0.0, nan, 0.039853352323679904, 0.0, 0.4718484228657552, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.0849 | 47.0 | 940 | 2.3190 | 0.0847 | 0.1499 | 0.5375 | [0.48270265167103205, 0.4883099968500236, 0.9489863037726748, 0.453687342170703, 0.26604904256784684, 0.5300531090789863, 0.8582729222561327, 0.0, 0.3025795526473707, 0.0, 0.057744191168373524, 0.0, 0.061194895591647334, 0.0, 0.0, 0.0, nan, 0.0461902785576524, 0.0, 0.07613617021276596, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8062831564986738, 0.785829853176792, 0.9756695019686814, 0.871238015325576, 0.9045667447306791, 0.6185178645005592, 0.9755596721483372, 0.0, 0.40182855271417267, 0.0, 0.5630494597678009, 0.0, 0.06320519221168247, nan, 0.0, 0.0, nan, 0.04990241071899562, 0.0, 0.48153730218538054, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.6687 | 48.0 | 960 | 2.2997 | 0.0839 | 0.1493 | 0.5419 | [0.4884039519989931, 0.5023855202911851, 0.9487134477260426, 0.4387315758616788, 0.2578810853950519, 0.5168101008425782, 0.8481733943976995, 0.0, 0.30145030552941604, 0.0, 0.06553419599907698, 0.0, 0.04092146189735614, 0.0, 6.051315152493142e-05, 0.0, nan, 0.03923789388905668, 0.0, 0.08363174912213608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8159248237044705, 0.8095238095238095, 0.974647590995161, 0.8772267535362759, 0.90807962529274, 0.602675680902769, 0.9741965362366963, 0.0, 0.4074124614502879, 0.0, 0.49553721226763303, 0.0, 0.04203694458312531, nan, 6.051925520969922e-05, 0.0, nan, 0.04111937542860157, 0.0, 0.5179244267413069, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.5107 | 49.0 | 980 | 2.3102 | 0.0833 | 0.1492 | 0.5368 | [0.48352497928524046, 0.48689002364782025, 0.9489120566960494, 0.44220181960314686, 0.2353657811850003, 0.5099299471402703, 0.8485805611101012, 0.0, 0.292445292371914, 0.0, 0.0640117658387975, 0.0, 0.062292844609085594, 0.0, 0.0001815211472136504, 0.0, nan, 0.040059637287969886, 0.0, 0.08392424840753396, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8282109238532703, 0.777104509757571, 0.9738961858675723, 0.8748329035097461, 0.9275175644028103, 0.5860809094960605, 0.9789500358260079, 0.0, 0.39157228241587294, 0.0, 0.4907053217904839, 0.0, 0.06380429355966051, nan, 0.00018155776562909766, 0.0, nan, 0.043229413936804344, 0.0, 0.5262138012703197, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.3671 | 50.0 | 1000 | 2.3118 | 0.0859 | 0.1493 | 0.5430 | [0.4898642376841085, 0.502026813829342, 0.9487341030299479, 0.44331193050176815, 0.28502594514455154, 0.5132976114794296, 0.8390207156308851, 0.0, 0.30530825819472024, 0.0, 0.06594624784212842, 0.0, 0.03397963180571876, 0.0, 0.0007459827819109256, 0.0, nan, 0.04554975143210437, 0.0, 0.07792795056021705, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8215553632658342, 0.819071257846768, 0.9731147245348802, 0.8672811704363634, 0.9004683840749415, 0.594073476114797, 0.9732440887086908, 0.0, 0.40956851614311834, 0.0, 0.5229850345614389, 0.0, 0.034648027958062905, nan, 0.0007464041475862904, 0.0, nan, 0.0476077438413251, 0.0, 0.5009150608246313, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
autoevaluate/binary-classification-not-evaluated
|
autoevaluate
| 2022-11-29T11:07:52Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T11:01:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Accuracy: 0.8968
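As a minimal usage sketch (assuming the standard `transformers` text-classification pipeline; the repo id is taken from this listing and the example sentence is purely illustrative):
```python
from transformers import pipeline

# Repo id as it appears in this listing; adjust if the checkpoint lives elsewhere.
classifier = pipeline(
    "text-classification",
    model="autoevaluate/binary-classification-not-evaluated",
)
print(classifier("A touching, sincere film that never loses its sense of humor."))
```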
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
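For illustration, a sketch of how the hyperparameters above could be expressed as `TrainingArguments` (the output directory is an assumption; only the values listed above come from this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="binary-classification",  # assumption; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer setting.
)
```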
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.175 | 1.0 | 4210 | 0.3009 | 0.8968 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
supermy/poetry
|
supermy
| 2022-11-29T10:53:20Z | 115 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"zh",
"dataset:poetry",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-29T00:56:18Z |
---
language: zh
datasets: poetry
inference:
parameters:
max_length: 108
num_return_sequences: 1
do_sample: True
widget:
- text: "物换 星移 几度 秋"
example_title: "滕王阁1"
- text: "秋水 共 长天 一色"
example_title: "滕王阁 2"
- text: "萍水 相逢,尽是 他乡 之 客。"
example_title: "滕王阁 3"
---
# Classical Chinese Poetry (古诗词)
## Model description
AI generation of classical Chinese poetry.
## How to use
Call the model with a `pipeline`:
```python
from transformers import AutoTokenizer, GPT2LMHeadModel, TextGenerationPipeline
model_checkpoint = "supermy/poetry"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = GPT2LMHeadModel.from_pretrained(model_checkpoint)
text_generator = TextGenerationPipeline(model, tokenizer)
text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
print(text_generator("举头 望 明月,", max_length=100, do_sample=True))
print(text_generator("物换 星移 几度 秋,", max_length=100, do_sample=True))
>>> print(text_generator("举头 望 明月,", max_length=100, do_sample=True))
[{'generated_text': '举头 望 明月, 何以 喻 无言 。 顾影 若为 舞 , 啸 风清 独 伤 。 四时 别有 意 , 千古 得 从容 。 赏音 我非 此 , 何如 鸥鹭 群 。 崎 山有 佳色 , 落落 样 相宜 。 不嫌 雪霜 温 , 宁 受 四时 肥 。 老 态 如 偷 面 , 冬 心 似 相知 。 春风 不可 恃 , 触 动 春 何为 。 岁晚 忽然 老 , 花前 岁月深 。 可笑 一场 梦 , 婵娟 乍 自 心 。 列 名 多 岁月 , 森 列 尽 林峦 。 试问 影 非 笑'}]
>>> print(text_generator("物换 星移 几度 秋,", max_length=100, do_sample=True))
[{'generated_text': '物换 星移 几度 秋, 消长 随时 向 一丘 。 渔者 下 逢 勾漏 令 , 漏声 高出 景阳 丘 。 天津 大尹 昔 从游 , 大尹 来时 春复 秋 。 旗鼓 日 严 宣 使 从 , 联镳 歌笑 又 风流 。 冈峦 比 并 瑶 溪 水 , 叠嶂 高 盘 黼黻 洲 。 花木 芳菲 三月 天 , 莺花 暖 翠 几 流年 。 一从 别后 多 携手 , 肠断 酒阑 怀 凛然 。 北阙 人称 似梦中 , 西山 别样 梦魂 香 。 多君 观国 亲 圭璧 , 能 预 陇西 称 巨 良 。 刷羽 刷羽'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("supermy/poetry")
model = AutoModelForCausalLM.from_pretrained("supermy/poetry")
```
## Training data
A very comprehensive collection of classical Chinese poetry: more than 850,000 poems in total, spanning from the pre-Qin period to the modern era.
## Statistics
| Dynasty / period | Poems | Authors |
|-----------------------|--------|--------|
| Song | 287114 | 9446 |
| Ming | 236957 | 4439 |
| Qing | 90089 | 8872 |
| Tang | 49195 | 2736 |
| Yuan | 37375 | 1209 |
| Modern | 28419 | 790 |
| Contemporary | 28219 | 177 |
| Late Ming / early Qing | 17700 | 176 |
| Late Yuan / early Ming | 15736 | 79 |
| Late Qing / early Republic | 15367 | 99 |
| Late Qing / early modern | 12464 | 48 |
| Late Song / early Yuan | 12058 | 41 |
| Northern and Southern Dynasties | 4586 | 434 |
| Late modern / early contemporary | 3426 | 23 |
| Wei and Jin | 3020 | 251 |
| Late Jin / early Yuan | 3019 | 17 |
| Jin | 2741 | 253 |
| Late Republic / early contemporary | 1948 | 9 |
| Sui | 1170 | 84 |
| Late Tang / early Song | 1118 | 44 |
| Pre-Qin | 570 | 8 |
| Late Sui / early Tang | 472 | 40 |
| Han | 363 | 83 |
| Late Song / early Jin | 234 | 9 |
| Liao | 22 | 7 |
| Qin | 2 | 2 |
| Late Wei-Jin / early Northern and Southern Dynasties | 1 | 1 |
| Total | 853385 | 29377 |
## Training procedure
Model: [GPT2](https://huggingface.co/gpt2)
Training environment: a single NVIDIA GPU with 16 GB of memory
BPE tokenization: "vocab_size"=50000
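A sketch of how a BPE tokenizer with this vocabulary size could be trained with the `tokenizers` library; the corpus path, minimum frequency, and special tokens are assumptions, and only `vocab_size=50000` comes from this card:
```python
from tokenizers import ByteLevelBPETokenizer

# Assumed corpus file; the actual corpus is the ~850k-poem dataset described above.
files = ["poetry_corpus.txt"]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=files,
    vocab_size=50000,  # matches the vocab_size reported in this card
    min_frequency=2,   # assumption
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],  # assumption
)
tokenizer.save_model("poetry-tokenizer")  # writes vocab.json and merges.txt
```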
```
***** Running training *****
Num examples = 16431
Num Epochs = 680
Instantaneous batch size per device = 24
Total train batch size (w. parallel, distributed & accumulation) = 192
Gradient Accumulation steps = 8
Total optimization steps = 57800
Number of trainable parameters = 124242432
GPT-2 size: 124.2M parameters
0%| | 0/57800 [00:00<?, ?it/s]You're using a PreTrainedTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
9%|▊ | 5000/57800 [6:58:57<72:53:18, 4.97s/it]***** Running Evaluation *****
Num examples = 1755
Batch size = 24
{'loss': 3.1345, 'learning_rate': 0.0004939065828881268, 'epoch': 58.82}
9%|▊ | 5000/57800 [6:59:14<72:53:18, Saving model checkpoint to poetry-trainer/checkpoint-5000
Configuration saved in poetry-trainer/checkpoint-5000/config.json
Model weights saved in poetry-trainer/checkpoint-5000/pytorch_model.bin
tokenizer config file saved in poetry-trainer/checkpoint-5000/tokenizer_config.json
Special tokens file saved in poetry-trainer/checkpoint-5000/special_tokens_map.json
17%|█▋ | 10000/57800 [13:55:32<65:40:41, 4.95s/it]***** Running Evaluation *****
Num examples = 1755
Batch size = 24
{'eval_loss': 11.14090633392334, 'eval_runtime': 16.8326, 'eval_samples_per_second': 104.262, 'eval_steps_per_second': 4.396, 'epoch': 58.82}
{'loss': 0.2511, 'learning_rate': 0.00046966687938531824, 'epoch': 117.64}
17%|█▋ | 10000/57800 [13:55:48<65:40:41Saving model checkpoint to poetry-trainer/checkpoint-10000
..........
95%|█████████▌| 55000/57800 [76:06:46<3:59:33, 5.13s/it]***** Running Evaluation *****
Num examples = 1755
Batch size = 24
{'eval_loss': 14.860174179077148, 'eval_runtime': 16.7826, 'eval_samples_per_second': 104.572, 'eval_steps_per_second': 4.409, 'epoch': 588.23}
{'loss': 0.0083, 'learning_rate': 3.0262183266589473e-06, 'epoch': 647.06}
95%|█████████▌| 55000/57800 [76:07:03<3:59:33,Saving model checkpoint to poetry-trainer/checkpoint-55000
{'eval_loss': 14.830656051635742, 'eval_runtime': 16.7365, 'eval_samples_per_second': 104.86, 'eval_steps_per_second': 4.421, 'epoch': 647.06}
{'train_runtime': 287920.5857, 'train_samples_per_second': 38.806, 'train_steps_per_second': 0.201, 'train_loss': 0.33751299874592816, 'epoch': 679.99}
100%|██████████| 57800/57800 [79:58:40<00:00, 4.93s/it]
```
### entry and citation info
|
huggingtweets/mullen_usa-nasdaq
|
huggingtweets
| 2022-11-29T10:30:31Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-29T10:24:49Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mullen_usa-nasdaq/1669717561312/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521140484512620544/Ev6EIPlD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433904015834705921/tRPvxdFF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nasdaq & Mullen Automotive</div>
<div style="text-align: center; font-size: 14px;">@mullen_usa-nasdaq</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nasdaq & Mullen Automotive.
| Data | Nasdaq | Mullen Automotive |
| --- | --- | --- |
| Tweets downloaded | 3250 | 963 |
| Retweets | 663 | 188 |
| Short tweets | 31 | 121 |
| Tweets kept | 2556 | 654 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/352xmu00/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mullen_usa-nasdaq's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x3hx0rfr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x3hx0rfr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mullen_usa-nasdaq')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
LuisQ/LuisQ_sd-class-butterflies-32
|
LuisQ
| 2022-11-29T10:18:33Z | 35 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T16:17:40Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("LuisQ/LuisQ_sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
SayaEndo/distilbert-base-uncased-finetuned-squad-d5716d28
|
SayaEndo
| 2022-11-29T08:56:00Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-29T08:44:02Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
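A minimal usage sketch for extractive question answering with this checkpoint (the repo id is taken from this listing, and the question/context pair is purely illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="SayaEndo/distilbert-base-uncased-finetuned-squad-d5716d28",
)
result = qa(
    question="What dataset was the student fine-tuned on?",
    context="A DistilBERT student is fine-tuned on SQuAD v1.1 with a BERT teacher.",
)
print(result["answer"], result["score"])
```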
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
premsuresh/bart-finetuned-mathqa-moh
|
premsuresh
| 2022-11-29T08:42:54Z | 172 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T08:24:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-mathqa-moh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-mathqa-moh
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
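As a hedged usage sketch (the repo id comes from this listing; the math word problem below is only a guess at the expected input format, since the card does not document the training data):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "premsuresh/bart-finetuned-mathqa-moh"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "a train travels 60 km in 1.5 hours . what is its average speed in km / h ?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```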
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
thivy/t5-base-finetuned-en-to-no
|
thivy
| 2022-11-29T08:21:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T16:16:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: t5-base-finetuned-en-to-no
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
args: en-no
metrics:
- name: Bleu
type: bleu
value: 4.8513
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-en-to-no
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9566
- Bleu: 4.8513
- Gen Len: 17.84
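As a hedged usage sketch for English-to-Norwegian translation (the task prefix is an assumption, since the card does not state which prompt format was used during fine-tuning):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "thivy/t5-base-finetuned-en-to-no"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "translate English to Norwegian:" is the usual T5-style prefix, but it is an assumption here.
text = "translate English to Norwegian: The ship sailed out of the fjord at dawn."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```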
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 280
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|
| 3.3949 | 1.0 | 788 | 2.7553 | 0.9274 | 18.1314 |
| 2.8659 | 2.0 | 1576 | 2.5367 | 1.2755 | 18.1543 |
| 2.7244 | 3.0 | 2364 | 2.3900 | 1.6351 | 18.0343 |
| 2.5228 | 4.0 | 3152 | 2.2902 | 1.7125 | 18.0543 |
| 2.4201 | 5.0 | 3940 | 2.2039 | 1.7217 | 18.0914 |
| 2.3168 | 6.0 | 4728 | 2.1429 | 2.0474 | 18.08 |
| 2.1856 | 7.0 | 5516 | 2.0772 | 2.228 | 18.0686 |
| 2.12 | 8.0 | 6304 | 2.0333 | 2.1694 | 17.98 |
| 2.0519 | 9.0 | 7092 | 1.9931 | 2.257 | 17.9914 |
| 1.9856 | 10.0 | 7880 | 1.9540 | 2.489 | 18.04 |
| 1.9164 | 11.0 | 8668 | 1.9266 | 2.5762 | 17.9629 |
| 1.8864 | 12.0 | 9456 | 1.9036 | 2.8294 | 17.9857 |
| 1.8276 | 13.0 | 10244 | 1.8695 | 2.9018 | 17.98 |
| 1.7715 | 14.0 | 11032 | 1.8584 | 3.04 | 17.9886 |
| 1.7302 | 15.0 | 11820 | 1.8487 | 2.9588 | 18.0057 |
| 1.6768 | 16.0 | 12608 | 1.8155 | 3.1968 | 17.9943 |
| 1.6564 | 17.0 | 13396 | 1.8137 | 3.3315 | 17.9657 |
| 1.6039 | 18.0 | 14184 | 1.7863 | 3.4057 | 18.0629 |
| 1.5735 | 19.0 | 14972 | 1.7945 | 3.6905 | 17.9571 |
| 1.5319 | 20.0 | 15760 | 1.7830 | 3.5128 | 17.9714 |
| 1.4993 | 21.0 | 16548 | 1.7745 | 3.4125 | 18.0057 |
| 1.4622 | 22.0 | 17336 | 1.7655 | 3.3974 | 17.9543 |
| 1.448 | 23.0 | 18124 | 1.7599 | 3.75 | 17.9057 |
| 1.3995 | 24.0 | 18912 | 1.7557 | 3.6852 | 17.8286 |
| 1.373 | 25.0 | 19700 | 1.7478 | 3.5797 | 17.9343 |
| 1.3513 | 26.0 | 20488 | 1.7558 | 3.8526 | 17.8457 |
| 1.3291 | 27.0 | 21276 | 1.7485 | 3.7037 | 17.9143 |
| 1.3002 | 28.0 | 22064 | 1.7480 | 3.7433 | 17.96 |
| 1.2655 | 29.0 | 22852 | 1.7578 | 4.0584 | 17.8914 |
| 1.2354 | 30.0 | 23640 | 1.7514 | 4.2106 | 17.8686 |
| 1.2224 | 31.0 | 24428 | 1.7576 | 3.9906 | 17.9 |
| 1.1999 | 32.0 | 25216 | 1.7627 | 4.1242 | 17.92 |
| 1.1672 | 33.0 | 26004 | 1.7548 | 4.1584 | 17.8286 |
| 1.1547 | 34.0 | 26792 | 1.7446 | 4.1721 | 17.8143 |
| 1.1313 | 35.0 | 27580 | 1.7613 | 4.3958 | 17.8457 |
| 1.08 | 36.0 | 28368 | 1.7628 | 4.342 | 17.8829 |
| 1.0927 | 37.0 | 29156 | 1.7685 | 4.4468 | 17.8971 |
| 1.0751 | 38.0 | 29944 | 1.7731 | 4.4297 | 17.8886 |
| 1.0492 | 39.0 | 30732 | 1.7641 | 4.5174 | 17.8714 |
| 1.036 | 40.0 | 31520 | 1.7643 | 4.4578 | 17.84 |
| 1.0172 | 41.0 | 32308 | 1.7820 | 4.5795 | 17.8429 |
| 0.9966 | 42.0 | 33096 | 1.7830 | 4.3455 | 17.8743 |
| 0.9812 | 43.0 | 33884 | 1.7890 | 4.3988 | 17.8486 |
| 0.9624 | 44.0 | 34672 | 1.7953 | 4.5418 | 17.8143 |
| 0.9485 | 45.0 | 35460 | 1.8046 | 4.5402 | 17.8143 |
| 0.9383 | 46.0 | 36248 | 1.8010 | 4.5572 | 17.76 |
| 0.9175 | 47.0 | 37036 | 1.8153 | 4.5916 | 17.7943 |
| 0.8877 | 48.0 | 37824 | 1.8133 | 4.5799 | 17.7857 |
| 0.8877 | 49.0 | 38612 | 1.8254 | 4.6511 | 17.7657 |
| 0.8595 | 50.0 | 39400 | 1.8229 | 4.7338 | 17.7657 |
| 0.8533 | 51.0 | 40188 | 1.8402 | 4.7568 | 17.7571 |
| 0.8414 | 52.0 | 40976 | 1.8406 | 4.7573 | 17.8429 |
| 0.8191 | 53.0 | 41764 | 1.8499 | 4.6985 | 17.76 |
| 0.8228 | 54.0 | 42552 | 1.8629 | 4.7603 | 17.7114 |
| 0.7987 | 55.0 | 43340 | 1.8638 | 4.5511 | 17.8 |
| 0.7877 | 56.0 | 44128 | 1.8673 | 4.5068 | 17.7771 |
| 0.7829 | 57.0 | 44916 | 1.8862 | 4.6033 | 17.7943 |
| 0.7571 | 58.0 | 45704 | 1.8874 | 4.6694 | 17.7486 |
| 0.7542 | 59.0 | 46492 | 1.8996 | 4.7531 | 17.7571 |
| 0.7301 | 60.0 | 47280 | 1.8950 | 4.6951 | 17.7514 |
| 0.73 | 61.0 | 48068 | 1.9035 | 4.7867 | 17.7343 |
| 0.7065 | 62.0 | 48856 | 1.9127 | 4.5863 | 17.7257 |
| 0.7015 | 63.0 | 49644 | 1.9418 | 4.9026 | 17.8086 |
| 0.6921 | 64.0 | 50432 | 1.9322 | 4.8127 | 17.7943 |
| 0.6714 | 65.0 | 51220 | 1.9382 | 4.5343 | 17.7286 |
| 0.6599 | 66.0 | 52008 | 1.9508 | 4.5273 | 17.7343 |
| 0.6529 | 67.0 | 52796 | 1.9577 | 4.6274 | 17.7743 |
| 0.647 | 68.0 | 53584 | 1.9789 | 4.5575 | 17.7571 |
| 0.627 | 69.0 | 54372 | 1.9795 | 4.319 | 17.7371 |
| 0.6279 | 70.0 | 55160 | 1.9788 | 4.6788 | 17.7486 |
| 0.5867 | 71.0 | 55948 | 2.0100 | 4.557 | 17.7714 |
| 0.5985 | 72.0 | 56736 | 2.0256 | 4.6005 | 17.8229 |
| 0.5939 | 73.0 | 57524 | 2.0336 | 4.7289 | 17.8 |
| 0.5727 | 74.0 | 58312 | 2.0328 | 4.5894 | 17.7229 |
| 0.5702 | 75.0 | 59100 | 2.0436 | 4.7621 | 17.78 |
| 0.5744 | 76.0 | 59888 | 2.0662 | 4.6161 | 17.8057 |
| 0.5554 | 77.0 | 60676 | 2.0586 | 4.6424 | 17.8057 |
| 0.5436 | 78.0 | 61464 | 2.0532 | 4.5742 | 17.7886 |
| 0.5359 | 79.0 | 62252 | 2.0680 | 4.8312 | 17.7886 |
| 0.5291 | 80.0 | 63040 | 2.0858 | 4.6342 | 17.8457 |
| 0.5034 | 81.0 | 63828 | 2.0861 | 4.7405 | 17.8257 |
| 0.5155 | 82.0 | 64616 | 2.1003 | 4.3956 | 17.7571 |
| 0.4989 | 83.0 | 65404 | 2.1072 | 4.339 | 17.7914 |
| 0.4903 | 84.0 | 66192 | 2.1113 | 4.3804 | 17.8143 |
| 0.4836 | 85.0 | 66980 | 2.1202 | 4.5776 | 17.8371 |
| 0.4794 | 86.0 | 67768 | 2.1277 | 4.6548 | 17.7686 |
| 0.4689 | 87.0 | 68556 | 2.1360 | 4.6453 | 17.7571 |
| 0.4623 | 88.0 | 69344 | 2.1460 | 4.7885 | 17.7771 |
| 0.4551 | 89.0 | 70132 | 2.1610 | 4.5342 | 17.7686 |
| 0.4405 | 90.0 | 70920 | 2.1649 | 4.5593 | 17.8057 |
| 0.4478 | 91.0 | 71708 | 2.1518 | 4.4945 | 17.8314 |
| 0.4265 | 92.0 | 72496 | 2.1873 | 4.453 | 17.8086 |
| 0.4191 | 93.0 | 73284 | 2.1808 | 4.6432 | 17.8057 |
| 0.4169 | 94.0 | 74072 | 2.1871 | 4.5543 | 17.82 |
| 0.4087 | 95.0 | 74860 | 2.2109 | 4.8367 | 17.7971 |
| 0.4054 | 96.0 | 75648 | 2.2092 | 4.7079 | 17.8171 |
| 0.3872 | 97.0 | 76436 | 2.2103 | 4.6996 | 17.7943 |
| 0.3884 | 98.0 | 77224 | 2.2111 | 4.9398 | 17.8314 |
| 0.3837 | 99.0 | 78012 | 2.2316 | 4.7849 | 17.8143 |
| 0.3777 | 100.0 | 78800 | 2.2298 | 4.7595 | 17.8343 |
| 0.3719 | 101.0 | 79588 | 2.2404 | 4.6768 | 17.8457 |
| 0.364 | 102.0 | 80376 | 2.2658 | 4.5789 | 17.8229 |
| 0.3549 | 103.0 | 81164 | 2.2790 | 4.6549 | 17.8029 |
| 0.3598 | 104.0 | 81952 | 2.2953 | 4.7411 | 17.8486 |
| 0.346 | 105.0 | 82740 | 2.2812 | 4.7529 | 17.7657 |
| 0.3376 | 106.0 | 83528 | 2.2997 | 4.5128 | 17.7886 |
| 0.3363 | 107.0 | 84316 | 2.2938 | 4.6983 | 17.7914 |
| 0.3368 | 108.0 | 85104 | 2.2909 | 4.4977 | 17.8257 |
| 0.3243 | 109.0 | 85892 | 2.3100 | 4.5156 | 17.8286 |
| 0.3197 | 110.0 | 86680 | 2.3310 | 4.7516 | 17.7943 |
| 0.3165 | 111.0 | 87468 | 2.3354 | 4.608 | 17.8114 |
| 0.3128 | 112.0 | 88256 | 2.3334 | 4.7388 | 17.8314 |
| 0.3038 | 113.0 | 89044 | 2.3343 | 4.6356 | 17.7914 |
| 0.3055 | 114.0 | 89832 | 2.3553 | 4.6694 | 17.7971 |
| 0.2977 | 115.0 | 90620 | 2.3530 | 4.6176 | 17.8086 |
| 0.2925 | 116.0 | 91408 | 2.3687 | 4.6855 | 17.8886 |
| 0.2794 | 117.0 | 92196 | 2.3856 | 4.5948 | 17.84 |
| 0.2913 | 118.0 | 92984 | 2.3844 | 4.7569 | 17.7943 |
| 0.2812 | 119.0 | 93772 | 2.3973 | 4.6009 | 17.7629 |
| 0.2731 | 120.0 | 94560 | 2.4074 | 4.7287 | 17.8086 |
| 0.2781 | 121.0 | 95348 | 2.4083 | 4.7944 | 17.8571 |
| 0.2708 | 122.0 | 96136 | 2.4414 | 4.7454 | 17.8829 |
| 0.2607 | 123.0 | 96924 | 2.4202 | 4.5074 | 17.8486 |
| 0.2617 | 124.0 | 97712 | 2.4371 | 4.6055 | 17.8629 |
| 0.2527 | 125.0 | 98500 | 2.4314 | 4.5891 | 17.8 |
| 0.2528 | 126.0 | 99288 | 2.4548 | 4.8362 | 17.8571 |
| 0.2522 | 127.0 | 100076 | 2.4461 | 4.6966 | 17.8514 |
| 0.2434 | 128.0 | 100864 | 2.4492 | 4.5774 | 17.8514 |
| 0.2381 | 129.0 | 101652 | 2.4720 | 4.4607 | 17.86 |
| 0.2411 | 130.0 | 102440 | 2.4820 | 4.484 | 17.8371 |
| 0.2352 | 131.0 | 103228 | 2.4954 | 4.8091 | 17.8457 |
| 0.2275 | 132.0 | 104016 | 2.4863 | 4.7008 | 17.8743 |
| 0.2244 | 133.0 | 104804 | 2.5089 | 4.8076 | 17.8571 |
| 0.2251 | 134.0 | 105592 | 2.5085 | 4.7374 | 17.8029 |
| 0.2242 | 135.0 | 106380 | 2.4979 | 4.851 | 17.8171 |
| 0.2217 | 136.0 | 107168 | 2.5122 | 4.6295 | 17.8314 |
| 0.2111 | 137.0 | 107956 | 2.5131 | 4.6315 | 17.8229 |
| 0.2078 | 138.0 | 108744 | 2.5216 | 4.6177 | 17.8229 |
| 0.2113 | 139.0 | 109532 | 2.5292 | 4.5603 | 17.8257 |
| 0.21 | 140.0 | 110320 | 2.5494 | 4.6128 | 17.7971 |
| 0.1994 | 141.0 | 111108 | 2.5435 | 4.9231 | 17.8714 |
| 0.2018 | 142.0 | 111896 | 2.5605 | 4.827 | 17.8314 |
| 0.1971 | 143.0 | 112684 | 2.5624 | 4.8075 | 17.78 |
| 0.1959 | 144.0 | 113472 | 2.5666 | 4.6358 | 17.84 |
| 0.1916 | 145.0 | 114260 | 2.5740 | 4.6628 | 17.8257 |
| 0.1939 | 146.0 | 115048 | 2.5730 | 4.8445 | 17.8286 |
| 0.1832 | 147.0 | 115836 | 2.5918 | 4.8198 | 17.8571 |
| 0.1884 | 148.0 | 116624 | 2.6013 | 4.7955 | 17.8257 |
| 0.1777 | 149.0 | 117412 | 2.5996 | 4.7503 | 17.8114 |
| 0.1711 | 150.0 | 118200 | 2.5971 | 4.5452 | 17.8514 |
| 0.1843 | 151.0 | 118988 | 2.6075 | 4.817 | 17.8143 |
| 0.1747 | 152.0 | 119776 | 2.6161 | 4.5231 | 17.8257 |
| 0.1698 | 153.0 | 120564 | 2.6225 | 4.7232 | 17.82 |
| 0.1685 | 154.0 | 121352 | 2.6285 | 4.7105 | 17.8229 |
| 0.1685 | 155.0 | 122140 | 2.6443 | 4.4228 | 17.8686 |
| 0.1695 | 156.0 | 122928 | 2.6356 | 4.5458 | 17.8657 |
| 0.1649 | 157.0 | 123716 | 2.6418 | 4.5955 | 17.8286 |
| 0.1643 | 158.0 | 124504 | 2.6565 | 4.5943 | 17.8457 |
| 0.1573 | 159.0 | 125292 | 2.6434 | 4.762 | 17.8429 |
| 0.1573 | 160.0 | 126080 | 2.6615 | 4.5916 | 17.8229 |
| 0.1558 | 161.0 | 126868 | 2.6529 | 4.527 | 17.8371 |
| 0.1545 | 162.0 | 127656 | 2.6697 | 4.705 | 17.7886 |
| 0.1563 | 163.0 | 128444 | 2.6747 | 4.6848 | 17.8086 |
| 0.1529 | 164.0 | 129232 | 2.6711 | 4.5149 | 17.8171 |
| 0.151 | 165.0 | 130020 | 2.6807 | 4.6484 | 17.8543 |
| 0.1471 | 166.0 | 130808 | 2.6909 | 4.7488 | 17.8657 |
| 0.1465 | 167.0 | 131596 | 2.6889 | 4.6446 | 17.8086 |
| 0.1345 | 168.0 | 132384 | 2.6935 | 4.6107 | 17.7971 |
| 0.1447 | 169.0 | 133172 | 2.6971 | 4.4718 | 17.86 |
| 0.1426 | 170.0 | 133960 | 2.7083 | 4.6878 | 17.84 |
| 0.1402 | 171.0 | 134748 | 2.7053 | 4.7539 | 17.8286 |
| 0.1382 | 172.0 | 135536 | 2.7140 | 4.7697 | 17.8343 |
| 0.1367 | 173.0 | 136324 | 2.7221 | 4.6764 | 17.8429 |
| 0.1365 | 174.0 | 137112 | 2.7364 | 4.7535 | 17.8343 |
| 0.1277 | 175.0 | 137900 | 2.7232 | 4.7312 | 17.8343 |
| 0.1331 | 176.0 | 138688 | 2.7292 | 4.8578 | 17.8171 |
| 0.1332 | 177.0 | 139476 | 2.7565 | 4.7861 | 17.8 |
| 0.1291 | 178.0 | 140264 | 2.7577 | 4.8903 | 17.7686 |
| 0.1298 | 179.0 | 141052 | 2.7474 | 4.7653 | 17.8171 |
| 0.1268 | 180.0 | 141840 | 2.7466 | 4.7403 | 17.8143 |
| 0.123 | 181.0 | 142628 | 2.7517 | 4.7989 | 17.8171 |
| 0.1267 | 182.0 | 143416 | 2.7634 | 4.7267 | 17.84 |
| 0.1246 | 183.0 | 144204 | 2.7620 | 4.8103 | 17.8343 |
| 0.1221 | 184.0 | 144992 | 2.7686 | 4.968 | 17.8429 |
| 0.1202 | 185.0 | 145780 | 2.7624 | 4.806 | 17.7914 |
| 0.1222 | 186.0 | 146568 | 2.7735 | 4.8647 | 17.82 |
| 0.1187 | 187.0 | 147356 | 2.7775 | 4.5615 | 17.8229 |
| 0.1175 | 188.0 | 148144 | 2.7703 | 4.824 | 17.82 |
| 0.121 | 189.0 | 148932 | 2.7824 | 4.8669 | 17.78 |
| 0.114 | 190.0 | 149720 | 2.7807 | 4.8833 | 17.8257 |
| 0.1146 | 191.0 | 150508 | 2.7869 | 4.9505 | 17.7857 |
| 0.1133 | 192.0 | 151296 | 2.7900 | 4.9474 | 17.7257 |
| 0.1137 | 193.0 | 152084 | 2.8008 | 4.8476 | 17.7371 |
| 0.1098 | 194.0 | 152872 | 2.7971 | 4.736 | 17.7543 |
| 0.1072 | 195.0 | 153660 | 2.7956 | 4.7635 | 17.8057 |
| 0.1106 | 196.0 | 154448 | 2.8019 | 4.6805 | 17.7657 |
| 0.1077 | 197.0 | 155236 | 2.8134 | 4.6501 | 17.8029 |
| 0.1076 | 198.0 | 156024 | 2.8222 | 4.5361 | 17.82 |
| 0.1054 | 199.0 | 156812 | 2.8173 | 4.8964 | 17.78 |
| 0.1045 | 200.0 | 157600 | 2.8248 | 4.9418 | 17.7771 |
| 0.1083 | 201.0 | 158388 | 2.8214 | 4.8408 | 17.7829 |
| 0.1035 | 202.0 | 159176 | 2.8277 | 4.66 | 17.8 |
| 0.1033 | 203.0 | 159964 | 2.8342 | 4.616 | 17.8114 |
| 0.1013 | 204.0 | 160752 | 2.8392 | 4.7213 | 17.8371 |
| 0.1012 | 205.0 | 161540 | 2.8313 | 4.7918 | 17.8 |
| 0.1021 | 206.0 | 162328 | 2.8372 | 4.8182 | 17.8371 |
| 0.0979 | 207.0 | 163116 | 2.8500 | 4.759 | 17.8657 |
| 0.0985 | 208.0 | 163904 | 2.8458 | 4.6711 | 17.8171 |
| 0.1006 | 209.0 | 164692 | 2.8468 | 4.7997 | 17.8286 |
| 0.0994 | 210.0 | 165480 | 2.8426 | 4.7327 | 17.8571 |
| 0.0981 | 211.0 | 166268 | 2.8565 | 4.7288 | 17.8457 |
| 0.0985 | 212.0 | 167056 | 2.8608 | 4.8843 | 17.8457 |
| 0.0933 | 213.0 | 167844 | 2.8656 | 4.7052 | 17.8143 |
| 0.0963 | 214.0 | 168632 | 2.8650 | 4.8149 | 17.7771 |
| 0.092 | 215.0 | 169420 | 2.8569 | 4.6251 | 17.8 |
| 0.0958 | 216.0 | 170208 | 2.8688 | 4.7479 | 17.7714 |
| 0.094 | 217.0 | 170996 | 2.8657 | 4.7716 | 17.8229 |
| 0.0926 | 218.0 | 171784 | 2.8741 | 4.6749 | 17.8143 |
| 0.0924 | 219.0 | 172572 | 2.8727 | 4.8438 | 17.82 |
| 0.0932 | 220.0 | 173360 | 2.8749 | 4.6733 | 17.84 |
| 0.0899 | 221.0 | 174148 | 2.8774 | 4.6198 | 17.8286 |
| 0.0925 | 222.0 | 174936 | 2.8796 | 4.6945 | 17.8286 |
| 0.0904 | 223.0 | 175724 | 2.8872 | 4.6184 | 17.82 |
| 0.0886 | 224.0 | 176512 | 2.8974 | 4.74 | 17.7743 |
| 0.0898 | 225.0 | 177300 | 2.8879 | 4.5856 | 17.8229 |
| 0.0874 | 226.0 | 178088 | 2.8880 | 4.582 | 17.8171 |
| 0.0877 | 227.0 | 178876 | 2.8941 | 4.64 | 17.8057 |
| 0.0892 | 228.0 | 179664 | 2.8975 | 4.7271 | 17.8114 |
| 0.0857 | 229.0 | 180452 | 2.8957 | 4.6847 | 17.7943 |
| 0.088 | 230.0 | 181240 | 2.8950 | 4.7799 | 17.8086 |
| 0.0885 | 231.0 | 182028 | 2.9061 | 4.699 | 17.7829 |
| 0.0863 | 232.0 | 182816 | 2.9085 | 4.7863 | 17.7771 |
| 0.0853 | 233.0 | 183604 | 2.9083 | 4.7545 | 17.7857 |
| 0.0838 | 234.0 | 184392 | 2.9067 | 4.6354 | 17.7829 |
| 0.0835 | 235.0 | 185180 | 2.9139 | 4.5979 | 17.8371 |
| 0.0865 | 236.0 | 185968 | 2.9094 | 4.7646 | 17.8314 |
| 0.0853 | 237.0 | 186756 | 2.9127 | 4.6967 | 17.7971 |
| 0.082 | 238.0 | 187544 | 2.9205 | 4.7171 | 17.8029 |
| 0.0811 | 239.0 | 188332 | 2.9204 | 4.6172 | 17.7971 |
| 0.0837 | 240.0 | 189120 | 2.9202 | 4.6729 | 17.8057 |
| 0.0803 | 241.0 | 189908 | 2.9190 | 4.9057 | 17.8143 |
| 0.0813 | 242.0 | 190696 | 2.9236 | 4.7919 | 17.8429 |
| 0.0814 | 243.0 | 191484 | 2.9307 | 4.7492 | 17.8286 |
| 0.0822 | 244.0 | 192272 | 2.9238 | 4.7454 | 17.8429 |
| 0.0823 | 245.0 | 193060 | 2.9269 | 4.8462 | 17.8257 |
| 0.0803 | 246.0 | 193848 | 2.9293 | 4.738 | 17.8286 |
| 0.0806 | 247.0 | 194636 | 2.9280 | 4.8432 | 17.78 |
| 0.0757 | 248.0 | 195424 | 2.9371 | 4.8563 | 17.8171 |
| 0.0774 | 249.0 | 196212 | 2.9330 | 4.7717 | 17.8057 |
| 0.079 | 250.0 | 197000 | 2.9373 | 4.7938 | 17.8371 |
| 0.0784 | 251.0 | 197788 | 2.9397 | 4.8316 | 17.82 |
| 0.0801 | 252.0 | 198576 | 2.9378 | 4.9071 | 17.8314 |
| 0.0795 | 253.0 | 199364 | 2.9366 | 4.8581 | 17.8343 |
| 0.077 | 254.0 | 200152 | 2.9372 | 4.8495 | 17.7971 |
| 0.0787 | 255.0 | 200940 | 2.9447 | 4.8479 | 17.8086 |
| 0.077 | 256.0 | 201728 | 2.9380 | 4.8716 | 17.84 |
| 0.0765 | 257.0 | 202516 | 2.9410 | 4.8944 | 17.7571 |
| 0.0762 | 258.0 | 203304 | 2.9423 | 4.7536 | 17.7971 |
| 0.0772 | 259.0 | 204092 | 2.9485 | 4.8251 | 17.8343 |
| 0.0761 | 260.0 | 204880 | 2.9401 | 4.7726 | 17.82 |
| 0.0766 | 261.0 | 205668 | 2.9427 | 4.8626 | 17.8286 |
| 0.0766 | 262.0 | 206456 | 2.9428 | 5.0326 | 17.8143 |
| 0.074 | 263.0 | 207244 | 2.9463 | 5.0095 | 17.8286 |
| 0.0758 | 264.0 | 208032 | 2.9497 | 4.987 | 17.8029 |
| 0.0778 | 265.0 | 208820 | 2.9534 | 4.9829 | 17.8086 |
| 0.0748 | 266.0 | 209608 | 2.9521 | 4.9309 | 17.8286 |
| 0.0759 | 267.0 | 210396 | 2.9519 | 4.9294 | 17.84 |
| 0.0738 | 268.0 | 211184 | 2.9521 | 4.9953 | 17.8486 |
| 0.077 | 269.0 | 211972 | 2.9521 | 4.8414 | 17.8486 |
| 0.0759 | 270.0 | 212760 | 2.9533 | 4.8158 | 17.8286 |
| 0.0725 | 271.0 | 213548 | 2.9534 | 4.8427 | 17.8457 |
| 0.0749 | 272.0 | 214336 | 2.9512 | 4.8769 | 17.8314 |
| 0.0745 | 273.0 | 215124 | 2.9520 | 4.8782 | 17.8257 |
| 0.0723 | 274.0 | 215912 | 2.9546 | 4.8465 | 17.8229 |
| 0.0748 | 275.0 | 216700 | 2.9567 | 4.8704 | 17.8343 |
| 0.072 | 276.0 | 217488 | 2.9569 | 4.8633 | 17.8371 |
| 0.0747 | 277.0 | 218276 | 2.9578 | 4.8667 | 17.8457 |
| 0.0722 | 278.0 | 219064 | 2.9566 | 4.8686 | 17.8371 |
| 0.0733 | 279.0 | 219852 | 2.9563 | 4.846 | 17.84 |
| 0.0713 | 280.0 | 220640 | 2.9566 | 4.8513 | 17.84 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
ShishckovA/results
|
ShishckovA
| 2022-11-29T07:33:29Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T07:31:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.505 | 1.0 | 26878 | 0.4409 |
| 0.4063 | 2.0 | 53756 | 0.3390 |
| 0.358 | 3.0 | 80634 | 0.2967 |
| 0.3383 | 4.0 | 107512 | 0.2777 |
| 0.3289 | 5.0 | 134390 | 0.2721 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mlxen/electra-contrastdata-squad
|
mlxen
| 2022-11-29T07:16:20Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-28T07:16:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-contrastdata-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-contrastdata-squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset.
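The card does not yet include a usage snippet. As a minimal sketch (not the authors' official example), the checkpoint can be queried through the extractive question-answering pipeline; the question and context below are invented for illustration:
```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint as an extractive QA pipeline.
qa = pipeline("question-answering", model="mlxen/electra-contrastdata-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The electra-contrastdata-squad checkpoint was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```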
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
nagais/sd-class-butterflies-32
|
nagais
| 2022-11-29T07:06:12Z | 32 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T06:51:12Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("nagais/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
jl8771/sd-class-butterflies-32
|
jl8771
| 2022-11-29T05:41:50Z | 37 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T05:41:45Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("jl8771/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
manter/momoko
|
manter
| 2022-11-29T05:21:52Z | 0 | 8 | null |
[
"doi:10.57967/hf/0147",
"license:unknown",
"region:us"
] | null | 2022-11-29T03:32:48Z |
---
license: unknown
---
This is a Stable Diffusion based model built on AnythingV3 and momoko, whose origin I still don't know.
(Personal story: I found this by going to an outdated Stable Diffusion web UI link and hitting generate. The result came out well, so I googled it and found this.)
Source: https://www.kaggle.com/code/inmine/novelai-with-webui-stable-diffusion-version/data, https://www.kaggle.com/datasets/inmine/momoko
By the way, here is a prompt (prompt: Masterpiece, best quality,) (negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry).
That's what I found works best. The main thing it generates is women, so be warned.
|
Shubham09/whisper63filescheck
|
Shubham09
| 2022-11-29T05:12:22Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-29T05:07:16Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper63filescheck
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper63filescheck
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0638
- Wer: 23.7647
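No usage snippet is provided; assuming the repository ships the matching Whisper processor files, a minimal transcription sketch (with a placeholder audio path) is:
```python
from transformers import pipeline

# Sketch only: transcribe a local recording with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="Shubham09/whisper63filescheck")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```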
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1324 | 14.29 | 100 | 1.0638 | 23.7647 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Urigavilan03/Tiempo
|
Urigavilan03
| 2022-11-29T05:12:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-29T05:09:08Z |
An antique pocket watch in the middle of some pages written in blurred cursive
|
laroy23/ddpm-butterflies-128
|
laroy23
| 2022-11-29T04:33:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-28T13:56:37Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: ./cifar-10-batches-py
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `./cifar-10-batches-py` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
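Until the snippet above is filled in, a minimal sketch assuming the checkpoint loads as a standard `DDPMPipeline` (as the repository tags suggest) is:
```python
from diffusers import DDPMPipeline

# Sketch only: load the unconditional diffusion pipeline and sample one image.
pipeline = DDPMPipeline.from_pretrained("laroy23/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("sample.png")
```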
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/laroy23/ddpm-butterflies-128/tensorboard?#scalars)
|
elRivx/gAWoman
|
elRivx
| 2022-11-29T04:33:34Z | 0 | 2 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-29T04:22:28Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# gAWoman
This is my second custom Stable Diffusion model. It brings you a generic woman, generated with non-licensed images.
The magic word is: gAWoman
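If the weights are published in diffusers format (an assumption; many custom checkpoints of this kind ship only as a single `.ckpt` for a WebUI), a minimal text-to-image sketch would be:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: assumes a diffusers-format pipeline and a CUDA GPU.
pipe = StableDiffusionPipeline.from_pretrained("elRivx/gAWoman", torch_dtype=torch.float16).to("cuda")
image = pipe("portrait photo of gAWoman walking in a city street").images[0]
image.save("gAWoman.png")
```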
If you enjoy my work, please consider supporting me:
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/B5XkfuG.png width=30% height=30%>
<img src=https://imgur.com/N8lNtZo.png width=30% height=30%>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
NSandra/distilbert-base-uncased-finetuned-ner
|
NSandra
| 2022-11-29T04:09:17Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-29T03:55:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2393
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 1 | 1.5491 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 2.0 | 2 | 1.3278 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 3 | 1.2393 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
nhanv/cv_parser
|
nhanv
| 2022-11-29T04:00:56Z | 167 | 3 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-29T03:23:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: cv-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cv-ner
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0956
- Precision: 0.8906
- Recall: 0.9325
- F1: 0.9111
- Accuracy: 0.9851
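The card does not include a usage snippet; a minimal sketch that runs the checkpoint as a token-classification (NER) pipeline, with an invented example sentence, is:
```python
from transformers import pipeline

# Sketch only: extract CV/resume entities with the fine-tuned mDeBERTa checkpoint.
ner = pipeline("token-classification", model="nhanv/cv_parser", aggregation_strategy="simple")
print(ner("John Doe worked as a software engineer at Acme Corp from 2018 to 2022."))
```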
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 91 | 0.2049 | 0.6618 | 0.7362 | 0.6970 | 0.9534 |
| 0.5036 | 2.0 | 182 | 0.1156 | 0.7873 | 0.8630 | 0.8234 | 0.9722 |
| 0.1442 | 3.0 | 273 | 0.1078 | 0.8262 | 0.9039 | 0.8633 | 0.9771 |
| 0.0757 | 4.0 | 364 | 0.1179 | 0.8652 | 0.9059 | 0.8851 | 0.9780 |
| 0.0526 | 5.0 | 455 | 0.0907 | 0.888 | 0.9080 | 0.8979 | 0.9837 |
| 0.0342 | 6.0 | 546 | 0.0972 | 0.8926 | 0.9346 | 0.9131 | 0.9832 |
| 0.0245 | 7.0 | 637 | 0.1064 | 0.8937 | 0.9284 | 0.9107 | 0.9834 |
| 0.0188 | 8.0 | 728 | 0.0965 | 0.8980 | 0.9366 | 0.9169 | 0.9850 |
| 0.0159 | 9.0 | 819 | 0.0999 | 0.91 | 0.9305 | 0.9201 | 0.9846 |
| 0.0141 | 10.0 | 910 | 0.0956 | 0.8906 | 0.9325 | 0.9111 | 0.9851 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jeraldflowers/vit_model
|
jeraldflowers
| 2022-11-29T03:51:31Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-27T05:06:17Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: https://huggingface.co/jeraldflowers/vit_model/blob/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/jeraldflowers/vit_model/blob/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0095
- Accuracy: 1.0
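A minimal inference sketch (not part of the original card) that classifies a local leaf photo with the fine-tuned ViT checkpoint:
```python
from transformers import pipeline

# Sketch only: "leaf.jpg" is a placeholder path to a bean-leaf photo.
classifier = pipeline("image-classification", model="jeraldflowers/vit_model")
print(classifier("leaf.jpg"))
```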
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1526 | 3.85 | 500 | 0.0095 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
UCSYNLP/MyanBERTa
|
UCSYNLP
| 2022-11-29T03:35:58Z | 297 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"MyanBERTa",
"Myanmar",
"BERT",
"RoBERTa",
"my",
"dataset:MyCorpus",
"dataset:Web",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-25T06:57:10Z |
---
language: my
tags:
- MyanBERTa
- Myanmar
- BERT
- RoBERTa
license: apache-2.0
datasets:
- MyCorpus
- Web
---
## Model description
This model is a BERT-based pre-trained language model for Myanmar.
MyanBERTa was pre-trained for 528K steps on a word-segmented Myanmar dataset consisting of 5,992,299 sentences (136M words).
As the tokenizer, a byte-level BPE tokenizer with 30,522 subword units, learned after word segmentation, is applied.
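As a minimal usage sketch (not from the original card), the checkpoint can be queried through the fill-mask pipeline; note that the input must be word-segmented Myanmar text containing the mask token:
```python
from transformers import pipeline

# Sketch only: replace the placeholder with a real word-segmented Myanmar sentence.
fill_mask = pipeline("fill-mask", model="UCSYNLP/MyanBERTa")
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"word-segmented sentence with a {mask} token"))
```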
Cite this work as:
```
Aye Mya Hlaing, Win Pa Pa, "MyanBERTa: A Pre-trained Language Model For
Myanmar", In Proceedings of 2022 International Conference on Communication and Computer Research (ICCR2022), November 2022, Seoul, Republic of Korea
```
[Download Paper](https://journal-home.s3.ap-northeast-2.amazonaws.com/site/iccr2022/abs/QOHFI-0004.pdf)
|
jeraldflowers/distilroberts-base-mrpc-glue-jeraldflowers
|
jeraldflowers
| 2022-11-29T02:57:36Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T05:30:00Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: distilroberts-base-mrpc-glue-jeraldflowers
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
- name: F1
type: f1
value: 0.8814814814814815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberts-base-mrpc-glue-jeraldflowers
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4990
- Accuracy: 0.8431
- F1: 0.8815
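A minimal inference sketch (not part of the original card): MRPC is a sentence-pair task, so the two sentences (taken from the widget examples above) are passed together as `text` and `text_pair`:
```python
from transformers import pipeline

# Sketch only: predict whether two sentences are paraphrases.
clf = pipeline("text-classification",
               model="jeraldflowers/distilroberts-base-mrpc-glue-jeraldflowers")
print(clf({
    "text": "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
    "text_pair": "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier.",
}))
```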
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5289 | 1.09 | 500 | 0.5668 | 0.8211 | 0.8689 |
| 0.3675 | 2.18 | 1000 | 0.4990 | 0.8431 | 0.8815 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
neulab/omnitab-large-128shot-finetuned-wtq-128shot
|
neulab
| 2022-11-29T02:55:31Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-11-29T02:54:00Z |
---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-128shot-finetuned-wtq-128shot` (based on BART architecture) is initialized with `neulab/omnitab-large-128shot` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) in the 128-shot setting.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-128shot-finetuned-wtq-128shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-128shot-finetuned-wtq-128shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
neulab/omnitab-large-1024shot-finetuned-wtq-1024shot
|
neulab
| 2022-11-29T02:45:55Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-11-29T02:44:57Z |
---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-1024shot-finetuned-wtq-1024shot` (based on BART architecture) is initialized with `neulab/omnitab-large-1024shot` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) in the 1024-shot setting.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-1024shot-finetuned-wtq-1024shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-1024shot-finetuned-wtq-1024shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
neulab/omnitab-large-1024shot
|
neulab
| 2022-11-29T02:38:18Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-11-29T02:37:18Z |
---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-1024shot` (based on BART architecture) is initialized with `microsoft/tapex-large` and continuously pretrained on natural and synthetic data (SQL2NL model trained in the 1024-shot setting).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-1024shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-1024shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
SiddharthaM/hasoc19-xlm-roberta-base-sentiment-new
|
SiddharthaM
| 2022-11-29T02:13:32Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T00:44:19Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: hasoc19-xlm-roberta-base-sentiment-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hasoc19-xlm-roberta-base-sentiment-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3840
- Accuracy: 0.8726
- Precision: 0.8724
- Recall: 0.8726
- F1: 0.8725
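A minimal inference sketch (not part of the original card), with an invented input sentence:
```python
from transformers import pipeline

# Sketch only: score a sentence with the fine-tuned sentiment classifier.
sentiment = pipeline("text-classification", model="SiddharthaM/hasoc19-xlm-roberta-base-sentiment-new")
print(sentiment("I really enjoyed this movie!"))
```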
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4786 | 1.0 | 537 | 0.3999 | 0.8381 | 0.8391 | 0.8381 | 0.8363 |
| 0.349 | 2.0 | 1074 | 0.3443 | 0.8606 | 0.8603 | 0.8606 | 0.8603 |
| 0.2927 | 3.0 | 1611 | 0.3412 | 0.8669 | 0.8668 | 0.8669 | 0.8662 |
| 0.2471 | 4.0 | 2148 | 0.3408 | 0.8705 | 0.8708 | 0.8705 | 0.8706 |
| 0.2195 | 5.0 | 2685 | 0.3897 | 0.8726 | 0.8725 | 0.8726 | 0.8721 |
| 0.1849 | 6.0 | 3222 | 0.3840 | 0.8726 | 0.8724 | 0.8726 | 0.8725 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
neulab/omnitab-large-16shot-finetuned-wtq-16shot
|
neulab
| 2022-11-29T02:10:07Z | 52 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-11-29T01:48:24Z |
---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-16shot-finetuned-wtq-16shot` (based on BART architecture) is initialized with `neulab/omnitab-large-16shot` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) in the 16-shot setting.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-16shot-finetuned-wtq-16shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-16shot-finetuned-wtq-16shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
romendiratta/fin-unsupersvised-mt5-4000
|
romendiratta
| 2022-11-29T02:07:11Z | 4 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T01:55:24Z |
This model contains an MT5 model that has been trained via masked language modeling on a financial dataset in an unsupervised manner.
---
license: mit
---
|
alexziweiwang/retrain5_oneTimeTraining_MTL-1epoch
|
alexziweiwang
| 2022-11-29T02:00:29Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T01:43:16Z |
---
tags:
- generated_from_trainer
model-index:
- name: retrain5_oneTimeTraining_MTL-1epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain5_oneTimeTraining_MTL-1epoch
This model is a fine-tuned version of [alexziweiwang/exp21-uaspeech-foundation](https://huggingface.co/alexziweiwang/exp21-uaspeech-foundation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1861
- Acc: 0.285
- Wer: 1.1126
- Correct: 57
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 13.9337 | 0.01 | 1.2925 | 2 | 200 | 200 |
| 12.4373 | 0.04 | 10 | 13.7513 | 0.08 | 1.5296 | 16 | 200 | 200 |
| 12.4373 | 0.06 | 15 | 13.5517 | 0.125 | 2.1126 | 25 | 200 | 200 |
| 12.6667 | 0.08 | 20 | 13.3400 | 0.165 | 2.5791 | 33 | 200 | 200 |
| 12.6667 | 0.11 | 25 | 13.1141 | 0.205 | 3.6561 | 41 | 200 | 200 |
| 11.1856 | 0.13 | 30 | 12.8805 | 0.22 | 2.7451 | 44 | 200 | 200 |
| 11.1856 | 0.15 | 35 | 12.6423 | 0.245 | 2.5178 | 49 | 200 | 200 |
| 10.6635 | 0.17 | 40 | 12.4028 | 0.27 | 2.4308 | 54 | 200 | 200 |
| 10.6635 | 0.19 | 45 | 12.1660 | 0.3 | 2.1818 | 60 | 200 | 200 |
| 10.7952 | 0.21 | 50 | 11.9291 | 0.305 | 1.9348 | 61 | 200 | 200 |
| 10.7952 | 0.23 | 55 | 11.6945 | 0.31 | 1.6858 | 62 | 200 | 200 |
| 10.3867 | 0.25 | 60 | 11.4608 | 0.315 | 1.5237 | 63 | 200 | 200 |
| 10.3867 | 0.27 | 65 | 11.2313 | 0.315 | 1.3953 | 63 | 200 | 200 |
| 10.252 | 0.3 | 70 | 11.0102 | 0.315 | 1.3162 | 63 | 200 | 200 |
| 10.252 | 0.32 | 75 | 10.7918 | 0.315 | 1.2826 | 63 | 200 | 200 |
| 10.1788 | 0.34 | 80 | 10.5736 | 0.315 | 1.2628 | 63 | 200 | 200 |
| 10.1788 | 0.36 | 85 | 10.3607 | 0.32 | 1.2391 | 64 | 200 | 200 |
| 9.1361 | 0.38 | 90 | 10.1527 | 0.31 | 1.2253 | 62 | 200 | 200 |
| 9.1361 | 0.4 | 95 | 9.9507 | 0.31 | 1.2036 | 62 | 200 | 200 |
| 9.5447 | 0.42 | 100 | 9.7553 | 0.315 | 1.2095 | 63 | 200 | 200 |
| 9.5447 | 0.44 | 105 | 9.5599 | 0.31 | 1.2016 | 62 | 200 | 200 |
| 9.1579 | 0.46 | 110 | 9.3711 | 0.295 | 1.1996 | 59 | 200 | 200 |
| 9.1579 | 0.48 | 115 | 9.1892 | 0.295 | 1.1897 | 59 | 200 | 200 |
| 7.9217 | 0.51 | 120 | 9.0143 | 0.3 | 1.1858 | 60 | 200 | 200 |
| 7.9217 | 0.53 | 125 | 8.8493 | 0.305 | 1.1719 | 61 | 200 | 200 |
| 8.4439 | 0.55 | 130 | 8.6946 | 0.305 | 1.1739 | 61 | 200 | 200 |
| 8.4439 | 0.57 | 135 | 8.5492 | 0.31 | 1.1581 | 62 | 200 | 200 |
| 8.0639 | 0.59 | 140 | 8.4153 | 0.315 | 1.1502 | 63 | 200 | 200 |
| 8.0639 | 0.61 | 145 | 8.2872 | 0.32 | 1.1482 | 64 | 200 | 200 |
| 8.4173 | 0.63 | 150 | 8.1649 | 0.33 | 1.1443 | 66 | 200 | 200 |
| 8.4173 | 0.65 | 155 | 8.0500 | 0.325 | 1.1403 | 65 | 200 | 200 |
| 7.8991 | 0.67 | 160 | 7.9422 | 0.33 | 1.1364 | 66 | 200 | 200 |
| 7.8991 | 0.7 | 165 | 7.8410 | 0.32 | 1.1344 | 64 | 200 | 200 |
| 6.9206 | 0.72 | 170 | 7.7469 | 0.32 | 1.1304 | 64 | 200 | 200 |
| 6.9206 | 0.74 | 175 | 7.6601 | 0.325 | 1.1285 | 65 | 200 | 200 |
| 7.1911 | 0.76 | 180 | 7.5832 | 0.305 | 1.1206 | 61 | 200 | 200 |
| 7.1911 | 0.78 | 185 | 7.5163 | 0.305 | 1.1225 | 61 | 200 | 200 |
| 7.201 | 0.8 | 190 | 7.4565 | 0.305 | 1.1245 | 61 | 200 | 200 |
| 7.201 | 0.82 | 195 | 7.4049 | 0.295 | 1.1245 | 59 | 200 | 200 |
| 7.1507 | 0.84 | 200 | 7.3568 | 0.295 | 1.1225 | 59 | 200 | 200 |
| 7.1507 | 0.86 | 205 | 7.3139 | 0.3 | 1.1206 | 60 | 200 | 200 |
| 6.6223 | 0.89 | 210 | 7.2774 | 0.295 | 1.1186 | 59 | 200 | 200 |
| 6.6223 | 0.91 | 215 | 7.2469 | 0.295 | 1.1186 | 59 | 200 | 200 |
| 7.1645 | 0.93 | 220 | 7.2220 | 0.295 | 1.1166 | 59 | 200 | 200 |
| 7.1645 | 0.95 | 225 | 7.2041 | 0.29 | 1.1146 | 58 | 200 | 200 |
| 6.2562 | 0.97 | 230 | 7.1921 | 0.29 | 1.1146 | 58 | 200 | 200 |
| 6.2562 | 0.99 | 235 | 7.1861 | 0.285 | 1.1126 | 57 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
BunnyViking/bvSketchOutline
|
BunnyViking
| 2022-11-29T01:26:30Z | 0 | 12 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-28T02:53:16Z |
---
license: mit
---
Sketch Outline style - a scratchy concept-art like style to give the appearance of quickly rendered pencil and ink art.
The model is trained on humans, some animals, some structures and a few vehicles but it is best at humans and monsters.
NOTE - the model has been trained with some artistic nudes included and can generate unintended NSFW content on occasion.
Custom style trained off SD 1.5 DDLM
Token: bvSketchOutline
Not using the token (or using prompts like 'stroke' or 'outline'), or placing the token at the start or end of the prompt, will produce different, interesting effects.
Higher versions improve the overall style at the cost of flexibility: the model skews more toward humans and also creates more monstrous animals.
I recommend a CFG value between 7.5 and 12.5.
v2 2000 - some outline and flexible CFG 7.5 is fine
7.5

12.5

v2 3000 - sketchy and flexible CFG 7.5 is fine
7.5

12.5

v2 4000 - sketchy outline and extra outline strokes. recommend increasing CFG to 12.5 so less flexible
7.5

12.5

v2 5000 - smoother outlines much less flexible, will start skewing strongly toward humans even at 7.5 CFG. At 12.5 CFG it will be sketchier with more outline strokes, almost like v2 2000 in look but at higher quality.
7.5

12.5

v2 6000 - very sketchy and scratchy at 7.5 CFG, more inky, may lose detail. At 12.5 is quite inky in its outlines.
7.5

12.5

v2 7000 - sketchy and many flowing outlines at 7.5 CFG. Can have compromised details. At 12.5 CFG the style becomes very inky and loses detail almost wet watercolour
7.5

12.5

|
alexziweiwang/retrain2_oneTimeTraining_MTL-1epoch
|
alexziweiwang
| 2022-11-29T01:04:58Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T00:47:27Z |
---
tags:
- generated_from_trainer
model-index:
- name: retrain2_oneTimeTraining_MTL-1epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain2_oneTimeTraining_MTL-1epoch
This model is a fine-tuned version of [alexziweiwang/exp21-uaspeech-foundation](https://huggingface.co/alexziweiwang/exp21-uaspeech-foundation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9312
- Acc: 0.265
- Wer: 1.0
- Correct: 53
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 13.6638 | 0.005 | 1.6126 | 1 | 200 | 200 |
| 12.2282 | 0.04 | 10 | 13.4030 | 0.005 | 1.4743 | 1 | 200 | 200 |
| 12.2282 | 0.06 | 15 | 13.1289 | 0.005 | 1.3953 | 1 | 200 | 200 |
| 12.3565 | 0.08 | 20 | 12.8538 | 0.005 | 1.3043 | 1 | 200 | 200 |
| 12.3565 | 0.11 | 25 | 12.5711 | 0.005 | 1.2095 | 1 | 200 | 200 |
| 10.7997 | 0.13 | 30 | 12.2891 | 0.005 | 1.1462 | 1 | 200 | 200 |
| 10.7997 | 0.15 | 35 | 12.0060 | 0.005 | 1.0909 | 1 | 200 | 200 |
| 10.1556 | 0.17 | 40 | 11.7183 | 0.005 | 1.0632 | 1 | 200 | 200 |
| 10.1556 | 0.19 | 45 | 11.4347 | 0.01 | 1.0395 | 2 | 200 | 200 |
| 10.3187 | 0.21 | 50 | 11.1549 | 0.01 | 1.0178 | 2 | 200 | 200 |
| 10.3187 | 0.23 | 55 | 10.8828 | 0.01 | 1.0099 | 2 | 200 | 200 |
| 9.8042 | 0.25 | 60 | 10.6161 | 0.01 | 1.0040 | 2 | 200 | 200 |
| 9.8042 | 0.27 | 65 | 10.3539 | 0.01 | 0.9980 | 2 | 200 | 200 |
| 9.6489 | 0.3 | 70 | 10.0954 | 0.015 | 1.0 | 3 | 200 | 200 |
| 9.6489 | 0.32 | 75 | 9.8456 | 0.025 | 1.0 | 5 | 200 | 200 |
| 9.6112 | 0.34 | 80 | 9.5980 | 0.045 | 1.0 | 9 | 200 | 200 |
| 9.6112 | 0.36 | 85 | 9.3535 | 0.055 | 1.0 | 11 | 200 | 200 |
| 8.4257 | 0.38 | 90 | 9.1168 | 0.085 | 1.0 | 17 | 200 | 200 |
| 8.4257 | 0.4 | 95 | 8.8920 | 0.105 | 1.0 | 21 | 200 | 200 |
| 8.7311 | 0.42 | 100 | 8.6739 | 0.11 | 1.0 | 22 | 200 | 200 |
| 8.7311 | 0.44 | 105 | 8.4607 | 0.135 | 1.0 | 27 | 200 | 200 |
| 8.3653 | 0.46 | 110 | 8.2551 | 0.165 | 1.0 | 33 | 200 | 200 |
| 8.3653 | 0.48 | 115 | 8.0573 | 0.17 | 1.0 | 34 | 200 | 200 |
| 7.1342 | 0.51 | 120 | 7.8700 | 0.175 | 1.0 | 35 | 200 | 200 |
| 7.1342 | 0.53 | 125 | 7.6908 | 0.185 | 1.0 | 37 | 200 | 200 |
| 7.5411 | 0.55 | 130 | 7.5221 | 0.205 | 1.0 | 41 | 200 | 200 |
| 7.5411 | 0.57 | 135 | 7.3628 | 0.22 | 1.0 | 44 | 200 | 200 |
| 7.2449 | 0.59 | 140 | 7.2131 | 0.23 | 1.0 | 46 | 200 | 200 |
| 7.2449 | 0.61 | 145 | 7.0735 | 0.23 | 1.0 | 46 | 200 | 200 |
| 7.5166 | 0.63 | 150 | 6.9396 | 0.25 | 1.0 | 50 | 200 | 200 |
| 7.5166 | 0.65 | 155 | 6.8186 | 0.25 | 1.0 | 50 | 200 | 200 |
| 7.0016 | 0.67 | 160 | 6.7015 | 0.25 | 1.0 | 50 | 200 | 200 |
| 7.0016 | 0.7 | 165 | 6.5904 | 0.25 | 1.0 | 50 | 200 | 200 |
| 6.0715 | 0.72 | 170 | 6.4879 | 0.255 | 1.0 | 51 | 200 | 200 |
| 6.0715 | 0.74 | 175 | 6.3980 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.312 | 0.76 | 180 | 6.3198 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.312 | 0.78 | 185 | 6.2532 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.3694 | 0.8 | 190 | 6.1952 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.3694 | 0.82 | 195 | 6.1453 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.2196 | 0.84 | 200 | 6.0993 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.2196 | 0.86 | 205 | 6.0556 | 0.265 | 1.0 | 53 | 200 | 200 |
| 5.7131 | 0.89 | 210 | 6.0181 | 0.265 | 1.0 | 53 | 200 | 200 |
| 5.7131 | 0.91 | 215 | 5.9873 | 0.265 | 1.0 | 53 | 200 | 200 |
| 6.1827 | 0.93 | 220 | 5.9619 | 0.265 | 1.0 | 53 | 200 | 200 |
| 6.1827 | 0.95 | 225 | 5.9460 | 0.265 | 1.0 | 53 | 200 | 200 |
| 5.3823 | 0.97 | 230 | 5.9359 | 0.265 | 1.0 | 53 | 200 | 200 |
| 5.3823 | 0.99 | 235 | 5.9312 | 0.265 | 1.0 | 53 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
dlwh/legal-xlm-base_128k
|
dlwh
| 2022-11-29T00:48:35Z | 4 | 2 |
transformers
|
[
"transformers",
"roberta",
"fill-mask",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-29T00:41:54Z |
---
license: apache-2.0
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
dataset:
- joelito/MultiLegalPile_Wikipedia_Filtered
---
Huggingface thinks this is a model, but it's just a tokenizer. Trained on https://huggingface.co/datasets/joelito/MultiLegalPile_Wikipedia_Filtered
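A minimal loading sketch (assuming the uploaded tokenizer files load through `AutoTokenizer`):
```python
from transformers import AutoTokenizer

# Sketch only: the repository provides a tokenizer, not model weights.
tokenizer = AutoTokenizer.from_pretrained("dlwh/legal-xlm-base_128k")
print(tokenizer.tokenize("The parties agree to the terms set out below."))
```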
|
Serhio/sd-fine-tune-v2
|
Serhio
| 2022-11-28T23:43:18Z | 34 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T23:41:46Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### sd-fine-tune-v2 on Stable Diffusion via Dreambooth
#### model by Serhio
This is the Stable Diffusion model fine-tuned on the sd-fine-tune-v2 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **Bashkov Sergey**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
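As a minimal local-inference sketch (the notebooks above are the supported path), assuming a CUDA GPU and the diffusers-format weights in this repository:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: use the instance prompt "Bashkov Sergey" inside the text prompt.
pipe = StableDiffusionPipeline.from_pretrained("Serhio/sd-fine-tune-v2", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait photo of Bashkov Sergey").images[0]
image.save("result.png")
```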
|
pig4431/TweetEval_BERT_5E
|
pig4431
| 2022-11-28T23:38:03Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T23:31:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_BERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- Accuracy: 0.9267
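A minimal inference sketch (not part of the original card), with an invented tweet:
```python
from transformers import pipeline

# Sketch only: classify tweet sentiment with the fine-tuned BERT checkpoint.
clf = pipeline("text-classification", model="pig4431/TweetEval_BERT_5E")
print(clf("What a great game last night!"))
```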
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6264 | 0.04 | 50 | 0.5266 | 0.74 |
| 0.5054 | 0.08 | 100 | 0.5959 | 0.6333 |
| 0.4732 | 0.12 | 150 | 0.3524 | 0.86 |
| 0.3916 | 0.16 | 200 | 0.3195 | 0.8667 |
| 0.3477 | 0.2 | 250 | 0.2878 | 0.8867 |
| 0.3116 | 0.24 | 300 | 0.2903 | 0.92 |
| 0.3039 | 0.28 | 350 | 0.2488 | 0.8933 |
| 0.2633 | 0.32 | 400 | 0.2530 | 0.92 |
| 0.2667 | 0.37 | 450 | 0.2125 | 0.9267 |
| 0.2604 | 0.41 | 500 | 0.2628 | 0.8867 |
| 0.278 | 0.45 | 550 | 0.2322 | 0.8867 |
| 0.2625 | 0.49 | 600 | 0.1903 | 0.92 |
| 0.2808 | 0.53 | 650 | 0.2400 | 0.8933 |
| 0.2396 | 0.57 | 700 | 0.2184 | 0.9067 |
| 0.2571 | 0.61 | 750 | 0.1906 | 0.9133 |
| 0.2676 | 0.65 | 800 | 0.2467 | 0.9067 |
| 0.2288 | 0.69 | 850 | 0.2038 | 0.9133 |
| 0.2959 | 0.73 | 900 | 0.1941 | 0.9 |
| 0.2619 | 0.77 | 950 | 0.2100 | 0.9333 |
| 0.2504 | 0.81 | 1000 | 0.1523 | 0.9333 |
| 0.2338 | 0.85 | 1050 | 0.1429 | 0.94 |
| 0.2529 | 0.89 | 1100 | 0.1269 | 0.94 |
| 0.2238 | 0.93 | 1150 | 0.1722 | 0.9333 |
| 0.2295 | 0.97 | 1200 | 0.1874 | 0.94 |
| 0.2089 | 1.01 | 1250 | 0.2214 | 0.9067 |
| 0.1406 | 1.06 | 1300 | 0.3410 | 0.9133 |
| 0.1587 | 1.1 | 1350 | 0.3330 | 0.9133 |
| 0.1732 | 1.14 | 1400 | 0.2716 | 0.9133 |
| 0.195 | 1.18 | 1450 | 0.3726 | 0.92 |
| 0.1777 | 1.22 | 1500 | 0.2430 | 0.9267 |
| 0.1433 | 1.26 | 1550 | 0.3011 | 0.9267 |
| 0.1333 | 1.3 | 1600 | 0.2489 | 0.9333 |
| 0.1516 | 1.34 | 1650 | 0.3340 | 0.9267 |
| 0.1774 | 1.38 | 1700 | 0.2497 | 0.8933 |
| 0.1608 | 1.42 | 1750 | 0.3234 | 0.9 |
| 0.1534 | 1.46 | 1800 | 0.3383 | 0.9133 |
| 0.1287 | 1.5 | 1850 | 0.3134 | 0.9133 |
| 0.1422 | 1.54 | 1900 | 0.3330 | 0.9 |
| 0.1578 | 1.58 | 1950 | 0.3281 | 0.9133 |
| 0.1786 | 1.62 | 2000 | 0.2939 | 0.9267 |
| 0.2019 | 1.66 | 2050 | 0.3535 | 0.9 |
| 0.1995 | 1.7 | 2100 | 0.3032 | 0.9067 |
| 0.159 | 1.75 | 2150 | 0.2598 | 0.9267 |
| 0.1493 | 1.79 | 2200 | 0.2391 | 0.9267 |
| 0.1748 | 1.83 | 2250 | 0.2258 | 0.92 |
| 0.1783 | 1.87 | 2300 | 0.2749 | 0.9133 |
| 0.1619 | 1.91 | 2350 | 0.2699 | 0.92 |
| 0.1378 | 1.95 | 2400 | 0.2776 | 0.9067 |
| 0.1529 | 1.99 | 2450 | 0.2235 | 0.9333 |
| 0.1071 | 2.03 | 2500 | 0.2841 | 0.9267 |
| 0.0812 | 2.07 | 2550 | 0.3178 | 0.9267 |
| 0.0464 | 2.11 | 2600 | 0.3567 | 0.92 |
| 0.1108 | 2.15 | 2650 | 0.2723 | 0.92 |
| 0.0845 | 2.19 | 2700 | 0.2774 | 0.9267 |
| 0.0795 | 2.23 | 2750 | 0.3027 | 0.9267 |
| 0.0403 | 2.27 | 2800 | 0.3566 | 0.9267 |
| 0.0664 | 2.31 | 2850 | 0.4015 | 0.92 |
| 0.0659 | 2.35 | 2900 | 0.4298 | 0.9067 |
| 0.1059 | 2.39 | 2950 | 0.4028 | 0.92 |
| 0.105 | 2.44 | 3000 | 0.3701 | 0.92 |
| 0.0808 | 2.48 | 3050 | 0.3206 | 0.9267 |
| 0.0811 | 2.52 | 3100 | 0.3644 | 0.9133 |
| 0.0458 | 2.56 | 3150 | 0.3781 | 0.9267 |
| 0.0764 | 2.6 | 3200 | 0.3749 | 0.9267 |
| 0.0567 | 2.64 | 3250 | 0.3995 | 0.92 |
| 0.0971 | 2.68 | 3300 | 0.3455 | 0.92 |
| 0.0579 | 2.72 | 3350 | 0.4508 | 0.92 |
| 0.0853 | 2.76 | 3400 | 0.4350 | 0.92 |
| 0.0577 | 2.8 | 3450 | 0.3804 | 0.9333 |
| 0.0732 | 2.84 | 3500 | 0.4387 | 0.92 |
| 0.0874 | 2.88 | 3550 | 0.3885 | 0.9333 |
| 0.1031 | 2.92 | 3600 | 0.3937 | 0.92 |
| 0.0335 | 2.96 | 3650 | 0.4963 | 0.8933 |
| 0.0913 | 3.0 | 3700 | 0.3827 | 0.9333 |
| 0.047 | 3.04 | 3750 | 0.4136 | 0.92 |
| 0.0531 | 3.08 | 3800 | 0.4362 | 0.92 |
| 0.0265 | 3.12 | 3850 | 0.4857 | 0.92 |
| 0.038 | 3.17 | 3900 | 0.4425 | 0.92 |
| 0.0294 | 3.21 | 3950 | 0.4347 | 0.92 |
| 0.0367 | 3.25 | 4000 | 0.4291 | 0.9333 |
| 0.0102 | 3.29 | 4050 | 0.5178 | 0.9267 |
| 0.0311 | 3.33 | 4100 | 0.4784 | 0.9267 |
| 0.0274 | 3.37 | 4150 | 0.5421 | 0.9267 |
| 0.0275 | 3.41 | 4200 | 0.5194 | 0.92 |
| 0.0795 | 3.45 | 4250 | 0.4788 | 0.92 |
| 0.0413 | 3.49 | 4300 | 0.4393 | 0.9267 |
| 0.0373 | 3.53 | 4350 | 0.4965 | 0.92 |
| 0.0303 | 3.57 | 4400 | 0.4284 | 0.9267 |
| 0.0248 | 3.61 | 4450 | 0.4476 | 0.9267 |
| 0.0557 | 3.65 | 4500 | 0.4690 | 0.92 |
| 0.0358 | 3.69 | 4550 | 0.4774 | 0.9133 |
| 0.0194 | 3.73 | 4600 | 0.4755 | 0.92 |
| 0.0473 | 3.77 | 4650 | 0.4637 | 0.92 |
| 0.0133 | 3.81 | 4700 | 0.4868 | 0.92 |
| 0.0204 | 3.86 | 4750 | 0.4886 | 0.9267 |
| 0.0338 | 3.9 | 4800 | 0.5101 | 0.9267 |
| 0.0424 | 3.94 | 4850 | 0.4812 | 0.9267 |
| 0.0237 | 3.98 | 4900 | 0.4837 | 0.9267 |
| 0.0372 | 4.02 | 4950 | 0.5000 | 0.9267 |
| 0.0254 | 4.06 | 5000 | 0.5210 | 0.92 |
| 0.024 | 4.1 | 5050 | 0.5272 | 0.92 |
| 0.0117 | 4.14 | 5100 | 0.5447 | 0.92 |
| 0.018 | 4.18 | 5150 | 0.5353 | 0.92 |
| 0.0097 | 4.22 | 5200 | 0.5415 | 0.9267 |
| 0.0151 | 4.26 | 5250 | 0.5447 | 0.9267 |
| 0.0118 | 4.3 | 5300 | 0.5285 | 0.9267 |
| 0.0004 | 4.34 | 5350 | 0.5399 | 0.9267 |
| 0.0102 | 4.38 | 5400 | 0.5552 | 0.9267 |
| 0.0012 | 4.42 | 5450 | 0.5689 | 0.92 |
| 0.02 | 4.46 | 5500 | 0.5619 | 0.9267 |
| 0.0056 | 4.5 | 5550 | 0.5784 | 0.92 |
| 0.0271 | 4.55 | 5600 | 0.5766 | 0.92 |
| 0.0191 | 4.59 | 5650 | 0.5662 | 0.92 |
| 0.0311 | 4.63 | 5700 | 0.5514 | 0.9267 |
| 0.0167 | 4.67 | 5750 | 0.5510 | 0.9267 |
| 0.0293 | 4.71 | 5800 | 0.5571 | 0.9267 |
| 0.0304 | 4.75 | 5850 | 0.5494 | 0.92 |
| 0.0161 | 4.79 | 5900 | 0.5469 | 0.9267 |
| 0.0017 | 4.83 | 5950 | 0.5468 | 0.9267 |
| 0.0176 | 4.87 | 6000 | 0.5426 | 0.9267 |
| 0.0094 | 4.91 | 6050 | 0.5402 | 0.9267 |
| 0.0041 | 4.95 | 6100 | 0.5416 | 0.9267 |
| 0.0281 | 4.99 | 6150 | 0.5419 | 0.9267 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
Pramodith/sd-class-butterflies-32
|
Pramodith
| 2022-11-28T23:19:08Z | 38 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T23:18:35Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Pramodith/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
dogeplusplus/sd-class-butterflies-32
|
dogeplusplus
| 2022-11-28T23:02:51Z | 35 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T23:02:05Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("dogeplusplus/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
ali97/sd-class-butterflies-32
|
ali97
| 2022-11-28T22:31:50Z | 37 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T22:31:00Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ali97/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
kanixwang/my-awesome-setfit-model
|
kanixwang
| 2022-11-28T22:19:56Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-28T22:02:13Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
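Since the card lists clustering and semantic search as target tasks, here is a minimal semantic-search sketch with these embeddings, using the repository id of this model; the corpus and query strings are invented for illustration.
```python
from sentence_transformers import SentenceTransformer, util

# Corpus and query are invented for illustration.
model = SentenceTransformer('kanixwang/my-awesome-setfit-model')
corpus = ["The food was delicious", "Shipping took two weeks", "Great customer support"]
query = "How long does delivery take?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each corpus sentence.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```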
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
alryan1478/gpt-neo-125M-DOD-LOW
|
alryan1478
| 2022-11-28T22:19:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-28T21:59:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-DOD-LOW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-DOD-LOW
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0427
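For reference, a minimal text-generation sketch; the prompt and sampling settings below are illustrative only.
```python
from transformers import pipeline

# Repository id from this card; the prompt and sampling settings are illustrative only.
generator = pipeline("text-generation", model="alryan1478/gpt-neo-125M-DOD-LOW")
output = generator("The department issued new guidance on", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```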
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 261 | 6.4768 |
| 6.8863 | 2.0 | 522 | 6.1056 |
| 6.8863 | 3.0 | 783 | 6.0427 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
futuredatascience/action-classifier-v1
|
futuredatascience
| 2022-11-28T22:17:56Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-28T22:17:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
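The repository name suggests these embeddings feed an action classifier; one hedged way to use them is nearest-prototype matching against label descriptions. The label texts below are hypothetical and should be replaced with the labels this classifier actually targets.
```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical label descriptions; replace them with the real labels.
labels = ["approve the request", "reject the request", "escalate to a manager"]

model = SentenceTransformer('futuredatascience/action-classifier-v1')
label_embeddings = model.encode(labels, convert_to_tensor=True)
query_embedding = model.encode("Please forward this to my supervisor", convert_to_tensor=True)

# Pick the label whose embedding is closest to the query.
scores = util.cos_sim(query_embedding, label_embeddings)[0]
print(labels[int(scores.argmax())])
```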
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 105 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1050,
"warmup_steps": 105,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ThomasSimonini/ML-Agents-SnowballFight-1vs1-model
|
ThomasSimonini
| 2022-11-28T22:07:31Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Snowballfight-1vs1",
"region:us"
] |
reinforcement-learning
| 2022-11-28T21:26:07Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Snowballfight-1vs1
library_name: ml-agents
---
|
alryan1478/gpt-neo-125M-wikitext2
|
alryan1478
| 2022-11-28T21:57:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-22T20:55:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-wikitext2
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 259 | 6.4308 |
| 6.8563 | 2.0 | 518 | 6.0898 |
| 6.8563 | 3.0 | 777 | 6.0325 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
michaelmayo704/sd-class-butterflies-64
|
michaelmayo704
| 2022-11-28T21:39:43Z | 34 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T21:38:51Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("michaelmayo704/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
pig4431/YELP_DistilBERT_5E
|
pig4431
| 2022-11-28T21:37:46Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T21:21:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: YELP_DistilBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.9666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YELP_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1557
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6211 | 0.03 | 50 | 0.3873 | 0.8933 |
| 0.3252 | 0.06 | 100 | 0.2181 | 0.92 |
| 0.2241 | 0.1 | 150 | 0.1850 | 0.94 |
| 0.2645 | 0.13 | 200 | 0.1514 | 0.9467 |
| 0.2094 | 0.16 | 250 | 0.1850 | 0.92 |
| 0.2693 | 0.19 | 300 | 0.1504 | 0.9467 |
| 0.2524 | 0.22 | 350 | 0.1479 | 0.96 |
| 0.2538 | 0.26 | 400 | 0.1375 | 0.94 |
| 0.1937 | 0.29 | 450 | 0.1204 | 0.9467 |
| 0.1692 | 0.32 | 500 | 0.1396 | 0.9533 |
| 0.1987 | 0.35 | 550 | 0.1151 | 0.94 |
| 0.207 | 0.38 | 600 | 0.1705 | 0.94 |
| 0.2135 | 0.42 | 650 | 0.1189 | 0.9467 |
| 0.1847 | 0.45 | 700 | 0.1315 | 0.9533 |
| 0.169 | 0.48 | 750 | 0.1407 | 0.9533 |
| 0.1767 | 0.51 | 800 | 0.1675 | 0.9333 |
| 0.1899 | 0.54 | 850 | 0.0913 | 0.9467 |
| 0.1641 | 0.58 | 900 | 0.0954 | 0.96 |
| 0.1765 | 0.61 | 950 | 0.1237 | 0.9467 |
| 0.1663 | 0.64 | 1000 | 0.1029 | 0.9533 |
| 0.1238 | 0.67 | 1050 | 0.1267 | 0.96 |
| 0.2087 | 0.7 | 1100 | 0.1111 | 0.96 |
| 0.1354 | 0.74 | 1150 | 0.0916 | 0.9667 |
| 0.1937 | 0.77 | 1200 | 0.1059 | 0.96 |
| 0.2216 | 0.8 | 1250 | 0.1049 | 0.9467 |
| 0.1788 | 0.83 | 1300 | 0.1472 | 0.94 |
| 0.2138 | 0.86 | 1350 | 0.1234 | 0.9467 |
| 0.1555 | 0.9 | 1400 | 0.1386 | 0.94 |
| 0.1583 | 0.93 | 1450 | 0.1642 | 0.9467 |
| 0.1525 | 0.96 | 1500 | 0.1571 | 0.94 |
| 0.2049 | 0.99 | 1550 | 0.1257 | 0.9333 |
| 0.1266 | 1.02 | 1600 | 0.1677 | 0.94 |
| 0.1282 | 1.06 | 1650 | 0.1307 | 0.9533 |
| 0.1007 | 1.09 | 1700 | 0.1375 | 0.9533 |
| 0.0991 | 1.12 | 1750 | 0.1513 | 0.9533 |
| 0.1211 | 1.15 | 1800 | 0.1229 | 0.9667 |
| 0.1833 | 1.18 | 1850 | 0.1105 | 0.9733 |
| 0.1596 | 1.22 | 1900 | 0.1279 | 0.9533 |
| 0.1172 | 1.25 | 1950 | 0.1124 | 0.96 |
| 0.1137 | 1.28 | 2000 | 0.1407 | 0.9467 |
| 0.1135 | 1.31 | 2050 | 0.1377 | 0.96 |
| 0.096 | 1.34 | 2100 | 0.1022 | 0.9667 |
| 0.1203 | 1.38 | 2150 | 0.1719 | 0.9467 |
| 0.1289 | 1.41 | 2200 | 0.1254 | 0.9667 |
| 0.1392 | 1.44 | 2250 | 0.1086 | 0.9667 |
| 0.1319 | 1.47 | 2300 | 0.1511 | 0.9467 |
| 0.1161 | 1.5 | 2350 | 0.1758 | 0.9467 |
| 0.1402 | 1.54 | 2400 | 0.1369 | 0.96 |
| 0.1433 | 1.57 | 2450 | 0.1495 | 0.9667 |
| 0.1882 | 1.6 | 2500 | 0.1186 | 0.9467 |
| 0.1474 | 1.63 | 2550 | 0.1249 | 0.9533 |
| 0.0937 | 1.66 | 2600 | 0.1390 | 0.96 |
| 0.1231 | 1.7 | 2650 | 0.1467 | 0.96 |
| 0.1485 | 1.73 | 2700 | 0.1602 | 0.9533 |
| 0.1683 | 1.76 | 2750 | 0.1884 | 0.9533 |
| 0.1141 | 1.79 | 2800 | 0.1634 | 0.96 |
| 0.1351 | 1.82 | 2850 | 0.1212 | 0.9733 |
| 0.1298 | 1.86 | 2900 | 0.1224 | 0.96 |
| 0.1616 | 1.89 | 2950 | 0.1241 | 0.96 |
| 0.1159 | 1.92 | 3000 | 0.1532 | 0.9533 |
| 0.1101 | 1.95 | 3050 | 0.1105 | 0.96 |
| 0.0779 | 1.98 | 3100 | 0.1334 | 0.9533 |
| 0.1427 | 2.02 | 3150 | 0.1026 | 0.9733 |
| 0.0673 | 2.05 | 3200 | 0.1231 | 0.96 |
| 0.0901 | 2.08 | 3250 | 0.1077 | 0.9733 |
| 0.0532 | 2.11 | 3300 | 0.1385 | 0.9467 |
| 0.0984 | 2.14 | 3350 | 0.1432 | 0.9467 |
| 0.1006 | 2.18 | 3400 | 0.1183 | 0.9667 |
| 0.067 | 2.21 | 3450 | 0.1533 | 0.9533 |
| 0.0901 | 2.24 | 3500 | 0.1314 | 0.9733 |
| 0.0644 | 2.27 | 3550 | 0.1354 | 0.9667 |
| 0.076 | 2.3 | 3600 | 0.1548 | 0.96 |
| 0.0932 | 2.34 | 3650 | 0.1624 | 0.9667 |
| 0.0777 | 2.37 | 3700 | 0.1878 | 0.9533 |
| 0.106 | 2.4 | 3750 | 0.1721 | 0.96 |
| 0.0621 | 2.43 | 3800 | 0.1470 | 0.9667 |
| 0.0919 | 2.46 | 3850 | 0.1478 | 0.96 |
| 0.091 | 2.5 | 3900 | 0.1371 | 0.9667 |
| 0.0912 | 2.53 | 3950 | 0.1467 | 0.9667 |
| 0.0775 | 2.56 | 4000 | 0.1289 | 0.9733 |
| 0.1053 | 2.59 | 4050 | 0.1107 | 0.9733 |
| 0.063 | 2.62 | 4100 | 0.1031 | 0.9733 |
| 0.0859 | 2.66 | 4150 | 0.0953 | 0.98 |
| 0.084 | 2.69 | 4200 | 0.1216 | 0.9733 |
| 0.1215 | 2.72 | 4250 | 0.1025 | 0.9733 |
| 0.0675 | 2.75 | 4300 | 0.0992 | 0.9667 |
| 0.0608 | 2.78 | 4350 | 0.1288 | 0.96 |
| 0.0965 | 2.82 | 4400 | 0.1179 | 0.9667 |
| 0.061 | 2.85 | 4450 | 0.1178 | 0.9733 |
| 0.0821 | 2.88 | 4500 | 0.1188 | 0.9733 |
| 0.0802 | 2.91 | 4550 | 0.1423 | 0.9667 |
| 0.0901 | 2.94 | 4600 | 0.1367 | 0.96 |
| 0.1069 | 2.98 | 4650 | 0.1118 | 0.9733 |
| 0.0653 | 3.01 | 4700 | 0.1359 | 0.9533 |
| 0.0577 | 3.04 | 4750 | 0.1046 | 0.9667 |
| 0.0467 | 3.07 | 4800 | 0.1366 | 0.96 |
| 0.041 | 3.1 | 4850 | 0.1276 | 0.9667 |
| 0.0585 | 3.13 | 4900 | 0.1426 | 0.9667 |
| 0.0635 | 3.17 | 4950 | 0.1571 | 0.96 |
| 0.0395 | 3.2 | 5000 | 0.1527 | 0.96 |
| 0.034 | 3.23 | 5050 | 0.1323 | 0.9667 |
| 0.0405 | 3.26 | 5100 | 0.1377 | 0.96 |
| 0.0306 | 3.29 | 5150 | 0.1526 | 0.9667 |
| 0.0471 | 3.33 | 5200 | 0.1419 | 0.9667 |
| 0.0646 | 3.36 | 5250 | 0.1459 | 0.9667 |
| 0.0508 | 3.39 | 5300 | 0.1312 | 0.9667 |
| 0.0593 | 3.42 | 5350 | 0.1483 | 0.96 |
| 0.05 | 3.45 | 5400 | 0.1076 | 0.9733 |
| 0.0559 | 3.49 | 5450 | 0.1412 | 0.9667 |
| 0.0614 | 3.52 | 5500 | 0.1597 | 0.9667 |
| 0.0691 | 3.55 | 5550 | 0.1656 | 0.96 |
| 0.0472 | 3.58 | 5600 | 0.1556 | 0.9667 |
| 0.055 | 3.61 | 5650 | 0.1347 | 0.9667 |
| 0.0564 | 3.65 | 5700 | 0.1424 | 0.96 |
| 0.0567 | 3.68 | 5750 | 0.1448 | 0.9733 |
| 0.0645 | 3.71 | 5800 | 0.1290 | 0.9667 |
| 0.0361 | 3.74 | 5850 | 0.1367 | 0.9667 |
| 0.0546 | 3.77 | 5900 | 0.1406 | 0.9667 |
| 0.043 | 3.81 | 5950 | 0.1337 | 0.96 |
| 0.0148 | 3.84 | 6000 | 0.1475 | 0.9533 |
| 0.0922 | 3.87 | 6050 | 0.1318 | 0.9733 |
| 0.0671 | 3.9 | 6100 | 0.1446 | 0.9733 |
| 0.0295 | 3.93 | 6150 | 0.1217 | 0.9733 |
| 0.0503 | 3.97 | 6200 | 0.1133 | 0.9733 |
| 0.0457 | 4.0 | 6250 | 0.1145 | 0.9733 |
| 0.0487 | 4.03 | 6300 | 0.1119 | 0.9733 |
| 0.0491 | 4.06 | 6350 | 0.1274 | 0.9667 |
| 0.0417 | 4.09 | 6400 | 0.1377 | 0.9733 |
| 0.0595 | 4.13 | 6450 | 0.1271 | 0.9733 |
| 0.035 | 4.16 | 6500 | 0.1183 | 0.9733 |
| 0.0482 | 4.19 | 6550 | 0.1153 | 0.9733 |
| 0.0196 | 4.22 | 6600 | 0.1388 | 0.9733 |
| 0.028 | 4.25 | 6650 | 0.1310 | 0.9733 |
| 0.0193 | 4.29 | 6700 | 0.1460 | 0.9667 |
| 0.0233 | 4.32 | 6750 | 0.1233 | 0.9733 |
| 0.0316 | 4.35 | 6800 | 0.1220 | 0.9667 |
| 0.0132 | 4.38 | 6850 | 0.1350 | 0.9533 |
| 0.0415 | 4.41 | 6900 | 0.1547 | 0.9667 |
| 0.0157 | 4.45 | 6950 | 0.1562 | 0.9667 |
| 0.0186 | 4.48 | 7000 | 0.1424 | 0.9667 |
| 0.0012 | 4.51 | 7050 | 0.1421 | 0.9667 |
| 0.0223 | 4.54 | 7100 | 0.1475 | 0.9733 |
| 0.0455 | 4.57 | 7150 | 0.1457 | 0.96 |
| 0.0571 | 4.61 | 7200 | 0.1559 | 0.9667 |
| 0.0305 | 4.64 | 7250 | 0.1614 | 0.9667 |
| 0.0457 | 4.67 | 7300 | 0.1691 | 0.9667 |
| 0.022 | 4.7 | 7350 | 0.1622 | 0.9667 |
| 0.0338 | 4.73 | 7400 | 0.1560 | 0.9667 |
| 0.0365 | 4.77 | 7450 | 0.1553 | 0.9667 |
| 0.025 | 4.8 | 7500 | 0.1512 | 0.9667 |
| 0.0441 | 4.83 | 7550 | 0.1550 | 0.9667 |
| 0.0363 | 4.86 | 7600 | 0.1564 | 0.9667 |
| 0.0188 | 4.89 | 7650 | 0.1553 | 0.9667 |
| 0.0427 | 4.93 | 7700 | 0.1572 | 0.9733 |
| 0.0362 | 4.96 | 7750 | 0.1568 | 0.9667 |
| 0.0115 | 4.99 | 7800 | 0.1557 | 0.9667 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
rlarios/distilbert-base-uncased-finetuned-emotion
|
rlarios
| 2022-11-28T21:34:34Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-25T20:15:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
- name: F1
type: f1
value: 0.9322428116765227
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.9325
- F1: 0.9322
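For reference, a minimal inference sketch with the text-classification pipeline; the input sentence is invented for illustration.
```python
from transformers import pipeline

# The input sentence is invented for illustration.
classifier = pipeline("text-classification", model="rlarios/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see my friends this weekend!"))
```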
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8372 | 1.0 | 250 | 0.3225 | 0.9045 | 0.9017 |
| 0.2534 | 2.0 | 500 | 0.2225 | 0.9325 | 0.9322 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/TUF_ALBERT_5E
|
pig4431
| 2022-11-28T21:34:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T21:32:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_ALBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2389
- Accuracy: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5099 | 0.1 | 50 | 0.3861 | 0.8533 |
| 0.2985 | 0.2 | 100 | 0.2961 | 0.8933 |
| 0.2972 | 0.3 | 150 | 0.2335 | 0.9333 |
| 0.2835 | 0.4 | 200 | 0.1872 | 0.94 |
| 0.26 | 0.5 | 250 | 0.4147 | 0.9133 |
| 0.2986 | 0.59 | 300 | 0.2080 | 0.9267 |
| 0.2554 | 0.69 | 350 | 0.3984 | 0.9133 |
| 0.2306 | 0.79 | 400 | 0.2136 | 0.9333 |
| 0.2218 | 0.89 | 450 | 0.4455 | 0.8867 |
| 0.2113 | 0.99 | 500 | 0.2205 | 0.94 |
| 0.2541 | 1.09 | 550 | 0.1705 | 0.9333 |
| 0.1947 | 1.19 | 600 | 0.3264 | 0.8933 |
| 0.2409 | 1.29 | 650 | 0.2084 | 0.92 |
| 0.1968 | 1.39 | 700 | 0.2550 | 0.9267 |
| 0.172 | 1.49 | 750 | 0.2238 | 0.9467 |
| 0.1478 | 1.58 | 800 | 0.2501 | 0.9533 |
| 0.2199 | 1.68 | 850 | 0.2618 | 0.9133 |
| 0.1792 | 1.78 | 900 | 0.2109 | 0.9267 |
| 0.1831 | 1.88 | 950 | 0.2641 | 0.92 |
| 0.1534 | 1.98 | 1000 | 0.1924 | 0.94 |
| 0.1208 | 2.08 | 1050 | 0.2990 | 0.9333 |
| 0.1118 | 2.18 | 1100 | 0.4952 | 0.9 |
| 0.158 | 2.28 | 1150 | 0.1706 | 0.9533 |
| 0.1163 | 2.38 | 1200 | 0.1238 | 0.9733 |
| 0.1738 | 2.48 | 1250 | 0.1989 | 0.9467 |
| 0.1305 | 2.57 | 1300 | 0.4354 | 0.9067 |
| 0.1668 | 2.67 | 1350 | 0.1276 | 0.9667 |
| 0.1195 | 2.77 | 1400 | 0.2170 | 0.9533 |
| 0.1057 | 2.87 | 1450 | 0.2882 | 0.9333 |
| 0.1172 | 2.97 | 1500 | 0.1435 | 0.9667 |
| 0.0893 | 3.07 | 1550 | 0.1754 | 0.96 |
| 0.0582 | 3.17 | 1600 | 0.1858 | 0.96 |
| 0.0887 | 3.27 | 1650 | 0.4954 | 0.92 |
| 0.1166 | 3.37 | 1700 | 0.2356 | 0.9467 |
| 0.0518 | 3.47 | 1750 | 0.1910 | 0.96 |
| 0.0741 | 3.56 | 1800 | 0.1328 | 0.9733 |
| 0.072 | 3.66 | 1850 | 0.2769 | 0.9467 |
| 0.0534 | 3.76 | 1900 | 0.3501 | 0.94 |
| 0.0776 | 3.86 | 1950 | 0.3171 | 0.94 |
| 0.0537 | 3.96 | 2000 | 0.2138 | 0.9533 |
| 0.0683 | 4.06 | 2050 | 0.2934 | 0.94 |
| 0.015 | 4.16 | 2100 | 0.2233 | 0.9533 |
| 0.0236 | 4.26 | 2150 | 0.2673 | 0.9533 |
| 0.0357 | 4.36 | 2200 | 0.2279 | 0.96 |
| 0.0298 | 4.46 | 2250 | 0.3017 | 0.9467 |
| 0.0357 | 4.55 | 2300 | 0.2910 | 0.9467 |
| 0.0208 | 4.65 | 2350 | 0.2498 | 0.9533 |
| 0.0345 | 4.75 | 2400 | 0.2259 | 0.9667 |
| 0.0174 | 4.85 | 2450 | 0.2274 | 0.9667 |
| 0.0393 | 4.95 | 2500 | 0.2389 | 0.9533 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
anikethjr/PromoGen_K562_2080Ti_restart
|
anikethjr
| 2022-11-28T21:24:36Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"prophetnet",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-27T05:27:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: PromoGen_K562_2080Ti_restart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PromoGen_K562_2080Ti_restart
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.7676 | 0.49 | 2500 | 0.7383 |
| 0.7121 | 0.97 | 5000 | 0.6867 |
| 0.6914 | 1.46 | 7500 | 0.6705 |
| 0.6837 | 1.95 | 10000 | 0.6622 |
| 0.6778 | 2.44 | 12500 | 0.6558 |
| 0.6748 | 2.92 | 15000 | 0.6517 |
| 0.6676 | 3.41 | 17500 | 0.6433 |
| 0.6593 | 3.9 | 20000 | 0.6358 |
| 0.6584 | 4.38 | 22500 | 0.6320 |
| 0.6557 | 4.87 | 25000 | 0.6301 |
| 0.6523 | 5.36 | 27500 | 0.6257 |
| 0.6478 | 5.84 | 30000 | 0.6236 |
| 0.6393 | 6.33 | 32500 | 0.6145 |
| 0.6039 | 6.82 | 35000 | 0.5658 |
| 0.5616 | 7.31 | 37500 | 0.5376 |
| 0.5518 | 7.79 | 40000 | 0.5310 |
| 0.5509 | 8.28 | 42500 | 0.5273 |
| 0.5487 | 8.77 | 45000 | 0.5261 |
| 0.5479 | 9.25 | 47500 | 0.5249 |
| 0.546 | 9.74 | 50000 | 0.5242 |
| 0.5447 | 10.23 | 52500 | 0.5229 |
| 0.5439 | 10.71 | 55000 | 0.5220 |
| 0.5433 | 11.2 | 57500 | 0.5209 |
| 0.5394 | 11.69 | 60000 | 0.5162 |
| 0.5153 | 12.18 | 62500 | 0.4944 |
| 0.5137 | 12.66 | 65000 | 0.4932 |
| 0.514 | 13.15 | 67500 | 0.4924 |
| 0.5131 | 13.64 | 70000 | 0.4919 |
| 0.5104 | 14.12 | 72500 | 0.4914 |
| 0.5122 | 14.61 | 75000 | 0.4906 |
| 0.5089 | 15.1 | 77500 | 0.4901 |
| 0.5076 | 15.59 | 80000 | 0.4891 |
| 0.4986 | 16.07 | 82500 | 0.4721 |
| 0.4875 | 16.56 | 85000 | 0.4672 |
| 0.4887 | 17.05 | 87500 | 0.4669 |
| 0.4839 | 17.53 | 90000 | 0.4661 |
| 0.4849 | 18.02 | 92500 | 0.4654 |
| 0.4848 | 18.51 | 95000 | 0.4649 |
| 0.4831 | 18.99 | 97500 | 0.4646 |
| 0.4816 | 19.48 | 100000 | 0.4644 |
| 0.4808 | 19.97 | 102500 | 0.4637 |
| 0.4812 | 20.46 | 105000 | 0.4634 |
| 0.4813 | 20.94 | 107500 | 0.4633 |
| 0.4818 | 21.43 | 110000 | 0.4631 |
| 0.4813 | 21.92 | 112500 | 0.4629 |
| 0.4782 | 22.4 | 115000 | 0.4628 |
| 0.4804 | 22.89 | 117500 | 0.4626 |
| 0.4815 | 23.38 | 120000 | 0.4625 |
| 0.4812 | 23.87 | 122500 | 0.4625 |
| 0.4785 | 24.35 | 125000 | 0.4624 |
| 0.4795 | 24.84 | 127500 | 0.4624 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.0
- Tokenizers 0.13.0.dev0
|
Inayat/Fine_tune_whisper_small
|
Inayat
| 2022-11-28T21:14:32Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-14T19:18:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Fine_tune_whisper_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_tune_whisper_small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8238
- Wer: 42.9362
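For reference, a minimal transcription sketch with the automatic-speech-recognition pipeline; `sample.wav` is a placeholder path, and decoding local audio assumes ffmpeg is available.
```python
from transformers import pipeline

# "sample.wav" is a placeholder path; decoding local audio requires ffmpeg.
asr = pipeline("automatic-speech-recognition", model="Inayat/Fine_tune_whisper_small")
print(asr("sample.wav")["text"])
```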
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 900
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2994 | 3.92 | 200 | 0.6607 | 44.0797 |
| 0.0201 | 7.84 | 400 | 0.7371 | 42.6042 |
| 0.002 | 11.76 | 600 | 0.8027 | 42.5304 |
| 0.0011 | 15.69 | 800 | 0.8238 | 42.9362 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pig4431/TweetEval_DistilBERT_5E
|
pig4431
| 2022-11-28T21:09:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T21:03:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_DistilBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9133333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- Accuracy: 0.9133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5747 | 0.04 | 50 | 0.4843 | 0.7333 |
| 0.4336 | 0.08 | 100 | 0.2888 | 0.8667 |
| 0.3437 | 0.12 | 150 | 0.2895 | 0.8667 |
| 0.3375 | 0.16 | 200 | 0.2864 | 0.8733 |
| 0.3072 | 0.2 | 250 | 0.2577 | 0.8867 |
| 0.3019 | 0.24 | 300 | 0.2574 | 0.8933 |
| 0.2662 | 0.28 | 350 | 0.2621 | 0.8867 |
| 0.283 | 0.32 | 400 | 0.2340 | 0.92 |
| 0.2949 | 0.37 | 450 | 0.2482 | 0.8933 |
| 0.3066 | 0.41 | 500 | 0.2537 | 0.9 |
| 0.2457 | 0.45 | 550 | 0.2473 | 0.9 |
| 0.295 | 0.49 | 600 | 0.2177 | 0.9133 |
| 0.2862 | 0.53 | 650 | 0.2215 | 0.9133 |
| 0.2603 | 0.57 | 700 | 0.2272 | 0.9133 |
| 0.2976 | 0.61 | 750 | 0.2298 | 0.9067 |
| 0.2823 | 0.65 | 800 | 0.2451 | 0.8933 |
| 0.2583 | 0.69 | 850 | 0.2645 | 0.8933 |
| 0.2694 | 0.73 | 900 | 0.2352 | 0.9 |
| 0.2433 | 0.77 | 950 | 0.2322 | 0.9133 |
| 0.2598 | 0.81 | 1000 | 0.2300 | 0.9 |
| 0.2701 | 0.85 | 1050 | 0.2162 | 0.9 |
| 0.2227 | 0.89 | 1100 | 0.2135 | 0.8933 |
| 0.2045 | 0.93 | 1150 | 0.2233 | 0.9133 |
| 0.2821 | 0.97 | 1200 | 0.2194 | 0.9 |
| 0.2342 | 1.01 | 1250 | 0.2488 | 0.88 |
| 0.2028 | 1.06 | 1300 | 0.2451 | 0.8867 |
| 0.1509 | 1.1 | 1350 | 0.3174 | 0.88 |
| 0.1888 | 1.14 | 1400 | 0.2537 | 0.9133 |
| 0.1825 | 1.18 | 1450 | 0.2559 | 0.9067 |
| 0.1721 | 1.22 | 1500 | 0.2511 | 0.92 |
| 0.2137 | 1.26 | 1550 | 0.2963 | 0.9133 |
| 0.2153 | 1.3 | 1600 | 0.2210 | 0.92 |
| 0.1989 | 1.34 | 1650 | 0.2231 | 0.9133 |
| 0.2155 | 1.38 | 1700 | 0.1991 | 0.9133 |
| 0.1912 | 1.42 | 1750 | 0.2146 | 0.92 |
| 0.1623 | 1.46 | 1800 | 0.2721 | 0.9 |
| 0.2236 | 1.5 | 1850 | 0.2301 | 0.9267 |
| 0.1907 | 1.54 | 1900 | 0.1988 | 0.92 |
| 0.1286 | 1.58 | 1950 | 0.2326 | 0.9 |
| 0.2147 | 1.62 | 2000 | 0.2432 | 0.9267 |
| 0.2018 | 1.66 | 2050 | 0.2162 | 0.9067 |
| 0.2073 | 1.7 | 2100 | 0.2153 | 0.9133 |
| 0.1498 | 1.75 | 2150 | 0.2335 | 0.92 |
| 0.1812 | 1.79 | 2200 | 0.2275 | 0.9267 |
| 0.1482 | 1.83 | 2250 | 0.2734 | 0.9 |
| 0.2233 | 1.87 | 2300 | 0.2454 | 0.9 |
| 0.1673 | 1.91 | 2350 | 0.2394 | 0.92 |
| 0.1555 | 1.95 | 2400 | 0.2725 | 0.92 |
| 0.2082 | 1.99 | 2450 | 0.2684 | 0.9133 |
| 0.1545 | 2.03 | 2500 | 0.3049 | 0.9067 |
| 0.1384 | 2.07 | 2550 | 0.2960 | 0.9133 |
| 0.1201 | 2.11 | 2600 | 0.3259 | 0.9 |
| 0.1348 | 2.15 | 2650 | 0.3091 | 0.9133 |
| 0.1046 | 2.19 | 2700 | 0.2916 | 0.9267 |
| 0.1506 | 2.23 | 2750 | 0.2910 | 0.9133 |
| 0.1481 | 2.27 | 2800 | 0.2855 | 0.9067 |
| 0.1318 | 2.31 | 2850 | 0.3075 | 0.9 |
| 0.1204 | 2.35 | 2900 | 0.3169 | 0.8933 |
| 0.1669 | 2.39 | 2950 | 0.3050 | 0.9067 |
| 0.1725 | 2.44 | 3000 | 0.2970 | 0.9133 |
| 0.1305 | 2.48 | 3050 | 0.3065 | 0.9 |
| 0.1508 | 2.52 | 3100 | 0.3079 | 0.9133 |
| 0.184 | 2.56 | 3150 | 0.3482 | 0.9067 |
| 0.1263 | 2.6 | 3200 | 0.3310 | 0.9 |
| 0.1282 | 2.64 | 3250 | 0.3520 | 0.8933 |
| 0.1217 | 2.68 | 3300 | 0.3158 | 0.9067 |
| 0.1203 | 2.72 | 3350 | 0.3351 | 0.92 |
| 0.1068 | 2.76 | 3400 | 0.3239 | 0.92 |
| 0.1517 | 2.8 | 3450 | 0.3247 | 0.92 |
| 0.113 | 2.84 | 3500 | 0.3269 | 0.9133 |
| 0.1276 | 2.88 | 3550 | 0.3162 | 0.92 |
| 0.1548 | 2.92 | 3600 | 0.3196 | 0.9133 |
| 0.1305 | 2.96 | 3650 | 0.3163 | 0.92 |
| 0.149 | 3.0 | 3700 | 0.3013 | 0.92 |
| 0.0816 | 3.04 | 3750 | 0.3097 | 0.9267 |
| 0.0884 | 3.08 | 3800 | 0.3028 | 0.92 |
| 0.0727 | 3.12 | 3850 | 0.3487 | 0.9133 |
| 0.1018 | 3.17 | 3900 | 0.3447 | 0.92 |
| 0.1266 | 3.21 | 3950 | 0.3589 | 0.9133 |
| 0.1216 | 3.25 | 4000 | 0.3464 | 0.92 |
| 0.091 | 3.29 | 4050 | 0.3454 | 0.92 |
| 0.0829 | 3.33 | 4100 | 0.3450 | 0.92 |
| 0.1084 | 3.37 | 4150 | 0.3670 | 0.92 |
| 0.0754 | 3.41 | 4200 | 0.3661 | 0.92 |
| 0.094 | 3.45 | 4250 | 0.3588 | 0.9067 |
| 0.0641 | 3.49 | 4300 | 0.3936 | 0.92 |
| 0.1138 | 3.53 | 4350 | 0.3616 | 0.92 |
| 0.0744 | 3.57 | 4400 | 0.3562 | 0.92 |
| 0.0697 | 3.61 | 4450 | 0.3532 | 0.9267 |
| 0.1083 | 3.65 | 4500 | 0.3451 | 0.9267 |
| 0.0701 | 3.69 | 4550 | 0.3307 | 0.92 |
| 0.0849 | 3.73 | 4600 | 0.3797 | 0.92 |
| 0.09 | 3.77 | 4650 | 0.3746 | 0.9267 |
| 0.0799 | 3.81 | 4700 | 0.3799 | 0.92 |
| 0.0589 | 3.86 | 4750 | 0.3805 | 0.92 |
| 0.0578 | 3.9 | 4800 | 0.3910 | 0.9133 |
| 0.0816 | 3.94 | 4850 | 0.3856 | 0.9133 |
| 0.1366 | 3.98 | 4900 | 0.3707 | 0.92 |
| 0.0846 | 4.02 | 4950 | 0.3802 | 0.92 |
| 0.0401 | 4.06 | 5000 | 0.3842 | 0.92 |
| 0.0851 | 4.1 | 5050 | 0.3773 | 0.9267 |
| 0.0514 | 4.14 | 5100 | 0.3922 | 0.9133 |
| 0.0909 | 4.18 | 5150 | 0.3893 | 0.92 |
| 0.0764 | 4.22 | 5200 | 0.3818 | 0.9133 |
| 0.1208 | 4.26 | 5250 | 0.4096 | 0.92 |
| 0.0689 | 4.3 | 5300 | 0.3940 | 0.9133 |
| 0.0524 | 4.34 | 5350 | 0.4020 | 0.9133 |
| 0.0733 | 4.38 | 5400 | 0.4002 | 0.9133 |
| 0.0699 | 4.42 | 5450 | 0.4013 | 0.9133 |
| 0.0712 | 4.46 | 5500 | 0.4037 | 0.9067 |
| 0.0557 | 4.5 | 5550 | 0.4121 | 0.92 |
| 0.0679 | 4.55 | 5600 | 0.4067 | 0.9133 |
| 0.0651 | 4.59 | 5650 | 0.4194 | 0.9133 |
| 0.0607 | 4.63 | 5700 | 0.4007 | 0.9133 |
| 0.0676 | 4.67 | 5750 | 0.4013 | 0.9133 |
| 0.0303 | 4.71 | 5800 | 0.3984 | 0.9133 |
| 0.0674 | 4.75 | 5850 | 0.4037 | 0.9133 |
| 0.0842 | 4.79 | 5900 | 0.4072 | 0.9133 |
| 0.0516 | 4.83 | 5950 | 0.4096 | 0.9133 |
| 0.0556 | 4.87 | 6000 | 0.4111 | 0.92 |
| 0.0277 | 4.91 | 6050 | 0.4079 | 0.9133 |
| 0.0629 | 4.95 | 6100 | 0.4053 | 0.9133 |
| 0.0426 | 4.99 | 6150 | 0.4043 | 0.9133 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
futuredatascience/to-classifier-v1
|
futuredatascience
| 2022-11-28T20:53:10Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-28T20:52:58Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 530,
"warmup_steps": 53,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
futuredatascience/from-classifier-v1
|
futuredatascience
| 2022-11-28T20:07:27Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-28T20:07:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 530,
"warmup_steps": 53,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
reubenjohn/stack-overflow-open-status-classifier-pt
|
reubenjohn
| 2022-11-28T20:01:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-16T03:44:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: stack-overflow-open-status-classifier-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stack-overflow-open-status-classifier-pt
This model is a fine-tuned version of [reubenjohn/stack-overflow-open-status-classifier-pt](https://huggingface.co/reubenjohn/stack-overflow-open-status-classifier-pt) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9448
- eval_runtime: 3.554
- eval_samples_per_second: 28.137
- eval_steps_per_second: 0.563
- epoch: 0.01
- step: 60
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 1
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
motmono/a2c-AntBulletEnv-v0
|
motmono
| 2022-11-28T19:58:24Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-28T19:57:12Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1539.68 +/- 213.96
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename inside the repository is an assumption and may need adjusting:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is assumed to follow the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub("motmono/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
pig4431/TUF_roBERTa_5E
|
pig4431
| 2022-11-28T19:55:07Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T19:48:29Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_roBERTa_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2136
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4665 | 0.1 | 50 | 0.2587 | 0.9333 |
| 0.245 | 0.2 | 100 | 0.1355 | 0.96 |
| 0.2079 | 0.3 | 150 | 0.1454 | 0.9533 |
| 0.2098 | 0.4 | 200 | 0.1809 | 0.9533 |
| 0.1637 | 0.5 | 250 | 0.2299 | 0.94 |
| 0.1869 | 0.59 | 300 | 0.1324 | 0.9667 |
| 0.2202 | 0.69 | 350 | 0.1786 | 0.9467 |
| 0.2084 | 0.79 | 400 | 0.1541 | 0.9533 |
| 0.148 | 0.89 | 450 | 0.1790 | 0.9533 |
| 0.1945 | 0.99 | 500 | 0.1168 | 0.9667 |
| 0.1648 | 1.09 | 550 | 0.1153 | 0.96 |
| 0.1099 | 1.19 | 600 | 0.1239 | 0.96 |
| 0.1238 | 1.29 | 650 | 0.1486 | 0.9533 |
| 0.1067 | 1.39 | 700 | 0.1195 | 0.96 |
| 0.1324 | 1.49 | 750 | 0.1134 | 0.96 |
| 0.1128 | 1.58 | 800 | 0.1180 | 0.9667 |
| 0.1406 | 1.68 | 850 | 0.2081 | 0.9533 |
| 0.1516 | 1.78 | 900 | 0.1987 | 0.9533 |
| 0.1537 | 1.88 | 950 | 0.1644 | 0.96 |
| 0.0957 | 1.98 | 1000 | 0.1660 | 0.96 |
| 0.0699 | 2.08 | 1050 | 0.2057 | 0.9533 |
| 0.1007 | 2.18 | 1100 | 0.2336 | 0.9533 |
| 0.0677 | 2.28 | 1150 | 0.2399 | 0.9467 |
| 0.059 | 2.38 | 1200 | 0.2331 | 0.96 |
| 0.1051 | 2.48 | 1250 | 0.1974 | 0.9533 |
| 0.0778 | 2.57 | 1300 | 0.2857 | 0.9467 |
| 0.1099 | 2.67 | 1350 | 0.2641 | 0.9533 |
| 0.0747 | 2.77 | 1400 | 0.2219 | 0.9533 |
| 0.0874 | 2.87 | 1450 | 0.2780 | 0.9533 |
| 0.0675 | 2.97 | 1500 | 0.1993 | 0.96 |
| 0.052 | 3.07 | 1550 | 0.1918 | 0.96 |
| 0.0214 | 3.17 | 1600 | 0.2410 | 0.96 |
| 0.0512 | 3.27 | 1650 | 0.2353 | 0.96 |
| 0.0548 | 3.37 | 1700 | 0.2722 | 0.9533 |
| 0.0554 | 3.47 | 1750 | 0.1593 | 0.9733 |
| 0.0742 | 3.56 | 1800 | 0.2568 | 0.96 |
| 0.064 | 3.66 | 1850 | 0.2358 | 0.96 |
| 0.052 | 3.76 | 1900 | 0.2161 | 0.9667 |
| 0.0349 | 3.86 | 1950 | 0.2497 | 0.96 |
| 0.0868 | 3.96 | 2000 | 0.1834 | 0.9667 |
| 0.0445 | 4.06 | 2050 | 0.2441 | 0.9533 |
| 0.0388 | 4.16 | 2100 | 0.2136 | 0.9667 |
| 0.0484 | 4.26 | 2150 | 0.2114 | 0.9667 |
| 0.0263 | 4.36 | 2200 | 0.2325 | 0.96 |
| 0.0409 | 4.46 | 2250 | 0.2454 | 0.9533 |
| 0.0324 | 4.55 | 2300 | 0.2105 | 0.9667 |
| 0.0295 | 4.65 | 2350 | 0.2118 | 0.9667 |
| 0.0372 | 4.75 | 2400 | 0.2005 | 0.9667 |
| 0.0294 | 4.85 | 2450 | 0.2057 | 0.9667 |
| 0.0354 | 4.95 | 2500 | 0.2136 | 0.9667 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
altsoph/xlmr-AER
|
altsoph
| 2022-11-28T19:22:35Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"nlp",
"roberta",
"xlmr",
"classifier",
"aer",
"narrative",
"entity recognition",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-27T22:41:15Z |
---
language:
- en
thumbnail: https://raw.githubusercontent.com/altsoph/misc/main/imgs/aer_logo.png
tags:
- nlp
- roberta
- xlmr
- classifier
- aer
- narrative
- entity recognition
license: mit
---
An XLM-RoBERTa-based language model fine-tuned for AER (Actionable Entities Recognition), i.e. recognition of entities that protagonists could interact with for further plot development.
We used 5K+ locations from 1K interactive text fiction games and extracted textual descriptions of locations and lists of actionable entities in them.
The resulting [BAER dataset is available here](https://github.com/altsoph/BAER). Then we used it to train this model.
The example of usage:
```py
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
MODEL_NAME = "altsoph/xlmr-AER"
text = """This bedroom is extremely spare, with dirty laundry scattered haphazardly all over the floor. Cleaner clothing can be found in the dresser.
A bathroom lies to the south, while a door to the east leads to the living room."""
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
pipe = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple", ignore_labels=['O','PAD'])
entities = pipe(text)
print(entities)
```
If you use the model, please cite the following:
```
@inproceedings{Tikhonov-etal-2022-AER,
title = "Actionable Entities Recognition Benchmark for Interactive Fiction",
author = "Alexey Tikhonov and Ivan P. Yamshchikov",
year = "2022",
}
```
|
essayproj/roberta-base-essay
|
essayproj
| 2022-11-28T19:08:54Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"feature-extraction",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-28T19:08:03Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: roberta-base-essay
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# roberta-base-essay
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|
Dagar/t5-small-science-papers-NIPS
|
Dagar
| 2022-11-28T18:21:27Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-28T18:00:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-science-papers-NIPS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-science-papers-NIPS
This model is a fine-tuned version of [Dagar/t5-small-science-papers](https://huggingface.co/Dagar/t5-small-science-papers) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7566
- Rouge1: 15.7066
- Rouge2: 2.5654
- Rougel: 11.4679
- Rougelsum: 14.4017
- Gen Len: 19.0
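Given the ROUGE evaluation, a summarization pipeline sketch may be useful for reference; the input text and generation lengths below are illustrative only.
```python
from transformers import pipeline

# Input text and generation lengths are illustrative only.
summarizer = pipeline("summarization", model="Dagar/t5-small-science-papers-NIPS")
paper = "We propose a new optimization method for training deep neural networks and evaluate it on several benchmarks."
print(summarizer(paper, max_length=60, min_length=10)[0]["summary_text"])
```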
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
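As a rough, non-authoritative sketch, the hyperparameters above correspond to a `Seq2SeqTrainingArguments` configuration along these lines (the output directory name and `predict_with_generate` are assumptions; dataset loading and the `Seq2SeqTrainer` call are omitted):
```py
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration implied by the listed hyperparameters.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-science-papers-NIPS",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed precision
    predict_with_generate=True,   # assumed, since ROUGE scores are reported
)
```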
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 318 | 5.1856 | 13.7172 | 2.0644 | 10.2189 | 12.838 | 19.0 |
| 5.4522 | 2.0 | 636 | 5.0383 | 15.6211 | 2.1808 | 11.3561 | 14.3054 | 19.0 |
| 5.4522 | 3.0 | 954 | 4.9486 | 15.1659 | 2.3308 | 11.1052 | 13.9456 | 19.0 |
| 5.1254 | 4.0 | 1272 | 4.8851 | 15.716 | 2.4099 | 11.4954 | 14.5099 | 19.0 |
| 4.9794 | 5.0 | 1590 | 4.8456 | 15.5507 | 2.4267 | 11.3867 | 14.3237 | 19.0 |
| 4.9794 | 6.0 | 1908 | 4.8073 | 15.8406 | 2.4254 | 11.6878 | 14.6154 | 19.0 |
| 4.8823 | 7.0 | 2226 | 4.7872 | 15.5554 | 2.4637 | 11.3401 | 14.3183 | 19.0 |
| 4.8338 | 8.0 | 2544 | 4.7680 | 15.4783 | 2.4888 | 11.3364 | 14.2031 | 19.0 |
| 4.8338 | 9.0 | 2862 | 4.7621 | 15.958 | 2.5662 | 11.6139 | 14.6576 | 19.0 |
| 4.7838 | 10.0 | 3180 | 4.7566 | 15.7066 | 2.5654 | 11.4679 | 14.4017 | 19.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|