| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
plasmo/voxel-ish
|
plasmo
| 2023-05-05T11:27:02Z | 67 | 34 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-24T14:01:22Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Jak's Voxel-ish Image Pack for Stable Diffusion
Another fantastic image pack from Jak_TheAI_Artist, trained on 143 images over 8,000 training steps with 20% training text.
Include the prompt trigger "voxel-ish" to activate the style.
Tip: add "intricate detail" to the prompt for a semi-realistic image.
### UPDATE: Version 1.2 available [here](https://huggingface.co/plasmo/vox2)
Sample pictures of this concept:
voxel-ish








|
cansurav/bert-base-uncased-finetuned-cola-dropout-0.3
|
cansurav
| 2023-05-05T11:25:39Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-05T11:11:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-dropout-0.3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6036344190543846
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-dropout-0.3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2847
- Matthews Correlation: 0.6036
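The Matthews correlation reported above ranges from -1 to +1 and is computed from the validation confusion matrix. A minimal sketch of the metric (illustrative labels, not the actual CoLA predictions; real evaluation uses the `evaluate`/`sklearn` implementations):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation from binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 0, 1, 1], [1, 0, 1, 1]))  # perfect agreement -> 1.0
```

Unlike plain accuracy, MCC stays near zero for a classifier that ignores the minority class, which is why it is the standard CoLA metric.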
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
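With `lr_scheduler_type: linear` and no warmup, the learning rate decays linearly from 2e-05 to zero over training. A rough sketch of that schedule (total step count taken from the results table: 535 steps per epoch for 10 epochs):

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear warmup, then linear decay to zero (shape of transformers' linear schedule)."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 5350  # 10 epochs x 535 steps per epoch
print(linear_lr(0, total))      # full base_lr at the start
print(linear_lr(total, total))  # 0.0 at the end
```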
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4995 | 1.0 | 535 | 0.5102 | 0.4897 |
| 0.3023 | 2.0 | 1070 | 0.4585 | 0.5848 |
| 0.1951 | 3.0 | 1605 | 0.6793 | 0.5496 |
| 0.145 | 4.0 | 2140 | 0.7694 | 0.5925 |
| 0.1024 | 5.0 | 2675 | 1.0057 | 0.5730 |
| 0.0691 | 6.0 | 3210 | 1.0275 | 0.5892 |
| 0.0483 | 7.0 | 3745 | 1.0272 | 0.5788 |
| 0.0404 | 8.0 | 4280 | 1.2537 | 0.5810 |
| 0.0219 | 9.0 | 4815 | 1.3020 | 0.5780 |
| 0.0224 | 10.0 | 5350 | 1.2847 | 0.6036 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chribeiro/ppo-SnowballTarget
|
chribeiro
| 2023-05-05T11:23:09Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-05-05T11:23:04Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: chribeiro/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
s3nh/zelda-botw-stable-diffusion
|
s3nh
| 2023-05-05T11:22:27Z | 37 | 17 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-09T11:05:43Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
### Zelda BOTW Artwork Diffusion Model
I present to you a fine-tuned version of stable-diffusion-v1-5, trained heavily on
artwork from The Legend of Zelda: Breath of the Wild.
Use the token **_botw style_** in your prompts for the effect.
The model was trained with the diffusers library, building on its Dreambooth implementation.
Training steps included:
- prior preservation loss
- train-text-encoder fine tuning
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
#!pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch
model_id = "s3nh/zelda-botw-stable-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Rain forest, botw style"
image = pipe(prompt).images[0]
image.save("./example_output.png")
```
# Gallery
## Grumpy cat, botw style
<img src = "https://huggingface.co/s3nh/zelda-botw-stable-diffusion/resolve/main/grumpy cat0.png">
<img src = "https://huggingface.co/s3nh/zelda-botw-stable-diffusion/resolve/main/grumpy cat1.png">
<img src = "https://huggingface.co/s3nh/zelda-botw-stable-diffusion/resolve/main/grumpy cat2.png">
<img src = "https://huggingface.co/s3nh/zelda-botw-stable-diffusion/resolve/main/grumpy cat3.png">
## Landscape, botw style




## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights over the outputs you generate; you are free to use them, but you are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, be aware that you have to include the same use restrictions as in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
s3nh/beksinski-style-stable-diffusion
|
s3nh
| 2023-05-05T11:22:06Z | 39 | 26 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-05T13:54:26Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
### Zdzislaw Beksinski Art Diffusion Model
I present to you a fine-tuned version of stable-diffusion-v1-5, trained heavily on
the work of the great artist Zdzislaw Beksinski.
Use the token **_beksinski style_** in your prompts for the effect.
The model was trained with the diffusers library, building on its Dreambooth implementation.
Training steps included:
- prior preservation loss
- train-text-encoder fine tuning
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
#!pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch
model_id = "s3nh/beksinski-style-stable-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Bus riding to school, beksinski style"
image = pipe(prompt).images[0]
image.save("./example_output.png")
```
# Gallery
## Bus riding to school, beksinski style.



## Car traffic, beksinski style



## Eating breakfast on sunny day, beksinski style

## Dog drinking coffee, beksinski style

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights over the outputs you generate; you are free to use them, but you are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, be aware that you have to include the same use restrictions as in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
plasmo/zombie-vector
|
plasmo
| 2023-05-05T11:20:13Z | 47 | 20 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-23T02:04:01Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
widget:
- text: "zombie_vector "
---
### Jak's Zombie Vector Pack for Stable Diffusion
Another fantastic image pack from Jak_TheAI_Artist, trained on 124 images over 5,000 training steps with 20% training text.
Include the prompt trigger "zombie_vector" to activate the style.
Perfect for designing T-shirts and zombie vector art.
Sample pictures of this concept:




|
Bainbridge/gpt2-ear_01-hs_cn
|
Bainbridge
| 2023-05-05T11:18:38Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-03T14:39:06Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-ear_01-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-ear_01-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
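`lr_scheduler_warmup_ratio: 0.1` means the first 10% of optimizer steps ramp the learning rate up linearly before decay begins. A sketch of how the ratio turns into a step count (the total of ~1500 steps is an assumption inferred from the table, which logs step 250 at epoch 0.5):

```python
def warmup_steps(total_steps, warmup_ratio=0.1):
    """Number of warmup steps implied by a warmup ratio."""
    return int(total_steps * warmup_ratio)

# ~500 steps/epoch x 3 epochs (assumed from the results table below)
print(warmup_steps(1500))  # 150
```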
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 73.2086 | 0.02 | 10 | 69.5757 |
| 45.7678 | 0.04 | 20 | 32.9873 |
| 13.2515 | 0.06 | 30 | 10.6430 |
| 6.5161 | 0.08 | 40 | 4.2683 |
| 2.5505 | 0.1 | 50 | 2.0421 |
| 1.1408 | 0.12 | 60 | 1.0782 |
| 0.7897 | 0.14 | 70 | 0.9155 |
| 0.7106 | 0.16 | 80 | 0.7515 |
| 0.4254 | 0.18 | 90 | 0.6416 |
| 0.398 | 0.2 | 100 | 0.6129 |
| 0.3089 | 0.22 | 110 | 0.6074 |
| 0.3197 | 0.24 | 120 | 0.5942 |
| 0.3142 | 0.26 | 130 | 0.6017 |
| 0.307 | 0.28 | 140 | 0.5854 |
| 0.2895 | 0.3 | 150 | 0.5731 |
| 0.276 | 0.32 | 160 | 0.5735 |
| 0.2107 | 0.34 | 170 | 0.5753 |
| 0.3173 | 0.36 | 180 | 0.5642 |
| 0.3139 | 0.38 | 190 | 0.5654 |
| 0.2725 | 0.4 | 200 | 0.5622 |
| 0.368 | 0.42 | 210 | 0.5616 |
| 0.3203 | 0.44 | 220 | 0.5600 |
| 0.2286 | 0.46 | 230 | 0.5616 |
| 0.2365 | 0.48 | 240 | 0.5612 |
| 0.248 | 0.5 | 250 | 0.5615 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BakkerHenk/glitch
|
BakkerHenk
| 2023-05-05T11:15:45Z | 33 | 1 |
diffusers
|
[
"diffusers",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-09T19:35:28Z |
---
license: mit
---
### Glitch on Stable Diffusion via Dreambooth
#### model by BakkerHenk
This is the Stable Diffusion model fine-tuned on the Glitch concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo in sks glitched style**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:













|
jordiclive/alpaca_gpt4-dolly_15k-vicuna-lora-7b
|
jordiclive
| 2023-05-05T11:14:08Z | 0 | 2 | null |
[
"sft",
"text-generation",
"en",
"dataset:sahil2801/CodeAlpaca-20k",
"dataset:yahma/alpaca-cleaned",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"dataset:jeffwan/sharegpt_vicuna",
"dataset:qwedsacf/grade-school-math-instructions",
"dataset:vicgalle/alpaca-gpt4",
"license:mit",
"region:us"
] |
text-generation
| 2023-04-29T09:12:37Z |
---
license: mit
datasets:
- sahil2801/CodeAlpaca-20k
- yahma/alpaca-cleaned
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- jeffwan/sharegpt_vicuna
- qwedsacf/grade-school-math-instructions
- vicgalle/alpaca-gpt4
language:
- en
tags:
- sft
pipeline_tag: text-generation
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?</s><|assistant|>
- text: <|prompter|>What's the Earth total population</s><|assistant|>
- text: <|prompter|>Write a story about future of AI development</s><|assistant|>
---
# LoRA Adapter for LLaMA 7B trained on more datasets than tloen/alpaca-lora-7b
This repo contains a low-rank adapter for **LLaMA-7b** fit on datasets part of the OpenAssistant project.
You can see sampling results [here](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-03-18_llama_30b_oasst_latcyr_400_sampling_noprefix_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2F8e90ce6504c159d4046991bf37757c108aed913f%2Fsampling_reports%2Foasst-sft%2Freport_file_jordiclive_alpaca_gpt4-dolly_15k-vicuna-lora-7b_full_lottery_no_prefix.json). Note that the sampling parameters are not necessarily optimal; they are the OpenAssistant defaults used for comparing models.
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Max Length: 2048
- Learning rate: 8e-6
- Lora _r_: 16
- Lora Alpha: 32
- Lora target modules: q_proj, k_proj, v_proj, o_proj
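The LoRA hyperparameters above define two low-rank matrices per target module; at inference the effective weight is W + (alpha/r)·B·A on top of the frozen base weight. A minimal numpy sketch of that update (dimensions are illustrative, not the LLaMA-7B shapes; `r` and `alpha` match the card):

```python
import numpy as np

d, r, alpha = 64, 16, 32   # rank and alpha from the card; d is an illustrative dim
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))       # frozen base weight (e.g. q_proj)
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                  # B starts at zero, so the adapter is a no-op

W_eff = W + (alpha / r) * (B @ A)     # scaled low-rank update
print(np.allclose(W_eff, W))          # untrained adapter leaves the base unchanged
```

Only A and B (2·d·r parameters per module) are trained, which is why the adapter checkpoint is tiny relative to the 7B base model.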
The model was trained with flash attention and gradient checkpointing.
## Dataset Details
- dolly15k:
val_split: 0.05
max_val_set: 300
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
val_split: 0.05
- vicuna:
val_split: 0.05
max_val_set: 800
fraction: 0.8
- grade_school_math_instructions:
val_split: 0.05
- code_alpaca:
val_split: 0.05
max_val_set: 250
- alpaca_gpt4:
val_split: 0.02
max_val_set: 250
## Model Details
- **Developed** as part of the OpenAssistant Project
- **Model type:** PEFT Adapter for frozen LLaMA
- **Language:** English
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
# Example Inference Code
Note: several special-token embeddings need to be loaded along with the LoRA weights; the example assumes a GPU and torch.float16.
```python
from typing import List, NamedTuple
import torch
import transformers
from huggingface_hub import hf_hub_download
from peft import PeftModel
from transformers import GenerationConfig
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = transformers.AutoTokenizer.from_pretrained("jordiclive/alpaca_gpt4-dolly_15k-vicuna-lora-7b")
model = transformers.AutoModelForCausalLM.from_pretrained(
"decapoda-research/llama-7b-hf", torch_dtype=torch.float16
) # Load Base Model
model.resize_token_embeddings(
len(tokenizer)
) # This model repo also contains several embeddings for special tokens that need to be loaded.
model.config.eos_token_id = tokenizer.eos_token_id
model.config.bos_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id
lora_weights = "jordiclive/alpaca_gpt4-dolly_15k-vicuna-lora-7b"
model = PeftModel.from_pretrained(
model,
lora_weights,
torch_dtype=torch.float16,
) # Load Lora model
model.eos_token_id = tokenizer.eos_token_id
filename = hf_hub_download("jordiclive/alpaca_gpt4-dolly_15k-vicuna-lora-7b", "extra_embeddings.pt")
embed_weights = torch.load(
filename, map_location=torch.device("cuda" if torch.cuda.is_available() else "cpu")
) # Load embeddings for special tokens
model.base_model.model.model.embed_tokens.weight[32000:, :] = embed_weights.to(
model.base_model.model.model.embed_tokens.weight.dtype
).to(
device
) # Add special token embeddings
model = model.half().to(device)
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=4,
)
def format_system_prompt(prompt, eos_token="</s>"):
return "{}{}{}{}".format(
"<|prompter|>",
prompt,
eos_token,
"<|assistant|>"
)
def generate(prompt, generation_config=generation_config, max_new_tokens=2048, device=device):
prompt = format_system_prompt(prompt) # OpenAssistant Prompt Format expected
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=max_new_tokens,
eos_token_id=2,
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
print("Text generated:")
print(output)
return output
generate("What is a meme, and what's the history behind this word?")
generate("What's the Earth total population")
generate("Write a story about future of AI development")
```
|
usix79/poca-SoccerTwos
|
usix79
| 2023-05-05T11:08:40Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-05-05T11:08:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: usix79/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
IsakG/declension_error_detection
|
IsakG
| 2023-05-05T10:59:36Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"Icelandic",
"Fallbeyging",
"Declension",
"Inflection",
"GED",
"IceBERT",
"is",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-04T22:25:50Z |
---
language:
- is
tags:
- Icelandic
- Fallbeyging
- Declension
- Inflection
- GED
- IceBERT
---
Add an Icelandic sentence into the text box, and the model will return a classification of either correct or incorrect declension.
Bættu íslenskri setningu inn í textareitinn og líkanið mun skila flokkun með annað hvort rétta eða ranga beygingu
|
yagmurery/bert-base-uncased-finetuned-batchSize-cola-64
|
yagmurery
| 2023-05-05T10:50:35Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-05T10:44:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-batchSize-cola-64
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5961744294806522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-batchSize-cola-64
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0984
- Matthews Correlation: 0.5962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 1.2908 | 0.5651 |
| No log | 2.0 | 268 | 1.1057 | 0.5729 |
| No log | 3.0 | 402 | 1.0984 | 0.5962 |
| 0.0195 | 4.0 | 536 | 1.1799 | 0.5753 |
| 0.0195 | 5.0 | 670 | 1.2076 | 0.5804 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
psin/summarizing_news
|
psin
| 2023-05-05T10:37:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-05T09:57:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarizing_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarizing_news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5292
- Rouge1: 0.384
- Rouge2: 0.1554
- Rougel: 0.3376
- Rougelsum: 0.3377
- Gen Len: 18.8513
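The Rouge1 score above is a unigram-overlap F-measure between the generated and reference summaries. A rough sketch of the computation (simplified whitespace tokenization, no stemming; real evaluation uses the `rouge_score` package):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram ROUGE F1 with whitespace tokenization (simplified)."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if not overlap:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat", "the cat sat on the mat"))  # high precision, partial recall
```

Rouge2 and RougeL follow the same F-measure shape over bigrams and longest common subsequences respectively.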
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 63 | 3.0459 | 0.3393 | 0.1259 | 0.2985 | 0.2986 | 18.9927 |
| No log | 2.0 | 126 | 2.7214 | 0.3699 | 0.1458 | 0.3255 | 0.3257 | 18.9666 |
| No log | 3.0 | 189 | 2.5743 | 0.3805 | 0.153 | 0.3345 | 0.3347 | 18.8972 |
| No log | 4.0 | 252 | 2.5292 | 0.384 | 0.1554 | 0.3376 | 0.3377 | 18.8513 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
consolida/ateliersophie
|
consolida
| 2023-05-05T10:36:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-05T09:57:07Z |
Sophie training model.
Example invocation prompt:
shs, 1girl, solo, jewelry, corset, blush, necklace, coat, ahoge, brown hair, head scarf, short hair, brown eyes, collared coat, closed mouth, blue coat, open coat, long sleeves, red eyes
|
pnparam/swlosof02_2
|
pnparam
| 2023-05-05T10:35:42Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-05T09:51:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: swlosof02_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swlosof02_2
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kindlytree/demo
|
kindlytree
| 2023-05-05T10:33:23Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:Linaqruf/anything-v3.0",
"base_model:adapter:Linaqruf/anything-v3.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-04T13:21:11Z |
---
license: creativeml-openrail-m
base_model: Linaqruf/anything-v3.0
instance_prompt: shanshui
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - kindlytree/lora-outputs
These are LoRA adaptation weights for Linaqruf/anything-v3.0. The weights were trained on shanshui using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
cansurav/bert-base-uncased-finetuned-cola-learning_rate-0.0001
|
cansurav
| 2023-05-05T10:24:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-05T10:02:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-0.0001
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-0.0001
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7459
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6205 | 1.0 | 535 | 0.7459 | 0.0 |
| 0.6218 | 2.0 | 1070 | 0.6288 | 0.0 |
| 0.6166 | 3.0 | 1605 | 0.6181 | 0.0 |
| 0.6196 | 4.0 | 2140 | 0.6279 | 0.0 |
| 0.6137 | 5.0 | 2675 | 0.6202 | 0.0 |
| 0.6138 | 6.0 | 3210 | 0.6203 | 0.0 |
| 0.6074 | 7.0 | 3745 | 0.6184 | 0.0 |
| 0.6128 | 8.0 | 4280 | 0.6220 | 0.0 |
| 0.6073 | 9.0 | 4815 | 0.6183 | 0.0 |
| 0.6113 | 10.0 | 5350 | 0.6196 | 0.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| muhammadravi251001/fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-with-freeze-LR-1e-05 | muhammadravi251001 | 2023-05-05T10:17:01Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us"] | question-answering | 2023-05-05T08:42:49Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-with-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-with-freeze-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3132
- Exact Match: 53.2628
- F1: 68.3641
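For reference, the Exact Match and F1 figures above are the usual SQuAD-style answer-span metrics: exact string match and token-level overlap F1. A minimal sketch (illustrative; not the exact evaluation script used for this model):

```python
from collections import Counter

def exact_match(prediction, reference):
    return float(prediction.strip() == reference.strip())

def token_f1(prediction, reference):
    pred_tokens, ref_tokens = prediction.split(), reference.split()
    # Multiset intersection counts each shared token at most min(count) times.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("jakarta adalah ibu kota", "ibu kota jakarta"))  # 3/4 precision, 3/3 recall
```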
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
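The gradient accumulation above reaches the effective batch of 128 by averaging gradients over 16 micro-batches of 8 examples each before applying a single optimizer step. A framework-agnostic toy sketch of the bookkeeping (plain SGD on one scalar parameter, purely illustrative):

```python
def train_with_accumulation(micro_grads, accumulation_steps, lr):
    """One scalar parameter updated by SGD once per `accumulation_steps` gradients."""
    param, buffer, optimizer_steps = 0.0, 0.0, 0
    for i, g in enumerate(micro_grads, start=1):
        buffer += g / accumulation_steps       # running average of micro-batch grads
        if i % accumulation_steps == 0:        # step only every 16 micro-batches
            param -= lr * buffer
            buffer = 0.0
            optimizer_steps += 1
    return param, optimizer_steps

# 32 micro-batches with gradient 1.0 -> 2 optimizer steps of size lr each.
print(train_with_accumulation([1.0] * 32, 16, 0.1))
```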
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.3129 | 0.5 | 19 | 3.9006 | 5.6437 | 16.4748 |
| 6.3129 | 1.0 | 38 | 2.8272 | 17.1076 | 30.0839 |
| 3.8917 | 1.5 | 57 | 2.4681 | 18.8713 | 32.8962 |
| 3.8917 | 2.0 | 76 | 2.2891 | 25.3968 | 38.0874 |
| 3.8917 | 2.5 | 95 | 2.1835 | 26.9841 | 39.5053 |
| 2.3963 | 3.0 | 114 | 2.0885 | 28.5714 | 42.0243 |
| 2.3963 | 3.5 | 133 | 1.9971 | 32.4515 | 45.4085 |
| 2.112 | 4.0 | 152 | 1.9124 | 34.3915 | 48.2893 |
| 2.112 | 4.5 | 171 | 1.8358 | 37.0370 | 50.6492 |
| 2.112 | 5.0 | 190 | 1.7545 | 40.7407 | 54.7031 |
| 1.8205 | 5.5 | 209 | 1.6432 | 44.4444 | 58.2669 |
| 1.8205 | 6.0 | 228 | 1.5589 | 46.9136 | 60.8052 |
| 1.8205 | 6.5 | 247 | 1.4861 | 48.1481 | 62.5185 |
| 1.573 | 7.0 | 266 | 1.4381 | 49.7354 | 64.1985 |
| 1.573 | 7.5 | 285 | 1.3944 | 51.6755 | 66.0223 |
| 1.387 | 8.0 | 304 | 1.3534 | 53.2628 | 67.6841 |
| 1.387 | 8.5 | 323 | 1.3384 | 53.0864 | 67.8619 |
| 1.387 | 9.0 | 342 | 1.3344 | 52.9101 | 68.0618 |
| 1.2998 | 9.5 | 361 | 1.3182 | 53.2628 | 68.4149 |
| 1.2998 | 10.0 | 380 | 1.3132 | 53.2628 | 68.3641 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
| jangmin/whisper-small-ko-1159h | jangmin | 2023-05-05T10:13:37Z | 75 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-05-04T22:44:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ko-1159h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ko-1159h
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1752
- Wer: 10.4449
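The WER above is the word-level Levenshtein distance between hypothesis and reference, divided by the reference length and reported as a percentage. A minimal pure-Python sketch (illustrative; not the exact scoring code used):

```python
def wer(reference, hypothesis):
    """Word error rate in percent: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic programming over the edit-distance table.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                           # deletion
                       d[j - 1] + 1,                       # insertion
                       prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return 100.0 * d[-1] / len(ref)

print(wer("a b c d", "a x c d"))  # one substitution out of four words -> 25.0
```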
## Model description
The model was trained to transcribe Korean audio into text.
## Intended uses & limitations
More information needed
## Training and evaluation data
I downloaded all data from AI-HUB (https://aihub.or.kr/) and used two datasets: the "Instruction Audio Set" and the "Noisy Conversation Audio Set".
I gathered 796 hours of audio from the first dataset and 363 hours from the second (these figures cover the training split only and exclude the validation data).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 18483
- mixed_precision_training: Native AMP
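The `linear` scheduler with 100 warmup steps ramps the learning rate from 0 up to 1e-05 over the first 100 steps, then decays it linearly to 0 by step 18483. A small sketch of the multiplier (mirroring, as an assumption, the shape of `transformers.get_linear_schedule_with_warmup`):

```python
def linear_multiplier(step, warmup_steps, total_steps):
    """Learning-rate multiplier at a given optimizer step."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)          # linear ramp 0 -> 1
    return max(0.0, (total_steps - step) /
               max(1, total_steps - warmup_steps))  # linear decay 1 -> 0

base_lr = 1e-05
print(base_lr * linear_multiplier(50, 100, 18483))     # mid-warmup: half the base LR
print(base_lr * linear_multiplier(18483, 100, 18483))  # final step: 0.0
```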
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0953 | 0.33 | 2053 | 0.2155 | 13.0432 |
| 0.0803 | 0.67 | 4106 | 0.1951 | 12.0399 |
| 0.0746 | 1.0 | 6159 | 0.1836 | 11.3995 |
| 0.0509 | 1.33 | 8212 | 0.1819 | 11.0396 |
| 0.0525 | 1.67 | 10265 | 0.1782 | 10.9039 |
| 0.0493 | 2.0 | 12318 | 0.1743 | 10.7255 |
| 0.034 | 2.33 | 14371 | 0.1784 | 10.7377 |
| 0.0326 | 2.67 | 16424 | 0.1765 | 10.5471 |
| 0.0293 | 3.0 | 18477 | 0.1752 | 10.4449 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
| liuliu96/git-base-pokemon | liuliu96 | 2023-05-05T10:05:33Z | 63 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "git", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us"] | image-text-to-text | 2023-05-05T09:15:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0392
- Wer Score: 2.4636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.334 | 4.17 | 50 | 4.5690 | 13.9068 |
| 2.4021 | 8.33 | 100 | 0.4880 | 9.8480 |
| 0.1468 | 12.5 | 150 | 0.0350 | 0.4074 |
| 0.0179 | 16.67 | 200 | 0.0330 | 2.5888 |
| 0.006 | 20.83 | 250 | 0.0355 | 3.7037 |
| 0.0024 | 25.0 | 300 | 0.0373 | 4.7152 |
| 0.0017 | 29.17 | 350 | 0.0377 | 3.8314 |
| 0.0014 | 33.33 | 400 | 0.0385 | 3.2516 |
| 0.0012 | 37.5 | 450 | 0.0387 | 3.1609 |
| 0.0011 | 41.67 | 500 | 0.0390 | 2.6105 |
| 0.0011 | 45.83 | 550 | 0.0391 | 2.7650 |
| 0.0011 | 50.0 | 600 | 0.0392 | 2.4636 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| yagmurery/bert-base-uncased-finetuned-dropout-cola-0.2 | yagmurery | 2023-05-05T10:03:38Z | 109 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T09:20:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-dropout-cola-0.2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5957317644481708
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-dropout-cola-0.2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8150
- Matthews Correlation: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4985 | 1.0 | 535 | 0.5022 | 0.4978 |
| 0.3168 | 2.0 | 1070 | 0.4357 | 0.5836 |
| 0.2116 | 3.0 | 1605 | 0.6536 | 0.5365 |
| 0.149 | 4.0 | 2140 | 0.8150 | 0.5957 |
| 0.0911 | 5.0 | 2675 | 0.8846 | 0.5838 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| cansurav/bert-base-uncased-finetuned-cola-learning_rate-8e-06 | cansurav | 2023-05-05T10:02:23Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T09:48:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-8e-06
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5752615459764325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-8e-06
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8389
- Matthews Correlation: 0.5753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5241 | 1.0 | 535 | 0.4659 | 0.5046 |
| 0.3755 | 2.0 | 1070 | 0.4412 | 0.5650 |
| 0.2782 | 3.0 | 1605 | 0.5524 | 0.5395 |
| 0.2154 | 4.0 | 2140 | 0.6437 | 0.5651 |
| 0.1669 | 5.0 | 2675 | 0.7709 | 0.5650 |
| 0.1503 | 6.0 | 3210 | 0.8389 | 0.5753 |
| 0.1151 | 7.0 | 3745 | 0.8964 | 0.5681 |
| 0.1082 | 8.0 | 4280 | 0.9767 | 0.5548 |
| 0.0816 | 9.0 | 4815 | 0.9978 | 0.5498 |
| 0.0809 | 10.0 | 5350 | 1.0170 | 0.5576 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| kws/a2c-AntBulletEnv-v0 | kws | 2023-05-05T09:57:17Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-09-12T10:04:34Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1587.19 +/- 175.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename; adjust to the actual .zip stored in this repo.
checkpoint = load_from_hub(repo_id="kws/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
| Pika62/kogpt2-base-v2-finetuned-klue-ner | Pika62 | 2023-05-05T09:54:48Z | 108 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "token-classification", "generated_from_trainer", "dataset:klue", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | token-classification | 2023-05-03T03:57:42Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.2122585806255
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4057
- F1: 0.2123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4952 | 1.0 | 876 | 0.4714 | 0.1416 |
| 0.354 | 2.0 | 1752 | 0.4263 | 0.1849 |
| 0.2812 | 3.0 | 2628 | 0.4057 | 0.2123 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| meltemtatli/bert-base-uncased-finetuned-cola-trying | meltemtatli | 2023-05-05T09:48:15Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-04T22:09:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-trying
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5318380398617779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-trying
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4377
- Matthews Correlation: 0.5318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4603 | 1.0 | 535 | 0.4377 | 0.5318 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| cansurav/bert-base-uncased-finetuned-cola-learning_rate-9e-06 | cansurav | 2023-05-05T09:47:52Z | 108 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T09:33:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-9e-06
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5753593483598531
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-9e-06
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9848
- Matthews Correlation: 0.5754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5061 | 0.4717 |
| 0.3617 | 2.0 | 1070 | 0.4769 | 0.5701 |
| 0.2584 | 3.0 | 1605 | 0.5299 | 0.5625 |
| 0.1998 | 4.0 | 2140 | 0.6801 | 0.5629 |
| 0.1492 | 5.0 | 2675 | 0.8519 | 0.5446 |
| 0.1323 | 6.0 | 3210 | 0.9372 | 0.5624 |
| 0.103 | 7.0 | 3745 | 0.9424 | 0.5753 |
| 0.0949 | 8.0 | 4280 | 0.9848 | 0.5754 |
| 0.0718 | 9.0 | 4815 | 1.0474 | 0.5652 |
| 0.0629 | 10.0 | 5350 | 1.0657 | 0.5731 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| cansurav/bert-base-uncased-finetuned-cola-learning_rate-4e-05 | cansurav | 2023-05-05T09:33:19Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T09:18:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-4e-05
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5732046470010711
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-4e-05
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3213
- Matthews Correlation: 0.5732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5002 | 1.0 | 535 | 0.5568 | 0.4891 |
| 0.2954 | 2.0 | 1070 | 0.5052 | 0.5210 |
| 0.1976 | 3.0 | 1605 | 0.7016 | 0.5033 |
| 0.1367 | 4.0 | 2140 | 0.9378 | 0.5628 |
| 0.0889 | 5.0 | 2675 | 1.0129 | 0.5470 |
| 0.0555 | 6.0 | 3210 | 1.1484 | 0.5575 |
| 0.0431 | 7.0 | 3745 | 1.1081 | 0.5527 |
| 0.028 | 8.0 | 4280 | 1.1268 | 0.5697 |
| 0.0192 | 9.0 | 4815 | 1.3071 | 0.5627 |
| 0.013 | 10.0 | 5350 | 1.3213 | 0.5732 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| BlueAvenir/sti_security_class_model | BlueAvenir | 2023-05-05T09:26:22Z | 5 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2023-05-05T09:26:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 228 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 228,
"warmup_steps": 23,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| cansurav/bert-base-uncased-finetuned-cola-learning_rate-3e-05 | cansurav | 2023-05-05T09:18:51Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-04T18:07:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-3e-05
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5881177177003271
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-3e-05
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0201
- Matthews Correlation: 0.5881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4873 | 1.0 | 535 | 0.6048 | 0.4571 |
| 0.2844 | 2.0 | 1070 | 0.5333 | 0.5521 |
| 0.1893 | 3.0 | 1605 | 0.7435 | 0.5574 |
| 0.1362 | 4.0 | 2140 | 0.7142 | 0.5825 |
| 0.0924 | 5.0 | 2675 | 0.8334 | 0.5625 |
| 0.0596 | 6.0 | 3210 | 1.0201 | 0.5881 |
| 0.0496 | 7.0 | 3745 | 1.0777 | 0.5686 |
| 0.03 | 8.0 | 4280 | 1.2245 | 0.5630 |
| 0.0122 | 9.0 | 4815 | 1.3665 | 0.5701 |
| 0.0111 | 10.0 | 5350 | 1.4043 | 0.5778 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| yagmurery/bert-base-uncased-finetuned-learningRate-2-cola-4e-05 | yagmurery | 2023-05-05T09:16:26Z | 110 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T09:08:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-learningRate-2-cola-4e-05
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.539019545585709
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-learningRate-2-cola-4e-05
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2969
- Matthews Correlation: 0.5390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1286 | 1.0 | 535 | 0.9932 | 0.5235 |
| 0.0942 | 2.0 | 1070 | 1.1242 | 0.5229 |
| 0.1325 | 3.0 | 1605 | 0.9707 | 0.5203 |
| 0.0916 | 4.0 | 2140 | 1.0752 | 0.5313 |
| 0.0403 | 5.0 | 2675 | 1.2969 | 0.5390 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| yagmurery/bert-base-uncased-finetuned-learningRate-2-cola-3e-05 | yagmurery | 2023-05-05T09:08:39Z | 109 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T09:00:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-learningRate-2-cola-3e-05
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5907527969578087
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-learningRate-2-cola-3e-05
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8555
- Matthews Correlation: 0.5908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2022 | 1.0 | 535 | 0.9205 | 0.5285 |
| 0.1155 | 2.0 | 1070 | 0.8555 | 0.5908 |
| 0.1312 | 3.0 | 1605 | 0.9399 | 0.5496 |
| 0.0956 | 4.0 | 2140 | 1.0178 | 0.5577 |
| 0.048 | 5.0 | 2675 | 1.1525 | 0.5528 |
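For reference, the Matthews correlation reported above is computed from confusion-matrix counts; a minimal pure-Python sketch (illustrative only, not the evaluation code used for this card):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    # Confusion-matrix counts for binary labels (0/1).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Degenerate confusion matrices (a zero row or column) are defined as 0.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(matthews_corrcoef([1, 0, 1, 0], [1, 0, 1, 0]))  # 1.0 (perfect agreement)
```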
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
superqing/pangu-evolution
|
superqing
| 2023-05-05T09:08:09Z | 14 | 0 |
transformers
|
[
"transformers",
"gpt_pangu",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-03-31T06:39:43Z |
---
license: apache-2.0
---
## Introduction
PanGu-Alpha-Evolution is an enhanced version of PanGu-Alpha that understands and processes tasks better and follows your task description more closely. More technical details will be added over time; stay tuned.
[[Technical report](https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-Alpha/src/branch/master/PANGU-%ce%b1.pdf)]
### Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("superqing/pangu-evolution")
model = AutoModelForCausalLM.from_pretrained("superqing/pangu-evolution", trust_remote_code=True)
```
|
asenella/reproduce_jmvae_seed_2
|
asenella
| 2023-05-05T09:07:52Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-03T12:24:28Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
mHossain/bangla-para-v1-230000
|
mHossain
| 2023-05-05T08:48:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-05T07:10:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bangla-para-v1-230000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla-para-v1-230000
This model is a fine-tuned version of [mHossain/bangla-para-v1-200000](https://huggingface.co/mHossain/bangla-para-v1-200000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9594
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 18.258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1
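The linear scheduler with 5000 warmup steps above ramps the learning rate from 0 to the peak, then decays it linearly to 0 by the end of training (6750 steps here, per the results table). A small sketch of that schedule shape (an illustration only, not the Trainer's internals):

```python
def linear_schedule_with_warmup(step, peak_lr=2e-05, warmup_steps=5000, total_steps=6750):
    # Linear ramp-up during warmup, then linear decay to zero.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_with_warmup(2500))  # halfway through warmup -> 1e-05
print(linear_schedule_with_warmup(6750))  # end of training -> 0.0
```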
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2415 | 1.0 | 6750 | 0.9594 | 0.0 | 0.0 | 0.0 | 0.0 | 18.258 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BerserkerMother/ppo-LunarLander-v2
|
BerserkerMother
| 2023-05-05T08:40:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T08:40:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.99 +/- 15.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
NightOcean/naruto-blip-captions
|
NightOcean
| 2023-05-05T08:11:20Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-05T03:50:54Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - NightOcean/naruto-blip-captions
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/naruto-blip-captions dataset. You can find some example images in the following.




|
DreamPerson/vae
|
DreamPerson
| 2023-05-05T08:08:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T07:15:01Z |
---
license: creativeml-openrail-m
---
|
mayank-mishra/starcoder-GPTQ-4bit-128g
|
mayank-mishra
| 2023-05-05T08:05:09Z | 0 | 16 | null |
[
"arxiv:2210.17323",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-05-05T07:57:55Z |
---
license: bigcode-openrail-m
---
# GPTQ-for-StarCoder
Visit [GPTQ-for-SantaCoder](https://github.com/mayank31398/GPTQ-for-SantaCoder) for instructions on how to use the model weights here.
If you want 8-bit weights, visit [starcoder-GPTQ-8bit-128g](https://huggingface.co/mayank31398/starcoder-GPTQ-8bit-128g).
## Results
| StarCoder | Bits | group-size | memory(MiB) | wikitext2 | ptb | c4 | stack | checkpoint size(MB) |
| -------------------------------------------------- | ---- | ---------- | ----------- | --------- | ---------- | ---------- | ---------- | ------------------- |
| FP32 | 32 | - | | 10.801 | 16.425 | 13.402 | 1.738 | 59195 |
| BF16 | 16 | - | | 10.807 | 16.424 | 13.408 | 1.739 | 29597 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 8 | 128 | | 10.805 | 15.453 | 13.408 | 1.739 | 16163 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | 128 | | 10.989 | 16.839 | 13.676 | 1.757 | 8877 |
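The 4-bit, group-size-128 rows correspond to weights stored per group with a shared scale; a toy round-trip sketch of symmetric per-group quantization (an illustration of the storage idea only — GPTQ itself additionally chooses quantized values to minimize layer output error, per the linked paper):

```python
def quantize_group(weights, bits=4):
    # Symmetric quantization: one shared scale per weight group.
    levels = 2 ** (bits - 1) - 1                      # 7 for signed 4-bit
    scale = max(abs(w) for w in weights) / levels or 1.0
    q = [round(w / scale) for w in weights]           # small signed integers
    return q, scale

def dequantize_group(q, scale):
    return [v * scale for v in q]

group = [0.7, -0.35, 0.1, 0.02]                       # one (tiny) group of weights
q, scale = quantize_group(group)
restored = dequantize_group(q, scale)
err = max(abs(a - b) for a, b in zip(group, restored))
print(q, err)  # integers in [-7, 7]; reconstruction error bounded by scale/2
```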
# License
The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license).
# Acknowledgements
Thanks to everyone in BigCode who worked so hard to create these code models.
|
asenella/reproduce_jmvae_seed_1
|
asenella
| 2023-05-05T08:00:34Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-03T12:08:45Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
hohai/bert-finetuned-colab-ner2
|
hohai
| 2023-05-05T07:59:26Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-27T09:03:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hohai/bert-finetuned-colab-ner2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hohai/bert-finetuned-colab-ner2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0198
- Validation Loss: 0.0135
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2640, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1459 | 0.0333 | 0 |
| 0.0328 | 0.0170 | 1 |
| 0.0198 | 0.0135 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mayank-mishra/starcoderbase-GPTQ-8bit-128g
|
mayank-mishra
| 2023-05-05T07:58:54Z | 0 | 3 | null |
[
"arxiv:2210.17323",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-05-04T20:05:04Z |
---
license: bigcode-openrail-m
---
# GPTQ-for-StarCoder
Visit [GPTQ-for-SantaCoder](https://github.com/mayank31398/GPTQ-for-SantaCoder) for instructions on how to use the model weights here.
If you want 4-bit weights, visit [starcoderbase-GPTQ-4bit-128g](https://huggingface.co/mayank31398/starcoderbase-GPTQ-4bit-128g).
## Results
| StarCoderBase | Bits | group-size | memory(MiB) | wikitext2 | ptb | c4 | stack | checkpoint size(MB) |
| -------------------------------------------------- | ---- | ---------- | ----------- | --------- | ---------- | ---------- | ---------- | ------------------- |
| FP32 | 32 | - | | 10.172 | 15.756 | 12.736 | 1.692 | 59195 |
| BF16 | 16 | - | | 10.173 | 15.765 | 12.745 | 1.692 | 29597 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 8 | 128 | | 10.174 | 15.767 | 12.739 | 1.692 | 16163 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | 128 | | 10.387 | 16.056 | 13.005 | 1.708 | 8877 |
# License
The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license).
# Acknowledgements
Thanks to everyone in BigCode who worked so hard to create these code models.
|
VISHWAJITT21/finetuning-sentiment-model-3000-samples
|
VISHWAJITT21
| 2023-05-05T07:42:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:twitter-sentiment-analysis",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-05T07:14:03Z |
---
tags:
- generated_from_trainer
datasets:
- twitter-sentiment-analysis
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the twitter-sentiment-analysis dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MingMingBang98/kogpt2-base-v2-finetuned-klue-ner
|
MingMingBang98
| 2023-05-05T07:41:11Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-05T07:28:09Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.37298165525403665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4076
- F1: 0.3730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6084 | 1.0 | 876 | 0.5353 | 0.2118 |
| 0.3911 | 2.0 | 1752 | 0.4691 | 0.3041 |
| 0.2855 | 3.0 | 2628 | 0.4076 | 0.3730 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GregoRio123/ykk
|
GregoRio123
| 2023-05-05T07:32:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T07:30:40Z |
---
license: creativeml-openrail-m
---
|
Bisht0538/sumarrizer
|
Bisht0538
| 2023-05-05T07:28:38Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-05-05T07:03:05Z |
---
license: openrail
---
Dependencies:
- transformer
- youtube_transcript_api
- summerizer
- pipeline
|
DataVare/datavare-mbox-to-pst-converter
|
DataVare
| 2023-05-05T07:09:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-05T07:08:16Z |
DataVare MBOX to PST Conversion Tool is an advanced automatic tool that exports MBOX files to the PST file format in just a few simple steps. It converts many MBOX files in one batch without losing or damaging any data, in a completely safe and secure environment. The converted files open in MS Outlook versions 2003, 2007, 2010, 2013, 2016, and 2019, and the user-friendly interface is compatible with all versions of Windows as well as Mac. Both technical and non-technical users can use the software's conversion options to transfer every email from MBOX to PST. It offers a number of sophisticated capabilities behind a few simple instructions that anyone can follow, and a free demo version lets users test the tool's capabilities and usability before making a purchase.
Read more :- https://www.datavare.com/software/mbox-to-pst-converter-expert.html
|
Morrira/Mybeautifulgirl
|
Morrira
| 2023-05-05T06:55:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T06:55:55Z |
---
license: creativeml-openrail-m
---
|
SHENMU007/neunit_BASE_V5.1
|
SHENMU007
| 2023-05-05T06:55:17Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-05-05T02:05:26Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
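With `gradient_accumulation_steps: 4` and a per-device batch of 8, gradients from 4 micro-batches are accumulated before each optimizer update, giving the total train batch size of 32 listed above. A pure-Python sketch of the pattern (scalar gradients as stand-ins, not the actual Trainer loop):

```python
def train_with_accumulation(micro_batch_grads, accum_steps=4):
    # Average gradients over `accum_steps` micro-batches, then "step" once.
    steps = []
    buffer, count = 0.0, 0
    for g in micro_batch_grads:
        buffer += g
        count += 1
        if count == accum_steps:
            steps.append(buffer / accum_steps)  # one optimizer update
            buffer, count = 0.0, 0
    return steps

# 8 micro-batches -> 2 optimizer steps, each averaging 4 gradients.
print(train_with_accumulation([1, 2, 3, 4, 5, 6, 7, 8]))  # [2.5, 6.5]
```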
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
|
shichen/13
|
shichen
| 2023-05-05T06:48:30Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-05T03:59:52Z |
---
license: bigscience-openrail-m
---
|
DrishtiSharma/LunarLander-v2-CleanRL
|
DrishtiSharma
| 2023-05-05T06:26:04Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T06:25:57Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -155.35 +/- 78.63
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
botp/stable-diffusion-v1-5-inpainting
|
botp
| 2023-05-05T06:23:14Z | 4,054 | 10 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] |
text-to-image
| 2023-05-05T06:23:14Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
library_name: diffusers
extra_gated_prompt: >-
One more step before getting this model.
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
By clicking on "Access repository" below, you accept that your *contact
information* (email address and username) can be shared with the model authors
as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
duplicated_from: runwayml/stable-diffusion-inpainting
---
Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask.
The **Stable-Diffusion-Inpainting** was initialized with the weights of the [Stable-Diffusion-v-1-2](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original) checkpoint: first 595k steps of regular training, then 440k steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). For inpainting, the UNet has 5 additional input channels (4 for the encoded masked image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25% of cases, mask everything.
[](https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting) | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
:-------------------------:|:-------------------------:|
## Examples:
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```python
import torch
from diffusers import StableDiffusionInpaintPipeline
pipe = StableDiffusionInpaintPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
revision="fp16",
torch_dtype=torch.float16,
)
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
#image and mask_image should be PIL images.
#The mask structure is white for inpainting and black for keeping as is
image = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0]
image.save("./yellow_cat_on_park_bench.png")
```
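As the comments in the snippet above note, the mask is white (255) where the model should repaint and black (0) where the original pixels are kept; a tiny sketch of that convention on raw pixel lists (illustrative, independent of PIL):

```python
def apply_mask(original, generated, mask):
    # White (255) mask pixels take the generated value; black (0) keeps the original.
    return [g if m == 255 else o for o, g, m in zip(original, generated, mask)]

original  = [10, 20, 30, 40]
generated = [99, 99, 99, 99]
mask      = [0, 255, 255, 0]
print(apply_mask(original, generated, mask))  # [10, 99, 99, 40]
```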
**How it works:**
`image` | `mask_image`
:-------------------------:|:-------------------------:|
<img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="300"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="300"/>
`prompt` | `Output`
:-------------------------:|:-------------------------:|
<span style="position: relative;bottom: 150px;">Face of a yellow cat, high resolution, sitting on a park bench</span> | <img src="https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/test.png" alt="drawing" width="300"/>
### Original GitHub Repository
1. Download the weights [sd-v1-5-inpainting.ckpt](https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt)
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/runwayml/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
    @InProceedings{Rombach_2022_CVPR,
        author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
        title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {10684-10695}
    }
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
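The encoder's relative downsampling factor of 8 means a 512x512x3 image becomes a 64x64x4 latent; a quick sketch of that shape arithmetic (with f and the 4 latent channels as described above):

```python
def latent_shape(height, width, f=8, channels=4):
    # H x W x 3 image -> H/f x W/f x 4 latent (f is the VAE downsampling factor).
    assert height % f == 0 and width % f == 0
    return (height // f, width // f, channels)

print(latent_shape(512, 512))  # (64, 64, 4)
```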
We currently provide six checkpoints (`sd-v1-1.ckpt`, `sd-v1-2.ckpt`, `sd-v1-3.ckpt`, `sd-v1-4.ckpt`, `sd-v1-5.ckpt`, and `sd-v1-5-inpainting.ckpt`), which were trained as follows:
- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- `sd-v1-4.ckpt`: Resumed from `sd-v1-2.ckpt`. 225k steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- `sd-v1-5.ckpt`: Resumed from sd-v1-2.ckpt. 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
- `sd-v1-5-inpainting.ckpt`: Resumed from `sd-v1-2.ckpt`. 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling. Then 440k steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25% of cases, mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
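The effective batch of 2048 listed above is the product of the four factors in the card (nodes × GPUs per node × gradient-accumulation steps × per-device batch); a quick check of the arithmetic:

```python
# Effective batch size = nodes * gpus_per_node * grad_accum * per_device_batch
nodes = 32
gpus_per_node = 8
grad_accum = 2
per_device_batch = 4

effective_batch = nodes * gpus_per_node * grad_accum * per_device_batch
print(effective_batch)  # 2048
```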
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
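Each guidance scale in the sweep above changes how the per-step noise prediction is formed. A minimal sketch of the classifier-free guidance combination rule (a scale of 1.0 recovers the purely conditional prediction; the NumPy arrays stand in for the model's noise estimates):

```python
import numpy as np

def cfg_combine(eps_uncond: np.ndarray, eps_cond: np.ndarray, scale: float) -> np.ndarray:
    # Classifier-free guidance: push the prediction away from the
    # unconditional estimate by the guidance scale.
    return eps_uncond + scale * (eps_cond - eps_uncond)

uncond = np.zeros(4)
cond = np.ones(4)
print(cfg_combine(uncond, cond, 1.0))  # equals the conditional prediction
print(cfg_combine(uncond, cond, 7.5))  # amplified away from the unconditional one
```

Higher scales trade diversity for prompt adherence, which is what the FID-vs-CLIP sweep in the figure measures.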
## Inpainting Evaluation
To assess the performance of the inpainting model, we used the same evaluation
protocol as in our [LDM paper](https://arxiv.org/abs/2112.10752). Since the
Stable Diffusion Inpainting Model accepts a text input, we simply used a fixed
prompt of `photograph of a beautiful empty scene, highest quality settings`.
| Model | FID | LPIPS |
|-----------------------------|------|------------------|
| Stable Diffusion Inpainting | 1.00 | 0.141 (+- 0.082) |
| Latent Diffusion Inpainting | 1.50 | 0.137 (+- 0.080) |
| CoModGAN | 1.82 | 0.15 |
| LaMa | 2.21 | 0.134 (+- 0.080) |
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
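The reported figure follows the calculator's formula, power × time × grid carbon intensity. The sketch below reproduces it; the power draw and carbon-intensity values are assumptions chosen for illustration (they happen to match the reported total):

```python
hours = 150_000
power_kw = 0.25          # A100 PCIe 40GB TDP, ~250 W (assumption)
kg_co2_per_kwh = 0.3     # assumed carbon intensity for the US-east grid

emissions_kg = hours * power_kw * kg_co2_per_kwh
print(emissions_kg)  # 11250.0
```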
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
Sjdan/switch_loso_m07_1
|
Sjdan
| 2023-05-05T06:19:33Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-05T04:54:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: switch_loso_m07_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch_loso_m07_1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/reproduce_jmvae_seed_8
|
asenella
| 2023-05-05T06:16:11Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-03T12:00:25Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
rsonavane/flan-t5-xl-alpaca-dolly-lora-peft
|
rsonavane
| 2023-05-05T06:11:03Z | 5 | 1 |
peft
|
[
"peft",
"pytorch",
"t5",
"adapter",
"flan-t5",
"lora",
"text2text-generation",
"en",
"ja",
"de",
"fr",
"multilingual",
"dataset:yahma/alpaca-cleaned",
"dataset:databricks/databricks-dolly-15k",
"dataset:samsum",
"8-bit",
"region:us"
] |
text2text-generation
| 2023-05-04T22:08:55Z |
---
datasets:
- yahma/alpaca-cleaned
- databricks/databricks-dolly-15k
- samsum
pipeline_tag: text2text-generation
tags:
- t5
- adapter
- flan-t5
- peft
- lora
language:
- en
- ja
- de
- fr
- multilingual
---
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Load peft config for pre-trained checkpoint etc.
peft_model_id = "rsonavane/flan-t5-xl-alpaca-dolly-lora-peft"
config = PeftConfig.from_pretrained(peft_model_id)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0})
```
## Prompt generation
```python
def generate_prompt(instruction: str, input_ctxt: str = "") -> str:
if input_ctxt:
return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input_ctxt}
### Response:"""
else:
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:"""
```
## Inference
```python
input_ctxt = ""
instruction = ""
input_text = generate_prompt(instruction, input_ctxt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
## Training Details
Intended for conversation analysis, closed QnA, and summarization.
Trained on instructions from the dolly-15k, alpaca-52k, and samsum datasets.
|
Dyoltay/ppo-LunarLander-v2
|
Dyoltay
| 2023-05-05T06:10:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T06:10:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.87 +/- 22.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub(
    repo_id="Dyoltay/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
anitha67/my_awesome_model
|
anitha67
| 2023-05-05T06:07:48Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-04T11:53:34Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: anitha67/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# anitha67/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0657
- Validation Loss: 0.2130
- Train Accuracy: 0.9325
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2542 | 0.2212 | 0.9096 | 0 |
| 0.1335 | 0.1956 | 0.9249 | 1 |
| 0.0657 | 0.2130 | 0.9325 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
botp/LOFI1
|
botp
| 2023-05-05T06:03:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T06:03:32Z |
---
license: creativeml-openrail-m
duplicated_from: DucHaiten/DucHaiten-LoFi
---
|
Vignesh-Trender/my_awesome_model
|
Vignesh-Trender
| 2023-05-05T06:02:21Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-04T11:46:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Vignesh-Trender/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Vignesh-Trender/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1294
- Validation Loss: 0.2072
- Train Accuracy: 0.9230
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2500 | 0.1823 | 0.9293 | 0 |
| 0.1294 | 0.2072 | 0.9230 | 1 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
botp/LOFI21
|
botp
| 2023-05-05T06:01:14Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T06:01:14Z |
---
license: creativeml-openrail-m
duplicated_from: jtamph/LOFI
---
|
navien523/JtveemoH
|
navien523
| 2023-05-05T05:59:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T05:59:04Z |
---
license: creativeml-openrail-m
---
|
AnshulRustogi/bert-finetuned-multilingual-xquad2
|
AnshulRustogi
| 2023-05-05T05:13:11Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-05T04:13:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-multilingual-xquad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-multilingual-xquad2
This model is a fine-tuned version of [AnshulRustogi/bert-base-multilingual-cased1](https://huggingface.co/AnshulRustogi/bert-base-multilingual-cased1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 209 | 1.4790 |
| No log | 2.0 | 418 | 1.3976 |
| 1.5107 | 3.0 | 627 | 1.3624 |
| 1.5107 | 4.0 | 836 | 1.3265 |
| 1.1003 | 5.0 | 1045 | 1.3174 |
| 1.1003 | 6.0 | 1254 | 1.3216 |
| 1.1003 | 7.0 | 1463 | 1.3219 |
| 0.9379 | 8.0 | 1672 | 1.3234 |
| 0.9379 | 9.0 | 1881 | 1.3234 |
| 0.8494 | 10.0 | 2090 | 1.3256 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hermanshid/opus-mt-finetuned-id-to-jv
|
hermanshid
| 2023-05-05T05:04:41Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"jv",
"id",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-02T23:22:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-finetuned-id-to-jv
results: []
language:
- jv
- id
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-finetuned-id-to-jv
This model is a fine-tuned version of [hermanshid/opus-mt-finetuned-su-to-id](https://huggingface.co/hermanshid/opus-mt-finetuned-su-to-id) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5597
- Bleu: 50.74
- Gen Len: 58.1428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.841 | 1.0 | 2500 | 0.7481 | 44.6803 | 58.0716 |
| 0.7025 | 2.0 | 5000 | 0.6599 | 47.0415 | 58.3842 |
| 0.6305 | 3.0 | 7500 | 0.6203 | 48.4781 | 58.154 |
| 0.5772 | 4.0 | 10000 | 0.5969 | 49.1335 | 58.4164 |
| 0.5472 | 5.0 | 12500 | 0.5816 | 49.7317 | 58.149 |
| 0.5215 | 6.0 | 15000 | 0.5728 | 50.1163 | 58.0292 |
| 0.5079 | 7.0 | 17500 | 0.5676 | 50.4371 | 58.2302 |
| 0.4845 | 8.0 | 20000 | 0.5626 | 50.606 | 58.0254 |
| 0.4703 | 9.0 | 22500 | 0.5600 | 50.7025 | 58.0016 |
| 0.4597 | 10.0 | 25000 | 0.5597 | 50.74 | 58.1428 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mikephillips/slant-axial-lora-2-1
|
mikephillips
| 2023-05-05T04:41:40Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-01T01:14:09Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - mikephillips/slant-axial-lora-2-1
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were fine-tuned on the None dataset. You can find some example images in the following.




|
liuliu96/detr-resnet-50_finetuned_cppe5
|
liuliu96
| 2023-05-05T03:57:28Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-05-05T03:22:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zho/segformer-finetuned-sidewalk-10k-steps
|
zho
| 2023-05-05T03:39:23Z | 223 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-05-04T14:11:20Z |
---
license: other
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-sidewalk-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-sidewalk-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6468
- Mean Iou: 0.2931
- Mean Accuracy: 0.3665
- Overall Accuracy: 0.8121
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.6505
- Accuracy Flat-sidewalk: 0.9345
- Accuracy Flat-crosswalk: 0.9011
- Accuracy Flat-cyclinglane: 0.7895
- Accuracy Flat-parkingdriveway: 0.2382
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.4519
- Accuracy Human-person: 0.5536
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.9509
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.7507
- Accuracy Vehicle-caravan: nan
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8681
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.6107
- Accuracy Construction-fenceguardrail: 0.3192
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.5156
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9183
- Accuracy Nature-terrain: 0.8478
- Accuracy Sky: 0.9246
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.1083
- Accuracy Void-static: 0.3940
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.5472
- Iou Flat-sidewalk: 0.8329
- Iou Flat-crosswalk: 0.7961
- Iou Flat-cyclinglane: 0.5266
- Iou Flat-parkingdriveway: 0.2013
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.2863
- Iou Human-person: 0.3887
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.7872
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.4759
- Iou Vehicle-caravan: nan
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.6992
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.3924
- Iou Construction-fenceguardrail: 0.2614
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.3413
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.8182
- Iou Nature-terrain: 0.7517
- Iou Sky: 0.8855
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0963
- Iou Void-static: 0.2896
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
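The polynomial schedule above decays the learning rate from 6e-05 to zero over the 10,000 training steps; a sketch of the decay curve, assuming the linear `power=1.0` default and no warmup (both assumptions, matching the Transformers defaults):

```python
def polynomial_lr(step: int, total_steps: int = 10_000,
                  base_lr: float = 6e-5, end_lr: float = 0.0,
                  power: float = 1.0) -> float:
    # Clamp past the end of the schedule, then interpolate toward end_lr.
    frac = min(step, total_steps) / total_steps
    return (base_lr - end_lr) * (1 - frac) ** power + end_lr

print(polynomial_lr(0))       # 6e-05
print(polynomial_lr(5_000))   # ~3e-05, halfway through the decay
print(polynomial_lr(10_000))  # 0.0
```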
### Training results
| Training Loss | Epoch | Step | Accuracy Construction-bridge | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-fenceguardrail | Accuracy Construction-stairs | Accuracy Construction-tunnel | Accuracy Construction-wall | Accuracy Flat-crosswalk | Accuracy Flat-curb | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Human-person | Accuracy Human-rider | Accuracy Nature-terrain | Accuracy Nature-vegetation | Accuracy Object-pole | Accuracy Object-trafficlight | Accuracy Object-trafficsign | Accuracy Sky | Accuracy Unlabeled | Accuracy Vehicle-bicycle | Accuracy Vehicle-bus | Accuracy Vehicle-car | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Vehicle-motorcycle | Accuracy Vehicle-tramtrain | Accuracy Vehicle-truck | Accuracy Void-dynamic | Accuracy Void-ground | Accuracy Void-static | Accuracy Void-unclear | Iou Construction-bridge | Iou Construction-building | Iou Construction-door | Iou Construction-fenceguardrail | Iou Construction-stairs | Iou Construction-tunnel | Iou Construction-wall | Iou Flat-crosswalk | Iou Flat-curb | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-road | Iou Flat-sidewalk | Iou Human-person | Iou Human-rider | Iou Nature-terrain | Iou Nature-vegetation | Iou Object-pole | Iou Object-trafficlight | Iou Object-trafficsign | Iou Sky | Iou Unlabeled | Iou Vehicle-bicycle | Iou Vehicle-bus | Iou Vehicle-car | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Vehicle-motorcycle | Iou Vehicle-tramtrain | Iou Vehicle-truck | Iou Void-dynamic | Iou Void-ground | Iou Void-static | Iou Void-unclear | Validation Loss | Mean Accuracy | Mean Iou | Overall Accuracy |
|:-------------:|:-----:|:-----:|:----------------------------:|:------------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:----------------------:|:---------------------:|:--------------------:|:-----------------------:|:--------------------------:|:--------------------:|:----------------------------:|:---------------------------:|:------------:|:------------------:|:------------------------:|:--------------------:|:--------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:----------------------:|:---------------------:|:--------------------:|:--------------------:|:---------------------:|:-----------------------:|:-------------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:-----------------:|:----------------:|:---------------:|:------------------:|:---------------------:|:---------------:|:-----------------------:|:----------------------:|:-------:|:-------------:|:-------------------:|:---------------:|:---------------:|:-------------------:|:----------------------:|:----------------------:|:---------------------:|:-----------------:|:----------------:|:---------------:|:---------------:|:----------------:|:---------------:|:-------------:|:--------:|:----------------:|
| 2.5227 | 1.0 | 107 | 0.0 | 0.8334 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0 | 0.0 | 0.0416 | 0.0001 | nan | 0.5390 | 0.9293 | 0.0 | 0.0 | 0.2834 | 0.9261 | 0.0 | 0.0 | 0.0 | 0.5133 | nan | 0.0 | 0.0 | 0.8875 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4909 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0 | 0.0 | 0.0411 | 0.0001 | nan | 0.3808 | 0.7051 | 0.0 | 0.0 | 0.2534 | 0.5904 | 0.0 | 0.0 | 0.0 | 0.5116 | nan | 0.0 | 0.0 | 0.5403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.7749 | 0.1548 | 0.1098 | 0.6606 |
| 1.7544 | 2.0 | 214 | 0.0 | 0.8141 | 0.0 | 0.0 | 0.0 | nan | 0.0024 | 0.0 | 0.0 | 0.2967 | 0.0009 | nan | 0.6039 | 0.9275 | 0.0 | 0.0 | 0.8832 | 0.8157 | 0.0 | 0.0 | 0.0 | 0.7111 | nan | 0.0 | 0.0 | 0.9009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5356 | 0.0 | 0.0 | 0.0 | nan | 0.0024 | 0.0 | 0.0 | 0.2702 | 0.0009 | nan | 0.4296 | 0.7139 | 0.0 | 0.0 | 0.5124 | 0.6367 | 0.0 | 0.0 | 0.0 | 0.7016 | nan | 0.0 | 0.0 | 0.5653 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.4883 | 0.1861 | 0.1365 | 0.6975 |
| 1.523 | 3.0 | 321 | 0.0 | 0.8975 | 0.0 | 0.0 | 0.0 | nan | 0.0009 | 0.0 | 0.0003 | 0.5309 | 0.0063 | nan | 0.4954 | 0.9432 | 0.0 | 0.0 | 0.8476 | 0.8378 | 0.0 | 0.0 | 0.0 | 0.7705 | nan | 0.0 | 0.0 | 0.8567 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5155 | 0.0 | 0.0 | 0.0 | nan | 0.0009 | 0.0 | 0.0003 | 0.4164 | 0.0062 | nan | 0.4161 | 0.7219 | 0.0 | 0.0 | 0.5408 | 0.6765 | 0.0 | 0.0 | 0.0 | 0.7594 | nan | 0.0 | 0.0 | 0.6132 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.2403 | 0.1934 | 0.1459 | 0.7123 |
| 1.2744 | 4.0 | 428 | 0.0 | 0.8602 | 0.0 | 0.0 | 0.0 | nan | 0.0009 | 0.0 | 0.0015 | 0.4753 | 0.0069 | nan | 0.3731 | 0.9792 | 0.0 | 0.0 | 0.7062 | 0.8948 | 0.0 | 0.0 | 0.0 | 0.7488 | nan | 0.0 | 0.0 | 0.8857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5565 | 0.0 | 0.0 | 0.0 | nan | 0.0009 | 0.0 | 0.0015 | 0.4431 | 0.0068 | nan | 0.3413 | 0.6728 | 0.0 | 0.0 | 0.5473 | 0.6788 | 0.0 | 0.0 | 0.0 | 0.7389 | nan | 0.0 | 0.0 | 0.6552 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.1870 | 0.1854 | 0.1451 | 0.7068 |
| 1.1579 | 5.0 | 535 | 0.0 | 0.7388 | 0.0 | 0.0 | 0.0 | nan | 0.0008 | 0.0 | 0.0040 | 0.6937 | 0.0681 | nan | 0.5908 | 0.9639 | 0.0 | 0.0 | 0.5152 | 0.9429 | 0.0 | 0.0 | 0.0 | 0.8365 | nan | 0.0 | 0.0 | 0.9525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5687 | 0.0 | 0.0 | 0.0 | nan | 0.0008 | 0.0 | 0.0039 | 0.5783 | 0.0606 | nan | 0.4884 | 0.7434 | 0.0 | 0.0 | 0.4397 | 0.6660 | 0.0 | 0.0 | 0.0 | 0.8076 | nan | 0.0 | 0.0 | 0.5868 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0435 | 0.1971 | 0.1545 | 0.7340 |
| 1.0928 | 6.0 | 642 | 0.0 | 0.8126 | 0.0 | 0.0 | 0.0 | nan | 0.0127 | 0.1193 | 0.0326 | 0.7981 | 0.1432 | nan | 0.6767 | 0.9152 | 0.0 | 0.0 | 0.8393 | 0.8990 | 0.0115 | 0.0 | 0.0 | 0.8664 | nan | 0.0 | 0.0 | 0.9427 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0048 | 0.0 | 0.0 | 0.6031 | 0.0 | 0.0 | 0.0 | nan | 0.0126 | 0.1193 | 0.0298 | 0.6282 | 0.1206 | nan | 0.5205 | 0.7688 | 0.0 | 0.0 | 0.6037 | 0.6827 | 0.0113 | 0.0 | 0.0 | 0.8312 | nan | 0.0 | 0.0 | 0.5963 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0047 | 0.0 | 0.9777 | 0.2211 | 0.1729 | 0.7531 |
| 1.0371 | 7.0 | 749 | 0.0 | 0.8108 | 0.0 | 0.0 | 0.0 | nan | 0.0145 | 0.2878 | 0.0499 | 0.7673 | 0.1179 | nan | 0.5506 | 0.9510 | 0.0 | 0.0 | 0.8458 | 0.8788 | 0.0158 | 0.0 | 0.0 | 0.8125 | nan | 0.0 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.5687 | 0.0 | 0.0 | 0.0 | nan | 0.0143 | 0.2871 | 0.0416 | 0.5650 | 0.1067 | nan | 0.4769 | 0.7722 | 0.0 | 0.0 | 0.5986 | 0.6729 | 0.0154 | 0.0 | 0.0 | 0.7949 | nan | 0.0 | 0.0 | 0.5910 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0032 | 0.0 | 0.9290 | 0.2200 | 0.1722 | 0.7457 |
| 0.9645 | 8.0 | 856 | 0.0 | 0.8913 | 0.0 | 0.0 | 0.0 | nan | 0.0530 | 0.3879 | 0.1304 | 0.8027 | 0.1244 | nan | 0.5733 | 0.9459 | 0.0 | 0.0 | 0.8434 | 0.8598 | 0.1344 | 0.0 | 0.0 | 0.8596 | nan | 0.0 | 0.0 | 0.9192 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0196 | 0.0 | 0.0 | 0.5899 | 0.0 | 0.0 | 0.0 | nan | 0.0518 | 0.3362 | 0.0872 | 0.6482 | 0.1137 | nan | 0.4887 | 0.7610 | 0.0 | 0.0 | 0.6153 | 0.7148 | 0.1144 | 0.0 | 0.0 | 0.8278 | nan | 0.0 | 0.0 | 0.6957 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0192 | 0.0 | 0.8855 | 0.2358 | 0.1895 | 0.7593 |
| 0.9171 | 9.0 | 963 | 0.0 | 0.8681 | 0.0 | 0.0 | 0.0 | nan | 0.2267 | 0.2895 | 0.1798 | 0.7741 | 0.2153 | nan | 0.6580 | 0.9264 | 0.0009 | 0.0 | 0.7788 | 0.8887 | 0.1800 | 0.0 | 0.0 | 0.8648 | nan | 0.0 | 0.0 | 0.9422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0689 | 0.0 | 0.0 | 0.6112 | 0.0 | 0.0 | 0.0 | nan | 0.2013 | 0.2859 | 0.1173 | 0.6393 | 0.1769 | nan | 0.5251 | 0.7761 | 0.0009 | 0.0 | 0.6220 | 0.7328 | 0.1391 | 0.0 | 0.0 | 0.8329 | nan | 0.0 | 0.0 | 0.6550 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0622 | 0.0 | 0.8439 | 0.2457 | 0.1993 | 0.7676 |
| 0.8373 | 10.0 | 1070 | 0.0 | 0.8391 | 0.0 | 0.0000 | 0.0 | nan | 0.4409 | 0.3294 | 0.1364 | 0.7858 | 0.1023 | nan | 0.6096 | 0.9644 | 0.0756 | 0.0 | 0.6853 | 0.8993 | 0.1614 | 0.0 | 0.0 | 0.8876 | nan | 0.0 | 0.0 | 0.9315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0874 | 0.0 | 0.0 | 0.6203 | 0.0 | 0.0000 | 0.0 | nan | 0.2914 | 0.3283 | 0.1050 | 0.6096 | 0.0951 | nan | 0.5427 | 0.7678 | 0.0740 | 0.0 | 0.5665 | 0.7403 | 0.1321 | 0.0 | 0.0 | 0.8500 | nan | 0.0 | 0.0 | 0.6756 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0767 | 0.0 | 0.8317 | 0.2480 | 0.2024 | 0.7710 |
| 0.8375 | 11.0 | 1177 | 0.0 | 0.8248 | 0.0 | 0.0000 | 0.0 | nan | 0.3739 | 0.3951 | 0.2834 | 0.7626 | 0.1777 | nan | 0.4734 | 0.9515 | 0.1276 | 0.0 | 0.7447 | 0.9010 | 0.1872 | 0.0 | 0.0 | 0.9018 | nan | 0.0 | 0.0 | 0.9378 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0591 | 0.0 | 0.0 | 0.6017 | 0.0 | 0.0000 | 0.0 | nan | 0.2379 | 0.3570 | 0.1503 | 0.6432 | 0.1533 | nan | 0.4411 | 0.7743 | 0.1234 | 0.0 | 0.5987 | 0.7041 | 0.1362 | 0.0 | 0.0 | 0.8576 | nan | 0.0 | 0.0 | 0.6553 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0518 | 0.0 | 0.8539 | 0.2532 | 0.2027 | 0.7577 |
| 0.8014 | 12.0 | 1284 | 0.0 | 0.8213 | 0.0 | 0.0002 | 0.0 | nan | 0.4219 | 0.5045 | 0.3125 | 0.8556 | 0.2246 | nan | 0.6546 | 0.8896 | 0.2522 | 0.0 | 0.7563 | 0.9184 | 0.2091 | 0.0 | 0.0 | 0.8852 | nan | 0.0 | 0.0 | 0.9338 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1150 | 0.0 | 0.0 | 0.6244 | 0.0 | 0.0002 | 0.0 | nan | 0.2819 | 0.4181 | 0.1371 | 0.5936 | 0.1892 | nan | 0.5497 | 0.7848 | 0.2332 | 0.0 | 0.6418 | 0.7339 | 0.1582 | 0.0 | 0.0 | 0.8537 | nan | 0.0 | 0.0 | 0.6887 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0936 | 0.0 | 0.7821 | 0.2736 | 0.2182 | 0.7698 |
| 0.7598 | 13.0 | 1391 | 0.0 | 0.7520 | 0.0 | 0.0 | 0.0 | nan | 0.5035 | 0.5241 | 0.2865 | 0.8708 | 0.1666 | nan | 0.6404 | 0.8870 | 0.2805 | 0.0 | 0.7662 | 0.9230 | 0.3694 | 0.0 | 0.0 | 0.8932 | nan | 0.0 | 0.0 | 0.9492 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2009 | 0.0 | 0.0 | 0.6246 | 0.0 | 0.0 | 0.0 | nan | 0.3111 | 0.4894 | 0.1504 | 0.5451 | 0.1555 | nan | 0.5227 | 0.7890 | 0.2569 | 0.0 | 0.6171 | 0.7275 | 0.1555 | 0.0 | 0.0 | 0.8569 | nan | 0.0 | 0.0 | 0.6889 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1265 | 0.0 | 0.7959 | 0.2817 | 0.2193 | 0.7653 |
| 0.7333 | 14.0 | 1498 | 0.0 | 0.7852 | 0.0 | 0.0005 | 0.0 | nan | 0.6099 | 0.5852 | 0.3890 | 0.8211 | 0.2961 | nan | 0.6321 | 0.9313 | 0.3684 | 0.0 | 0.6342 | 0.9311 | 0.2435 | 0.0 | 0.0 | 0.8845 | nan | 0.0 | 0.0 | 0.9298 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1712 | 0.0 | 0.0 | 0.6312 | 0.0 | 0.0005 | 0.0 | nan | 0.2920 | 0.4813 | 0.1830 | 0.6730 | 0.2504 | nan | 0.5405 | 0.8112 | 0.3183 | 0.0 | 0.5574 | 0.7360 | 0.1553 | 0.0 | 0.0 | 0.8543 | nan | 0.0 | 0.0 | 0.7520 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1219 | 0.0 | 0.7463 | 0.2879 | 0.2300 | 0.7815 |
| 0.7128 | 15.0 | 1605 | 0.0 | 0.7547 | 0.0 | 0.0126 | 0.0 | nan | 0.6715 | 0.6477 | 0.2623 | 0.8694 | 0.1131 | 0.0 | 0.7576 | 0.9015 | 0.5131 | 0.0 | 0.8870 | 0.8915 | 0.3275 | 0.0 | 0.0 | 0.9177 | nan | 0.0008 | 0.0 | 0.9290 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2520 | 0.0 | 0.0 | 0.5980 | 0.0 | 0.0126 | 0.0 | nan | 0.4000 | 0.3362 | 0.1721 | 0.4706 | 0.1069 | 0.0 | 0.6593 | 0.8212 | 0.2914 | 0.0 | 0.6797 | 0.7574 | 0.1981 | 0.0 | 0.0 | 0.8704 | nan | 0.0008 | 0.0 | 0.6431 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1881 | 0.0 | 0.7557 | 0.2942 | 0.2184 | 0.7786 |
| 0.6885 | 16.0 | 1712 | 0.0 | 0.8416 | 0.0 | 0.0086 | 0.0 | nan | 0.5907 | 0.7737 | 0.3100 | 0.7765 | 0.1341 | 0.0 | 0.6753 | 0.9522 | 0.5143 | 0.0 | 0.8466 | 0.8795 | 0.2986 | 0.0 | 0.0 | 0.9155 | nan | 0.0071 | 0.0 | 0.9178 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3074 | 0.0 | 0.0 | 0.6078 | 0.0 | 0.0086 | 0.0 | nan | 0.4106 | 0.3222 | 0.1815 | 0.6082 | 0.1171 | 0.0 | 0.6206 | 0.8253 | 0.2609 | 0.0 | 0.6832 | 0.7692 | 0.1957 | 0.0 | 0.0 | 0.8691 | nan | 0.0071 | 0.0 | 0.6951 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2366 | 0.0 | 0.7262 | 0.2954 | 0.2248 | 0.7882 |
| 0.6627 | 17.0 | 1819 | 0.0 | 0.7096 | 0.0 | 0.0181 | 0.0 | nan | 0.7189 | 0.6110 | 0.3654 | 0.8153 | 0.1210 | 0.0 | 0.7156 | 0.9114 | 0.5562 | 0.0 | 0.8788 | 0.9226 | 0.3042 | 0.0 | 0.0 | 0.9273 | nan | 0.0002 | 0.0 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3069 | 0.0 | 0.0 | 0.5809 | 0.0 | 0.0179 | 0.0 | nan | 0.3488 | 0.3724 | 0.2149 | 0.5069 | 0.1137 | 0.0 | 0.6477 | 0.8079 | 0.2559 | 0.0 | 0.7100 | 0.7595 | 0.1837 | 0.0 | 0.0 | 0.8734 | nan | 0.0002 | 0.0 | 0.7016 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2201 | 0.0 | 0.7429 | 0.2967 | 0.2217 | 0.7786 |
| 0.6954 | 18.0 | 1926 | 0.0 | 0.8919 | 0.0 | 0.0031 | 0.0 | nan | 0.5763 | 0.5167 | 0.3013 | 0.7439 | 0.1958 | 0.0 | 0.7281 | 0.9530 | 0.4080 | 0.0 | 0.8497 | 0.8852 | 0.2874 | 0.0 | 0.0 | 0.8563 | nan | 0.0056 | 0.0 | 0.9222 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3154 | 0.0 | 0.0 | 0.5730 | 0.0 | 0.0031 | 0.0 | nan | 0.3625 | 0.4887 | 0.1980 | 0.6038 | 0.1714 | 0.0 | 0.6684 | 0.8291 | 0.2599 | 0.0 | 0.7176 | 0.7922 | 0.2045 | 0.0 | 0.0 | 0.8322 | nan | 0.0056 | 0.0 | 0.6432 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2459 | 0.0 | 0.6984 | 0.2861 | 0.2303 | 0.7947 |
| 0.6592 | 19.0 | 2033 | 0.0 | 0.8433 | 0.0 | 0.0496 | 0.0 | nan | 0.5622 | 0.6415 | 0.3618 | 0.7738 | 0.1797 | 0.0 | 0.6474 | 0.9741 | 0.6289 | 0.0 | 0.6784 | 0.9279 | 0.3132 | 0.0 | 0.0 | 0.8985 | nan | 0.0019 | 0.0 | 0.9235 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2431 | 0.0 | 0.0 | 0.6155 | 0.0 | 0.0493 | 0.0 | nan | 0.3959 | 0.5424 | 0.2210 | 0.6568 | 0.1504 | 0.0 | 0.6217 | 0.8227 | 0.2586 | 0.0 | 0.6198 | 0.7658 | 0.2117 | 0.0 | 0.0 | 0.8686 | nan | 0.0019 | 0.0 | 0.6541 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1919 | 0.0 | 0.6999 | 0.2924 | 0.2318 | 0.7924 |
| 0.6682 | 20.0 | 2140 | 0.0 | 0.8071 | 0.0 | 0.0796 | 0.0 | nan | 0.5870 | 0.4899 | 0.4985 | 0.7638 | 0.2075 | 0.0 | 0.7505 | 0.9346 | 0.6505 | 0.0 | 0.8297 | 0.9187 | 0.3668 | 0.0 | 0.0 | 0.9157 | nan | 0.0082 | 0.0 | 0.9407 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2163 | 0.0 | 0.0 | 0.6144 | 0.0 | 0.0748 | 0.0 | nan | 0.3846 | 0.4807 | 0.2584 | 0.6083 | 0.1892 | 0.0 | 0.6719 | 0.8371 | 0.2436 | 0.0 | 0.7173 | 0.7842 | 0.1994 | 0.0 | 0.0 | 0.8798 | nan | 0.0082 | 0.0 | 0.6331 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1719 | 0.0 | 0.6756 | 0.3020 | 0.2351 | 0.7976 |
| 0.6249 | 21.0 | 2247 | 0.6678 | 0.2540 | 0.3195 | 0.7981 | nan | 0.6625 | 0.9563 | 0.8027 | 0.7398 | 0.1695 | 0.0 | 0.4050 | 0.7541 | 0.0 | 0.9306 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0473 | nan | 0.0 | 0.8526 | 0.0 | 0.6384 | 0.1242 | 0.0 | nan | 0.0 | 0.3671 | 0.0 | 0.0 | 0.9185 | 0.7725 | 0.8706 | 0.0 | 0.0 | 0.2129 | 0.0 | nan | 0.5746 | 0.8111 | 0.7593 | 0.5842 | 0.1557 | 0.0 | 0.2176 | 0.3250 | 0.0 | 0.7386 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0473 | nan | 0.0 | 0.6693 | 0.0 | 0.3844 | 0.1188 | 0.0 | nan | 0.0 | 0.2479 | 0.0 | 0.0 | 0.7914 | 0.7105 | 0.8285 | 0.0 | 0.0 | 0.1638 | 0.0 |
| 0.6278 | 22.0 | 2354 | 0.6800 | 0.2513 | 0.3216 | 0.7949 | nan | 0.6354 | 0.9558 | 0.8656 | 0.7557 | 0.1401 | 0.0 | 0.4619 | 0.6943 | 0.0 | 0.9333 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0315 | nan | 0.0 | 0.8031 | 0.0 | 0.6422 | 0.1074 | 0.0 | nan | 0.0 | 0.4139 | 0.0 | 0.0 | 0.9114 | 0.8658 | 0.8302 | 0.0 | 0.0 | 0.2446 | 0.0 | nan | 0.5527 | 0.8215 | 0.7864 | 0.5887 | 0.1346 | 0.0 | 0.2336 | 0.3191 | 0.0 | 0.7265 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0315 | nan | 0.0 | 0.6458 | 0.0 | 0.3638 | 0.1048 | 0.0 | nan | 0.0 | 0.2338 | 0.0 | 0.0 | 0.7831 | 0.7282 | 0.8001 | 0.0 | 0.0 | 0.1868 | 0.0 |
| 0.6375 | 23.0 | 2461 | 0.6680 | 0.2563 | 0.3186 | 0.7976 | nan | 0.6355 | 0.9595 | 0.8844 | 0.6403 | 0.2228 | 0.0 | 0.3772 | 0.5620 | 0.0 | 0.9094 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0640 | nan | 0.0 | 0.8615 | 0.0 | 0.6510 | 0.1498 | 0.0 | nan | 0.0 | 0.3834 | 0.0 | 0.0 | 0.9024 | 0.8874 | 0.8627 | 0.0 | 0.0 | 0.2419 | 0.0 | nan | 0.5548 | 0.8086 | 0.7729 | 0.5236 | 0.2018 | 0.0 | 0.2287 | 0.3137 | 0.0 | 0.7398 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0634 | nan | 0.0 | 0.6603 | 0.0 | 0.3896 | 0.1381 | 0.0 | nan | 0.0 | 0.2666 | 0.0 | 0.0 | 0.7881 | 0.7394 | 0.8256 | 0.0 | 0.0 | 0.1871 | 0.0 |
| 0.6202 | 24.0 | 2568 | 0.6866 | 0.2618 | 0.3236 | 0.7961 | nan | 0.6075 | 0.9674 | 0.8360 | 0.6102 | 0.1879 | 0.0 | 0.4285 | 0.5972 | 0.0 | 0.9180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1500 | nan | 0.0 | 0.8830 | 0.0 | 0.6661 | 0.1963 | 0.0 | nan | 0.0 | 0.4180 | 0.0 | 0.0 | 0.8918 | 0.8483 | 0.8660 | 0.0 | 0.0 | 0.2840 | 0.0 | nan | 0.5428 | 0.7997 | 0.7679 | 0.5062 | 0.1644 | 0.0 | 0.2289 | 0.3309 | 0.0 | 0.7596 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1473 | nan | 0.0 | 0.6670 | 0.0 | 0.4004 | 0.1767 | 0.0 | nan | 0.0 | 0.2836 | 0.0 | 0.0 | 0.8076 | 0.7619 | 0.8236 | 0.0 | 0.0 | 0.2079 | 0.0 |
| 0.5627 | 25.0 | 2675 | 0.6950 | 0.2551 | 0.3248 | 0.7883 | nan | 0.6233 | 0.9526 | 0.7145 | 0.7187 | 0.1813 | 0.0 | 0.3959 | 0.7039 | 0.0 | 0.9160 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1183 | nan | 0.0 | 0.8342 | 0.0 | 0.5499 | 0.2476 | 0.0 | nan | 0.0 | 0.4821 | 0.0 | 0.0 | 0.8725 | 0.8618 | 0.8633 | 0.0 | 0.0 | 0.3577 | 0.0 | nan | 0.5503 | 0.7925 | 0.6705 | 0.5845 | 0.1689 | 0.0 | 0.2198 | 0.3385 | 0.0 | 0.7322 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1174 | nan | 0.0 | 0.6527 | 0.0 | 0.3227 | 0.2119 | 0.0 | nan | 0.0 | 0.2422 | 0.0 | 0.0 | 0.7923 | 0.7260 | 0.8255 | 0.0 | 0.0 | 0.2167 | 0.0 |
| 0.5623 | 26.0 | 2782 | 0.6558 | 0.2686 | 0.3385 | 0.8010 | nan | 0.6338 | 0.9493 | 0.8134 | 0.7256 | 0.1979 | 0.0 | 0.4685 | 0.7518 | 0.0 | 0.9364 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2286 | nan | 0.0 | 0.8577 | 0.0 | 0.5809 | 0.2585 | 0.0 | nan | 0.0 | 0.4459 | 0.0 | 0.0 | 0.8951 | 0.8978 | 0.8844 | 0.0 | 0.0192 | 0.2882 | 0.0 | nan | 0.5476 | 0.8200 | 0.7429 | 0.5770 | 0.1837 | 0.0 | 0.2364 | 0.3743 | 0.0 | 0.7396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2160 | nan | 0.0 | 0.6671 | 0.0 | 0.3646 | 0.2093 | 0.0 | nan | 0.0 | 0.2863 | 0.0 | 0.0 | 0.8023 | 0.7446 | 0.8423 | 0.0 | 0.0185 | 0.2213 | 0.0 |
| 0.5882 | 27.0 | 2889 | 0.6416 | 0.2680 | 0.3280 | 0.8106 | nan | 0.7809 | 0.9232 | 0.8840 | 0.6978 | 0.2374 | 0.0 | 0.4869 | 0.4140 | 0.0 | 0.9242 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2349 | nan | 0.0 | 0.8828 | 0.0 | 0.4518 | 0.2084 | 0.0 | nan | 0.0 | 0.3889 | 0.0 | 0.0 | 0.9206 | 0.8679 | 0.8908 | 0.0 | 0.0 | 0.3012 | 0.0 | nan | 0.6265 | 0.8391 | 0.7529 | 0.6005 | 0.2168 | 0.0 | 0.2675 | 0.2729 | 0.0 | 0.7130 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2226 | nan | 0.0 | 0.6384 | 0.0 | 0.3296 | 0.1915 | 0.0 | nan | 0.0 | 0.2781 | 0.0 | 0.0 | 0.7946 | 0.7640 | 0.8488 | 0.0 | 0.0 | 0.2194 | 0.0 |
| 0.583 | 28.0 | 2996 | 0.6491 | 0.2734 | 0.3417 | 0.8046 | nan | 0.6541 | 0.9605 | 0.8786 | 0.7598 | 0.1411 | 0.0 | 0.4900 | 0.6147 | 0.0 | 0.9432 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3777 | nan | 0.0 | 0.8500 | 0.0 | 0.6605 | 0.2360 | 0.0 | nan | 0.0 | 0.4016 | 0.0 | 0.0 | 0.8786 | 0.8680 | 0.8514 | 0.0 | 0.0716 | 0.2973 | 0.0 | nan | 0.5775 | 0.8311 | 0.7770 | 0.5680 | 0.1357 | 0.0 | 0.2297 | 0.3515 | 0.0 | 0.7436 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3387 | nan | 0.0 | 0.6728 | 0.0 | 0.3790 | 0.2067 | 0.0 | nan | 0.0 | 0.2924 | 0.0 | 0.0 | 0.7950 | 0.7335 | 0.8178 | 0.0 | 0.0647 | 0.2332 | 0.0 |
| 0.5399 | 29.0 | 3103 | 0.6503 | 0.2714 | 0.3437 | 0.8027 | nan | 0.7145 | 0.9360 | 0.8554 | 0.7869 | 0.1668 | 0.0 | 0.4411 | 0.6746 | 0.0 | 0.9579 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3204 | nan | 0.0 | 0.7367 | 0.0 | 0.5891 | 0.2639 | 0.0 | nan | 0.0 | 0.4256 | 0.0 | 0.0 | 0.9170 | 0.9052 | 0.9104 | 0.0 | 0.0836 | 0.3133 | 0.0 | nan | 0.5941 | 0.8288 | 0.7852 | 0.5776 | 0.1580 | 0.0 | 0.2699 | 0.3237 | 0.0 | 0.6720 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2925 | nan | 0.0 | 0.6494 | 0.0 | 0.3454 | 0.2215 | 0.0 | nan | 0.0 | 0.2747 | 0.0 | 0.0 | 0.7852 | 0.7457 | 0.8558 | 0.0 | 0.0774 | 0.2273 | 0.0 |
| 0.5293 | 30.0 | 3210 | 0.6663 | 0.2713 | 0.3395 | 0.8042 | nan | 0.7217 | 0.9318 | 0.8745 | 0.8165 | 0.1842 | 0.0 | 0.3759 | 0.7404 | 0.0 | 0.9308 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3370 | nan | 0.0 | 0.8642 | 0.0 | 0.5393 | 0.2070 | 0.0 | nan | 0.0 | 0.3817 | 0.0 | 0.0 | 0.9030 | 0.7994 | 0.8605 | 0.0 | 0.0136 | 0.3816 | 0.0 | nan | 0.6056 | 0.8248 | 0.7837 | 0.5368 | 0.1772 | 0.0 | 0.2484 | 0.3753 | 0.0 | 0.7504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3106 | nan | 0.0 | 0.6453 | 0.0 | 0.3263 | 0.1887 | 0.0 | nan | 0.0 | 0.2868 | 0.0 | 0.0 | 0.7993 | 0.7363 | 0.8267 | 0.0 | 0.0130 | 0.2477 | 0.0 |
| 0.5507 | 31.0 | 3317 | 0.6914 | 0.2660 | 0.3290 | 0.7919 | nan | 0.6185 | 0.9644 | 0.6731 | 0.6413 | 0.1576 | 0.0 | 0.3454 | 0.5530 | 0.0 | 0.9147 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5739 | nan | 0.0 | 0.8711 | 0.0 | 0.5920 | 0.3049 | 0.0 | nan | 0.0 | 0.4400 | 0.0 | 0.0 | 0.9047 | 0.7982 | 0.8196 | 0.0 | 0.0041 | 0.3518 | 0.0 | nan | 0.5435 | 0.7910 | 0.6258 | 0.5648 | 0.1434 | 0.0 | 0.2163 | 0.3586 | 0.0 | 0.7603 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4199 | nan | 0.0 | 0.6568 | 0.0 | 0.3419 | 0.2427 | 0.0 | nan | 0.0 | 0.2974 | 0.0 | 0.0 | 0.8016 | 0.7234 | 0.7915 | 0.0 | 0.0040 | 0.2283 | 0.0 |
| 0.5602 | 32.0 | 3424 | 0.6411 | 0.2802 | 0.3472 | 0.8101 | nan | 0.6883 | 0.9485 | 0.8664 | 0.7639 | 0.1489 | 0.0 | 0.5011 | 0.6326 | 0.0 | 0.9104 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5617 | nan | 0.0 | 0.8921 | 0.0 | 0.6268 | 0.2051 | 0.0 | nan | 0.0 | 0.3632 | 0.0 | 0.0 | 0.8960 | 0.8552 | 0.8981 | 0.0 | 0.0221 | 0.3290 | 0.0 | nan | 0.5877 | 0.8330 | 0.7807 | 0.5591 | 0.1386 | 0.0 | 0.2813 | 0.3887 | 0.0 | 0.7831 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4225 | nan | 0.0 | 0.6645 | 0.0 | 0.3730 | 0.1864 | 0.0 | nan | 0.0 | 0.2938 | 0.0 | 0.0 | 0.8000 | 0.7455 | 0.8533 | 0.0 | 0.0216 | 0.2553 | 0.0 |
| 0.5403 | 33.0 | 3531 | 0.6642 | 0.2729 | 0.3431 | 0.8017 | nan | 0.7235 | 0.9123 | 0.8745 | 0.7791 | 0.1617 | 0.0 | 0.4874 | 0.5172 | 0.0 | 0.9381 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5344 | nan | 0.0 | 0.8467 | 0.0 | 0.6245 | 0.1614 | 0.0 | nan | 0.0 | 0.4356 | 0.0 | 0.0 | 0.9141 | 0.8488 | 0.9075 | 0.0 | 0.0052 | 0.3063 | 0.0 | nan | 0.5819 | 0.8258 | 0.7765 | 0.5111 | 0.1504 | 0.0 | 0.2836 | 0.3475 | 0.0 | 0.7294 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4025 | nan | 0.0 | 0.6638 | 0.0 | 0.3659 | 0.1505 | 0.0 | nan | 0.0 | 0.3046 | 0.0 | 0.0 | 0.7944 | 0.7435 | 0.8602 | 0.0 | 0.0052 | 0.2349 | 0.0 |
| 0.5168 | 34.0 | 3638 | 0.6402 | 0.2810 | 0.3485 | 0.8095 | nan | 0.7201 | 0.9345 | 0.8740 | 0.7414 | 0.1833 | 0.0 | 0.5538 | 0.5357 | 0.0 | 0.9369 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5640 | nan | 0.0 | 0.8776 | 0.0 | 0.5961 | 0.2626 | 0.0 | nan | 0.0 | 0.4488 | 0.0 | 0.0 | 0.9137 | 0.7841 | 0.8616 | 0.0 | 0.0 | 0.3650 | 0.0 | nan | 0.5901 | 0.8362 | 0.7926 | 0.6243 | 0.1652 | 0.0 | 0.2893 | 0.3653 | 0.0 | 0.7485 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4192 | nan | 0.0 | 0.6649 | 0.0 | 0.3752 | 0.2284 | 0.0 | nan | 0.0 | 0.3013 | 0.0 | 0.0 | 0.7971 | 0.7158 | 0.8280 | 0.0 | 0.0 | 0.2491 | 0.0 |
| 0.522 | 35.0 | 3745 | 0.6674 | 0.2743 | 0.3458 | 0.8002 | nan | 0.5916 | 0.9608 | 0.8505 | 0.7896 | 0.1387 | 0.0 | 0.4421 | 0.7247 | 0.0 | 0.9421 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5275 | nan | 0.0 | 0.8349 | 0.0 | 0.5652 | 0.1952 | 0.0 | nan | 0.0 | 0.4814 | 0.0 | 0.0 | 0.9081 | 0.8478 | 0.8898 | 0.0 | 0.0069 | 0.3697 | 0.0 | nan | 0.5251 | 0.8163 | 0.7812 | 0.5692 | 0.1306 | 0.0 | 0.2611 | 0.3743 | 0.0 | 0.7538 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4358 | nan | 0.0 | 0.6717 | 0.0 | 0.3549 | 0.1812 | 0.0 | nan | 0.0 | 0.2812 | 0.0 | 0.0 | 0.7991 | 0.7471 | 0.8535 | 0.0 | 0.0068 | 0.2358 | 0.0 |
| 0.4947 | 36.0 | 3852 | 0.6619 | 0.2752 | 0.3503 | 0.7991 | nan | 0.6020 | 0.9553 | 0.6755 | 0.7710 | 0.2239 | 0.0 | 0.5168 | 0.6551 | 0.0 | 0.9349 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6691 | nan | 0.0 | 0.8095 | 0.0 | 0.7100 | 0.1976 | 0.0 | nan | 0.0 | 0.4787 | 0.0 | 0.0 | 0.8903 | 0.8914 | 0.8668 | 0.0 | 0.0007 | 0.3623 | 0.0 | nan | 0.5291 | 0.8115 | 0.6361 | 0.5873 | 0.1919 | 0.0 | 0.2904 | 0.4117 | 0.0 | 0.7803 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4348 | nan | 0.0 | 0.6702 | 0.0 | 0.3617 | 0.1812 | 0.0 | nan | 0.0 | 0.2947 | 0.0 | 0.0 | 0.8036 | 0.7365 | 0.8339 | 0.0 | 0.0007 | 0.2507 | 0.0 |
| 0.5073 | 37.0 | 3959 | 0.6782 | 0.2792 | 0.3508 | 0.8019 | nan | 0.6843 | 0.9206 | 0.8269 | 0.7932 | 0.2000 | 0.0 | 0.5293 | 0.6061 | 0.0 | 0.9381 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6202 | nan | 0.0 | 0.8888 | 0.0 | 0.6030 | 0.2416 | 0.0 | nan | 0.0 | 0.3985 | 0.0 | 0.0 | 0.8823 | 0.8329 | 0.8918 | 0.0 | 0.0 | 0.3687 | 0.0 | nan | 0.5649 | 0.8204 | 0.7692 | 0.5226 | 0.1828 | 0.0 | 0.3027 | 0.4019 | 0.0 | 0.7543 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4303 | nan | 0.0 | 0.6624 | 0.0 | 0.3595 | 0.2136 | 0.0 | nan | 0.0 | 0.2976 | 0.0 | 0.0 | 0.8008 | 0.7378 | 0.8484 | 0.0 | 0.0 | 0.2667 | 0.0 |
| 0.4788 | 38.0 | 4066 | 0.6694 | 0.2768 | 0.3467 | 0.8020 | nan | 0.6894 | 0.9371 | 0.8519 | 0.7659 | 0.2090 | 0.0 | 0.4494 | 0.5935 | 0.0 | 0.9331 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6390 | nan | 0.0 | 0.9029 | 0.0 | 0.4947 | 0.2279 | 0.0 | nan | 0.0 | 0.4255 | 0.0 | 0.0 | 0.8438 | 0.8985 | 0.8365 | 0.0 | 0.0 | 0.3976 | 0.0 | nan | 0.5567 | 0.8293 | 0.7865 | 0.5419 | 0.1959 | 0.0 | 0.2915 | 0.3960 | 0.0 | 0.7643 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4261 | nan | 0.0 | 0.6474 | 0.0 | 0.3359 | 0.1974 | 0.0 | nan | 0.0 | 0.3047 | 0.0 | 0.0 | 0.7876 | 0.7159 | 0.8095 | 0.0 | 0.0 | 0.2724 | 0.0 |
| 0.4627 | 39.0 | 4173 | 0.6439 | 0.2840 | 0.3563 | 0.8069 | nan | 0.6652 | 0.9293 | 0.8861 | 0.7534 | 0.2398 | 0.0 | 0.5481 | 0.5694 | 0.0 | 0.9305 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6488 | nan | 0.0 | 0.8714 | 0.0 | 0.5817 | 0.3115 | 0.0 | nan | 0.0 | 0.4716 | 0.0 | 0.0 | 0.9060 | 0.8645 | 0.8991 | 0.0 | 0.0123 | 0.3128 | 0.0 | nan | 0.5453 | 0.8303 | 0.7889 | 0.5693 | 0.2107 | 0.0 | 0.3035 | 0.3784 | 0.0 | 0.7531 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4437 | nan | 0.0 | 0.6747 | 0.0 | 0.3647 | 0.2365 | 0.0 | nan | 0.0 | 0.3209 | 0.0 | 0.0 | 0.8070 | 0.7501 | 0.8626 | 0.0 | 0.0123 | 0.2376 | 0.0 |
| 0.4775 | 40.0 | 4280 | 0.6679 | 0.2808 | 0.3499 | 0.8051 | nan | 0.6127 | 0.9570 | 0.8742 | 0.8046 | 0.1980 | 0.0 | 0.4223 | 0.4104 | 0.0 | 0.8918 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7077 | nan | 0.0 | 0.8362 | 0.0 | 0.6999 | 0.3405 | 0.0 | nan | 0.0 | 0.4473 | 0.0 | 0.0 | 0.9272 | 0.7890 | 0.8870 | 0.0 | 0.0348 | 0.3578 | 0.0 | nan | 0.5307 | 0.8250 | 0.7915 | 0.5729 | 0.1789 | 0.0 | 0.2532 | 0.3154 | 0.0 | 0.7855 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4322 | nan | 0.0 | 0.6863 | 0.0 | 0.4071 | 0.2521 | 0.0 | nan | 0.0 | 0.3089 | 0.0 | 0.0 | 0.7975 | 0.7101 | 0.8518 | 0.0 | 0.0332 | 0.2520 | 0.0 |
| 0.4816 | 41.0 | 4387 | 0.6700 | 0.2812 | 0.3491 | 0.8060 | nan | 0.6497 | 0.9430 | 0.8488 | 0.7581 | 0.1492 | 0.0 | 0.5026 | 0.5415 | 0.0 | 0.9317 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5586 | nan | 0.0 | 0.8655 | 0.0 | 0.6495 | 0.3284 | 0.0 | nan | 0.0 | 0.4062 | 0.0 | 0.0 | 0.9026 | 0.8756 | 0.9041 | 0.0 | 0.0154 | 0.3409 | 0.0 | nan | 0.5483 | 0.8245 | 0.7804 | 0.5613 | 0.1444 | 0.0 | 0.2941 | 0.3765 | 0.0 | 0.7657 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4309 | nan | 0.0 | 0.6812 | 0.0 | 0.3456 | 0.2526 | 0.0 | nan | 0.0 | 0.3020 | 0.0 | 0.0 | 0.8013 | 0.7384 | 0.8651 | 0.0 | 0.0147 | 0.2719 | 0.0 |
| 0.4643 | 42.0 | 4494 | 0.6465 | 0.2865 | 0.3603 | 0.8079 | nan | 0.6087 | 0.9460 | 0.8859 | 0.8411 | 0.2736 | 0.0 | 0.5016 | 0.5636 | 0.0 | 0.9311 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6503 | nan | 0.0 | 0.8152 | 0.0 | 0.6211 | 0.3064 | 0.0 | nan | 0.0 | 0.4719 | 0.0 | 0.0 | 0.9130 | 0.8643 | 0.8988 | 0.0 | 0.0386 | 0.3972 | 0.0 | nan | 0.5283 | 0.8363 | 0.7831 | 0.5893 | 0.2376 | 0.0 | 0.2835 | 0.3871 | 0.0 | 0.7808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4435 | nan | 0.0 | 0.6630 | 0.0 | 0.3653 | 0.2468 | 0.0 | nan | 0.0 | 0.3230 | 0.0 | 0.0 | 0.8082 | 0.7553 | 0.8615 | 0.0 | 0.0352 | 0.2410 | 0.0 |
| 0.4758 | 43.0 | 4601 | 0.6531 | 0.2866 | 0.3573 | 0.8033 | nan | 0.6189 | 0.9384 | 0.8678 | 0.7635 | 0.2556 | 0.0 | 0.4631 | 0.5328 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7078 | nan | 0.0 | 0.8840 | 0.0 | 0.5168 | 0.3159 | 0.0 | nan | 0.0 | 0.5012 | 0.0 | 0.0 | 0.9003 | 0.8435 | 0.8800 | 0.0 | 0.1130 | 0.3953 | 0.0 | nan | 0.5198 | 0.8118 | 0.7952 | 0.5642 | 0.2235 | 0.0 | 0.2833 | 0.3642 | 0.0 | 0.7845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4597 | nan | 0.0 | 0.6755 | 0.0 | 0.3530 | 0.2604 | 0.0 | nan | 0.0 | 0.3225 | 0.0 | 0.0 | 0.8104 | 0.7326 | 0.8509 | 0.0 | 0.0804 | 0.2786 | 0.0 |
| 0.4682 | 44.0 | 4708 | 0.6534 | 0.2843 | 0.3584 | 0.8035 | nan | 0.6193 | 0.9309 | 0.8952 | 0.8209 | 0.2108 | 0.0 | 0.4880 | 0.5279 | 0.0 | 0.9208 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6156 | nan | 0.0 | 0.8474 | 0.0 | 0.6475 | 0.3017 | 0.0 | nan | 0.0 | 0.5203 | 0.0 | 0.0 | 0.9113 | 0.8445 | 0.9254 | 0.0 | 0.0324 | 0.4089 | 0.0 | nan | 0.5374 | 0.8204 | 0.7932 | 0.5268 | 0.1915 | 0.0 | 0.2784 | 0.3506 | 0.0 | 0.7789 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4807 | nan | 0.0 | 0.6878 | 0.0 | 0.3576 | 0.2551 | 0.0 | nan | 0.0 | 0.3135 | 0.0 | 0.0 | 0.8111 | 0.7485 | 0.8770 | 0.0 | 0.0241 | 0.2662 | 0.0 |
| 0.4807 | 45.0 | 4815 | 0.6325 | 0.2885 | 0.3653 | 0.8075 | nan | 0.6071 | 0.9223 | 0.8977 | 0.8564 | 0.3516 | 0.0 | 0.5039 | 0.5266 | 0.0 | 0.9433 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6309 | nan | 0.0 | 0.8390 | 0.0 | 0.5600 | 0.3684 | 0.0 | nan | 0.0 | 0.4760 | 0.0 | 0.0 | 0.9242 | 0.8477 | 0.9264 | 0.0 | 0.0706 | 0.4361 | 0.0 | nan | 0.5390 | 0.8355 | 0.7773 | 0.5424 | 0.2623 | 0.0 | 0.2809 | 0.3567 | 0.0 | 0.7695 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4435 | nan | 0.0 | 0.6957 | 0.0 | 0.3710 | 0.2746 | 0.0 | nan | 0.0 | 0.3253 | 0.0 | 0.0 | 0.8070 | 0.7405 | 0.8751 | 0.0 | 0.0603 | 0.2758 | 0.0 |
| 0.4611 | 46.0 | 4922 | 0.6577 | 0.2850 | 0.3588 | 0.8022 | nan | 0.6022 | 0.9292 | 0.8230 | 0.8449 | 0.2449 | 0.0 | 0.4479 | 0.5166 | 0.0 | 0.9396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6521 | nan | 0.0 | 0.8516 | 0.0 | 0.7020 | 0.3122 | 0.0 | nan | 0.0 | 0.4822 | 0.0 | 0.0 | 0.9015 | 0.8642 | 0.9095 | 0.0 | 0.0737 | 0.3834 | 0.0 | nan | 0.5034 | 0.8172 | 0.7584 | 0.5407 | 0.2171 | 0.0 | 0.2684 | 0.3534 | 0.0 | 0.7740 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4392 | nan | 0.0 | 0.6942 | 0.0 | 0.3877 | 0.2651 | 0.0 | nan | 0.0 | 0.3266 | 0.0 | 0.0 | 0.8136 | 0.7528 | 0.8682 | 0.0 | 0.0684 | 0.2725 | 0.0 |
| 0.3966 | 47.0 | 5029 | 0.6749 | 0.2810 | 0.3530 | 0.7981 | nan | 0.5613 | 0.9379 | 0.7768 | 0.8262 | 0.2161 | 0.0 | 0.4333 | 0.4777 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7133 | nan | 0.0 | 0.8766 | 0.0 | 0.7119 | 0.3548 | 0.0 | nan | 0.0 | 0.3871 | 0.0 | 0.0 | 0.9073 | 0.8326 | 0.8935 | 0.0 | 0.1104 | 0.3367 | 0.0 | nan | 0.4867 | 0.8241 | 0.7219 | 0.4978 | 0.1759 | 0.0 | 0.2653 | 0.3573 | 0.0 | 0.7854 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4599 | nan | 0.0 | 0.6910 | 0.0 | 0.3907 | 0.2610 | 0.0 | nan | 0.0 | 0.3009 | 0.0 | 0.0 | 0.8082 | 0.7419 | 0.8593 | 0.0 | 0.0926 | 0.2732 | 0.0 |
| 0.4672 | 48.0 | 5136 | 0.6660 | 0.2784 | 0.3546 | 0.8021 | nan | 0.7292 | 0.9096 | 0.8990 | 0.8135 | 0.1493 | 0.0 | 0.5230 | 0.5946 | 0.0 | 0.9526 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7375 | nan | 0.0 | 0.8687 | 0.0 | 0.4252 | 0.2353 | 0.0 | nan | 0.0 | 0.4237 | 0.0 | 0.0 | 0.8933 | 0.8270 | 0.9183 | 0.0 | 0.0817 | 0.3646 | 0.0 | nan | 0.5942 | 0.8232 | 0.8036 | 0.5377 | 0.1347 | 0.0 | 0.2647 | 0.3728 | 0.0 | 0.7137 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4297 | nan | 0.0 | 0.6601 | 0.0 | 0.2946 | 0.2127 | 0.0 | nan | 0.0 | 0.3151 | 0.0 | 0.0 | 0.8132 | 0.7409 | 0.8699 | 0.0 | 0.0613 | 0.2675 | 0.0 |
| 0.4622 | 49.0 | 5243 | 0.7150 | 0.2767 | 0.3475 | 0.7951 | nan | 0.6718 | 0.8870 | 0.8975 | 0.8901 | 0.1529 | 0.0 | 0.4453 | 0.5462 | 0.0 | 0.9481 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6509 | nan | 0.0 | 0.9012 | 0.0 | 0.5387 | 0.2114 | 0.0 | nan | 0.0 | 0.4295 | 0.0 | 0.0 | 0.9268 | 0.7997 | 0.9010 | 0.0 | 0.0350 | 0.2863 | 0.0 | nan | 0.5624 | 0.8117 | 0.7961 | 0.4364 | 0.1414 | 0.0 | 0.2702 | 0.3711 | 0.0 | 0.7633 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4582 | nan | 0.0 | 0.6590 | 0.0 | 0.3660 | 0.1902 | 0.0 | nan | 0.0 | 0.3246 | 0.0 | 0.0 | 0.8088 | 0.7464 | 0.8666 | 0.0 | 0.0332 | 0.2487 | 0.0 |
| 0.4145 | 50.0 | 5350 | 0.6807 | 0.2818 | 0.3565 | 0.8008 | nan | 0.6541 | 0.9143 | 0.8871 | 0.8536 | 0.1956 | 0.0 | 0.4524 | 0.5023 | 0.0 | 0.9266 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7110 | nan | 0.0 | 0.8771 | 0.0 | 0.6214 | 0.2828 | 0.0 | nan | 0.0 | 0.4773 | 0.0 | 0.0 | 0.9139 | 0.7822 | 0.9051 | 0.0 | 0.0741 | 0.3770 | 0.0 | nan | 0.5552 | 0.8258 | 0.7814 | 0.4837 | 0.1731 | 0.0 | 0.2705 | 0.3530 | 0.0 | 0.7938 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4623 | nan | 0.0 | 0.6836 | 0.0 | 0.3253 | 0.2389 | 0.0 | nan | 0.0 | 0.3381 | 0.0 | 0.0 | 0.8077 | 0.7193 | 0.8652 | 0.0 | 0.0640 | 0.2760 | 0.0 |
| 0.4544 | 51.0 | 5457 | 0.6710 | 0.2839 | 0.3635 | 0.8006 | nan | 0.6233 | 0.9087 | 0.9049 | 0.8695 | 0.2469 | 0.0 | 0.4528 | 0.5746 | 0.0 | 0.9279 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7524 | nan | 0.0 | 0.8690 | 0.0 | 0.5925 | 0.3026 | 0.0 | nan | 0.0 | 0.4862 | 0.0 | 0.0 | 0.9113 | 0.8522 | 0.9246 | 0.0 | 0.0797 | 0.3522 | 0.0 | nan | 0.5369 | 0.8237 | 0.7538 | 0.4608 | 0.2062 | 0.0 | 0.2692 | 0.3838 | 0.0 | 0.7862 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4326 | nan | 0.0 | 0.6889 | 0.0 | 0.3838 | 0.2423 | 0.0 | nan | 0.0 | 0.3336 | 0.0 | 0.0 | 0.8112 | 0.7403 | 0.8791 | 0.0 | 0.0742 | 0.2796 | 0.0 |
| 0.4084 | 52.0 | 5564 | 0.6546 | 0.2867 | 0.3640 | 0.8059 | nan | 0.6423 | 0.9216 | 0.8728 | 0.8610 | 0.1706 | 0.0 | 0.4997 | 0.5610 | 0.0 | 0.9239 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7156 | nan | 0.0 | 0.8634 | 0.0 | 0.6920 | 0.2740 | 0.0 | nan | 0.0 | 0.4887 | 0.0 | 0.0 | 0.9069 | 0.8889 | 0.9000 | 0.0 | 0.0903 | 0.3739 | 0.0 | nan | 0.5431 | 0.8278 | 0.7981 | 0.5189 | 0.1560 | 0.0 | 0.3024 | 0.3737 | 0.0 | 0.7986 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4567 | nan | 0.0 | 0.6880 | 0.0 | 0.3761 | 0.2251 | 0.0 | nan | 0.0 | 0.3343 | 0.0 | 0.0 | 0.8139 | 0.7548 | 0.8646 | 0.0 | 0.0756 | 0.2675 | 0.0 |
| 0.4475 | 53.0 | 5671 | 0.6712 | 0.2818 | 0.3527 | 0.8026 | nan | 0.6170 | 0.9199 | 0.9040 | 0.8414 | 0.2396 | 0.0 | 0.4268 | 0.4352 | 0.0 | 0.9374 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6281 | nan | 0.0 | 0.8676 | 0.0 | 0.6078 | 0.2969 | 0.0 | nan | 0.0 | 0.4899 | 0.0 | 0.0 | 0.9292 | 0.8389 | 0.9008 | 0.0 | 0.0345 | 0.3705 | 0.0 | nan | 0.5299 | 0.8287 | 0.7780 | 0.5304 | 0.1865 | 0.0 | 0.2782 | 0.3138 | 0.0 | 0.7708 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4800 | nan | 0.0 | 0.6830 | 0.0 | 0.3721 | 0.2416 | 0.0 | nan | 0.0 | 0.3294 | 0.0 | 0.0 | 0.7999 | 0.7324 | 0.8688 | 0.0 | 0.0287 | 0.2662 | 0.0 |
| 0.4077 | 54.0 | 5778 | 0.6743 | 0.2885 | 0.3600 | 0.8048 | nan | 0.5791 | 0.9423 | 0.8905 | 0.7810 | 0.2604 | 0.0 | 0.4610 | 0.5324 | 0.0 | 0.9467 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6770 | nan | 0.0 | 0.8826 | 0.0 | 0.5999 | 0.3432 | 0.0 | nan | 0.0 | 0.4846 | 0.0 | 0.0 | 0.9008 | 0.8470 | 0.9224 | 0.0 | 0.0643 | 0.4035 | 0.0 | nan | 0.5145 | 0.8210 | 0.8031 | 0.5666 | 0.1975 | 0.0 | 0.2818 | 0.3555 | 0.0 | 0.7569 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4759 | nan | 0.0 | 0.6745 | 0.0 | 0.3937 | 0.2589 | 0.0 | nan | 0.0 | 0.3445 | 0.0 | 0.0 | 0.8146 | 0.7552 | 0.8771 | 0.0 | 0.0526 | 0.2890 | 0.0 |
| 0.4334 | 55.0 | 5885 | 0.6318 | 0.2919 | 0.3684 | 0.8122 | nan | 0.6590 | 0.9261 | 0.8843 | 0.8552 | 0.2511 | 0.0 | 0.5269 | 0.6052 | 0.0 | 0.9416 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6763 | nan | 0.0 | 0.8438 | 0.0 | 0.6329 | 0.3218 | 0.0 | nan | 0.0 | 0.4795 | 0.0 | 0.0 | 0.9021 | 0.9073 | 0.9129 | 0.0 | 0.0510 | 0.4103 | 0.0 | nan | 0.5659 | 0.8404 | 0.7976 | 0.5330 | 0.2067 | 0.0 | 0.2976 | 0.3918 | 0.0 | 0.7881 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4634 | nan | 0.0 | 0.6961 | 0.0 | 0.4115 | 0.2632 | 0.0 | nan | 0.0 | 0.3295 | 0.0 | 0.0 | 0.8043 | 0.7360 | 0.8742 | 0.0 | 0.0446 | 0.2963 | 0.0 |
| 0.4379 | 56.0 | 5992 | 0.6688 | 0.2871 | 0.3580 | 0.8059 | nan | 0.5682 | 0.9473 | 0.8909 | 0.8684 | 0.1827 | 0.0 | 0.4078 | 0.5539 | 0.0 | 0.9361 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6815 | nan | 0.0 | 0.8838 | 0.0 | 0.6820 | 0.3338 | 0.0 | nan | 0.0 | 0.4720 | 0.0 | 0.0 | 0.9061 | 0.8482 | 0.9133 | 0.0 | 0.0386 | 0.3415 | 0.0 | nan | 0.5081 | 0.8198 | 0.8017 | 0.5046 | 0.1626 | 0.0 | 0.2799 | 0.3793 | 0.0 | 0.7869 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4692 | nan | 0.0 | 0.6965 | 0.0 | 0.4161 | 0.2613 | 0.0 | nan | 0.0 | 0.3389 | 0.0 | 0.0 | 0.8176 | 0.7576 | 0.8727 | 0.0 | 0.0347 | 0.2792 | 0.0 |
| 0.4489 | 57.0 | 6099 | 0.6413 | 0.2898 | 0.3657 | 0.8118 | nan | 0.6336 | 0.9369 | 0.8978 | 0.8637 | 0.2405 | 0.0 | 0.4683 | 0.4792 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7398 | nan | 0.0 | 0.8757 | 0.0 | 0.6220 | 0.3338 | 0.0 | nan | 0.0 | 0.5178 | 0.0 | 0.0 | 0.8798 | 0.8909 | 0.9242 | 0.0 | 0.0371 | 0.4152 | 0.0 | nan | 0.5641 | 0.8302 | 0.7988 | 0.5222 | 0.2052 | 0.0 | 0.2923 | 0.3509 | 0.0 | 0.7819 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4626 | nan | 0.0 | 0.7010 | 0.0 | 0.4040 | 0.2609 | 0.0 | nan | 0.0 | 0.3240 | 0.0 | 0.0 | 0.8147 | 0.7572 | 0.8780 | 0.0 | 0.0348 | 0.2924 | 0.0 |
| 0.4042 | 58.0 | 6206 | 0.6378 | 0.2905 | 0.3632 | 0.8141 | nan | 0.6889 | 0.9331 | 0.8987 | 0.8277 | 0.1904 | 0.0 | 0.4609 | 0.4760 | 0.0 | 0.9308 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7672 | nan | 0.0 | 0.8689 | 0.0 | 0.6552 | 0.3481 | 0.0 | nan | 0.0 | 0.4860 | 0.0 | 0.0 | 0.9232 | 0.8152 | 0.9071 | 0.0 | 0.0922 | 0.3534 | 0.0 | nan | 0.5797 | 0.8300 | 0.7955 | 0.5497 | 0.1802 | 0.0 | 0.3002 | 0.3608 | 0.0 | 0.7923 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4430 | nan | 0.0 | 0.7010 | 0.0 | 0.3912 | 0.2586 | 0.0 | nan | 0.0 | 0.3330 | 0.0 | 0.0 | 0.8063 | 0.7389 | 0.8722 | 0.0 | 0.0842 | 0.2788 | 0.0 |
| 0.4033 | 59.0 | 6313 | 0.6393 | 0.2901 | 0.3629 | 0.8131 | nan | 0.6851 | 0.9282 | 0.8829 | 0.8307 | 0.1882 | 0.0 | 0.4846 | 0.5244 | 0.0 | 0.9433 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7355 | nan | 0.0 | 0.8673 | 0.0 | 0.6451 | 0.2991 | 0.0 | nan | 0.0 | 0.5054 | 0.0 | 0.0 | 0.9144 | 0.8542 | 0.9130 | 0.0 | 0.0306 | 0.3796 | 0.0 | nan | 0.5736 | 0.8304 | 0.8001 | 0.5264 | 0.1720 | 0.0 | 0.2962 | 0.3684 | 0.0 | 0.7884 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4725 | nan | 0.0 | 0.6993 | 0.0 | 0.3926 | 0.2552 | 0.0 | nan | 0.0 | 0.3409 | 0.0 | 0.0 | 0.8158 | 0.7611 | 0.8778 | 0.0 | 0.0283 | 0.2835 | 0.0 |
| 0.4021 | 60.0 | 6420 | 0.6501 | 0.2886 | 0.3651 | 0.8139 | nan | 0.7362 | 0.9216 | 0.9046 | 0.8150 | 0.1901 | 0.0 | 0.4200 | 0.4985 | 0.0 | 0.9507 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7714 | nan | 0.0 | 0.8632 | 0.0 | 0.6387 | 0.3715 | 0.0 | nan | 0.0 | 0.4586 | 0.0 | 0.0 | 0.9045 | 0.8397 | 0.9113 | 0.0 | 0.1205 | 0.3673 | 0.0 | nan | 0.6146 | 0.8265 | 0.7334 | 0.5541 | 0.1753 | 0.0 | 0.2840 | 0.3505 | 0.0 | 0.7546 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4427 | nan | 0.0 | 0.6866 | 0.0 | 0.3878 | 0.2644 | 0.0 | nan | 0.0 | 0.3435 | 0.0 | 0.0 | 0.8178 | 0.7580 | 0.8759 | 0.0 | 0.0918 | 0.2734 | 0.0 |
| 0.4143 | 61.0 | 6527 | 0.6427 | 0.2897 | 0.3612 | 0.8105 | nan | 0.6811 | 0.9188 | 0.8982 | 0.7937 | 0.2651 | 0.0 | 0.5039 | 0.4599 | 0.0 | 0.9477 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7194 | nan | 0.0 | 0.8837 | 0.0 | 0.5937 | 0.3117 | 0.0 | nan | 0.0 | 0.4858 | 0.0 | 0.0 | 0.9079 | 0.8499 | 0.9188 | 0.0 | 0.0464 | 0.3740 | 0.0 | nan | 0.5727 | 0.8170 | 0.7807 | 0.5701 | 0.2198 | 0.0 | 0.2939 | 0.3411 | 0.0 | 0.7690 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4610 | nan | 0.0 | 0.6847 | 0.0 | 0.3873 | 0.2523 | 0.0 | nan | 0.0 | 0.3447 | 0.0 | 0.0 | 0.8200 | 0.7614 | 0.8782 | 0.0 | 0.0412 | 0.2743 | 0.0 |
| 0.3857 | 62.0 | 6634 | 0.6568 | 0.2875 | 0.3664 | 0.8074 | nan | 0.6878 | 0.9189 | 0.8964 | 0.8039 | 0.1812 | 0.0 | 0.5164 | 0.5660 | 0.0 | 0.9535 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7389 | nan | 0.0 | 0.8524 | 0.0 | 0.6142 | 0.3820 | 0.0 | nan | 0.0 | 0.4951 | 0.0 | 0.0 | 0.8928 | 0.8760 | 0.9272 | 0.0 | 0.0857 | 0.3362 | 0.0 | nan | 0.5667 | 0.8261 | 0.7933 | 0.5405 | 0.1623 | 0.0 | 0.3019 | 0.3736 | 0.0 | 0.7409 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4574 | nan | 0.0 | 0.6916 | 0.0 | 0.3764 | 0.2648 | 0.0 | nan | 0.0 | 0.3324 | 0.0 | 0.0 | 0.8177 | 0.7557 | 0.8809 | 0.0 | 0.0753 | 0.2436 | 0.0 |
| 0.4062 | 63.0 | 6741 | 0.6513 | 0.2914 | 0.3663 | 0.8120 | nan | 0.7112 | 0.9218 | 0.8867 | 0.7747 | 0.2310 | 0.0 | 0.5184 | 0.5408 | 0.0 | 0.9502 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7454 | nan | 0.0 | 0.8541 | 0.0 | 0.5815 | 0.3421 | 0.0 | nan | 0.0 | 0.5055 | 0.0 | 0.0 | 0.9086 | 0.8560 | 0.9291 | 0.0 | 0.0971 | 0.3675 | 0.0 | nan | 0.5784 | 0.8288 | 0.8002 | 0.5326 | 0.2018 | 0.0 | 0.3257 | 0.3750 | 0.0 | 0.7532 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4492 | nan | 0.0 | 0.6895 | 0.0 | 0.3791 | 0.2637 | 0.0 | nan | 0.0 | 0.3276 | 0.0 | 0.0 | 0.8196 | 0.7602 | 0.8842 | 0.0 | 0.0878 | 0.2676 | 0.0 |
| 0.3899 | 64.0 | 6848 | 0.6511 | 0.2897 | 0.3660 | 0.8078 | nan | 0.6784 | 0.9222 | 0.8927 | 0.7620 | 0.2273 | 0.0 | 0.5211 | 0.5469 | 0.0 | 0.9366 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7375 | nan | 0.0 | 0.8515 | 0.0 | 0.6301 | 0.3594 | 0.0 | nan | 0.0 | 0.5137 | 0.0 | 0.0 | 0.9027 | 0.8641 | 0.9136 | 0.0 | 0.0311 | 0.4211 | 0.0 | nan | 0.5682 | 0.8239 | 0.8068 | 0.5166 | 0.2014 | 0.0 | 0.3059 | 0.3793 | 0.0 | 0.7849 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4731 | nan | 0.0 | 0.6751 | 0.0 | 0.3873 | 0.2718 | 0.0 | nan | 0.0 | 0.3411 | 0.0 | 0.0 | 0.8171 | 0.7490 | 0.8788 | 0.0 | 0.0271 | 0.2641 | 0.0 |
| 0.4094 | 65.0 | 6955 | 0.6321 | 0.2906 | 0.3633 | 0.8155 | nan | 0.7419 | 0.9262 | 0.8953 | 0.7420 | 0.2358 | 0.0 | 0.4796 | 0.5340 | 0.0 | 0.9593 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7218 | nan | 0.0 | 0.8464 | 0.0 | 0.5849 | 0.3341 | 0.0 | nan | 0.0 | 0.4942 | 0.0 | 0.0 | 0.9074 | 0.8709 | 0.9111 | 0.0009 | 0.0280 | 0.4123 | 0.0 | nan | 0.6028 | 0.8365 | 0.8011 | 0.5280 | 0.2101 | 0.0 | 0.3052 | 0.3724 | 0.0 | 0.7332 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4604 | nan | 0.0 | 0.6845 | 0.0 | 0.3982 | 0.2645 | 0.0 | nan | 0.0 | 0.3412 | 0.0 | 0.0 | 0.8201 | 0.7577 | 0.8759 | 0.0009 | 0.0255 | 0.2797 | 0.0 |
| 0.3902 | 66.0 | 7062 | 0.6383 | 0.2892 | 0.3622 | 0.8112 | nan | 0.6557 | 0.9316 | 0.8911 | 0.7814 | 0.2329 | 0.0 | 0.5098 | 0.4581 | 0.0 | 0.9394 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7239 | nan | 0.0 | 0.8559 | 0.0 | 0.6460 | 0.3358 | 0.0 | nan | 0.0 | 0.5161 | 0.0 | 0.0 | 0.9274 | 0.8429 | 0.8990 | 0.0 | 0.0312 | 0.4118 | 0.0 | nan | 0.5606 | 0.8294 | 0.8023 | 0.5414 | 0.2068 | 0.0 | 0.3016 | 0.3450 | 0.0 | 0.7787 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4684 | nan | 0.0 | 0.6942 | 0.0 | 0.3908 | 0.2621 | 0.0 | nan | 0.0 | 0.3398 | 0.0 | 0.0 | 0.8126 | 0.7445 | 0.8709 | 0.0 | 0.0272 | 0.2774 | 0.0 |
| 0.3735 | 67.0 | 7169 | 0.6484 | 0.2885 | 0.3627 | 0.8076 | nan | 0.6374 | 0.9351 | 0.9035 | 0.7568 | 0.2251 | 0.0 | 0.4998 | 0.4948 | 0.0 | 0.9478 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7209 | nan | 0.0 | 0.8596 | 0.0 | 0.5804 | 0.3791 | 0.0 | nan | 0.0 | 0.4997 | 0.0 | 0.0 | 0.8999 | 0.8741 | 0.9245 | 0.0 | 0.0483 | 0.4185 | 0.0 | nan | 0.5389 | 0.8231 | 0.7871 | 0.5304 | 0.1996 | 0.0 | 0.2827 | 0.3614 | 0.0 | 0.7835 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4719 | nan | 0.0 | 0.6932 | 0.0 | 0.3775 | 0.2770 | 0.0 | nan | 0.0 | 0.3393 | 0.0 | 0.0 | 0.8216 | 0.7540 | 0.8823 | 0.0 | 0.0421 | 0.2668 | 0.0 |
| 0.3888 | 68.0 | 7276 | 0.6295 | 0.2932 | 0.3681 | 0.8124 | nan | 0.6453 | 0.9414 | 0.8924 | 0.7985 | 0.2832 | 0.0 | 0.5193 | 0.6389 | 0.0 | 0.9459 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7338 | nan | 0.0 | 0.8423 | 0.0 | 0.5126 | 0.3179 | 0.0 | nan | 0.0 | 0.5176 | 0.0 | 0.0 | 0.9164 | 0.8300 | 0.9247 | 0.0010 | 0.0627 | 0.4567 | 0.0 | nan | 0.5521 | 0.8326 | 0.7984 | 0.5384 | 0.2291 | 0.0 | 0.3097 | 0.4143 | 0.0 | 0.7877 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4724 | nan | 0.0 | 0.7028 | 0.0 | 0.3784 | 0.2540 | 0.0 | nan | 0.0 | 0.3337 | 0.0 | 0.0 | 0.8172 | 0.7398 | 0.8859 | 0.0010 | 0.0533 | 0.2830 | 0.0 |
| 0.3463 | 69.0 | 7383 | 0.6746 | 0.2916 | 0.3677 | 0.8094 | nan | 0.6515 | 0.9210 | 0.8823 | 0.8440 | 0.1789 | 0.0 | 0.5215 | 0.5737 | 0.0 | 0.9359 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7389 | nan | 0.0 | 0.8837 | 0.0 | 0.6300 | 0.3350 | 0.0 | nan | 0.0 | 0.4968 | 0.0 | 0.0 | 0.9032 | 0.8934 | 0.9017 | 0.0 | 0.0703 | 0.4058 | 0.0 | nan | 0.5528 | 0.8245 | 0.7907 | 0.5250 | 0.1632 | 0.0 | 0.3014 | 0.3934 | 0.0 | 0.8010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4788 | nan | 0.0 | 0.6967 | 0.0 | 0.3744 | 0.2605 | 0.0 | nan | 0.0 | 0.3469 | 0.0 | 0.0 | 0.8186 | 0.7642 | 0.8737 | 0.0 | 0.0613 | 0.3051 | 0.0 |
| 0.3702 | 70.0 | 7490 | 0.6890 | 0.2875 | 0.3635 | 0.8012 | nan | 0.5995 | 0.9326 | 0.8853 | 0.8029 | 0.2289 | 0.0 | 0.5002 | 0.5737 | 0.0 | 0.9451 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7417 | nan | 0.0 | 0.8227 | 0.0 | 0.6097 | 0.3263 | 0.0 | nan | 0.0 | 0.5053 | 0.0 | 0.0 | 0.9192 | 0.8235 | 0.9210 | 0.0 | 0.0666 | 0.4292 | 0.0 | nan | 0.5210 | 0.8170 | 0.8010 | 0.5198 | 0.1907 | 0.0 | 0.3010 | 0.3898 | 0.0 | 0.7651 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4836 | nan | 0.0 | 0.6753 | 0.0 | 0.3649 | 0.2576 | 0.0 | nan | 0.0 | 0.3513 | 0.0 | 0.0 | 0.8151 | 0.7466 | 0.8840 | 0.0 | 0.0563 | 0.2598 | 0.0 |
| 0.3642 | 71.0 | 7597 | 0.6835 | 0.2867 | 0.3593 | 0.8038 | nan | 0.6182 | 0.9263 | 0.8897 | 0.8120 | 0.1957 | 0.0 | 0.4355 | 0.5927 | 0.0 | 0.9233 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7200 | nan | 0.0 | 0.8870 | 0.0 | 0.6023 | 0.3097 | 0.0 | nan | 0.0 | 0.4994 | 0.0 | 0.0 | 0.9270 | 0.8288 | 0.9199 | 0.0 | 0.0564 | 0.3520 | 0.0 | nan | 0.5306 | 0.8156 | 0.7929 | 0.4950 | 0.1747 | 0.0 | 0.2794 | 0.3891 | 0.0 | 0.8032 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4771 | nan | 0.0 | 0.6905 | 0.0 | 0.3674 | 0.2453 | 0.0 | nan | 0.0 | 0.3447 | 0.0 | 0.0 | 0.8116 | 0.7450 | 0.8826 | 0.0 | 0.0496 | 0.2805 | 0.0 |
| 0.36 | 72.0 | 7704 | 0.6669 | 0.2901 | 0.3652 | 0.8075 | nan | 0.6434 | 0.9327 | 0.8960 | 0.7900 | 0.2190 | 0.0 | 0.4746 | 0.5706 | 0.0 | 0.9461 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7405 | nan | 0.0 | 0.8967 | 0.0 | 0.5709 | 0.3347 | 0.0 | nan | 0.0 | 0.5213 | 0.0 | 0.0 | 0.8767 | 0.8656 | 0.9185 | 0.0 | 0.0645 | 0.4230 | 0.0 | nan | 0.5397 | 0.8231 | 0.7948 | 0.5252 | 0.1971 | 0.0 | 0.2832 | 0.3853 | 0.0 | 0.7856 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4817 | nan | 0.0 | 0.6834 | 0.0 | 0.3839 | 0.2617 | 0.0 | nan | 0.0 | 0.3396 | 0.0 | 0.0 | 0.8178 | 0.7627 | 0.8720 | 0.0 | 0.0530 | 0.2933 | 0.0 |
| 0.3973 | 73.0 | 7811 | 0.6383 | 0.2949 | 0.3680 | 0.8186 | nan | 0.7241 | 0.9280 | 0.9008 | 0.7697 | 0.2577 | 0.0 | 0.5086 | 0.5711 | 0.0 | 0.9495 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7286 | nan | 0.0 | 0.8676 | 0.0 | 0.6173 | 0.3238 | 0.0 | nan | 0.0 | 0.5022 | 0.0 | 0.0 | 0.9099 | 0.8670 | 0.9130 | 0.0 | 0.0432 | 0.3933 | 0.0 | nan | 0.5943 | 0.8414 | 0.7925 | 0.5329 | 0.2288 | 0.0 | 0.3133 | 0.3883 | 0.0 | 0.7799 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4800 | nan | 0.0 | 0.6892 | 0.0 | 0.4039 | 0.2600 | 0.0 | nan | 0.0 | 0.3515 | 0.0 | 0.0 | 0.8218 | 0.7658 | 0.8779 | 0.0 | 0.0378 | 0.2778 | 0.0 |
| 0.3552 | 74.0 | 7918 | 0.6462 | 0.2937 | 0.3665 | 0.8151 | nan | 0.6810 | 0.9352 | 0.9009 | 0.7938 | 0.2200 | 0.0 | 0.4290 | 0.5985 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7497 | nan | 0.0 | 0.8762 | 0.0 | 0.6223 | 0.3297 | 0.0 | nan | 0.0 | 0.5028 | 0.0 | 0.0 | 0.9107 | 0.8538 | 0.9194 | 0.0 | 0.0489 | 0.4105 | 0.0 | nan | 0.5681 | 0.8314 | 0.8066 | 0.5452 | 0.1979 | 0.0 | 0.2832 | 0.4003 | 0.0 | 0.7864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4794 | nan | 0.0 | 0.6941 | 0.0 | 0.4007 | 0.2634 | 0.0 | nan | 0.0 | 0.3505 | 0.0 | 0.0 | 0.8197 | 0.7579 | 0.8799 | 0.0 | 0.0428 | 0.2906 | 0.0 |
| 0.3735 | 75.0 | 8025 | 0.6607 | 0.2912 | 0.3658 | 0.8094 | nan | 0.6830 | 0.9221 | 0.8990 | 0.7703 | 0.2393 | 0.0 | 0.4768 | 0.5555 | 0.0 | 0.9397 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7627 | nan | 0.0 | 0.8774 | 0.0 | 0.5842 | 0.3146 | 0.0 | nan | 0.0 | 0.5209 | 0.0 | 0.0 | 0.9052 | 0.8376 | 0.9323 | 0.0006 | 0.0601 | 0.4251 | 0.0 | nan | 0.5616 | 0.8266 | 0.8043 | 0.4916 | 0.2068 | 0.0 | 0.2969 | 0.3852 | 0.0 | 0.7947 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4696 | nan | 0.0 | 0.6919 | 0.0 | 0.3934 | 0.2599 | 0.0 | nan | 0.0 | 0.3454 | 0.0 | 0.0 | 0.8176 | 0.7506 | 0.8838 | 0.0006 | 0.0529 | 0.2857 | 0.0 |
| 0.349 | 76.0 | 8132 | 0.6499 | 0.2920 | 0.3634 | 0.8132 | nan | 0.6815 | 0.9338 | 0.8990 | 0.7476 | 0.2275 | 0.0 | 0.4769 | 0.5225 | 0.0 | 0.9473 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7426 | nan | 0.0 | 0.8829 | 0.0 | 0.6085 | 0.3132 | 0.0 | nan | 0.0 | 0.5296 | 0.0 | 0.0 | 0.9144 | 0.8342 | 0.9098 | 0.0 | 0.0538 | 0.4042 | 0.0 | nan | 0.5611 | 0.8351 | 0.8007 | 0.5302 | 0.1879 | 0.0 | 0.2919 | 0.3759 | 0.0 | 0.7918 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4747 | nan | 0.0 | 0.6961 | 0.0 | 0.4043 | 0.2598 | 0.0 | nan | 0.0 | 0.3443 | 0.0 | 0.0 | 0.8162 | 0.7462 | 0.8769 | 0.0 | 0.0491 | 0.3031 | 0.0 |
| 0.3714 | 77.0 | 8239 | 0.6534 | 0.2926 | 0.3678 | 0.8124 | nan | 0.6790 | 0.9351 | 0.8952 | 0.7512 | 0.2106 | 0.0 | 0.5023 | 0.5752 | 0.0 | 0.9328 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7807 | nan | 0.0 | 0.8562 | 0.0 | 0.6458 | 0.3162 | 0.0 | nan | 0.0 | 0.5232 | 0.0 | 0.0 | 0.9210 | 0.8265 | 0.9273 | 0.0 | 0.0808 | 0.4113 | 0.0 | nan | 0.5593 | 0.8347 | 0.8043 | 0.5370 | 0.1833 | 0.0 | 0.2953 | 0.3971 | 0.0 | 0.7974 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4632 | nan | 0.0 | 0.6987 | 0.0 | 0.3865 | 0.2565 | 0.0 | nan | 0.0 | 0.3415 | 0.0 | 0.0 | 0.8136 | 0.7420 | 0.8860 | 0.0 | 0.0712 | 0.2942 | 0.0 |
| 0.363 | 78.0 | 8346 | 0.6516 | 0.2910 | 0.3632 | 0.8136 | nan | 0.6971 | 0.9296 | 0.8965 | 0.7702 | 0.2131 | 0.0 | 0.4759 | 0.5148 | 0.0 | 0.9332 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7724 | nan | 0.0 | 0.8932 | 0.0 | 0.5626 | 0.3029 | 0.0 | nan | 0.0 | 0.5263 | 0.0 | 0.0 | 0.9160 | 0.8210 | 0.9231 | 0.0 | 0.0554 | 0.4197 | 0.0 | nan | 0.5716 | 0.8385 | 0.7896 | 0.5483 | 0.1777 | 0.0 | 0.2883 | 0.3691 | 0.0 | 0.7908 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4736 | nan | 0.0 | 0.6864 | 0.0 | 0.3961 | 0.2512 | 0.0 | nan | 0.0 | 0.3478 | 0.0 | 0.0 | 0.8160 | 0.7383 | 0.8834 | 0.0 | 0.0501 | 0.2945 | 0.0 |
| 0.3493 | 79.0 | 8453 | 0.6702 | 0.2912 | 0.3685 | 0.8100 | nan | 0.6696 | 0.9258 | 0.9017 | 0.7644 | 0.2376 | 0.0 | 0.4962 | 0.5597 | 0.0 | 0.9498 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7711 | nan | 0.0 | 0.8724 | 0.0 | 0.5995 | 0.3210 | 0.0 | nan | 0.0 | 0.5325 | 0.0 | 0.0 | 0.9025 | 0.8466 | 0.9381 | 0.0 | 0.0799 | 0.4247 | 0.0 | nan | 0.5541 | 0.8345 | 0.7881 | 0.5164 | 0.1987 | 0.0 | 0.2920 | 0.3835 | 0.0 | 0.7768 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4737 | nan | 0.0 | 0.6941 | 0.0 | 0.3974 | 0.2577 | 0.0 | nan | 0.0 | 0.3448 | 0.0 | 0.0 | 0.8187 | 0.7431 | 0.8877 | 0.0 | 0.0699 | 0.2870 | 0.0 |
| 0.3792 | 80.0 | 8560 | 0.6412 | 0.2946 | 0.3691 | 0.8157 | nan | 0.6826 | 0.9328 | 0.9031 | 0.7805 | 0.2240 | 0.0 | 0.5004 | 0.5717 | 0.0 | 0.9422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7532 | nan | 0.0 | 0.8790 | 0.0 | 0.6263 | 0.3250 | 0.0 | nan | 0.0 | 0.5130 | 0.0 | 0.0 | 0.9049 | 0.8708 | 0.9215 | 0.0 | 0.0666 | 0.4137 | 0.0 | nan | 0.5668 | 0.8404 | 0.7926 | 0.5316 | 0.1912 | 0.0 | 0.3036 | 0.3948 | 0.0 | 0.7940 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4767 | nan | 0.0 | 0.6963 | 0.0 | 0.3952 | 0.2617 | 0.0 | nan | 0.0 | 0.3547 | 0.0 | 0.0 | 0.8229 | 0.7615 | 0.8830 | 0.0 | 0.0593 | 0.2996 | 0.0 |
| 0.3466 | 81.0 | 8667 | 0.6398 | 0.2949 | 0.3696 | 0.8181 | nan | 0.7198 | 0.9374 | 0.8927 | 0.7518 | 0.1953 | 0.0 | 0.5069 | 0.6073 | 0.0 | 0.9437 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7508 | nan | 0.0 | 0.8438 | 0.0 | 0.6477 | 0.3045 | 0.0 | nan | 0.0 | 0.5206 | 0.0 | 0.0 | 0.9149 | 0.8694 | 0.9313 | 0.0 | 0.0794 | 0.4091 | 0.0 | nan | 0.5959 | 0.8409 | 0.8043 | 0.5625 | 0.1746 | 0.0 | 0.2955 | 0.4016 | 0.0 | 0.7887 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4771 | nan | 0.0 | 0.6876 | 0.0 | 0.3937 | 0.2508 | 0.0 | nan | 0.0 | 0.3438 | 0.0 | 0.0 | 0.8203 | 0.7721 | 0.8882 | 0.0 | 0.0703 | 0.2696 | 0.0 |
| 0.3434 | 82.0 | 8774 | 0.6427 | 0.2948 | 0.3702 | 0.8144 | nan | 0.6701 | 0.9388 | 0.8942 | 0.7976 | 0.2036 | 0.0 | 0.4717 | 0.5793 | 0.0 | 0.9421 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7673 | nan | 0.0 | 0.8614 | 0.0 | 0.6617 | 0.3411 | 0.0 | nan | 0.0 | 0.5250 | 0.0 | 0.0 | 0.9065 | 0.8583 | 0.9214 | 0.0 | 0.1155 | 0.3911 | 0.0 | nan | 0.5615 | 0.8356 | 0.8036 | 0.5543 | 0.1765 | 0.0 | 0.2927 | 0.3998 | 0.0 | 0.7927 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4756 | nan | 0.0 | 0.6983 | 0.0 | 0.3912 | 0.2617 | 0.0 | nan | 0.0 | 0.3422 | 0.0 | 0.0 | 0.8220 | 0.7609 | 0.8829 | 0.0 | 0.0994 | 0.2837 | 0.0 |
| 0.3728 | 83.0 | 8881 | 0.6632 | 0.2935 | 0.3712 | 0.8071 | nan | 0.6362 | 0.9181 | 0.8946 | 0.8165 | 0.2796 | 0.0 | 0.4980 | 0.5929 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7612 | nan | 0.0 | 0.8576 | 0.0 | 0.6222 | 0.3247 | 0.0 | nan | 0.0 | 0.5315 | 0.0 | 0.0 | 0.9206 | 0.8297 | 0.9324 | 0.0 | 0.1246 | 0.3953 | 0.0 | nan | 0.5330 | 0.8303 | 0.8021 | 0.5115 | 0.2133 | 0.0 | 0.3082 | 0.4008 | 0.0 | 0.7792 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4707 | nan | 0.0 | 0.6944 | 0.0 | 0.3960 | 0.2571 | 0.0 | nan | 0.0 | 0.3433 | 0.0 | 0.0 | 0.8166 | 0.7505 | 0.8884 | 0.0 | 0.1076 | 0.2874 | 0.0 |
| 0.3449 | 84.0 | 8988 | 0.6665 | 0.2911 | 0.3655 | 0.8080 | nan | 0.6208 | 0.9362 | 0.8933 | 0.7983 | 0.2167 | 0.0 | 0.4705 | 0.5213 | 0.0 | 0.9445 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7528 | nan | 0.0 | 0.8565 | 0.0 | 0.6339 | 0.3453 | 0.0 | nan | 0.0 | 0.5227 | 0.0 | 0.0 | 0.9203 | 0.8327 | 0.9315 | 0.0 | 0.1078 | 0.3915 | 0.0 | nan | 0.5271 | 0.8305 | 0.8038 | 0.5352 | 0.1796 | 0.0 | 0.2901 | 0.3788 | 0.0 | 0.7816 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4767 | nan | 0.0 | 0.6966 | 0.0 | 0.3857 | 0.2623 | 0.0 | nan | 0.0 | 0.3403 | 0.0 | 0.0 | 0.8154 | 0.7512 | 0.8876 | 0.0 | 0.0934 | 0.2779 | 0.0 |
| 0.3677 | 85.0 | 9095 | 0.6600 | 0.2914 | 0.3667 | 0.8089 | nan | 0.6430 | 0.9281 | 0.8959 | 0.7877 | 0.2441 | 0.0 | 0.5011 | 0.5246 | 0.0 | 0.9417 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7416 | nan | 0.0 | 0.8635 | 0.0 | 0.6224 | 0.3337 | 0.0 | nan | 0.0 | 0.5238 | 0.0 | 0.0 | 0.9166 | 0.8404 | 0.9203 | 0.0 | 0.0966 | 0.4086 | 0.0 | nan | 0.5410 | 0.8368 | 0.8012 | 0.5221 | 0.1990 | 0.0 | 0.3032 | 0.3763 | 0.0 | 0.7839 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4819 | nan | 0.0 | 0.6880 | 0.0 | 0.3785 | 0.2603 | 0.0 | nan | 0.0 | 0.3469 | 0.0 | 0.0 | 0.8166 | 0.7502 | 0.8825 | 0.0 | 0.0826 | 0.2728 | 0.0 |
| 0.3479 | 86.0 | 9202 | 0.6653 | 0.2925 | 0.3659 | 0.8083 | nan | 0.6215 | 0.9364 | 0.8955 | 0.8062 | 0.2438 | 0.0 | 0.4356 | 0.5749 | 0.0 | 0.9352 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7572 | nan | 0.0 | 0.8647 | 0.0 | 0.5950 | 0.3194 | 0.0 | nan | 0.0 | 0.5181 | 0.0 | 0.0 | 0.9142 | 0.8559 | 0.9196 | 0.0010 | 0.1131 | 0.4024 | 0.0 | nan | 0.5305 | 0.8260 | 0.8026 | 0.5177 | 0.2000 | 0.0 | 0.2845 | 0.3964 | 0.0 | 0.8037 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4777 | nan | 0.0 | 0.6850 | 0.0 | 0.3926 | 0.2605 | 0.0 | nan | 0.0 | 0.3443 | 0.0 | 0.0 | 0.8210 | 0.7590 | 0.8827 | 0.0010 | 0.0985 | 0.2760 | 0.0 |
| 0.373 | 87.0 | 9309 | 0.6488 | 0.2953 | 0.3681 | 0.8141 | nan | 0.6465 | 0.9404 | 0.8996 | 0.7934 | 0.2418 | 0.0 | 0.4875 | 0.5646 | 0.0 | 0.9394 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7519 | nan | 0.0 | 0.8931 | 0.0 | 0.6325 | 0.3185 | 0.0 | nan | 0.0 | 0.5045 | 0.0 | 0.0 | 0.8982 | 0.8624 | 0.9196 | 0.0000 | 0.1086 | 0.3763 | 0.0 | nan | 0.5479 | 0.8347 | 0.7989 | 0.5439 | 0.2043 | 0.0 | 0.2952 | 0.3956 | 0.0 | 0.8041 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4802 | nan | 0.0 | 0.6921 | 0.0 | 0.3919 | 0.2632 | 0.0 | nan | 0.0 | 0.3462 | 0.0 | 0.0 | 0.8219 | 0.7598 | 0.8803 | 0.0000 | 0.0954 | 0.2939 | 0.0 |
| 0.3509 | 88.0 | 9416 | 0.6508 | 0.2938 | 0.3690 | 0.8125 | nan | 0.6480 | 0.9359 | 0.8987 | 0.8023 | 0.2228 | 0.0 | 0.4828 | 0.5941 | 0.0 | 0.9355 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7617 | nan | 0.0 | 0.8669 | 0.0 | 0.5964 | 0.3253 | 0.0 | nan | 0.0 | 0.5218 | 0.0 | 0.0 | 0.9249 | 0.8344 | 0.9275 | 0.0 | 0.1256 | 0.4037 | 0.0 | nan | 0.5517 | 0.8360 | 0.7990 | 0.5289 | 0.1923 | 0.0 | 0.2911 | 0.3969 | 0.0 | 0.7989 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4790 | nan | 0.0 | 0.6967 | 0.0 | 0.3872 | 0.2572 | 0.0 | nan | 0.0 | 0.3400 | 0.0 | 0.0 | 0.8153 | 0.7499 | 0.8866 | 0.0 | 0.1061 | 0.2894 | 0.0 |
| 0.3249 | 89.0 | 9523 | 0.6380 | 0.2947 | 0.3653 | 0.8162 | nan | 0.6541 | 0.9527 | 0.9012 | 0.7578 | 0.2159 | 0.0 | 0.4779 | 0.5541 | 0.0 | 0.9496 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7475 | nan | 0.0 | 0.8613 | 0.0 | 0.6083 | 0.3103 | 0.0 | nan | 0.0 | 0.5111 | 0.0 | 0.0 | 0.9215 | 0.8387 | 0.9247 | 0.0 | 0.1075 | 0.3965 | 0.0 | nan | 0.5525 | 0.8372 | 0.8023 | 0.5649 | 0.1893 | 0.0 | 0.2923 | 0.3918 | 0.0 | 0.7877 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4774 | nan | 0.0 | 0.7001 | 0.0 | 0.3917 | 0.2583 | 0.0 | nan | 0.0 | 0.3406 | 0.0 | 0.0 | 0.8165 | 0.7519 | 0.8854 | 0.0 | 0.0954 | 0.2955 | 0.0 |
| 0.3507 | 90.0 | 9630 | 0.6552 | 0.2931 | 0.3681 | 0.8112 | nan | 0.6412 | 0.9316 | 0.9007 | 0.7940 | 0.2344 | 0.0 | 0.4845 | 0.5679 | 0.0 | 0.9438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7501 | nan | 0.0 | 0.8788 | 0.0 | 0.6209 | 0.3117 | 0.0 | nan | 0.0 | 0.5239 | 0.0 | 0.0 | 0.9155 | 0.8504 | 0.9231 | 0.0 | 0.1052 | 0.4019 | 0.0 | nan | 0.5432 | 0.8346 | 0.7967 | 0.5219 | 0.1977 | 0.0 | 0.2933 | 0.3922 | 0.0 | 0.7936 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4792 | nan | 0.0 | 0.6958 | 0.0 | 0.3913 | 0.2588 | 0.0 | nan | 0.0 | 0.3429 | 0.0 | 0.0 | 0.8188 | 0.7511 | 0.8841 | 0.0 | 0.0910 | 0.2920 | 0.0 |
| 0.3327 | 91.0 | 9737 | 0.6568 | 0.2929 | 0.3687 | 0.8102 | nan | 0.6277 | 0.9380 | 0.8989 | 0.8059 | 0.2578 | 0.0 | 0.4617 | 0.5809 | 0.0 | 0.9460 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7536 | nan | 0.0 | 0.8356 | 0.0 | 0.6285 | 0.3180 | 0.0 | nan | 0.0 | 0.5218 | 0.0 | 0.0 | 0.9181 | 0.8578 | 0.9230 | 0.0004 | 0.0976 | 0.4261 | 0.0 | nan | 0.5366 | 0.8321 | 0.7979 | 0.5259 | 0.2114 | 0.0 | 0.2900 | 0.3969 | 0.0 | 0.7969 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4798 | nan | 0.0 | 0.6966 | 0.0 | 0.3832 | 0.2618 | 0.0 | nan | 0.0 | 0.3398 | 0.0 | 0.0 | 0.8184 | 0.7523 | 0.8857 | 0.0004 | 0.0849 | 0.2836 | 0.0 |
| 0.3428 | 92.0 | 9844 | 0.6481 | 0.2933 | 0.3672 | 0.8120 | nan | 0.6540 | 0.9343 | 0.9003 | 0.7727 | 0.2264 | 0.0 | 0.4777 | 0.5473 | 0.0 | 0.9437 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7544 | nan | 0.0 | 0.8720 | 0.0 | 0.6385 | 0.3236 | 0.0 | nan | 0.0 | 0.5132 | 0.0 | 0.0 | 0.9136 | 0.8557 | 0.9224 | 0.0 | 0.1007 | 0.4012 | 0.0 | nan | 0.5486 | 0.8334 | 0.7997 | 0.5315 | 0.1937 | 0.0 | 0.2905 | 0.3891 | 0.0 | 0.7948 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4778 | nan | 0.0 | 0.6974 | 0.0 | 0.3843 | 0.2628 | 0.0 | nan | 0.0 | 0.3480 | 0.0 | 0.0 | 0.8193 | 0.7522 | 0.8844 | 0.0 | 0.0885 | 0.2890 | 0.0 |
| 0.3483 | 93.0 | 9951 | 0.6642 | 0.2923 | 0.3664 | 0.8104 | nan | 0.6314 | 0.9384 | 0.9008 | 0.7929 | 0.2027 | 0.0 | 0.4565 | 0.5687 | 0.0 | 0.9355 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7620 | nan | 0.0 | 0.8702 | 0.0 | 0.6443 | 0.3233 | 0.0 | nan | 0.0 | 0.5056 | 0.0 | 0.0 | 0.9195 | 0.8529 | 0.9224 | 0.0 | 0.1132 | 0.3833 | 0.0 | nan | 0.5395 | 0.8298 | 0.7942 | 0.5268 | 0.1771 | 0.0 | 0.2783 | 0.3974 | 0.0 | 0.8030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4790 | nan | 0.0 | 0.7001 | 0.0 | 0.3838 | 0.2612 | 0.0 | nan | 0.0 | 0.3438 | 0.0 | 0.0 | 0.8168 | 0.7498 | 0.8846 | 0.0 | 0.0996 | 0.2879 | 0.0 |
| 0.346 | 93.46 | 10000 | 0.6468 | 0.2931 | 0.3665 | 0.8121 | nan | 0.6505 | 0.9345 | 0.9011 | 0.7895 | 0.2382 | 0.0 | 0.4519 | 0.5536 | 0.0 | 0.9509 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7507 | nan | 0.0 | 0.8681 | 0.0 | 0.6107 | 0.3192 | 0.0 | nan | 0.0 | 0.5156 | 0.0 | 0.0 | 0.9183 | 0.8478 | 0.9246 | 0.0 | 0.1083 | 0.3940 | 0.0 | nan | 0.5472 | 0.8329 | 0.7961 | 0.5266 | 0.2013 | 0.0 | 0.2863 | 0.3887 | 0.0 | 0.7872 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4759 | nan | 0.0 | 0.6992 | 0.0 | 0.3924 | 0.2614 | 0.0 | nan | 0.0 | 0.3413 | 0.0 | 0.0 | 0.8182 | 0.7517 | 0.8855 | 0.0 | 0.0963 | 0.2896 | 0.0 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DrishtiSharma/a2c-PandaReachDense-v2
|
DrishtiSharma
| 2023-05-05T03:05:05Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-29T09:54:09Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.38 +/- 0.28
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the checkpoint (the `.zip` filename inside the repo is an assumption based on the default `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename is assumed, not verified)
checkpoint = load_from_hub("DrishtiSharma/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Soulaimen/swin-tiny-patch4-window7-224-bottomCleanedData
|
Soulaimen
| 2023-05-05T02:28:05Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-05T00:07:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-bottomCleanedData
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9931895573212258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-bottomCleanedData
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0238
- Accuracy: 0.9932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 7
- total_train_batch_size: 56
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
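The total train batch size above is not an independent setting: with gradient accumulation, the optimizer steps once per 7 per-device batches of 8. A minimal sketch of the arithmetic:

```python
train_batch_size = 8              # per-device batch size
gradient_accumulation_steps = 7   # backward passes per optimizer step

# Gradients are accumulated across 7 batches before each optimizer step,
# so each update effectively sees 8 * 7 = 56 examples.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 56
```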
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3257 | 1.0 | 141 | 0.2017 | 0.9330 |
| 0.2234 | 2.0 | 283 | 0.0655 | 0.9773 |
| 0.2719 | 2.99 | 424 | 0.0542 | 0.9773 |
| 0.1726 | 4.0 | 566 | 0.0446 | 0.9818 |
| 0.2053 | 4.99 | 707 | 0.0373 | 0.9864 |
| 0.1794 | 6.0 | 849 | 0.0413 | 0.9864 |
| 0.1645 | 7.0 | 991 | 0.0446 | 0.9818 |
| 0.1445 | 8.0 | 1132 | 0.0238 | 0.9932 |
| 0.1469 | 9.0 | 1274 | 0.0252 | 0.9909 |
| 0.0931 | 9.96 | 1410 | 0.0236 | 0.9921 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ramya2300/autotrain-final-sentiment-analysis-55566129341
|
Ramya2300
| 2023-05-05T02:15:26Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Ramya2300/autotrain-data-final-sentiment-analysis",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-05T02:09:52Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ramya2300/autotrain-data-final-sentiment-analysis
co2_eq_emissions:
emissions: 2.1068707556976243
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 55566129341
- CO2 Emissions (in grams): 2.1069
## Validation Metrics
- Loss: 0.652
- Accuracy: 0.780
- Macro F1: 0.761
- Micro F1: 0.780
- Weighted F1: 0.780
- Macro Precision: 0.759
- Micro Precision: 0.780
- Weighted Precision: 0.781
- Macro Recall: 0.763
- Micro Recall: 0.780
- Weighted Recall: 0.780
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ramya2300/autotrain-final-sentiment-analysis-55566129341
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ramya2300/autotrain-final-sentiment-analysis-55566129341", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ramya2300/autotrain-final-sentiment-analysis-55566129341", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
kujaomega/ppo-LunarLander-v2
|
kujaomega
| 2023-05-05T02:14:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-04T23:57:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.33 +/- 24.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the checkpoint (the `.zip` filename inside the repo is an assumption based on the default `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is assumed, not verified)
checkpoint = load_from_hub("kujaomega/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
character-aware-diffusion/charred
|
character-aware-diffusion
| 2023-05-05T02:10:27Z | 6 | 2 |
transformers
|
[
"transformers",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-04-24T21:01:00Z |
---
license: cc-by-nc-sa-4.0
---
|
DreamPerson/upscale
|
DreamPerson
| 2023-05-05T01:41:40Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T01:38:27Z |
---
license: creativeml-openrail-m
---
|
Ibrahim-Alam/finetuning-albert-base-v2-on-imdb
|
Ibrahim-Alam
| 2023-05-05T01:18:02Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-04T23:47:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-albert-base-v2-on-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.95812
- name: F1
type: f1
value: 0.9580680043253634
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-albert-base-v2-on-imdb
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1288
- Accuracy: 0.9581
- F1: 0.9581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
marianodo/ContrastiveLoss
|
marianodo
| 2023-05-05T01:11:45Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-05T01:10:43Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1337 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
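The loss above pulls similar pairs together and pushes dissimilar pairs apart up to the margin. A plain-Python sketch of the pairwise formula (the 0.5 factor and squared terms follow the sentence-transformers implementation as an assumption; this is an illustration, not the library code):

```python
def contrastive_loss(distance, label, margin=0.5):
    # label 1 = similar pair: penalize any remaining distance.
    # label 0 = dissimilar pair: penalize only distances inside the margin.
    if label == 1:
        return 0.5 * distance ** 2
    return 0.5 * max(0.0, margin - distance) ** 2

loss_similar = contrastive_loss(0.4, 1)     # small positive loss (~0.08)
loss_dissimilar = contrastive_loss(0.9, 0)  # 0.0: already past the margin
```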
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1337,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
lrthomps/ppo-Huggy
|
lrthomps
| 2023-05-05T01:09:37Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-05T01:04:15Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: lrthomps/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
japuralo/futurama
|
japuralo
| 2023-05-05T00:39:37Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-05-05T00:39:32Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
hli/distilroberta-base-sentence-transformer-eval-qqp
|
hli
| 2023-05-05T00:30:28Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-05T00:30:21Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hli/distilroberta-base-sentence-transformer-eval-qqp
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hli/distilroberta-base-sentence-transformer-eval-qqp')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hli/distilroberta-base-sentence-transformer-eval-qqp')
model = AutoModel.from_pretrained('hli/distilroberta-base-sentence-transformer-eval-qqp')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hli/distilroberta-base-sentence-transformer-eval-qqp)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3181 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
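For intuition, the triplet objective above penalizes an anchor that sits closer to a negative than to a positive by less than the margin. Here is a plain-Python sketch of the per-triplet loss, assuming the Euclidean metric and margin 5 from the config above (not the library's actual implementation, which operates on batched tensors):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=5.0):
    # Hinge on (d(anchor, positive) - d(anchor, negative) + margin)
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# Negative pushed beyond the margin: loss is zero
print(triplet_loss([0.0, 0.0], [0.0, 0.0], [10.0, 0.0]))  # 0.0
```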
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3181,
"weight_decay": 0.01
}
```
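The `WarmupLinear` schedule ramps the learning rate up over `warmup_steps` and then decays it linearly to zero. A sketch, under the assumption that the total step count is epochs × steps per epoch = 10 × 3181:

```python
def warmup_linear_lr(step, base_lr=2e-05, warmup_steps=3181, total_steps=31810):
    # Linear warmup followed by linear decay to zero
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(warmup_linear_lr(3181))  # peak learning rate: 2e-05
```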
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ethzanalytics/stablelm-tuned-alpha-7b-sharded-8bit
|
ethzanalytics
| 2023-05-04T23:58:46Z | 12 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"stableLM",
"sharded",
"8-bit",
"quantized",
"tuned",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-04-28T02:13:44Z |
---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- stableLM
- sharded
- 8-bit
- quantized
- tuned
inference: false
---
# stablelm-tuned-alpha-7b-sharded-8bit
This is a sharded checkpoint (with ~4GB shards) of the `stabilityai/stablelm-tuned-alpha-7b` model **in `8bit` precision** using `bitsandbytes`.
Refer to the [original model](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) for all details about the model. For more info on loading 8bit models, refer to the [example repo](https://huggingface.co/ybelkada/bloom-1b7-8bit) and/or the `4.28.0` [release info](https://github.com/huggingface/transformers/releases/tag/v4.28.0).
- total model size is only ~7 GB!
- this enables low-RAM loading, i.e. Colab :)
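The ~7 GB figure follows from storing roughly one byte per parameter in 8-bit precision. A back-of-the-envelope sketch (ignoring non-quantized buffers and embedding overhead):

```python
def checkpoint_size_gb(n_params, bits):
    # bits per parameter -> bytes -> gigabytes (decimal)
    return n_params * bits / 8 / 1e9

# ~7B parameters: fp16 vs int8
print(checkpoint_size_gb(7e9, 16))  # 14.0
print(checkpoint_size_gb(7e9, 8))   # 7.0
```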
## Basic Usage
<a href="https://colab.research.google.com/gist/pszemraj/4bd75aa3744f2a02a5c0ee499932b7eb/sharded-stablelm-testing-notebook.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
You can use this model as a drop-in replacement in the notebook for the standard sharded models.
### Python
Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work **you must have** `transformers>=4.28.0` and `bitsandbytes>0.37.2`.
```bash
pip install -U -q transformers bitsandbytes accelerate
```
Load the model. As it is serialized in 8-bit, you don't need to do anything special:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "ethzanalytics/stablelm-tuned-alpha-7b-sharded-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
|
conorjudge/distilbert-base-uncased-finetuned-diabetes_sentences
|
conorjudge
| 2023-05-04T23:41:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-04T23:38:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-diabetes_sentences
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-diabetes_sentences
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5278
- Accuracy: 0.8462
- F1: 0.8441
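For reference, the F1 reported above is the harmonic mean of precision and recall. A minimal sketch of the binary (per-class) form from raw counts — the Trainer here most likely reports a weighted average over classes:

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall from raw counts
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(tp=1, fp=1, fn=1))  # 0.5
```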
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1171 | 1.0 | 2 | 1.1097 | 0.4103 | 0.3239 |
| 1.0594 | 2.0 | 4 | 1.0910 | 0.5641 | 0.4791 |
| 1.0633 | 3.0 | 6 | 1.0726 | 0.5897 | 0.4859 |
| 1.0348 | 4.0 | 8 | 1.0520 | 0.6410 | 0.5779 |
| 0.9992 | 5.0 | 10 | 1.0326 | 0.5385 | 0.4980 |
| 0.9915 | 6.0 | 12 | 1.0260 | 0.4872 | 0.4518 |
| 0.9447 | 7.0 | 14 | 0.9811 | 0.5641 | 0.5369 |
| 0.8217 | 8.0 | 16 | 0.9087 | 0.8205 | 0.8205 |
| 0.8067 | 9.0 | 18 | 0.8497 | 0.8462 | 0.8437 |
| 0.7156 | 10.0 | 20 | 0.8001 | 0.8462 | 0.8437 |
| 0.6859 | 11.0 | 22 | 0.7691 | 0.8462 | 0.8437 |
| 0.5988 | 12.0 | 24 | 0.7399 | 0.8462 | 0.8437 |
| 0.5365 | 13.0 | 26 | 0.6851 | 0.8462 | 0.8437 |
| 0.4467 | 14.0 | 28 | 0.6255 | 0.8462 | 0.8437 |
| 0.4347 | 15.0 | 30 | 0.5791 | 0.8462 | 0.8437 |
| 0.363 | 16.0 | 32 | 0.5482 | 0.8462 | 0.8437 |
| 0.2946 | 17.0 | 34 | 0.5359 | 0.7949 | 0.7967 |
| 0.2343 | 18.0 | 36 | 0.4981 | 0.7949 | 0.7967 |
| 0.1999 | 19.0 | 38 | 0.4467 | 0.8718 | 0.8706 |
| 0.1615 | 20.0 | 40 | 0.4282 | 0.8718 | 0.8706 |
| 0.1314 | 21.0 | 42 | 0.4236 | 0.8718 | 0.8706 |
| 0.1386 | 22.0 | 44 | 0.4183 | 0.8718 | 0.8706 |
| 0.0973 | 23.0 | 46 | 0.4291 | 0.8462 | 0.8467 |
| 0.0853 | 24.0 | 48 | 0.4173 | 0.8462 | 0.8467 |
| 0.0732 | 25.0 | 50 | 0.3749 | 0.8462 | 0.8467 |
| 0.0641 | 26.0 | 52 | 0.3341 | 0.8974 | 0.8971 |
| 0.0541 | 27.0 | 54 | 0.3223 | 0.8974 | 0.8971 |
| 0.0481 | 28.0 | 56 | 0.3277 | 0.8974 | 0.8971 |
| 0.0383 | 29.0 | 58 | 0.3415 | 0.8974 | 0.8971 |
| 0.036 | 30.0 | 60 | 0.3609 | 0.8974 | 0.8971 |
| 0.0299 | 31.0 | 62 | 0.3823 | 0.8974 | 0.8971 |
| 0.0321 | 32.0 | 64 | 0.4026 | 0.8974 | 0.8971 |
| 0.03 | 33.0 | 66 | 0.4176 | 0.8718 | 0.8706 |
| 0.0277 | 34.0 | 68 | 0.4201 | 0.8718 | 0.8706 |
| 0.0236 | 35.0 | 70 | 0.4129 | 0.8718 | 0.8706 |
| 0.022 | 36.0 | 72 | 0.4003 | 0.8974 | 0.8971 |
| 0.022 | 37.0 | 74 | 0.3865 | 0.8974 | 0.8971 |
| 0.0211 | 38.0 | 76 | 0.3731 | 0.8974 | 0.8971 |
| 0.017 | 39.0 | 78 | 0.3634 | 0.8718 | 0.8705 |
| 0.0188 | 40.0 | 80 | 0.3618 | 0.8718 | 0.8705 |
| 0.0169 | 41.0 | 82 | 0.3683 | 0.8718 | 0.8705 |
| 0.0161 | 42.0 | 84 | 0.3810 | 0.8718 | 0.8705 |
| 0.0162 | 43.0 | 86 | 0.3944 | 0.8718 | 0.8705 |
| 0.0141 | 44.0 | 88 | 0.4091 | 0.8974 | 0.8971 |
| 0.0132 | 45.0 | 90 | 0.4233 | 0.8974 | 0.8971 |
| 0.0143 | 46.0 | 92 | 0.4335 | 0.8718 | 0.8706 |
| 0.0142 | 47.0 | 94 | 0.4413 | 0.8718 | 0.8706 |
| 0.0125 | 48.0 | 96 | 0.4436 | 0.8718 | 0.8706 |
| 0.0115 | 49.0 | 98 | 0.4437 | 0.8718 | 0.8706 |
| 0.0106 | 50.0 | 100 | 0.4410 | 0.8462 | 0.8441 |
| 0.0109 | 51.0 | 102 | 0.4376 | 0.8462 | 0.8441 |
| 0.0119 | 52.0 | 104 | 0.4341 | 0.8462 | 0.8441 |
| 0.012 | 53.0 | 106 | 0.4322 | 0.8718 | 0.8705 |
| 0.0122 | 54.0 | 108 | 0.4314 | 0.8718 | 0.8705 |
| 0.0107 | 55.0 | 110 | 0.4315 | 0.8718 | 0.8705 |
| 0.0102 | 56.0 | 112 | 0.4324 | 0.8718 | 0.8705 |
| 0.0102 | 57.0 | 114 | 0.4351 | 0.8462 | 0.8441 |
| 0.0098 | 58.0 | 116 | 0.4379 | 0.8462 | 0.8441 |
| 0.009 | 59.0 | 118 | 0.4399 | 0.8462 | 0.8441 |
| 0.0099 | 60.0 | 120 | 0.4415 | 0.8462 | 0.8441 |
| 0.0094 | 61.0 | 122 | 0.4429 | 0.8462 | 0.8441 |
| 0.008 | 62.0 | 124 | 0.4479 | 0.8462 | 0.8441 |
| 0.0084 | 63.0 | 126 | 0.4531 | 0.8462 | 0.8441 |
| 0.0079 | 64.0 | 128 | 0.4571 | 0.8462 | 0.8441 |
| 0.0079 | 65.0 | 130 | 0.4607 | 0.8462 | 0.8441 |
| 0.0076 | 66.0 | 132 | 0.4637 | 0.8462 | 0.8441 |
| 0.0072 | 67.0 | 134 | 0.4659 | 0.8462 | 0.8441 |
| 0.0076 | 68.0 | 136 | 0.4693 | 0.8462 | 0.8441 |
| 0.0078 | 69.0 | 138 | 0.4726 | 0.8462 | 0.8441 |
| 0.0066 | 70.0 | 140 | 0.4729 | 0.8462 | 0.8441 |
| 0.0082 | 71.0 | 142 | 0.4711 | 0.8462 | 0.8441 |
| 0.0075 | 72.0 | 144 | 0.4673 | 0.8462 | 0.8441 |
| 0.0065 | 73.0 | 146 | 0.4645 | 0.8462 | 0.8441 |
| 0.0064 | 74.0 | 148 | 0.4623 | 0.8462 | 0.8441 |
| 0.0075 | 75.0 | 150 | 0.4613 | 0.8718 | 0.8705 |
| 0.0064 | 76.0 | 152 | 0.4616 | 0.8718 | 0.8705 |
| 0.0063 | 77.0 | 154 | 0.4627 | 0.8462 | 0.8441 |
| 0.0072 | 78.0 | 156 | 0.4635 | 0.8462 | 0.8441 |
| 0.0058 | 79.0 | 158 | 0.4636 | 0.8462 | 0.8441 |
| 0.006 | 80.0 | 160 | 0.4641 | 0.8462 | 0.8441 |
| 0.0061 | 81.0 | 162 | 0.4651 | 0.8462 | 0.8441 |
| 0.0054 | 82.0 | 164 | 0.4675 | 0.8462 | 0.8441 |
| 0.0066 | 83.0 | 166 | 0.4692 | 0.8462 | 0.8441 |
| 0.0056 | 84.0 | 168 | 0.4699 | 0.8462 | 0.8441 |
| 0.0058 | 85.0 | 170 | 0.4706 | 0.8462 | 0.8441 |
| 0.0056 | 86.0 | 172 | 0.4718 | 0.8462 | 0.8441 |
| 0.005 | 87.0 | 174 | 0.4745 | 0.8462 | 0.8441 |
| 0.0062 | 88.0 | 176 | 0.4766 | 0.8462 | 0.8441 |
| 0.0052 | 89.0 | 178 | 0.4786 | 0.8462 | 0.8441 |
| 0.0055 | 90.0 | 180 | 0.4801 | 0.8462 | 0.8441 |
| 0.0052 | 91.0 | 182 | 0.4811 | 0.8462 | 0.8441 |
| 0.0052 | 92.0 | 184 | 0.4818 | 0.8462 | 0.8441 |
| 0.0057 | 93.0 | 186 | 0.4832 | 0.8462 | 0.8441 |
| 0.005 | 94.0 | 188 | 0.4844 | 0.8462 | 0.8441 |
| 0.0055 | 95.0 | 190 | 0.4850 | 0.8462 | 0.8441 |
| 0.005 | 96.0 | 192 | 0.4852 | 0.8462 | 0.8441 |
| 0.0055 | 97.0 | 194 | 0.4860 | 0.8462 | 0.8441 |
| 0.0047 | 98.0 | 196 | 0.4872 | 0.8462 | 0.8441 |
| 0.0043 | 99.0 | 198 | 0.4889 | 0.8462 | 0.8441 |
| 0.0049 | 100.0 | 200 | 0.4902 | 0.8462 | 0.8441 |
| 0.0048 | 101.0 | 202 | 0.4909 | 0.8462 | 0.8441 |
| 0.0044 | 102.0 | 204 | 0.4908 | 0.8462 | 0.8441 |
| 0.004 | 103.0 | 206 | 0.4915 | 0.8462 | 0.8441 |
| 0.0044 | 104.0 | 208 | 0.4918 | 0.8462 | 0.8441 |
| 0.0044 | 105.0 | 210 | 0.4935 | 0.8462 | 0.8441 |
| 0.0043 | 106.0 | 212 | 0.4956 | 0.8462 | 0.8441 |
| 0.004 | 107.0 | 214 | 0.4978 | 0.8462 | 0.8441 |
| 0.0047 | 108.0 | 216 | 0.4987 | 0.8462 | 0.8441 |
| 0.0037 | 109.0 | 218 | 0.4994 | 0.8462 | 0.8441 |
| 0.0046 | 110.0 | 220 | 0.5012 | 0.8462 | 0.8441 |
| 0.004 | 111.0 | 222 | 0.5021 | 0.8462 | 0.8441 |
| 0.004 | 112.0 | 224 | 0.5030 | 0.8462 | 0.8441 |
| 0.004 | 113.0 | 226 | 0.5044 | 0.8462 | 0.8441 |
| 0.0039 | 114.0 | 228 | 0.5053 | 0.8462 | 0.8441 |
| 0.0038 | 115.0 | 230 | 0.5058 | 0.8462 | 0.8441 |
| 0.0041 | 116.0 | 232 | 0.5054 | 0.8462 | 0.8441 |
| 0.0038 | 117.0 | 234 | 0.5047 | 0.8462 | 0.8441 |
| 0.0035 | 118.0 | 236 | 0.5043 | 0.8462 | 0.8441 |
| 0.004 | 119.0 | 238 | 0.5035 | 0.8462 | 0.8441 |
| 0.0039 | 120.0 | 240 | 0.5029 | 0.8462 | 0.8441 |
| 0.0036 | 121.0 | 242 | 0.5019 | 0.8462 | 0.8441 |
| 0.0042 | 122.0 | 244 | 0.5012 | 0.8462 | 0.8441 |
| 0.0033 | 123.0 | 246 | 0.5005 | 0.8462 | 0.8441 |
| 0.0034 | 124.0 | 248 | 0.5003 | 0.8462 | 0.8441 |
| 0.0038 | 125.0 | 250 | 0.5002 | 0.8462 | 0.8441 |
| 0.0035 | 126.0 | 252 | 0.4998 | 0.8462 | 0.8441 |
| 0.0033 | 127.0 | 254 | 0.5002 | 0.8462 | 0.8441 |
| 0.0041 | 128.0 | 256 | 0.5010 | 0.8462 | 0.8441 |
| 0.0036 | 129.0 | 258 | 0.5025 | 0.8462 | 0.8441 |
| 0.0036 | 130.0 | 260 | 0.5037 | 0.8462 | 0.8441 |
| 0.0032 | 131.0 | 262 | 0.5049 | 0.8462 | 0.8441 |
| 0.0033 | 132.0 | 264 | 0.5061 | 0.8462 | 0.8441 |
| 0.0038 | 133.0 | 266 | 0.5075 | 0.8462 | 0.8441 |
| 0.0041 | 134.0 | 268 | 0.5087 | 0.8462 | 0.8441 |
| 0.0034 | 135.0 | 270 | 0.5094 | 0.8462 | 0.8441 |
| 0.0032 | 136.0 | 272 | 0.5107 | 0.8462 | 0.8441 |
| 0.0035 | 137.0 | 274 | 0.5123 | 0.8462 | 0.8441 |
| 0.0032 | 138.0 | 276 | 0.5138 | 0.8462 | 0.8441 |
| 0.0031 | 139.0 | 278 | 0.5143 | 0.8462 | 0.8441 |
| 0.0034 | 140.0 | 280 | 0.5145 | 0.8462 | 0.8441 |
| 0.0036 | 141.0 | 282 | 0.5151 | 0.8462 | 0.8441 |
| 0.003 | 142.0 | 284 | 0.5160 | 0.8462 | 0.8441 |
| 0.0034 | 143.0 | 286 | 0.5162 | 0.8462 | 0.8441 |
| 0.0031 | 144.0 | 288 | 0.5160 | 0.8462 | 0.8441 |
| 0.0031 | 145.0 | 290 | 0.5157 | 0.8462 | 0.8441 |
| 0.0032 | 146.0 | 292 | 0.5155 | 0.8462 | 0.8441 |
| 0.0029 | 147.0 | 294 | 0.5159 | 0.8462 | 0.8441 |
| 0.0032 | 148.0 | 296 | 0.5162 | 0.8462 | 0.8441 |
| 0.0036 | 149.0 | 298 | 0.5164 | 0.8462 | 0.8441 |
| 0.0028 | 150.0 | 300 | 0.5167 | 0.8462 | 0.8441 |
| 0.0026 | 151.0 | 302 | 0.5172 | 0.8462 | 0.8441 |
| 0.0028 | 152.0 | 304 | 0.5174 | 0.8462 | 0.8441 |
| 0.0031 | 153.0 | 306 | 0.5172 | 0.8462 | 0.8441 |
| 0.0029 | 154.0 | 308 | 0.5168 | 0.8462 | 0.8441 |
| 0.0031 | 155.0 | 310 | 0.5168 | 0.8462 | 0.8441 |
| 0.0033 | 156.0 | 312 | 0.5167 | 0.8462 | 0.8441 |
| 0.003 | 157.0 | 314 | 0.5168 | 0.8462 | 0.8441 |
| 0.0029 | 158.0 | 316 | 0.5175 | 0.8462 | 0.8441 |
| 0.0031 | 159.0 | 318 | 0.5181 | 0.8462 | 0.8441 |
| 0.003 | 160.0 | 320 | 0.5186 | 0.8462 | 0.8441 |
| 0.0031 | 161.0 | 322 | 0.5190 | 0.8462 | 0.8441 |
| 0.0032 | 162.0 | 324 | 0.5194 | 0.8462 | 0.8441 |
| 0.0028 | 163.0 | 326 | 0.5201 | 0.8462 | 0.8441 |
| 0.0026 | 164.0 | 328 | 0.5209 | 0.8462 | 0.8441 |
| 0.0032 | 165.0 | 330 | 0.5218 | 0.8462 | 0.8441 |
| 0.0031 | 166.0 | 332 | 0.5226 | 0.8462 | 0.8441 |
| 0.0029 | 167.0 | 334 | 0.5234 | 0.8462 | 0.8441 |
| 0.0032 | 168.0 | 336 | 0.5239 | 0.8462 | 0.8441 |
| 0.0031 | 169.0 | 338 | 0.5240 | 0.8462 | 0.8441 |
| 0.003 | 170.0 | 340 | 0.5243 | 0.8462 | 0.8441 |
| 0.0031 | 171.0 | 342 | 0.5246 | 0.8462 | 0.8441 |
| 0.0024 | 172.0 | 344 | 0.5250 | 0.8462 | 0.8441 |
| 0.0025 | 173.0 | 346 | 0.5256 | 0.8462 | 0.8441 |
| 0.0028 | 174.0 | 348 | 0.5265 | 0.8462 | 0.8441 |
| 0.003 | 175.0 | 350 | 0.5272 | 0.8462 | 0.8441 |
| 0.003 | 176.0 | 352 | 0.5275 | 0.8462 | 0.8441 |
| 0.0027 | 177.0 | 354 | 0.5278 | 0.8462 | 0.8441 |
| 0.0027 | 178.0 | 356 | 0.5277 | 0.8462 | 0.8441 |
| 0.0028 | 179.0 | 358 | 0.5276 | 0.8462 | 0.8441 |
| 0.0027 | 180.0 | 360 | 0.5274 | 0.8462 | 0.8441 |
| 0.0028 | 181.0 | 362 | 0.5272 | 0.8462 | 0.8441 |
| 0.0035 | 182.0 | 364 | 0.5270 | 0.8462 | 0.8441 |
| 0.003 | 183.0 | 366 | 0.5269 | 0.8462 | 0.8441 |
| 0.0028 | 184.0 | 368 | 0.5267 | 0.8462 | 0.8441 |
| 0.0026 | 185.0 | 370 | 0.5266 | 0.8462 | 0.8441 |
| 0.0033 | 186.0 | 372 | 0.5265 | 0.8462 | 0.8441 |
| 0.0028 | 187.0 | 374 | 0.5265 | 0.8462 | 0.8441 |
| 0.0025 | 188.0 | 376 | 0.5267 | 0.8462 | 0.8441 |
| 0.0029 | 189.0 | 378 | 0.5268 | 0.8462 | 0.8441 |
| 0.0029 | 190.0 | 380 | 0.5269 | 0.8462 | 0.8441 |
| 0.0024 | 191.0 | 382 | 0.5270 | 0.8462 | 0.8441 |
| 0.0031 | 192.0 | 384 | 0.5271 | 0.8462 | 0.8441 |
| 0.0028 | 193.0 | 386 | 0.5273 | 0.8462 | 0.8441 |
| 0.0026 | 194.0 | 388 | 0.5274 | 0.8462 | 0.8441 |
| 0.0027 | 195.0 | 390 | 0.5275 | 0.8462 | 0.8441 |
| 0.0026 | 196.0 | 392 | 0.5276 | 0.8462 | 0.8441 |
| 0.0026 | 197.0 | 394 | 0.5277 | 0.8462 | 0.8441 |
| 0.0028 | 198.0 | 396 | 0.5277 | 0.8462 | 0.8441 |
| 0.0026 | 199.0 | 398 | 0.5278 | 0.8462 | 0.8441 |
| 0.003 | 200.0 | 400 | 0.5278 | 0.8462 | 0.8441 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chribeiro/reinforce-CartPole-v1
|
chribeiro
| 2023-05-04T23:37:49Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-04T22:46:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
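At the core of REINFORCE is weighting the log-probability of each action by the discounted return that followed it. A self-contained sketch of the return computation (the `gamma` value is an assumed hyperparameter, not taken from this run):

```python
def discounted_returns(rewards, gamma=0.99):
    # G_t = r_t + gamma * G_{t+1}, computed backwards over one episode
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

print(discounted_returns([1.0, 1.0], gamma=0.5))  # [1.5, 1.0]
```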
|
huggingtweets/tstorm106
|
huggingtweets
| 2023-05-04T23:31:47Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-04T23:31:39Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1411783471228461058/NACe_2Kf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">TStorm</div>
<div style="text-align: center; font-size: 14px;">@tstorm106</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from TStorm.
| Data | TStorm |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 171 |
| Short tweets | 900 |
| Tweets kept | 2149 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/cxkqs7up/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tstorm106's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/72bi3ylz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/72bi3ylz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tstorm106')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
juro95/xlm-roberta-finetuned-ner-full_equal_dist
|
juro95
| 2023-05-04T23:28:45Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-04T15:47:27Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: juro95/xlm-roberta-finetuned-ner-full_equal_dist
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juro95/xlm-roberta-finetuned-ner-full_equal_dist
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0120
- Validation Loss: 0.0324
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 87500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
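The `PolynomialDecay` config above (power 1.0, no cycling) reduces to a straight line from the initial learning rate to zero over 87,500 steps. A sketch mirroring those config values:

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=87500, end_lr=0.0, power=1.0):
    # With power=1.0 and cycle=False this is plain linear decay
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(87500))  # 0.0
```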
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1425 | 0.0591 | 0 |
| 0.0528 | 0.0426 | 1 |
| 0.0310 | 0.0348 | 2 |
| 0.0193 | 0.0322 | 3 |
| 0.0120 | 0.0324 | 4 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.6.5
- Datasets 2.3.2
- Tokenizers 0.13.2
|
uisikdag/ayla_ozetler300_bertuncased
|
uisikdag
| 2023-05-04T23:00:04Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-04T21:43:45Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ayla_ozetler300_bertuncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ayla_ozetler300_bertuncased
This model is a fine-tuned version of [dbmdz/bert-base-turkish-uncased](https://huggingface.co/dbmdz/bert-base-turkish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1056
- Accuracy: 0.9756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
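The total train batch size of 128 is simply the per-device batch size multiplied by the gradient-accumulation steps. A one-line sketch (the `n_devices` factor is an assumption; this run appears to use a single device):

```python
def effective_batch_size(per_device_batch, grad_accum_steps, n_devices=1):
    # Gradients are accumulated over grad_accum_steps before each optimizer step
    return per_device_batch * grad_accum_steps * n_devices

print(effective_batch_size(32, 4))  # 128
```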
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.97 | 8 | 1.5103 | 0.48 |
| 1.5956 | 1.94 | 16 | 0.8089 | 0.7911 |
| 0.9875 | 2.91 | 24 | 0.3019 | 0.9289 |
| 0.3379 | 4.0 | 33 | 0.1606 | 0.9556 |
| 0.1349 | 4.97 | 41 | 0.1423 | 0.96 |
| 0.1349 | 5.94 | 49 | 0.1177 | 0.9667 |
| 0.0697 | 6.91 | 57 | 0.1122 | 0.9689 |
| 0.0434 | 8.0 | 66 | 0.1065 | 0.9756 |
| 0.0238 | 8.97 | 74 | 0.1060 | 0.9756 |
| 0.0288 | 9.7 | 80 | 0.1056 | 0.9756 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.11.0
|
mayank-mishra/santacoder-GPTQ-4bit-128g
|
mayank-mishra
| 2023-05-04T22:35:22Z | 0 | 2 | null |
[
"arxiv:2301.03988",
"arxiv:2210.17323",
"license:openrail",
"region:us"
] | null | 2023-05-04T22:33:55Z |
---
license: openrail
---
# GPTQ-for-SantaCoder
Visit [GPTQ-for-SantaCoder](https://github.com/mayank31398/GPTQ-for-SantaCoder) for instructions on how to use the model weights here.
If you want 8-bit weights, visit [santacoder-GPTQ-8bit-128g](https://huggingface.co/mayank31398/santacoder-GPTQ-8bit-128g).
## Results
| [SantaCoder](https://arxiv.org/abs/2301.03988) | Bits | group-size | memory(MiB) | wikitext2 | ptb | c4 | stack | checkpoint size(MB) |
| -------------------------------------------------- | ---- | ---------- | ----------- | --------- | ---------- | ---------- | ---------- | ------------------- |
| FP32 | 32 | - | 4344.722 | 24.927 | 38.574 | 27.779 | 2.619 | 4394 |
| BF16 | 16 | - | 2173.680 | 24.960 | 38.597 | 27.794 | 2.621 | 2195 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 8 | -1 | 1396.548 | 24.936 | 38.592 | 27.785 | 2.619 | 1411 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | -1 | 911.384 | 26.581 | 40.717 | 29.232 | 2.658 | 913 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 3 | -1 | - | 11761.473 | 7273.338 | 9124.941 | 2485.844 | 789 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 2 | -1 | - | 67976.797 | 68994.484 | 73294.438 | 45370.488 | 649 |
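To make the bit-width column concrete: group-wise quantization stores each group of weights as low-bit integers plus a shared scale. The toy symmetric round-to-nearest sketch below is illustrative only — it is not the actual GPTQ algorithm, which chooses quantized values to minimize layer output error:

```python
def quantize_group(weights, bits=4):
    # Symmetric per-group quantization: integer codes plus one shared scale
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    codes = [round(w / scale) for w in weights]
    dequantized = [c * scale for c in codes]
    return codes, scale, dequantized

codes, scale, deq = quantize_group([7.0, -3.5, 1.0], bits=4)
print(codes)  # [7, -4, 1]
```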
# License
The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license).
# Acknowledgements
Thanks to everyone in BigCode who worked so hard to create these code models.
|
mayank-mishra/santacoder-GPTQ-8bit-128g
|
mayank-mishra
| 2023-05-04T22:32:59Z | 0 | 1 | null |
[
"arxiv:2301.03988",
"arxiv:2210.17323",
"license:openrail",
"region:us"
] | null | 2023-05-04T22:25:06Z |
---
license: openrail
---
# GPTQ-for-SantaCoder
Visit [GPTQ-for-SantaCoder](https://github.com/mayank31398/GPTQ-for-SantaCoder) for instructions on how to use the model weights here.
If you want 4-bit weights, visit [santacoder-GPTQ-4bit-128g](https://huggingface.co/mayank31398/santacoder-GPTQ-4bit-128g).
## Results
| [SantaCoder](https://arxiv.org/abs/2301.03988) | Bits | group-size | memory(MiB) | wikitext2 | ptb | c4 | stack | checkpoint size(MB) |
| -------------------------------------------------- | ---- | ---------- | ----------- | --------- | ---------- | ---------- | ---------- | ------------------- |
| FP32 | 32 | - | 4344.722 | 24.927 | 38.574 | 27.779 | 2.619 | 4394 |
| BF16 | 16 | - | 2173.680 | 24.960 | 38.597 | 27.794 | 2.621 | 2195 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 8 | -1 | 1396.548 | 24.936 | 38.592 | 27.785 | 2.619 | 1411 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | -1 | 911.384 | 26.581 | 40.717 | 29.232 | 2.658 | 913 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 3 | -1 | - | 11761.473 | 7273.338 | 9124.941 | 2485.844 | 789 |
| [GPTQ](https://arxiv.org/abs/2210.17323) | 2 | -1 | - | 67976.797 | 68994.484 | 73294.438 | 45370.488 | 649 |
# License
The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license).
# Acknowledgements
Thanks to everyone in BigCode who worked so hard to create these code models.
|
GoldfieldGeek/ppo-Huggy
|
GoldfieldGeek
| 2023-05-04T22:17:52Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-04T22:17:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: GoldfieldGeek/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
az00/selu-segformer-b0-scene-parse-150-cvfinal
|
az00
| 2023-05-04T22:09:44Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-05-04T20:18:19Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: selu-segformer-b0-scene-parse-150-cvfinal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# selu-segformer-b0-scene-parse-150-cvfinal
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3127
- Mean Iou: 0.0939
- Mean Accuracy: 0.1473
- Overall Accuracy: 0.5650
- Per Category Iou: [0.5336831298064109, 0.7946016311618073, 0.9542612124083791, 0.4698340415687516, 0.7895064764715527, 0.6196465123602583, 0.0, 0.28656549336868975, 0.10462468913822695, nan, 0.0, 0.0, 0.024119941721107298, 0.0, 0.0, 0.0, 0.0, nan, 0.11790298802632052, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan]
- Per Category Accuracy: [0.9584331199463405, 0.9359158986175116, 0.9802837552806004, 0.7382528506925844, 0.797905481540399, 0.704932715344484, nan, 0.3646332654803336, 0.13570692997334627, nan, 0.0, nan, 0.024543985001960842, nan, 0.0, 0.0, nan, nan, 0.39719777113785026, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan]
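Mean IoU above averages, over the labeled classes, the intersection-over-union between predicted and ground-truth masks (classes shown as `nan` never occur in the evaluation labels). A minimal per-class sketch over flattened per-pixel labels:

```python
def class_iou(pred, target, cls):
    # IoU for one class over flattened per-pixel label lists
    inter = sum(1 for p, t in zip(pred, target) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, target) if p == cls or t == cls)
    return inter / union if union else float("nan")

print(class_iou([1, 1, 0, 2], [1, 0, 0, 2], cls=1))  # 0.5
```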
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.8909 | 1.0 | 20 | 4.8871 | 0.0125 | 0.0437 | 0.2028 | [0.2508561911058272, 0.04515398550724638, 0.44804797891594955, 0.21111439623524775, 0.31233771405814414, 7.014449766519029e-05, 0.0, 0.0, 0.002980895171853123, nan, 0.0, nan, 0.027754770004437213, nan, 0.0, 0.0205260184545854, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0003843936190659235, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.004370956146657081, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0] | [0.39185298015397546, 0.06756573597180808, 0.6783385713684792, 0.21857860768857937, 0.32489891959965533, 7.836815118335908e-05, nan, 0.0, 0.0029985461594378483, nan, 0.0, nan, 0.0358977302074665, nan, 0.0, 0.06666893200584458, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0003843936190659235, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.005191611448869459, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.7422 | 2.0 | 40 | 4.5099 | 0.0250 | 0.0841 | 0.3808 | [0.34984703137884793, 0.3589339267863093, 0.4974385245901639, 0.37037865924560065, 0.5684109932116259, 0.01841334153311869, 0.0, 0.001964517481175478, 0.0029039377395748637, nan, 0.0, nan, 0.06118068223901683, nan, 0.0, 0.03404671283875128, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.03711163107543761, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0] | [0.7327161706761063, 0.5109870561127677, 0.9394270811424725, 0.4854978189331905, 0.5716510903426791, 0.02279393654418844, nan, 0.0019648747076680623, 0.003028834504482675, nan, 0.0, nan, 0.08533004294719121, nan, 0.0, 0.05562540351354107, nan, nan, 0.0, 0.0, nan, nan, 0.03711163107543761, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.0553 | 3.0 | 60 | 3.8580 | 0.0431 | 0.0969 | 0.4522 | [0.39298635765482803, 0.5115893703108798, 0.7615359747458901, 0.3781477627471384, 0.608186554204912, 0.07249520170070661, 0.0, 0.00026263422123902746, 0.0002654632333421821, nan, 0.0, nan, 0.031058319540453704, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.004609370441682712, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.890797371986272, 0.8525853889943074, 0.9625745894201705, 0.5214471569602817, 0.6093656790614437, 0.09429927677391908, nan, 0.00026265677326154047, 0.00027259510540344074, nan, 0.0, nan, 0.03661511090705616, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.004679678938482955, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.187 | 4.0 | 80 | 3.6772 | 0.0470 | 0.1028 | 0.4582 | [0.41381878117227805, 0.6254016101621327, 0.723075600130345, 0.4342820137592303, 0.5828271454173473, 0.03514338327468898, 0.0, 0.0013178423739339867, 0.0032081449763366166, nan, 0.0, nan, 0.08433073627395674, nan, 0.0, 0.05683375940128191, 0.0, 0.0, 0.0013988020515763424, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.8694650748085648, 0.8541525481160206, 0.9600627362373881, 0.6586184025917706, 0.6089514151256048, 0.04550950493719352, nan, 0.0013183349581011937, 0.003664889750424037, nan, 0.0, nan, 0.10580887065147733, nan, 0.0, 0.10425090896734514, nan, nan, 0.0015747395623031575, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.9225 | 5.0 | 100 | 3.4916 | 0.0535 | 0.1072 | 0.4745 | [0.45367797998078097, 0.6130674322082363, 0.7883869096437359, 0.44548332225614223, 0.6276848544314471, 0.03952981755468821, 0.0, 0.008173727017699651, 0.0006412792839502804, nan, 0.0, nan, 0.019051546391752577, nan, 0.0, 0.034960804888587126, 0.0, 0.0, 0.020120972719417356, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.8962918531278948, 0.9171862293304418, 0.9473472306518199, 0.7566005969235479, 0.6784317624444887, 0.0487561854862184, nan, 0.008182768705455683, 0.0007117761085534286, nan, 0.0, nan, 0.019885792992625325, nan, 0.0, 0.09623160827754935, nan, nan, 0.026326415246709197, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.3489 | 6.0 | 120 | 3.4303 | 0.0520 | 0.1086 | 0.4474 | [0.42187173154853513, 0.708063699670283, 0.5798242120634122, 0.3861172591019147, 0.6270752544837616, 0.02814913099490091, 0.0, 0.00023740131429408467, 0.017860624731895643, nan, 0.0, nan, 0.027602765692416008, nan, 0.0, 0.0016909891128098215, 0.0, 0.0, 0.11093712358467839, 0.0, nan, nan, 0.00014647722279185587, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.7836365221219719, 0.8768382352941176, 0.9973919993269675, 0.8288627841126502, 0.6859713660767548, 0.040852197666868185, nan, 0.00023740131429408467, 0.020808093045795978, nan, 0.0, nan, 0.029250002391268997, nan, 0.0, 0.0024805463998097115, nan, nan, 0.18594040216425745, 0.0, nan, nan, 0.00014809110564819477, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.185 | 7.0 | 140 | 3.2048 | 0.0641 | 0.1138 | 0.4879 | [0.4625065290705925, 0.7300395796369592, 0.8077190100423401, 0.42056456498035616, 0.6794219083593365, 0.044713581380198694, 0.0, 0.028159846611837125, 0.024135304777162067, nan, 0.0, 0.0, 0.03210493557646753, nan, 0.0, 0.009216222146861098, 0.0, 0.0, 0.09707657301147606, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9052804046916575, 0.9062415288696124, 0.9927588921405436, 0.7748463049412004, 0.7540763571286538, 0.06762051902106983, nan, 0.028190143299474182, 0.027834989096195785, nan, 0.0, nan, 0.033056902637091455, nan, 0.0, 0.015698800502905296, nan, nan, 0.15848340466768956, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.0486 | 8.0 | 160 | 3.0909 | 0.0698 | 0.1105 | 0.4975 | [0.4651597922984161, 0.7260885140196046, 0.9206514284429353, 0.45898710456104685, 0.7276357698339558, 0.04093562606727295, 0.0, 0.015882548879086716, 0.005692492407655908, nan, 0.0, 0.0, 0.010177352573912554, nan, 0.0, 0.008494170905639253, 0.0, 0.0, 0.04026935665611418, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9556969599371216, 0.9443616156139876, 0.9841356641087428, 0.7372643555011352, 0.7457579372970107, 0.05770135017129039, nan, 0.01589578587411669, 0.006103101526532591, nan, 0.0, nan, 0.010368542378069194, nan, 0.0, 0.011859050596350538, nan, nan, 0.06060728417992409, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.9697 | 9.0 | 180 | 2.9824 | 0.0718 | 0.1146 | 0.5030 | [0.47571929063641566, 0.7620331717703751, 0.9091288105282214, 0.45845890761854624, 0.6875787794176307, 0.06392846340574371, 0.0, 0.029610548726600787, 0.009870234653398776, nan, 0.0, 0.0, 0.04909003605961605, nan, 0.0, 0.0034267639047535366, 0.0, 0.0, 0.06778747181648857, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9398650535338866, 0.9216335727839523, 0.99588368557367, 0.8275235325629449, 0.7140916020414927, 0.09228409574348985, nan, 0.02978628830621739, 0.011115822631451418, nan, 0.0, nan, 0.04987230623547304, nan, 0.0, 0.005300893676305685, nan, nan, 0.11168537511103933, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.975 | 10.0 | 200 | 2.9308 | 0.0751 | 0.1232 | 0.5169 | [0.49561224561439093, 0.7497106384686921, 0.9130948912615037, 0.3754963905075124, 0.7629068787092295, 0.22973931751081408, 0.0, 0.05083747710023554, 0.015496721582456333, nan, 0.0, 0.0, 0.012014619855137467, nan, 0.0, 0.0, 0.0, 0.0, 0.07644850942663618, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9386077791452011, 0.9492494578476552, 0.9605134275979352, 0.8774010867069718, 0.7647146550009942, 0.33713978639081077, nan, 0.05102107820605424, 0.017037194087715046, nan, 0.0, nan, 0.01204243067711173, nan, 0.0, 0.0, nan, nan, 0.14309941048211258, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.5359 | 11.0 | 220 | 2.7183 | 0.0707 | 0.1091 | 0.4964 | [0.436494042818339, 0.7312176249886404, 0.8698091735772315, 0.449270633714399, 0.715452001258549, 0.07408947729180626, 0.0, 0.023337516948682666, 0.0007162605469365537, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.023290337478969417, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9563026973041733, 0.9542474247763622, 0.9920918689269339, 0.6937386801357108, 0.7159143633591833, 0.10770023062627349, nan, 0.023386555003864086, 0.0007572086261206688, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.028506823871436646, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.5711 | 12.0 | 240 | 2.8566 | 0.0763 | 0.1213 | 0.5148 | [0.48463778209415787, 0.7972517840126281, 0.8992532843509164, 0.3620071881258745, 0.7991759196438913, 0.23374793779079467, 0.0, 0.05480187511495767, 0.017412369340310773, nan, 0.0, 0.0, 0.010156830611505994, nan, 0.0, 0.0024294237037948215, 0.0, 0.0, 0.07926785671727163, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9499483645829794, 0.9284274193548387, 0.9472931476885542, 0.8414836356215403, 0.8002750712533969, 0.2823492532634737, nan, 0.055683235931446584, 0.0197025684516598, nan, 0.0, nan, 0.010196371010167675, nan, 0.0, 0.0026844269258214687, nan, nan, 0.13429702010821287, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.4239 | 13.0 | 260 | 2.8176 | 0.0785 | 0.1216 | 0.5215 | [0.4772270741554508, 0.7986058009962806, 0.9456717376101984, 0.38306746584232015, 0.7562297037577043, 0.16837090306762553, 0.0, 0.12381028314015488, 0.015467904098994586, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0001597290994473373, 0.0, nan, 0.09785982304164544, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9578687500092338, 0.9112649091894822, 0.9914008088407618, 0.8282059131144613, 0.7563299529396169, 0.2298761783211303, nan, 0.12549942670108144, 0.016355706324206444, nan, 0.0, nan, 0.0, nan, 0.0, 0.00016990043834313093, nan, nan, 0.17059678591617541, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.5524 | 14.0 | 280 | 2.6836 | 0.0803 | 0.1330 | 0.5227 | [0.47872884863743637, 0.7690534821348954, 0.8143990086741016, 0.39197327448075703, 0.7833511205976521, 0.4254791369299206, 0.0, 0.05692139759856595, 0.015772870662460567, nan, 0.0, 0.0, 5.655842013479757e-05, nan, 0.0, 0.0, 0.0, nan, 0.12069485104184605, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.910749769893671, 0.9497661968013011, 0.987344586595838, 0.8268475293997603, 0.790581295154769, 0.687467813080764, nan, 0.05742081150840754, 0.016658589774654713, nan, 0.0, nan, 5.739045596717266e-05, nan, 0.0, 0.0, nan, nan, 0.22639909553420012, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2365 | 15.0 | 300 | 2.6524 | 0.0783 | 0.1216 | 0.5209 | [0.4727612635992119, 0.7967315791368201, 0.9461993821443159, 0.4549778460982006, 0.7957215731446438, 0.1510305721400498, 0.0, 0.09256875307543579, 0.010900638156564927, nan, 0.0, 0.0, 0.0019851210451189653, nan, 0.0, 0.00021183876044062461, 0.0, 0.0, 0.11264323169547485, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9702627275023971, 0.9375, 0.9883661536797447, 0.8022116782735135, 0.8012858752568437, 0.16907368845301268, nan, 0.09407153356197942, 0.011433850254422099, nan, 0.0, nan, 0.002008665958851043, nan, 0.0, 0.0002378606136803833, nan, nan, 0.2107728337236534, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3117 | 16.0 | 320 | 2.5956 | 0.0860 | 0.1316 | 0.5415 | [0.4894104007776673, 0.779094307350649, 0.9424582493688185, 0.4388413214151246, 0.7654435517269941, 0.4694480829822636, 0.0, 0.14206474651680132, 0.012345500959083894, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.08905017355632691, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9692019484059504, 0.9569327731092437, 0.978030298477865, 0.8027792658350552, 0.7656591767747067, 0.5938290678668189, nan, 0.14709284411825616, 0.01296341167918585, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.17091980941613502, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.8282 | 17.0 | 340 | 2.6868 | 0.0776 | 0.1316 | 0.5153 | [0.4743226115554707, 0.7954512666628195, 0.7070193198952633, 0.4438320057367676, 0.7353437159012003, 0.35684062059238364, 0.0, 0.14815248552530494, 0.0068802802972507135, nan, 0.0, 0.0, 0.0024155733434376926, 0.0, 0.0, 0.0, 0.0, nan, 0.13392355469165237, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.8951394747222842, 0.9341539034968827, 0.9946517958548413, 0.7302237187826841, 0.7370252535295287, 0.5778195741250756, nan, 0.15975088015274502, 0.0073752120184153135, nan, 0.0, nan, 0.002439094378604838, nan, 0.0, 0.0, nan, nan, 0.35609303076798837, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.0514 | 18.0 | 360 | 2.5944 | 0.0846 | 0.1299 | 0.5291 | [0.4706921853235463, 0.762537340607025, 0.9442168569685349, 0.4813733604418935, 0.7709313108564615, 0.30066122830344627, 0.0, 0.15233106243610275, 0.010987503600773652, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0013024142312579416, 0.0, nan, 0.16403877693519314, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9700204325555764, 0.9579323664949851, 0.9764679017613018, 0.6824889671181857, 0.7714423013190164, 0.3746669353574707, nan, 0.17009046505402142, 0.012130482190453113, nan, 0.0, nan, 0.0, nan, 0.0, 0.0013931835944136735, nan, nan, 0.40789792457401275, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1405 | 19.0 | 380 | 2.6256 | 0.0898 | 0.1426 | 0.5498 | [0.5185031999297933, 0.8150003672959671, 0.9434324339806381, 0.4280491080929272, 0.8144273942862782, 0.4402868608799049, 0.0, 0.23071341887392194, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.11919999121834488, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9427297480575776, 0.9398380319869883, 0.9896881816706828, 0.7824609066095253, 0.8205408629946311, 0.6632744452654441, nan, 0.2698343747000914, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.438464023257692, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7111 | 20.0 | 400 | 2.5802 | 0.0873 | 0.1327 | 0.5361 | [0.4831464880075981, 0.8199493255831284, 0.9468093671643164, 0.45873317459935703, 0.8320261007118376, 0.2898698861398996, 0.0, 0.17530824031501036, 0.004702239712946351, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.18089675000909852, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.972520207250824, 0.9320700054215234, 0.9893276285822452, 0.7298410754827683, 0.836713727049778, 0.3873401849488368, nan, 0.18507705440530972, 0.004921856069784347, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.4013970766373254, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.5598 | 21.0 | 420 | 2.5502 | 0.0870 | 0.1403 | 0.5487 | [0.5087946262173165, 0.7867801857585139, 0.876644631963781, 0.35241226393455155, 0.8468368479467259, 0.4543256761763616, 0.0, 0.2858426005132592, 0.0036147422467467318, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.14941308790216548, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9337426739020271, 0.8611073461642722, 0.991364753531918, 0.8818397489859953, 0.8471034665606151, 0.5491256353417971, nan, 0.303808018103113, 0.003710322267991277, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.38084470645239443, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9659 | 22.0 | 440 | 2.5646 | 0.0885 | 0.1421 | 0.5519 | [0.5045213604927574, 0.8042200331977878, 0.9418047369854599, 0.45272025606142935, 0.8257880700359456, 0.49502223166994386, 0.0, 0.2643218071018291, 0.009075791313171171, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00026377394572851066, 0.0, nan, 0.12570543441761717, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9617011469119953, 0.9398803876389266, 0.9765880861241144, 0.7279342363715211, 0.8260754291774375, 0.625702514498108, nan, 0.28985185147769693, 0.009752847104434213, nan, 0.0, nan, 0.0, nan, 0.0, 0.0002718407013490095, nan, nan, 0.46858596462892677, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.7829 | 23.0 | 460 | 2.5177 | 0.0846 | 0.1345 | 0.5462 | [0.4930088449307022, 0.7882521985751578, 0.9101626308882405, 0.44989356331658825, 0.8198836997398983, 0.40375010070087813, 0.0, 0.255825791600409, 0.006403338369666883, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10222337548824124, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9612653114893605, 0.8876050420168067, 0.9877291765568382, 0.8127343690211984, 0.8200603168290581, 0.44887037907794275, nan, 0.28685150295236317, 0.006739156772473952, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.3022288621497214, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1919 | 24.0 | 480 | 2.5331 | 0.0882 | 0.1434 | 0.5555 | [0.5115400805821155, 0.7956842451360199, 0.8616338898084154, 0.45560905508889904, 0.8224094829727, 0.5002673724089801, 0.0, 0.3255104125870554, 0.012325574865934807, nan, 0.0, 0.0, 0.0007629074402548111, 0.0, 0.0, 0.0, 0.0, nan, 0.12330337703247325, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9425568912967359, 0.9430147058823529, 0.9791480130520218, 0.7440562740746409, 0.8231590110691324, 0.6179440675309554, nan, 0.3862317339893018, 0.013296583474678943, nan, 0.0, nan, 0.0007652060795623021, nan, 0.0, 0.0, nan, nan, 0.42990390050876204, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.4307 | 25.0 | 500 | 2.4986 | 0.0878 | 0.1390 | 0.5565 | [0.5067241688681817, 0.8073821293627443, 0.9322059866647752, 0.44890528812665903, 0.8061694443524071, 0.41967209810856837, 0.0, 0.3408865002560274, 0.016331731047663174, nan, 0.0, 0.0, 0.000765023141950044, 0.0, 0.0, 0.0, 0.0, nan, 0.11318709117849012, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9637754280420943, 0.9159240309026837, 0.987182337706041, 0.7603568786510548, 0.8063564658315105, 0.4662569131904794, nan, 0.40015254297216346, 0.017082626605282286, nan, 0.0, nan, 0.0007652060795623021, nan, 0.0, 0.0, nan, nan, 0.3793911007025761, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7454 | 26.0 | 520 | 2.4647 | 0.0931 | 0.1433 | 0.5644 | [0.5157605506728834, 0.803062049148433, 0.9502783532598003, 0.4501630383564226, 0.8001457822543238, 0.5544864444466602, 0.0, 0.34386147348502605, 0.03533719952693749, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10768117416918258, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.966183603428178, 0.9091047709406344, 0.9867737108724783, 0.7615685824341216, 0.8003579240405647, 0.6225789839009427, nan, 0.407542290266041, 0.03755754785558517, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.38451909876443513, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9301 | 27.0 | 540 | 2.4489 | 0.0876 | 0.1466 | 0.5486 | [0.507267107485819, 0.7927630641160874, 0.7813868735967076, 0.4633558799098137, 0.7889578677261816, 0.5558062931081549, 0.0, 0.2951652047146916, 0.0721112427309929, nan, 0.0, 0.0, 0.0030247802978979207, 0.0, 0.0, 0.0, 0.0, nan, 0.12245389551577283, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9121651269610748, 0.9422523041474654, 0.9766361598692395, 0.7470536465906482, 0.7897030556107908, 0.6838404872259913, nan, 0.3599913121221152, 0.08112733220256846, nan, 0.0, nan, 0.003032129090265622, nan, 0.0, 0.0, nan, nan, 0.5131632076233545, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.327 | 28.0 | 560 | 2.4543 | 0.0895 | 0.1445 | 0.5619 | [0.5166661067571481, 0.7768677863017486, 0.9314251193832346, 0.45081840103589177, 0.7494204146519176, 0.5238211228089168, 0.0, 0.3455096827773653, 0.034265018714489594, nan, 0.0, 0.0, 0.002371911971460543, 0.0, 0.0, 0.0, 0.0, nan, 0.14420769872731065, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9543126284421765, 0.9521974112225535, 0.9728383340043627, 0.7415946021785158, 0.7499171472128322, 0.6500526186157946, nan, 0.4160786353970411, 0.03854191906954204, nan, 0.0, nan, 0.0023721388466431364, nan, 0.0, 0.0, nan, nan, 0.4465396107566825, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.5732 | 29.0 | 580 | 2.4811 | 0.0905 | 0.1438 | 0.5573 | [0.5287957944150393, 0.8163827989505047, 0.9545677455513522, 0.4680606710241056, 0.7693352098683666, 0.5231390901672386, 0.0, 0.3055402033194326, 0.05167843946258601, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10709085097314959, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9566114756205484, 0.9330780699376525, 0.9825432213014764, 0.7407527869187011, 0.7699343805925631, 0.5956987080450505, nan, 0.36859837253822414, 0.05994063484371214, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.4869983041266252, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.3826 | 30.0 | 600 | 2.4233 | 0.0904 | 0.1416 | 0.5575 | [0.5119552280143455, 0.8199371986427764, 0.9542587763055794, 0.46780402658887665, 0.7983808215095777, 0.5227657946477864, 0.0, 0.2978553040543449, 0.01962687827114638, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.12628517230477396, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9640251100299766, 0.9334592708050963, 0.9828617098629298, 0.7486224841203031, 0.7990819911181812, 0.6042632274243748, nan, 0.3494850411916536, 0.02112612066876666, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.4032140838245982, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6463 | 31.0 | 620 | 2.4264 | 0.0891 | 0.1424 | 0.5517 | [0.5087716983709148, 0.7961915655249836, 0.9422048694584277, 0.5216542842991579, 0.7608908022187267, 0.4894583034188466, 0.0, 0.2650125146137745, 0.029923016240036997, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.1422919548457617, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9685755273239262, 0.9552639604228789, 0.9773993305730991, 0.6716792938955639, 0.7614833963014516, 0.541904569982759, nan, 0.33319021906585106, 0.03331717954930943, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.5939594605507551, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1258 | 32.0 | 640 | 2.3982 | 0.0905 | 0.1459 | 0.5696 | [0.5188148838822648, 0.803991121325235, 0.9411620809356289, 0.46322936273191717, 0.7986109970939301, 0.4870755144136439, 0.0, 0.38434420091878657, 0.07936659093691757, nan, 0.0, 0.0, 0.01315502027867747, 0.0, 0.0, 0.0005077173030056864, 0.0, nan, 0.1251307754069244, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9634592626846575, 0.9542558959067498, 0.9710295593440338, 0.7470919109206398, 0.8060084841254059, 0.5419493517834352, nan, 0.4796415745263339, 0.08835110249575963, nan, 0.0, nan, 0.013247630252422355, nan, 0.0, 0.0005097013150293928, nan, nan, 0.41532746507308405, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2958 | 33.0 | 660 | 2.3860 | 0.0955 | 0.1524 | 0.5735 | [0.5375765406531137, 0.8110853226428411, 0.954462712687812, 0.47733671569201686, 0.7880233174350821, 0.5977448263382951, 0.0, 0.3451584856219023, 0.15232419259491609, nan, 0.0, 0.0, 0.00037936626864822315, 0.0, 0.0, 0.0, 0.0, nan, 0.11023077406778595, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9564489607171931, 0.9443446733532123, 0.9799171929740221, 0.7536096017958726, 0.7884934049181415, 0.7026824298605047, nan, 0.4351263025502963, 0.19121032226799128, nan, 0.0, nan, 0.00038260303978115105, nan, 0.0, 0.0, nan, nan, 0.4954776710005653, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6949 | 34.0 | 680 | 2.4148 | 0.0918 | 0.1475 | 0.5613 | [0.5290693028524354, 0.7945630929959845, 0.9478845254791959, 0.4378261787685903, 0.7753794713058447, 0.5583728198407097, 0.0, 0.294060489825014, 0.09295956791297402, nan, 0.0, 0.0, 0.015815294162562516, 0.0, 0.0, 0.0, 0.0, nan, 0.14532379891865785, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9488270117498275, 0.9470639062076444, 0.974707200846098, 0.7404658044437642, 0.7762146218598793, 0.7440720091354873, nan, 0.35005581456431806, 0.11103707293433487, nan, 0.0, nan, 0.015849330922934182, nan, 0.0, 0.0, nan, nan, 0.4373738189453283, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.7268 | 35.0 | 700 | 2.3556 | 0.0926 | 0.1479 | 0.5638 | [0.5410470829128678, 0.7780415307765347, 0.9476531340280946, 0.4552572445110248, 0.7523007150423728, 0.5545934323690384, 0.0, 0.3147277402817886, 0.15298449658591226, nan, 0.0, 0.0, 0.013374103392106993, 0.0, 0.0, 0.0, 0.0, nan, 0.11979662163590622, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9529268195390191, 0.9579069531038221, 0.9737517351617381, 0.7442220861712712, 0.7531484059123749, 0.6352970152929849, nan, 0.3820443788924976, 0.19442088684274292, nan, 0.0, nan, 0.013429366696318402, nan, 0.0, 0.0, nan, nan, 0.4576031656302996, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.8777 | 36.0 | 720 | 2.3999 | 0.0909 | 0.1447 | 0.5650 | [0.5195887577564181, 0.7589975376935272, 0.9568716745005531, 0.47163186638014915, 0.7390994868399272, 0.5596583309503823, 0.0, 0.34237489727059855, 0.05195082273248527, nan, 0.0, 0.0, 0.019154068221530048, 0.0, 0.0, 0.0, 0.0, nan, 0.12754062385704582, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9590093091491458, 0.9530868799132556, 0.9823328986665545, 0.7264419275018494, 0.7398588188506662, 0.6792503526566803, nan, 0.4187506629557979, 0.05900169614732251, nan, 0.0, nan, 0.019292758280964543, nan, 0.0, 0.0, nan, nan, 0.39425018170071874, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.5147 | 37.0 | 740 | 2.3679 | 0.0929 | 0.1519 | 0.5702 | [0.5364579552707658, 0.8087594119494329, 0.9561376717960766, 0.40982070007215154, 0.8017200811359027, 0.6135778964476434, 0.0, 0.3455637603595059, 0.1178796275898851, 0.0, 0.0, 0.0, 0.02513673698854649, 0.0, 0.0, 0.0, 0.0, nan, 0.12247120239988897, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9499276808680068, 0.9381014502575223, 0.9782526395490683, 0.7751843065227928, 0.8186849605620733, 0.739649806318712, nan, 0.3864741863953894, 0.15397080203537677, nan, 0.0, nan, 0.025232670473566914, nan, 0.0, 0.0, nan, nan, 0.463215698942098, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.8831 | 38.0 | 760 | 2.3579 | 0.0936 | 0.1469 | 0.5595 | [0.5304507686446, 0.8067229372973851, 0.9543009495299125, 0.4539171592059717, 0.8000993130844989, 0.6138934235074627, 0.0, 0.2697607964234687, 0.10662386132885616, nan, 0.0, 0.0, 0.019987270464628042, 0.0, 0.0, 0.0, 0.0, nan, 0.1252110694183865, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9523358562540906, 0.9254201680672269, 0.9808005480406944, 0.7376023570827275, 0.8009876052230397, 0.7072949553301539, nan, 0.32520444294034156, 0.1423400775381633, nan, 0.0, nan, 0.020124919892488546, nan, 0.0, 0.0, nan, nan, 0.43115561657110557, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.5653 | 39.0 | 780 | 2.3282 | 0.0907 | 0.1479 | 0.5667 | [0.5275596317733419, 0.7942082826255595, 0.9517817247869964, 0.4541504138802404, 0.807428478543563, 0.5882710451677632, 0.0, 0.32820640572452037, 0.11085403522999678, 0.0, 0.0, 0.0, 0.028076457871027343, 0.0, 0.0, 0.0, 0.0, nan, 0.12453114251969526, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9578436340696244, 0.945335795608566, 0.9753802332778482, 0.7358294431264508, 0.8231092993968318, 0.6826873558585791, nan, 0.38597412830783373, 0.1407650835958323, nan, 0.0, nan, 0.028197844031870832, nan, 0.0, 0.0, nan, nan, 0.3874263102640717, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1075 | 40.0 | 800 | 2.3290 | 0.0938 | 0.1462 | 0.5654 | [0.525696705455315, 0.8097656901943995, 0.9540430140437204, 0.468111166556006, 0.8067950394887163, 0.6026423969326851, 0.0, 0.3172524415747904, 0.07324316230784587, nan, 0.0, 0.0, 0.01990007056086351, 0.0, 0.0, 0.0, 0.0, nan, 0.11110484334984262, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9615755672139479, 0.9453188533477908, 0.9740341684143476, 0.7238909721690773, 0.807450122622125, 0.6827545285595934, nan, 0.3946165463664971, 0.08889629270656652, nan, 0.0, nan, 0.019962313600581556, nan, 0.0, 0.0, nan, nan, 0.3976419284502948, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.858 | 41.0 | 820 | 2.2884 | 0.0861 | 0.1402 | 0.5554 | [0.5197047648538989, 0.7522506253594982, 0.9529308898004213, 0.4432910713601716, 0.7616208786659381, 0.5651882235681975, 0.0, 0.2511281328404124, 0.07432842851653051, 0.0, 0.0, 0.0, 0.03464474456840869, 0.0, 0.0, 0.0, 0.0, nan, 0.12361143100118704, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9645451577207137, 0.9527649769585254, 0.9758249154202546, 0.7380551516542946, 0.7621130774839265, 0.6602404782696312, nan, 0.28615950337665486, 0.0908044584443906, nan, 0.0, nan, 0.034989047987986265, nan, 0.0, 0.0, nan, nan, 0.2817168699022854, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.8407 | 42.0 | 840 | 2.3426 | 0.0886 | 0.1434 | 0.5560 | [0.5319928364107948, 0.7797183410955866, 0.9486764705882353, 0.4390484927134142, 0.7554676247082002, 0.5886622292884924, 0.0, 0.2479837292662356, 0.0663740228502706, 0.0, 0.0, 0.0, 0.03285631534465895, 0.0, 0.0, 0.0, 0.0, nan, 0.12666638151047235, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9545416267150862, 0.9530360531309298, 0.969136655629736, 0.7508545700364787, 0.7561311062504142, 0.7336826313786078, nan, 0.2851492850179566, 0.08358068815119941, nan, 0.0, nan, 0.03309516294106957, nan, 0.0, 0.0, nan, nan, 0.3587175967051603, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.094 | 43.0 | 860 | 2.3197 | 0.0938 | 0.1469 | 0.5661 | [0.5321809020623814, 0.7960168163032635, 0.9480932639440182, 0.45480238271229184, 0.8054216967228046, 0.6017263259674073, 0.0, 0.3017606715947139, 0.11857016219842353, nan, 0.0, 0.0, 0.024840957285671008, 0.0, 0.0, 0.0, 0.0, nan, 0.10708057202650854, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.963567113484157, 0.9463353889943074, 0.9688542223771265, 0.7337823014719013, 0.8059422018956718, 0.6688497794496316, nan, 0.37692762290569104, 0.1498970196268476, nan, 0.0, nan, 0.02509875940964351, nan, 0.0, 0.0, nan, nan, 0.38427683113946537, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.2333 | 44.0 | 880 | 2.3351 | 0.0924 | 0.1440 | 0.5649 | [0.5234600096067719, 0.7993256716093246, 0.9540437935614297, 0.4402231330031544, 0.8011626173796393, 0.5988770094608107, 0.0, 0.31263255344218144, 0.06051157899885066, nan, 0.0, 0.0, 0.01943918475313282, 0.0, 0.0, 0.0, 0.0, nan, 0.1115382392045783, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9644136683898171, 0.9338489428029276, 0.9755484913857858, 0.7493877707201347, 0.8016007158480811, 0.6973422001298673, nan, 0.37526581370563244, 0.0749485098134238, nan, 0.0, nan, 0.01954145025682229, nan, 0.0, 0.0, nan, nan, 0.3116369215860454, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.7624 | 45.0 | 900 | 2.3306 | 0.0882 | 0.1482 | 0.5742 | [0.5258849135317549, 0.7047259963908543, 0.9470757995170505, 0.4796076709120187, 0.7687998311222253, 0.5991964723174914, 0.0, 0.3769427430313193, 0.12063174979711593, 0.0, 0.0, 0.0, 0.02665966248604636, 0.0, 0.0, 0.0, 0.0, nan, 0.12325740513726943, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9648908712423969, 0.9626761995120628, 0.9663063138855004, 0.7312759878574526, 0.7845330416915225, 0.6845793869371487, nan, 0.460987892532971, 0.14632299491155804, nan, 0.0, nan, 0.027183945976450782, nan, 0.0, 0.0, nan, nan, 0.3491480255188565, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.7946 | 46.0 | 920 | 2.3309 | 0.0924 | 0.1501 | 0.5684 | [0.5361640179143524, 0.7846633151596686, 0.9536386080413826, 0.4568883842931527, 0.7759403160506726, 0.6158210406873075, 0.0, 0.32342639353641117, 0.11150484571020869, nan, 0.0, 0.0, 0.0293502430935062, 0.0, 0.0, 0.0, 0.0, nan, 0.123842605156038, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9543968407102787, 0.9387537272973706, 0.9748934866084574, 0.7442603505012627, 0.7876151653741632, 0.72303575826784, nan, 0.3969147931325356, 0.14200690574267022, nan, 0.0, nan, 0.02979521172295714, nan, 0.0, 0.0, nan, nan, 0.460671888879916, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7709 | 47.0 | 940 | 2.3227 | 0.0913 | 0.1509 | 0.5704 | [0.5424758709486558, 0.7733391833636282, 0.9512393001599097, 0.4543827573277878, 0.7687249054047356, 0.6394879643801308, 0.0, 0.32752722872125856, 0.14992535250207353, 0.0, 0.0, 0.0, 0.02317038711500424, 0.0, 0.0, 0.0, 0.0, nan, 0.11885573064179251, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9536167691741732, 0.9406597316345893, 0.9723035135898468, 0.7496747531950716, 0.7709286140385763, 0.7203600456774367, nan, 0.4101183470807215, 0.20530954688635814, nan, 0.0, nan, 0.02353008694654079, nan, 0.0, 0.0, nan, nan, 0.4388678026326415, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.0854 | 48.0 | 960 | 2.3119 | 0.0923 | 0.1515 | 0.5675 | [0.5484915919545876, 0.7502571445858459, 0.9503989939945234, 0.4518522223975188, 0.7665169241908147, 0.6387045798547317, 0.0, 0.3014816981040113, 0.14846499417229633, nan, 0.0, 0.0, 0.024767918571226614, 0.0, 0.0, 0.0, 0.0, nan, 0.1255448070585734, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9485950586604931, 0.9453781512605042, 0.971912914410706, 0.7488712022652483, 0.7718897063697223, 0.7511475336423278, nan, 0.3749223394636751, 0.19869154349406348, nan, 0.0, nan, 0.025137019713621624, nan, 0.0, 0.0, nan, nan, 0.4768634418153921, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.2518 | 49.0 | 980 | 2.3313 | 0.0959 | 0.1480 | 0.5662 | [0.5342926601876615, 0.8062937473403085, 0.9553006514505319, 0.47239519660475027, 0.8008359767380386, 0.625160514049403, 0.0, 0.28114668652271035, 0.08647048160009668, nan, 0.0, 0.0, 0.027304971193724548, 0.0, 0.0, 0.0, 0.0, nan, 0.11171062547098719, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9601395264315716, 0.9309179316888045, 0.9798991653196003, 0.736824315706232, 0.8032246304765692, 0.6922034885022726, nan, 0.38143824787727865, 0.1083565543978677, nan, 0.0, nan, 0.02783437114407874, nan, 0.0, 0.0, nan, nan, 0.44892190906888474, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1639 | 50.0 | 1000 | 2.3127 | 0.0939 | 0.1473 | 0.5650 | [0.5336831298064109, 0.7946016311618073, 0.9542612124083791, 0.4698340415687516, 0.7895064764715527, 0.6196465123602583, 0.0, 0.28656549336868975, 0.10462468913822695, nan, 0.0, 0.0, 0.024119941721107298, 0.0, 0.0, 0.0, 0.0, nan, 0.11790298802632052, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9584331199463405, 0.9359158986175116, 0.9802837552806004, 0.7382528506925844, 0.797905481540399, 0.704932715344484, nan, 0.3646332654803336, 0.13570692997334627, nan, 0.0, nan, 0.024543985001960842, nan, 0.0, 0.0, nan, nan, 0.39719777113785026, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zhendongw/prompt-diffusion
|
zhendongw
| 2023-05-04T22:05:05Z | 0 | 3 | null |
[
"arxiv:2305.01115",
"arxiv:2206.02262",
"region:us"
] | null | 2023-05-04T20:36:13Z |
## Prompt-Diffusion: In-Context Learning Unlocked for Diffusion Models
[Project Page](https://zhendong-wang.github.io/prompt-diffusion.github.io/) | [Paper](https://arxiv.org/abs/2305.01115) | [GitHub](https://github.com/Zhendong-Wang/Prompt-Diffusion)

**In-Context Learning Unlocked for Diffusion Models**<br>
Zhendong Wang, Yifan Jiang, Yadong Lu, Yelong Shen, Pengcheng He, Weizhu Chen, Zhangyang Wang and Mingyuan Zhou <br>
[//]: # (https://arxiv.org/abs/2206.02262 <br>)
Abstract: *We present Prompt Diffusion, a framework for enabling in-context learning in diffusion-based generative models.
Given a pair of task-specific example images, such as depth from/to image and scribble from/to image, and a text guidance,
our model automatically understands the underlying task and performs the same task on a new query image following the text guidance.
To achieve this, we propose a vision-language prompt that can model a wide range of vision-language tasks and a diffusion model that takes it as input.
The diffusion model is trained jointly on six different tasks using these prompts.
The resulting Prompt Diffusion model becomes the first diffusion-based vision-language foundation model capable of in-context learning.
It demonstrates high-quality in-context generation for the trained tasks and effectively generalizes to new, unseen vision tasks using their respective prompts.
Our model also shows compelling text-guided image editing results. Our framework aims to facilitate research into in-context learning for computer vision, with code publicly available here.*
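The vision-language prompt described in the abstract pairs a task example (a condition image and its corresponding output image) with a new query image and a text guidance. A minimal sketch of that prompt structure is shown below; the class and field names are illustrative only and are not the actual Prompt-Diffusion API.

```python
from dataclasses import dataclass

@dataclass
class VisionLanguagePrompt:
    """Illustrative container for one in-context generation request."""
    example_source: str  # path to the example condition image (e.g. a depth map)
    example_target: str  # path to the example output image (e.g. the matching photo)
    query: str           # new condition image the model should process the same way
    text: str            # text guidance steering the generation

# A depth-to-image request: the example pair tells the model the task,
# the query plus text tell it what to generate.
prompt = VisionLanguagePrompt(
    example_source="depth_example.png",
    example_target="photo_example.png",
    query="depth_query.png",
    text="a cozy living room, warm lighting",
)
```

The diffusion model consumes all four parts jointly, which is what lets one set of weights cover the six training tasks and generalize to unseen ones.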

## Note
We have made our pretrained model checkpoints available here. For more information on how to use them, please visit our GitHub page at https://github.com/Zhendong-Wang/Prompt-Diffusion.
## Citation
```
@article{wang2023promptdiffusion,
title = {In-Context Learning Unlocked for Diffusion Models},
author = {Wang, Zhendong and Jiang, Yifan and Lu, Yadong and Shen, Yelong and He, Pengcheng and Chen, Weizhu and Wang, Zhangyang and Zhou, Mingyuan},
journal = {arXiv preprint arXiv:2305.01115},
year = {2023},
url = {https://arxiv.org/abs/2305.01115}
}
```
## Acknowledgements
We thank [Brooks et al.](https://github.com/timothybrooks/instruct-pix2pix) for sharing the dataset for finetuning Stable Diffusion.
We also thank [Lvmin Zhang and Maneesh Agrawala](https://github.com/lllyasviel/ControlNet) for providing the awesome code base ControlNet.
|
kingji89/imjzz
|
kingji89
| 2023-05-04T21:55:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-04T21:52:34Z |
---
license: creativeml-openrail-m
---
|
kucharskipj/rl_course_vizdoom_health_gathering_supreme
|
kucharskipj
| 2023-05-04T21:54:24Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-04T21:43:08Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.96 +/- 6.81
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kucharskipj/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
gweegenaar/ppo-LunarLander-v2
|
gweegenaar
| 2023-05-04T21:34:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-04T21:28:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.62 +/- 14.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repository's file listing for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed -- verify it against the repo's files.
checkpoint = load_from_hub("gweegenaar/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
lyimo/whisper-medium-sw-v13
|
lyimo
| 2023-05-04T21:21:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sw",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-03T12:03:27Z |
---
language:
- sw
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
  - name: Whisper Medium Swahili - Badili
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13.0
type: mozilla-foundation/common_voice_13_0
config: sw
split: test
args: 'config: sw, split: test'
metrics:
- name: Wer
type: wer
value: 98.40119332745073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Swahili - Badili
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 13.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4329
- Wer: 98.4012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 12000
- mixed_precision_training: Native AMP
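The `linear` scheduler with 500 warmup steps ramps the learning rate up to 1e-05 and then decays it linearly to zero over the 12000 training steps. A minimal sketch of that schedule (mirroring, not calling, the Hugging Face scheduler):

```python
def linear_schedule_lr(step, base_lr=1e-5, warmup_steps=500, total_steps=12000):
    """Linear warmup to base_lr, then linear decay to 0 (HF-style)."""
    if step < warmup_steps:
        # Ramp up proportionally to the step count during warmup.
        return base_lr * step / max(1, warmup_steps)
    # Decay linearly from base_lr at the end of warmup to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

The peak learning rate is reached exactly at step 500 and the schedule hits zero at step 12000.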
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3563 | 0.35 | 1000 | 0.4938 | 100.5715 |
| 0.2853 | 0.69 | 2000 | 0.4143 | 100.7007 |
| 0.1612 | 1.04 | 3000 | 0.3910 | 100.9748 |
| 0.1399 | 1.38 | 4000 | 0.3762 | 98.4989 |
| 0.1657 | 1.73 | 5000 | 0.3700 | 90.3357 |
| 0.0818 | 2.08 | 6000 | 0.3775 | 98.0493 |
| 0.0749 | 2.42 | 7000 | 0.3768 | 97.9936 |
| 0.0637 | 2.77 | 8000 | 0.3822 | 92.9440 |
| 0.0355 | 3.11 | 9000 | 0.4036 | 93.8979 |
| 0.0299 | 3.46 | 10000 | 0.4141 | 97.9695 |
| 0.0277 | 3.8 | 11000 | 0.4175 | 98.2961 |
| 0.0147 | 4.15 | 12000 | 0.4329 | 98.4012 |
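The Wer column above is a percentage: word-level edit distance between hypothesis and reference, divided by the number of reference words, times 100 (which is why values above 100 are possible). A minimal pure-Python sketch, for illustration only; the card's numbers come from standard ASR metric tooling, not this function:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / max(1, len(ref))
```

Multiply by 100 to get the percentage reported in the table.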
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rodekruis/sml-ukr-word-classifier-medium
|
rodekruis
| 2023-05-04T20:06:42Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-04T20:06:20Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# rodekruis/sml-ukr-word-classifier-medium
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
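Step 2 can be any lightweight classifier fit on the sentence embeddings (SetFit's default head is logistic regression). Purely as an illustration of that idea, a nearest-centroid classifier over mock embedding vectors:

```python
import math
from collections import defaultdict

def fit_centroids(embeddings, labels):
    """One mean vector per class -- a stand-in for SetFit's classification head."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for vec, lab in zip(embeddings, labels):
        sums[lab] = vec if sums[lab] is None else [a + b for a, b in zip(sums[lab], vec)]
        counts[lab] += 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def predict_label(centroids, embedding):
    # Pick the class whose centroid is most similar to the embedding.
    return max(centroids, key=lambda lab: cosine(centroids[lab], embedding))

# Mock 2-D "embeddings" standing in for sentence-transformer outputs.
X = [[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 0.9]]
y = ["pos", "pos", "neg", "neg"]
centroids = fit_centroids(X, y)
```

In practice the embeddings come from the contrastively fine-tuned Sentence Transformer of step 1, not from hand-written vectors.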
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("rodekruis/sml-ukr-word-classifier-medium")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gus07ven/distilbert-base-multilingual-cased-distilled-jd
|
gus07ven
| 2023-05-04T19:52:11Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-18T13:56:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-distilled-jd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-distilled-jd
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1316
- Accuracy: 0.8715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4909 | 1.0 | 464 | 0.2007 | 0.8531 |
| 0.1345 | 2.0 | 928 | 0.1814 | 0.8650 |
| 0.0888 | 3.0 | 1392 | 0.1670 | 0.8639 |
| 0.0757 | 4.0 | 1856 | 0.1484 | 0.8726 |
| 0.0637 | 5.0 | 2320 | 0.1394 | 0.8683 |
| 0.0577 | 6.0 | 2784 | 0.1379 | 0.8737 |
| 0.0513 | 7.0 | 3248 | 0.1431 | 0.8704 |
| 0.0464 | 8.0 | 3712 | 0.1329 | 0.8704 |
| 0.0449 | 9.0 | 4176 | 0.1316 | 0.8715 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jainr3/t5-finetuned-meetings
|
jainr3
| 2023-05-04T19:52:09Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-22T00:26:25Z |
---
license: apache-2.0
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the [knkarthick/AMI](https://huggingface.co/datasets/knkarthick/AMI), [knkarthick/dialogsum](https://huggingface.co/datasets/knkarthick/dialogsum), and [samsum](https://huggingface.co/datasets/samsum) datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-4
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- summary_len: 150
- max_len: 512
- num_epochs: <1
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
SaudxInu/audio-diffusion-electronic
|
SaudxInu
| 2023-05-04T19:47:44Z | 2 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-audio-generation",
"diffusion-models-class",
"license:mit",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2023-05-04T19:46:54Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-audio-generation
- diffusion-models-class
---
# Model Card for Unit 4 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional audio generation of music in the Electronic genre.
## Usage
```python
from IPython.display import Audio, display
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("SaudxInu/audio-diffusion-electronic")
output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
|
kucharskipj/poca-SoccerTwos
|
kucharskipj
| 2023-05-04T19:29:21Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-05-04T19:29:14Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: kucharskipj/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
reginaboateng/umls_relational_extraction_adapter_clinical_bert
|
reginaboateng
| 2023-05-04T19:25:54Z | 1 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:umls",
"bert",
"dataset:umls",
"region:us"
] | null | 2023-05-04T19:25:48Z |
---
tags:
- adapterhub:umls
- adapter-transformers
- bert
datasets:
- umls
---
# Adapter `reginaboateng/umls_relational_extraction_adapter_clinical_bert` for emilyalsentzer/Bio_ClinicalBERT
An [adapter](https://adapterhub.ml) for the `emilyalsentzer/Bio_ClinicalBERT` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
adapter_name = model.load_adapter("reginaboateng/umls_relational_extraction_adapter_clinical_bert", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
reginaboateng/umls_RE_adapter_clinical_bert
|
reginaboateng
| 2023-05-04T19:11:35Z | 0 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:umls",
"bert",
"dataset:umls",
"region:us"
] | null | 2023-05-04T19:11:33Z |
---
tags:
- adapterhub:umls
- bert
- adapter-transformers
datasets:
- umls
---
# Adapter `reginaboateng/umls_RE_adapter_clinical_bert` for emilyalsentzer/Bio_ClinicalBERT
An [adapter](https://adapterhub.ml) for the `emilyalsentzer/Bio_ClinicalBERT` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
adapter_name = model.load_adapter("reginaboateng/umls_RE_adapter_clinical_bert", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
ageng-anugrah/indobert-large-p2-finetuned-ner
|
ageng-anugrah
| 2023-05-04T19:09:10Z | 163 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"indobert",
"indobenchmark",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-05T09:00:46Z |
---
language: id
tags:
- indobert
- indobenchmark
---
## How to use
### Load model and tokenizer
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("ageng-anugrah/indobert-large-p2-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("ageng-anugrah/indobert-large-p2-finetuned-ner")
```
### Extract NER Tag
```python
import torch
def predict(model, tokenizer, sentence):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
inputs = tokenizer(sentence.split(),
is_split_into_words = True,
return_offsets_mapping=True,
return_tensors="pt",
padding='max_length',
truncation=True,
max_length=512)
model.to(device)
# move to gpu
ids = inputs["input_ids"].to(device)
mask = inputs["attention_mask"].to(device)
# forward pass
outputs = model(ids, attention_mask=mask)
logits = outputs[0]
active_logits = logits.view(-1, model.num_labels) # shape (batch_size * seq_len, num_labels)
flattened_predictions = torch.argmax(active_logits, axis=1) # shape (batch_size*seq_len,) - predictions at the token level
tokens = tokenizer.convert_ids_to_tokens(ids.squeeze().tolist())
token_predictions = [model.config.id2label[i] for i in flattened_predictions.cpu().numpy()]
wp_preds = list(zip(tokens, token_predictions)) # list of tuples. Each tuple = (wordpiece, prediction)
prediction = []
for token_pred, mapping in zip(wp_preds, inputs["offset_mapping"].squeeze().tolist()):
#only predictions on first word pieces are important
if mapping[0] == 0 and mapping[1] != 0:
prediction.append(token_pred[1])
else:
continue
return sentence.split(), prediction
sentence = "BJ Habibie adalah Presiden Indonesia ke-3"
words, labels = predict(model, tokenizer, sentence)
```
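The first-subword filter inside `predict` (keeping only predictions whose offset mapping starts at 0 and is not a special token) can be illustrated in isolation. A minimal sketch with hypothetical tokens and offsets, not the tokenizer's real output for the sentence above:

```python
def first_subword_labels(token_preds, offset_mapping):
    """Keep one label per word: the prediction on each word's first wordpiece.

    A wordpiece starts a word when its offset begins at character 0 of the
    word span (start == 0) and it is not a special token (offset != (0, 0)).
    """
    return [pred for pred, (start, end) in zip(token_preds, offset_mapping)
            if start == 0 and end != 0]

# Hypothetical wordpieces for "BJ Habibie": [CLS], "BJ", "Hab", "##ibie", [SEP]
preds = ["O", "B-PER", "I-PER", "I-PER", "O"]
offsets = [(0, 0), (0, 2), (0, 3), (3, 7), (0, 0)]
```

Applying the filter to these mock inputs keeps exactly one label per word, dropping the `##ibie` continuation piece and both special tokens.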
|
helenai/madlag-albert-base-v2-squad-ov
|
helenai
| 2023-05-04T18:59:59Z | 5 | 0 |
transformers
|
[
"transformers",
"openvino",
"albert",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-04T18:59:46Z |
---
language:
- en
tags:
- openvino
---
# madlag/albert-base-v2-squad
This is the [madlag/albert-base-v2-squad](https://huggingface.co/madlag/albert-base-v2-squad) model converted to [OpenVINO](https://openvino.ai), for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/madlag-albert-base-v2-squad-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForQuestionAnswering.from_pretrained(model_id)
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = pipe("What is OpenVINO?", "OpenVINO is a framework that accelerates deep learning inferencing")
print(result)
```
|