| modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-07 12:31:56) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 544 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-07 12:31:42) | card (string, length 11 to 1.01M) |
|---|---|---|---|---|---|---|---|---|---|
teilomillet/poca-SoccerTwos
|
teilomillet
| 2023-07-27T19:52:37Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-27T19:52:21Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: teilomillet/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on *Watch the agent play* 👀
|
asenella/MMVAEPlus_beta_10_scale_False_seed_3
|
asenella
| 2023-07-27T19:43:04Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T19:42:50Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
tommilyjones/swin-tiny-patch4-window7-224-cats_dogs
|
tommilyjones
| 2023-07-27T19:38:02Z | 204 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-27T19:31:44Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-cats_dogs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9973147153598282
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-cats_dogs
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0126
- Accuracy: 0.9973
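A minimal inference sketch with the Transformers `image-classification` pipeline (the repo id comes from this card; the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline(
    "image-classification",
    model="tommilyjones/swin-tiny-patch4-window7-224-cats_dogs",
)

# The path is a placeholder; pass any local image file or a PIL.Image
print(classifier("my_pet.jpg"))
```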
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0832 | 0.98 | 47 | 0.0235 | 0.9909 |
| 0.0788 | 1.99 | 95 | 0.0126 | 0.9973 |
| 0.0534 | 2.95 | 141 | 0.0127 | 0.9957 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
qanastek/LLaMa-2-FrenchMedMCQA-Checkpoint
|
qanastek
| 2023-07-27T19:36:46Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T19:35:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
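For reference, the same 4-bit settings expressed as a `transformers` `BitsAndBytesConfig` (a sketch only; this card does not name the base checkpoint, so the model id below is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the values listed above: 4-bit NF4, no double quantization, fp16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "meta-llama/Llama-2-7b-hf" is a placeholder base model, not stated in this card
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```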
### Framework versions
- PEFT 0.4.0
|
AdiOO7/llama-2-7B-finetuned
|
AdiOO7
| 2023-07-27T19:34:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T19:34:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
FlexedRope14028/Progetto-13-b-chat
|
FlexedRope14028
| 2023-07-27T19:31:29Z | 0 | 0 | null |
[
"it",
"en",
"license:llama2",
"region:us"
] | null | 2023-07-27T19:11:08Z |
---
license: llama2
language:
- it
- en
---
|
NasimB/bnc-cbt-rarity
|
NasimB
| 2023-07-27T19:05:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-27T16:42:47Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc-cbt-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc-cbt-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1216
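A minimal generation sketch with the Transformers `text-generation` pipeline (repo id from this card; the prompt is illustrative only):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/bnc-cbt-rarity")

# The prompt is only an example; the card does not prescribe a format
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```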
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3708 | 0.29 | 500 | 5.3361 |
| 5.0503 | 0.59 | 1000 | 4.9318 |
| 4.713 | 0.88 | 1500 | 4.6937 |
| 4.4634 | 1.17 | 2000 | 4.5611 |
| 4.3089 | 1.46 | 2500 | 4.4409 |
| 4.2145 | 1.76 | 3000 | 4.3397 |
| 4.0857 | 2.05 | 3500 | 4.2672 |
| 3.9095 | 2.34 | 4000 | 4.2143 |
| 3.8772 | 2.63 | 4500 | 4.1591 |
| 3.8444 | 2.93 | 5000 | 4.1098 |
| 3.6491 | 3.22 | 5500 | 4.1097 |
| 3.5993 | 3.51 | 6000 | 4.0797 |
| 3.5848 | 3.81 | 6500 | 4.0497 |
| 3.4861 | 4.1 | 7000 | 4.0479 |
| 3.3328 | 4.39 | 7500 | 4.0443 |
| 3.3282 | 4.68 | 8000 | 4.0292 |
| 3.3151 | 4.98 | 8500 | 4.0183 |
| 3.1607 | 5.27 | 9000 | 4.0323 |
| 3.151 | 5.56 | 9500 | 4.0309 |
| 3.1458 | 5.85 | 10000 | 4.0304 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
JBJoyce/DENTAL_CLICK_classifier
|
JBJoyce
| 2023-07-27T19:04:47Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"audio-classification",
"Voice",
"en",
"dataset:JBJoyce/DENTAL_CLICK",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-03-18T19:11:21Z |
---
language:
- en
tags:
- Voice
datasets:
- JBJoyce/DENTAL_CLICK
metrics:
- accuracy
---
### Model Description
The model uses the Wav2Vec2 architecture trained on the SUPERB dataset for the keyword-spotting task and was
fine-tuned to identify dental click utterances (https://en.wikipedia.org/wiki/Dental_click) in speech.
The model was trained for 10 epochs on a limited quantity of speech (~1.5 hours) from a single speaker.
It should therefore not be assumed to generalize to other speakers or languages without further
training data or rigorous testing.
The model was evaluated for accuracy on a held-out test set of 20% of the available data and scored 97%.
## Uses
The model can be used via the transformers library or via the Hugging Face hosted inference API to the right. I would
caution against using the 'Record from browser' option, as the model may erroneously identify the user's mouse
click as a speech utterance. Audio files for upload should be 1 second in length, in WAV format with 16-bit
signed integer PCM encoding.
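A usage sketch with the Transformers `audio-classification` pipeline, following the input constraints above (the file name is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="JBJoyce/DENTAL_CLICK_classifier")

# The clip should be about 1 second of 16-bit PCM WAV audio, as described above
print(classifier("sample_clip.wav"))
```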
|
kfkas/Legal-Llama-2-ko-7b-Chat
|
kfkas
| 2023-07-27T19:04:21Z | 38 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"kollama",
"llama-2-ko",
"llama-2-ko-chat",
"legal-llama",
"law-llama",
"legal-gpt",
"law-gpt",
"en",
"ko",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-27T01:55:49Z |
---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- kollama
- llama-2-ko
- llama-2-ko-chat
- legal-llama
- law-llama
- legal-gpt
- law-gpt
---
<img src=https://github.com/taemin6697/Paper_Review/assets/96530685/9f94505c-4fda-41ae-9a67-1e4c96c501cc style="max-width: 500px; width: 100%" />
This model was built on top of [kfkas/Llama-2-ko-7b-Chat](https://huggingface.co/kfkas/Llama-2-ko-7b-Chat). It was trained on an in-house legal question-answering dataset.
## Model Details
**Backbone Model** kfkas/Llama-2-ko-7b-Chat
**Model Developers** Boostcamp AI Tech Team YoonSeul
**Data** legal data
**prompts** kullm
**Input** Models input text only.
**Output** Models generate text only.
---
| Model | μ§λ¬Έ : μλΉ κ° λμκ°μκ³ λμ ν μ μΉμΈκ³Ό μμ ν¬κΈ° νκ²°λ¬Έμ λ°μλλ°μ. κ·Έλ¬κ³ λμ κ·Όλ‘ κ±΄μ€κ³΅μ νμμ μ νκ° μλλ°μ. ν΄μ§κΈ 곡μ κ° μλ€κ³ μ μ²νμ¬μ μ°Ύμκ°λΌκ³ νμλλΌκ³ μ. κ·Όλ° μ΄κ² μμμΈμ κ³ μ μ¬μ°μ΄ λ€ μλλ€ νμ¬ λ²μμ μ¬μ°μΌλ‘ μ κ³ νμ¬μΌ νλ€κ³ μ ν΄λ λλ€κ³ ν©λλ€. μ ννκ² μκ³ μΆμ΅λλ€. μμμΈμ κ³ μ μ¬μ°μΌλ‘ λ²μμ μ¬μ° μ κ³ νμ¬μΌ νλμ? μλλ©΄ μ ν΄λ λλμ? |
|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Llama-2-ko-7b-Chat | `[μλ
νμΈμ! μμμΈ κ³ μ μ¬μ°μ΄ μλλλΌλ κ·Όλ‘건μ€κ³΅μ νλ‘λΆν° ν΄μ§κΈμ μ§κΈλ°κΈ° μν΄ λ²μμ μ¬μ°μ μ κ³ ν΄μΌ ν μ μμ΅λλ€. νμ§λ§ ꡬ체μ μΈ μν©μ μμμΈμ΄ μν κ΅κ°μ λ²λ₯ μ λ°λΌ λ¬λΌμ§ μ μμΌλ―λ‘, ν΄λΉ λ²λ₯ μ λ¬Έκ°μκ² λ¬Έμνκ±°λ νμΈνλ κ²μ΄ μ’μ΅λλ€. μμμΈμ΄ μμ μ¬μ°μ μ κ³ νμ§ μκ³ λ ν΄μ§κΈμ λ°μ μ μλ κ²½μ°λ μμ§λ§, κ²½μ°μ λ°λΌ μΆκ°μ μΈ μ μ°¨κ° νμν μλ μμ΅λλ€. μμμΈμ μμ μ¬μ°μ λ²μκ° λΆλΆλͺ
νκ±°λ μμ μ¬μ°μ΄ μ ν μμ μ μλ λ± λ€μν μν©μ μ§λ©΄ν μ μκΈ° λλ¬Έμ, κ΄λ ¨ λ²λ₯ μ λ¬Έκ°μ μ‘°μΈμ ꡬνλ κ²μ΄ μ€μν©λλ€.]` |
| kfkas/Legal-Llama-2-ko-7b-Chat | `[μμμΈμ κ³ μ μ¬μ°μ μμμΈμκ² κ·μλλ μμ μ¬μ°μ
λλ€. μμμΈμ΄ μμ μ¬μ°μ λν κΆλ¦¬μ μ무λ₯Ό νμ¬ν μ μμ΅λλ€. μμ μ¬μ°μλ μμμΈ κ³ μ μ¬μ°κ³Ό μΌλ° μμ μ¬μ°μ΄ ν¬ν¨λ©λλ€. μμμΈμ μμ μ¬μ°μ λ²μ λ΄μμ μμ μ¬μ°μ κ΄λ¦¬νκ³ μ²λΆν μ μμΌλ©°, μμ μ¬μ° μ€ μΌλΆλ₯Ό λ€λ₯Έ μμμΈμκ² μ¦μ¬ν μλ μμ΅λλ€.κ·Έλ¬λ μμ μ¬μ° μ€μμ μμμΈμ΄ κ³ μ μ¬μ°μΌλ‘ μ·¨λν μ¬μ°μ΄ μλ κ²½μ°, μμμΈμ κ·Έ μ¬μ°μ μμ μ¬μ°μμ μ μΈμμΈκ° κ³ΌμΈλ©λλ€.λ°λΌμ μμ μ¬μ° μ€μ μμμΈμ΄ μμ ν κ³ μ μ¬μ°μ μ μΈνκ³ μμ μ¬μ°μ κ³μ°νμ¬ μμμΈ μ κ³ λ₯Ό ν΄μΌ νλ©°, μμμΈ μ κ³ μμλ μμ μ¬μ° λͺ©λ‘μ μμ±νμ¬ μ μΆνμ¬μΌ ν©λλ€. λν, μμμΈλ μμ κ°μμΌ(μ¬λ§μΌ)λ‘λΆν° 6κ°μ μ΄λ΄μ μ κ³ ν΄μΌ νλ©°, μ κ³ νμ§ μμ κ²½μ° κ°μ°μΈκ° λΆκ³Όλ μ μμΌλ―λ‘ μ£Όμν΄μΌ ν©λλ€.(μ°Έκ³ : μμμΈλ² μ 18μ‘°)]` |
---
| Model | μ§λ¬Έ : μ¬νμ΄ λλ νμ μν΄λ°°μ μ²κ΅¬λ₯Ό ν μ μλμ§ κΆκΈν©λλ€. λΆλμ° κ΄λ ¨ μ¬κΈ°λ₯Ό λΉν΄μ μμΈν μλ΄μ λ°κ³ μΆμ΄μ |
|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Llama-2-ko-7b-Chat | `[μ, μ¬νμ΄ λλ νμλ λ―Όμ¬ μμ‘μ μ κΈ°ν μ μμ΅λλ€. λΆλμ° μ¬κΈ° λλ κΈ°ν λ―Όμ¬ λΆμκ³Ό κ΄λ ¨νμ¬ λ³νΈμ¬μ λμμ ꡬν μ μλ λͺ κ°μ§ μ΅μ
μ΄ μμ΅λλ€:1. λ―Όμ¬ μμ‘: λΆλμ° μ¬κΈ°μ κ°μ λ―Όμ¬ λΆμμ΄ μλ κ²½μ° λ³νΈμ¬λ₯Ό κ³ μ©νμ¬ λ―Όμ¬μμ‘μ μ κΈ°νμ¬ μν΄ λ°°μμ μ²κ΅¬νλ κ²μ΄ κ°μ₯ μΌλ°μ μ
λλ€.2. κ°μΈ μμ₯: λ―Όμ¬ μ¬κ±΄μμ λ―Όμ¬ μμ₯μ μμ±νμ¬ λ²μμ μ μΆνλ κ²λ ν κ°μ§ λ°©λ²μ
λλ€.3. μ€μ¬: μ€μ¬ κΈ°κ΄μ ν΅ν΄ λΆμμ ν΄κ²°ν μλ μμ΅λλ€.4. νμ¬ κ³ μ: λΆλμ° μ¬κΈ°λ‘ νμ¬ κ³ μκ° κ°λ₯νμ§ νμΈν΄μΌ ν©λλ€. μ΄λ¬ν μ΅μ
μ€ μ΄λ€ κ²μ μ νν μ§ κ²°μ νκΈ° μ μ λ³νΈμ¬μ μλ΄νμ¬ μ‘°μΈμ ꡬνλ κ²μ΄ μ’μ΅λλ€.]` |
| Legal-Llama-2-ko-7b-Chat | `[μ, μ¬νμ΄ λλ νμλ μν΄λ°°μ μ²κ΅¬κ° κ°λ₯ν©λλ€. λ―Όμ¬μμ‘λ² μ 766μ‘°(μν΄λ°°μμ²κ΅¬κΆμ μλ©Έμν¨)μ λ°λ₯΄λ©΄, λΆλ²νμλ‘ μΈν μν΄λ°°μμ μ²κ΅¬κΆμ νΌν΄μλ κ·Έ λ²μ λ리μΈμ΄ κ·Έ μν΄ λ° κ°ν΄μλ₯Ό μ λ λ‘λΆν° 3λ
κ° μ΄λ₯Ό νμ¬νμ§ μλνλ©΄ μν¨λ‘ μΈνμ¬ μλ©Ένλ€κ³ κ·μ νκ³ μμ΅λλ€.λ°λΌμ λΆλμ° μ¬κΈ°λ‘ μΈν΄ μ
μ μν΄λ₯Ό μ
μ¦ν μ μλ μ¦κ±°μ μ¦μΈμ ν보νκ³ , λ²μμ μν΄λ°°μμ μ²κ΅¬ν μ μμ΅λλ€. λν, λΆλμ° μ¬κΈ°λ₯Ό λΉν μ¬μ€μ κ²½μ°°μ μ κ³ νμ¬ μμ¬λ₯Ό μλ’°νλ κ²λ μ’μ λ°©λ²μ
λλ€.]` |
---
### Inference
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def gen(x, model, tokenizer, device):
    # Standard Korean instruction prompt (kullm style): "Below is an instruction that
    # describes a task. Write a response that appropriately completes the request."
    prompt = (
        f"아래는 작업을 설명하는 명령어입니다. 요청을 적절히 완료하는 응답을 작성하세요.\n\n### 명령어:\n{x}\n\n### 응답:"
    )
    len_prompt = len(prompt)
    gened = model.generate(
        **tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(
            device
        ),
        max_new_tokens=1024,
        early_stopping=True,
        do_sample=True,
        top_k=20,
        top_p=0.92,
        no_repeat_ngram_size=3,
        eos_token_id=2,
        repetition_penalty=1.2,
        num_beams=3,
    )
    return tokenizer.decode(gened[0])[len_prompt:]


def LLM_infer(input):
    device = (
        torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
    )
    model_id = "kfkas/Legal-Llama-2-ko-7b-Chat"
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map={"": 0}, torch_dtype=torch.float16, low_cpu_mem_usage=True
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model.eval()
    model.config.use_cache = True
    tokenizer.pad_token = tokenizer.eos_token
    output = gen(input, model=model, tokenizer=tokenizer, device=device)
    return output


if __name__ == "__main__":
    # "If I drive under the influence, how will I be punished?"
    text = LLM_infer("음주운전을 하면 어떻게 처벌 받아?")
    print(text)
```
## Note for oobabooga/text-generation-webui
Replace the `except ValueError:` clause in the `load_tokenizer` function (around line 109) of `modules/models.py` with a bare `except:`:
```diff
diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
trust_remote_code=shared.args.trust_remote_code,
use_fast=False
)
- except ValueError:
+ except:
tokenizer = AutoTokenizer.from_pretrained(
path_to_model,
trust_remote_code=shared.args.trust_remote_code,
```
Since Llama-2-Ko uses the FastTokenizer provided by the HF tokenizers library rather than the sentencepiece package,
the `use_fast=True` option is required when initializing the tokenizer.
Apple Silicon does not support BF16 computing; use the CPU instead. (BF16 is supported when using an NVIDIA GPU.)
---
> Below is the original model card of the Llama-2 model.
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
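For illustration, a single-turn prompt built with the tags described above might look like this (a sketch; the system and user strings are placeholders):
```python
system = "You are a helpful, respectful and honest assistant."
user = "Write me a short poem about the sea."

# BOS token, [INST] ... [/INST] around the turn, <<SYS>> ... <</SYS>> around the system prompt
prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
print(prompt)
```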
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
Vaibhav9401/llama2-qlora-finetunined-spam
|
Vaibhav9401
| 2023-07-27T18:54:34Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T08:40:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
patonw/dqn-SpaceInvadersNoFrameskip-v4
|
patonw
| 2023-07-27T18:54:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T18:45:18Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 801.00 +/- 400.10
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga patonw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga patonw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga patonw
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
stabilityai/StableBeluga1-Delta
|
stabilityai
| 2023-07-27T18:53:45Z | 1,588 | 58 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:conceptofmind/cot_submix_original",
"dataset:conceptofmind/flan2021_submix_original",
"dataset:conceptofmind/t0_submix_original",
"dataset:conceptofmind/niv2_submix_original",
"arxiv:2302.13971",
"arxiv:2306.02707",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T12:51:49Z |
---
license: cc-by-nc-4.0
datasets:
- conceptofmind/cot_submix_original
- conceptofmind/flan2021_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
language:
- en
pipeline_tag: text-generation
---
# Stable Beluga 1
## Model Description
`Stable Beluga 1` is a LLaMA 65B model fine-tuned on an Orca-style dataset.
## Usage
### Apply Delta Weights
Stable Beluga 1 cannot be used from the `stabilityai/StableBeluga1-Delta` weights alone. To obtain the correct model, one must add back the difference between LLaMA 65B and `stabilityai/StableBeluga1-Delta` weights. We provide the [`apply_delta.py`](https://huggingface.co/stabilityai/StabelBeluga1-Delta/raw/main/apply_delta.py) script to automate the conversion, which you can run as:
```sh
python3 apply_delta.py --base-model-path /path/to/model_weights/llama-65b --target-model-path StableBeluga1 --delta-path stabilityai/StableBeluga1-Delta
```
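For reference, a minimal sketch of what such a delta-merge does (assuming both checkpoints load with `transformers` and share parameter names; prefer the `apply_delta.py` shipped in the repo):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def merge_delta(base_path, delta_path, target_path):
    # Load the original LLaMA 65B weights and the released delta weights
    base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
    delta = AutoModelForCausalLM.from_pretrained(delta_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)

    # target = base + delta, parameter by parameter
    base_state = base.state_dict()
    for name, param in delta.state_dict().items():
        param.data += base_state[name].to(param.dtype)

    delta.save_pretrained(target_path)
    AutoTokenizer.from_pretrained(delta_path).save_pretrained(target_path)
```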
Start chatting with `Stable Beluga 1` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_path_to_StableBeluga1", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("your_path_to_StableBeluga1", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_prompt = "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n"
system_prompt += "### Instruction:\nYou are Stable Beluga, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### Input: {message}\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Stable Beluga 1 should be used with prompts formatted similarly to Alpaca as below:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
This is a system prompt, please behave and help the user.
### Input:
Your prompt here
### Response:
The output of Stable Beluga 1
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: Stable Beluga 1 is an auto-regressive language model fine-tuned on LLaMA 65B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`StableBeluga1`) are licensed under the Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
### Training Dataset
`Stable Beluga 1` is trained on our internal Orca-style dataset
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (BF16), and optimized with AdamW. We outline the following hyperparameters:
| Dataset | Batch Size | Learning Rate |Learning Rate Decay| Warm-up | Weight Decay | Betas |
|-------------------|------------|---------------|-------------------|---------|--------------|-------------|
| Orca pt1 packed | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
| Orca pt2 unpacked | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
## Use and Limitations
### Ethical Considerations and Limitations
Beluga is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.
## Citations
```bibtex
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
|
Aevermann/rwkv-world-latest
|
Aevermann
| 2023-07-27T18:36:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T18:16:58Z |
---
license: apache-2.0
---
This is a clone of the BlinkDL RWKV World model.
It is a test; please load from the original repo.
|
Oussafik/llama2-qlora-finetunined-french
|
Oussafik
| 2023-07-27T18:32:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T18:32:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
dariowsz/ppo-Pyramids
|
dariowsz
| 2023-07-27T18:29:47Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-27T18:28:33Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dariowsz/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on *Watch the agent play* 👀
|
idealflaw/dqn-SpaceInvadersNoFrameskip-v4
|
idealflaw
| 2023-07-27T18:24:12Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T18:08:41Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 639.00 +/- 185.56
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga idealflaw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga idealflaw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga idealflaw
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Leogrin/eleuther-pythia1b-hh-dpo
|
Leogrin
| 2023-07-27T18:21:11Z | 168 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2305.18290",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-27T14:35:26Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
# Infos
Pythia-1b supervised fine-tuned on the Anthropic hh-rlhf dataset for 1 epoch (the SFT model), then trained with DPO [(paper)](https://arxiv.org/abs/2305.18290) on the same dataset for 1 epoch.
[wandb log](https://wandb.ai/pythia_dpo/Pythia_DPO_new/runs/jk09pzqb)
See [Pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) for model details [(paper)](https://arxiv.org/abs/2101.00027).
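The checkpoint loads like any other Pythia (GPT-NeoX) causal LM; a minimal sketch, assuming the usual hh-rlhf `Human:`/`Assistant:` prompt convention:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Leogrin/eleuther-pythia1b-hh-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format follows the Anthropic hh-rlhf convention (an assumption, not stated in this card)
inputs = tokenizer("\n\nHuman: How do I bake bread?\n\nAssistant:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```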
# Benchmark raw results:
Results for the base model are taken from the [Pythia paper](https://arxiv.org/abs/2101.00027).
## Zero shot
| Task | 1B_base | 1B_sft | 1B_dpo |
|------------------|----------------|----------------|-----------------|
| Lambada (OpenAI) | 0.562 ± 0.007 | 0.563 ± 0.007 | 0.5575 ± 0.0069 |
| PIQA | 0.707 ± 0.011 | 0.711 ± 0.011 | 0.7122 ± 0.0106 |
| WinoGrande | 0.537 ± 0.014 | 0.534 ± 0.014 | 0.5525 ± 0.0140 |
| WSC | 0.365 ± 0.047 | 0.365 ± 0.047 | 0.3654 ± 0.0474 |
| ARC - Easy | 0.569 ± 0.010 | 0.583 ± 0.010 | 0.5901 ± 0.0101 |
| ARC - Challenge | 0.244 ± 0.013 | 0.248 ± 0.013 | 0.2611 ± 0.0128 |
| SciQ | 0.840 ± 0.012 | 0.847 ± 0.011 | 0.8530 ± 0.0112 |
| LogiQA | 0.223 ± 0.016 | N/A | N/A |
## Five shot
| Task | 1B_base | 1B_sft | 1B_dpo |
|------------------|----------------|----------------|-----------------|
| Lambada (OpenAI) | 0.507 ± 0.007 | 0.4722 ± 0.007 | 0.4669 ± 0.0070 |
| PIQA | 0.705 ± 0.011 | 0.7165 ± 0.0105 | 0.7138 ± 0.0105 |
| WinoGrande | 0.532 ± 0.014 | 0.5343 ± 0.014 | 0.5525 ± 0.0140 |
| WSC | 0.365 ± 0.047 | 0.5000 ± 0.0493 | 0.5577 ± 0.0489 |
| ARC - Easy | 0.594 ± 0.010 | 0.6010 ± 0.010 | 0.6170 ± 0.0100 |
| ARC - Challenge | 0.259 ± 0.013 | 0.2679 ± 0.0129 | 0.2833 ± 0.0132 |
| SciQ | 0.920 ± 0.009 | 0.9100 ± 0.0091 | 0.9020 ± 0.0094 |
| LogiQA | 0.227 ± 0.016 | N/A | N/A |
|
grace-pro/no-delete_5e-5_hausa
|
grace-pro
| 2023-07-27T18:18:51Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-27T17:02:05Z |
---
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: no-delete_5e-5_hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no-delete_5e-5_hausa
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1716
- Precision: 0.4009
- Recall: 0.2840
- F1: 0.3325
- Accuracy: 0.9559
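A minimal inference sketch with the `token-classification` pipeline (repo id from this card; the input sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/no-delete_5e-5_hausa",
    aggregation_strategy="simple",
)

# Replace the placeholder with a real Hausa sentence
print(ner("Your Hausa sentence goes here."))
```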
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1421 | 1.0 | 1283 | 0.1347 | 0.4610 | 0.1779 | 0.2567 | 0.9594 |
| 0.1234 | 2.0 | 2566 | 0.1332 | 0.4847 | 0.1920 | 0.2750 | 0.9603 |
| 0.1041 | 3.0 | 3849 | 0.1412 | 0.4581 | 0.2305 | 0.3067 | 0.9595 |
| 0.0822 | 4.0 | 5132 | 0.1562 | 0.3979 | 0.2752 | 0.3253 | 0.9559 |
| 0.0664 | 5.0 | 6415 | 0.1716 | 0.4009 | 0.2840 | 0.3325 | 0.9559 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
snob/TagMyBookmark-KoAlpaca-QLoRA-v1.0_ALLDATA-Finetune300
|
snob
| 2023-07-27T18:00:43Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T18:00:38Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
kusknish/ppo-LunarLander-v2
|
kusknish
| 2023-07-27T17:59:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T17:59:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -202.29 +/- 149.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the default `package_to_hub` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; adjust it to match the file actually stored in this repo
checkpoint = load_from_hub(repo_id="kusknish/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
snob/TagMyBookmark-KoAlpaca-QLoRA-v1.0-Finetune300
|
snob
| 2023-07-27T17:57:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T17:57:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
vxbrandon/my_awesome_qa_model
|
vxbrandon
| 2023-07-27T17:48:35Z | 117 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-27T16:13:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1920
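A minimal sketch with the `question-answering` pipeline (repo id from this card; question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vxbrandon/my_awesome_qa_model")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
)
print(result["answer"], result["score"])
```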
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3143 | 1.0 | 685 | 1.3187 |
| 1.3356 | 2.0 | 1370 | 1.2095 |
| 1.0967 | 3.0 | 2055 | 1.1920 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
dariowsz/ppo-SnowballTarget
|
dariowsz
| 2023-07-27T17:45:56Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-27T17:45:49Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dariowsz/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on *Watch the agent play* 👀
|
smangrul/peft-lora-starcoderplus-chat-asst-A100-40GB-colab
|
smangrul
| 2023-07-27T17:44:52Z | 28 | 3 |
peft
|
[
"peft",
"tensorboard",
"generated_from_trainer",
"base_model:bigcode/starcoderplus",
"base_model:adapter:bigcode/starcoderplus",
"region:us"
] | null | 2023-07-27T13:20:11Z |
---
base_model: bigcode/starcoderplus
tags:
- generated_from_trainer
model-index:
- name: peft-lora-starcoderplus-chat-asst-A100-40GB-colab
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-starcoderplus-chat-asst-A100-40GB-colab
This model is a fine-tuned version of [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9217
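A sketch of loading this LoRA adapter on top of its base model with `peft` (the base model id comes from this card's metadata):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model listed in the card metadata, then attach the adapter weights
base = AutoModelForCausalLM.from_pretrained("bigcode/starcoderplus", device_map="auto")
model = PeftModel.from_pretrained(base, "smangrul/peft-lora-starcoderplus-chat-asst-A100-40GB-colab")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderplus")
```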
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.982 | 0.3 | 203 | 0.9101 |
| 0.9379 | 1.3 | 406 | 0.9078 |
| 0.8899 | 2.3 | 609 | 0.9217 |
### Framework versions
- PEFT 0.5.0.dev0
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
asenella/MMVAEPlus_beta_10_scale_False_seed_2
|
asenella
| 2023-07-27T17:21:14Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T17:21:01Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
badmatr11x/roberta-base-emotions-detection-from-text
|
badmatr11x
| 2023-07-27T17:15:42Z | 136 | 7 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-15T16:56:14Z |
---
license: mit
widget:
- text: With tears of joy streaming down her cheeks, she embraced her long-lost brother after years of separation.
example_title: Joy
- text: As the orchestra played the final note, the audience erupted into thunderous applause, filling the concert hall with joy.
example_title: Joy
- text: The old man sat alone on the park bench, reminiscing about the love he had lost, his eyes filled with sadness.
example_title: Sadness
- text: The news of her best friend moving to a distant country left her feeling a profound sadness and emptiness.
example_title: Sadness
- text: The scientific research paper discussed complex concepts that were beyond the scope of a layman's understanding.
example_title: Neutral
- text: The documentary provided an objective view of the historical events, presenting facts without any bias.
example_title: Neutral
- text: He clenched his fists tightly, trying to control the surge of anger when he heard the offensive remarks.
example_title: Anger
- text: The unfair treatment at work ignited a simmering anger within him, leading him to consider confronting the management.
example_title: Anger
- text: As the magician pulled a rabbit out of an empty hat, the children gasped in amazement and surprise.
example_title: Surprise
- text: He opened the box to find a rare and valuable antique inside, leaving him speechless with surprise.
example_title: Surprise
- text: The moldy and rotting food in the refrigerator evoked a sense of disgust, leading her to clean it immediately.
example_title: Disgust
- text: The movie's graphic scenes of violence and gore left many viewers feeling a sense of disgust and unease.
example_title: Disgust
- text: As the storm raged outside, the little child clung to their parents, seeking comfort from the fear of thunder.
example_title: Fear
- text: The horror movie was so terrifying that some viewers had to cover their eyes in fear, unable to bear the suspense.
example_title: Fear
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
|
asenella/MMVAEPlus_beta_25_scale_False_seed_1
|
asenella
| 2023-07-27T17:11:03Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T17:10:50Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
bochen0909/PyramidsRND
|
bochen0909
| 2023-07-27T17:06:23Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-27T17:06:20Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: bochen0909/PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play π
|
asenella/MMVAEPlus_beta_25_scale_False_seed_0
|
asenella
| 2023-07-27T17:03:43Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T17:03:30Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
avidoavid/RWKV-1b5-finetuned-overfit
|
avidoavid
| 2023-07-27T17:01:36Z | 21 | 0 |
transformers
|
[
"transformers",
"rwkv",
"text-generation",
"generated_from_trainer",
"base_model:RWKV/rwkv-raven-1b5",
"base_model:finetune:RWKV/rwkv-raven-1b5",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-19T22:19:49Z |
---
base_model: RWKV/rwkv-raven-1b5
tags:
- generated_from_trainer
model-index:
- name: RWKV-1b5-finetuned-overfit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RWKV-1b5-finetuned-overfit
This model is a fine-tuned version of [RWKV/rwkv-raven-1b5](https://huggingface.co/RWKV/rwkv-raven-1b5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 68.7560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6836 | 1.0 | 1 | 1.4341 |
| 1.5494 | 2.0 | 2 | 1.7198 |
| 0.7595 | 3.0 | 3 | 9.1981 |
| 0.3142 | 4.0 | 4 | 35.6430 |
| 0.1007 | 5.0 | 5 | 68.5554 |
| 0.0256 | 6.0 | 6 | 69.8436 |
| 0.0119 | 7.0 | 7 | 69.2797 |
| 0.0082 | 8.0 | 8 | 68.7560 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Rajora1/llama2-pt
|
Rajora1
| 2023-07-27T16:47:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T14:18:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
asenella/MMVAEPlus_beta_25_scale_False_seed_2
|
asenella
| 2023-07-27T16:41:36Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T16:41:23Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
greg-szopinski/Reinforce-pixelcopter-default
|
greg-szopinski
| 2023-07-27T16:29:22Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T16:29:18Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-default
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.60 +/- 60.51
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Luxem/Plant-Disease-Classification
|
Luxem
| 2023-07-27T16:28:31Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-27T16:28:31Z |
---
license: bigscience-openrail-m
---
|
blackmount8/Nous-Hermes-Llama2-13b-ct2-int8
|
blackmount8
| 2023-07-27T16:25:25Z | 1 | 0 |
transformers
|
[
"transformers",
"llama-2",
"self-instruct",
"distillation",
"synthetic instruction",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-27T16:10:32Z |
---
language:
- en
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
license:
- mit
---
# blackmount8/Nous-Hermes-Llama2-13b-int8
Int8 version of [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b), quantized using CTranslate2.
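A minimal generation sketch, assuming the CTranslate2 export sits at the root of this repo and that the tokenizer from the original NousResearch checkpoint is reused (both are assumptions, not statements from this card):
```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# Fetch the CTranslate2 model files from this repo (assumes the export is at the repo root)
model_dir = snapshot_download("blackmount8/Nous-Hermes-Llama2-13b-ct2-int8")
generator = ctranslate2.Generator(model_dir, device="cpu", compute_type="int8")

# The tokenizer is taken from the original, non-quantized checkpoint (assumption)
tokenizer = transformers.AutoTokenizer.from_pretrained("NousResearch/Nous-Hermes-Llama2-13b")

prompt = "### Instruction:\nExplain int8 quantization in one sentence.\n\n### Response:\n"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([tokens], max_length=256, sampling_temperature=0.7)
output_ids = tokenizer.convert_tokens_to_ids(results[0].sequences[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```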
# Model Card: Nous-Hermes-Llama2-13b
Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI.
## Model Description
Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
This Hermes model uses the exact same dataset as Hermes on Llama-1. This keeps the old and new Hermes consistent, for anyone who wanted the new model to stay as close to the original Hermes as possible while being more capable.
This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x a100 80GB DGX machine.
## Example Outputs:




## Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.
This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below
## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI.
Special mention goes to @winglian for assisting in some of the training issues.
Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
Among the contributors of datasets:
- GPTeacher was made available by Teknium
- Wizard LM by nlpxucan
- Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
- GPT4-LLM and Unnatural Instructions were provided by Microsoft
- Airoboros dataset by jondurbin
- Camel-AI's domain expert datasets are from Camel-AI
- CodeAlpaca dataset by Sahil 2801.
If anyone was left out, please open a thread in the community tab.
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
or
```
### Instruction:
<prompt>
### Input:
<additional context>
### Response:
<leave a newline blank for model to respond>
```
## Benchmark Results
AGI-Eval
```
| Task |Version| Metric |Value | |Stderr|
|agieval_aqua_rat | 0|acc |0.2362|Β± |0.0267|
| | |acc_norm|0.2480|Β± |0.0272|
|agieval_logiqa_en | 0|acc |0.3425|Β± |0.0186|
| | |acc_norm|0.3472|Β± |0.0187|
|agieval_lsat_ar | 0|acc |0.2522|Β± |0.0287|
| | |acc_norm|0.2087|Β± |0.0269|
|agieval_lsat_lr | 0|acc |0.3510|Β± |0.0212|
| | |acc_norm|0.3627|Β± |0.0213|
|agieval_lsat_rc | 0|acc |0.4647|Β± |0.0305|
| | |acc_norm|0.4424|Β± |0.0303|
|agieval_sat_en | 0|acc |0.6602|Β± |0.0331|
| | |acc_norm|0.6165|Β± |0.0340|
|agieval_sat_en_without_passage| 0|acc |0.4320|Β± |0.0346|
| | |acc_norm|0.4272|Β± |0.0345|
|agieval_sat_math | 0|acc |0.2909|Β± |0.0307|
| | |acc_norm|0.2727|Β± |0.0301|
```
GPT-4All Benchmark Set
```
| Task |Version| Metric |Value | |Stderr|
|arc_challenge| 0|acc |0.5102|Β± |0.0146|
| | |acc_norm|0.5213|Β± |0.0146|
|arc_easy | 0|acc |0.7959|Β± |0.0083|
| | |acc_norm|0.7567|Β± |0.0088|
|boolq | 1|acc |0.8394|Β± |0.0064|
|hellaswag | 0|acc |0.6164|Β± |0.0049|
| | |acc_norm|0.8009|Β± |0.0040|
|openbookqa | 0|acc |0.3580|Β± |0.0215|
| | |acc_norm|0.4620|Β± |0.0223|
|piqa | 0|acc |0.7992|Β± |0.0093|
| | |acc_norm|0.8069|Β± |0.0092|
|winogrande | 0|acc |0.7127|Β± |0.0127|
```
BigBench Reasoning Test
```
| Task |Version| Metric |Value | |Stderr|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|Β± |0.0362|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7344|Β± |0.0230|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|Β± |0.0275|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|Β± |0.0073|
| | |exact_str_match |0.0000|Β± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|Β± |0.0200|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|Β± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|Β± |0.0287|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|Β± |0.0192|
|bigbench_navigate | 0|multiple_choice_grade|0.4950|Β± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|Β± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.3728|Β± |0.0229|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|Β± |0.0123|
|bigbench_snarks | 0|multiple_choice_grade|0.6298|Β± |0.0360|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|Β± |0.0155|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|Β± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|Β± |0.0114|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|Β± |0.0083|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|Β± |0.0287|
```
These are the highest benchmarks Hermes has seen on every metric, achieving the following average scores:
- GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1
- 0.3657 on BigBench, up from 0.328 on hermes-llama1
- 0.372 on AGIEval, up from 0.354 on Hermes-llama1
These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and in 2nd place on Winogrande, compared against GPT4All's benchmarking list, supplanting Hermes 1 for the new top position.
## Resources for Applied Use Cases:
For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying discord chatbot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
## Future Plans
We plan to continue to iterate on both more high quality data, and new data filtering techniques to eliminate lower quality data going forward.
## Model Usage
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
blackmount8/WizardLM-13B-V1.2-ct2-int8
|
blackmount8
| 2023-07-27T16:17:54Z | 2 | 0 |
transformers
|
[
"transformers",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-26T16:16:35Z |
---
license: mit
---
# blackmount8/WizardLM-13B-V1.2-ct2-int8
Int8 version of [WizardLM/WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2), quantized using CTranslate2.
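As a rough illustration, an int8 CTranslate2 export of this kind can be produced with the Transformers converter that ships with CTranslate2; the output directory name below is arbitrary and this is a sketch, not necessarily the exact command used for this repo:
```python
from ctranslate2.converters import TransformersConverter

# Convert the original checkpoint to a CTranslate2 model with int8 weight quantization
converter = TransformersConverter("WizardLM/WizardLM-13B-V1.2")
converter.convert("WizardLM-13B-V1.2-ct2-int8", quantization="int8")
```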
|
NasimB/aochildes-rarity-2
|
NasimB
| 2023-07-27T16:08:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-27T13:44:03Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aochildes-rarity-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aochildes-rarity-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.351 | 0.29 | 500 | 5.3358 |
| 5.0412 | 0.59 | 1000 | 4.9250 |
| 4.7138 | 0.88 | 1500 | 4.6868 |
| 4.4435 | 1.17 | 2000 | 4.5444 |
| 4.3073 | 1.47 | 2500 | 4.4317 |
| 4.205 | 1.76 | 3000 | 4.3274 |
| 4.0796 | 2.05 | 3500 | 4.2630 |
| 3.8987 | 2.35 | 4000 | 4.2145 |
| 3.8749 | 2.64 | 4500 | 4.1579 |
| 3.8421 | 2.93 | 5000 | 4.1113 |
| 3.6388 | 3.23 | 5500 | 4.1089 |
| 3.5906 | 3.52 | 6000 | 4.0804 |
| 3.5776 | 3.81 | 6500 | 4.0451 |
| 3.4712 | 4.11 | 7000 | 4.0519 |
| 3.3209 | 4.4 | 7500 | 4.0435 |
| 3.3179 | 4.69 | 8000 | 4.0297 |
| 3.3071 | 4.99 | 8500 | 4.0193 |
| 3.1447 | 5.28 | 9000 | 4.0337 |
| 3.1394 | 5.57 | 9500 | 4.0322 |
| 3.1343 | 5.87 | 10000 | 4.0318 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Lazycuber/Pygnen-dolly-6B
|
Lazycuber
| 2023-07-27T16:08:06Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T14:12:36Z |
---
license: apache-2.0
---
|
WforGodot/add-lora-7b
|
WforGodot
| 2023-07-27T15:54:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T17:39:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
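For reference, the 4-bit settings listed above map onto the following loading sketch; the base checkpoint is an assumption (the adapter name only suggests a 7B LLaMA-family base), so treat this as illustrative rather than the exact training setup:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization config mirroring the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model is an assumption; the card does not state which 7B checkpoint was used
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "WforGodot/add-lora-7b")
```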
|
kejolong/devilnun
|
kejolong
| 2023-07-27T15:54:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-27T15:52:56Z |
---
license: creativeml-openrail-m
---
|
rosiemin/search_embed_distilbert_finetune
|
rosiemin
| 2023-07-27T15:54:11Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-12T16:16:23Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rosiemin/search_embed_distilbert_finetune
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rosiemin/search_embed_distilbert_finetune')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rosiemin/search_embed_distilbert_finetune')
model = AutoModel.from_pretrained('rosiemin/search_embed_distilbert_finetune')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rosiemin/search_embed_distilbert_finetune)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 35 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
aman38649/marian-finetuned-kde4-en-to-fr
|
aman38649
| 2023-07-27T15:50:48Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-27T09:19:00Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_keras_callback
model-index:
- name: aman38649/marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aman38649/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7983
- Validation Loss: 0.8210
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0611 | 0.8791 | 0 |
| 0.7983 | 0.8210 | 1 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.0
- Tokenizers 0.13.3
|
bk6000/a2c-AntBulletEnv-v0
|
bk6000
| 2023-07-27T15:38:45Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T15:37:38Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1166.13 +/- 150.73
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo (filename assumed, adjust if needed)
checkpoint = load_from_hub(repo_id="bk6000/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
mahmoudzamani/t5_recommendation_sports_equipment_english
|
mahmoudzamani
| 2023-07-27T15:30:14Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-27T15:18:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_recommendation_sports_equipment_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_recommendation_sports_equipment_english
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4517
- Rouge1: 57.9365
- Rouge2: 47.6190
- Rougel: 56.9841
- Rougelsum: 56.6667
- Gen Len: 3.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.96 | 6 | 6.7882 | 8.8278 | 0.9524 | 8.7668 | 8.8278 | 19.0 |
| No log | 1.96 | 12 | 2.3412 | 18.0952 | 0.0 | 18.0952 | 18.0952 | 3.2381 |
| No log | 2.96 | 18 | 0.8550 | 11.9048 | 4.7619 | 11.9048 | 11.9048 | 4.0 |
| No log | 3.96 | 24 | 0.7481 | 32.3810 | 4.7619 | 32.0635 | 32.0635 | 3.9048 |
| No log | 4.96 | 30 | 0.7208 | 21.2698 | 4.7619 | 20.7937 | 20.7937 | 3.6190 |
| No log | 5.96 | 36 | 0.6293 | 31.7460 | 23.8095 | 31.7460 | 31.7460 | 3.6667 |
| No log | 6.96 | 42 | 0.6203 | 43.6508 | 33.3333 | 43.4921 | 42.6984 | 3.9048 |
| No log | 7.96 | 48 | 0.6352 | 48.4127 | 33.3333 | 46.8254 | 46.8254 | 3.8095 |
| No log | 8.96 | 54 | 0.5334 | 53.2540 | 42.8571 | 52.3810 | 52.0635 | 3.9524 |
| No log | 9.96 | 60 | 0.4517 | 57.9365 | 47.6190 | 56.9841 | 56.6667 | 3.9048 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.8.0
- Tokenizers 0.13.3
|
alexandremarie/Falcon7b-wiki2-fr
|
alexandremarie
| 2023-07-27T15:14:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-27T15:14:35Z |
---
license: creativeml-openrail-m
---
|
royokong/prompteol-llama-7b
|
royokong
| 2023-07-27T15:07:54Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T15:06:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
cardiffnlp/pcl_robertabase
|
cardiffnlp
| 2023-07-27T15:01:06Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T15:25:52Z |
---
language:
- en
---
## PCL
Someone uses __Patronizing and Condescending Language (PCL)__ when their use of the language denotes a superior attitude towards someone else, or depicts them in a compassionate way, raising a feeling of pity among the audience.
## pcl-roberta-base model for PCL detection
This model is trained on __Don't Patronize Me!__, a dataset of paragraphs extracted from media articles about vulnerable communities, published in 20 English-speaking countries or areas. The paragraphs have been manually annotated to assess whether they contain any type of PCL.
This is the PCL detection model built on RoBERTa-base.
- Git Repo: [Don't Patronize Me! official repository](https://github.com/Perez-AlmendrosC/dontpatronizeme)
- Dataset: [Available upon request here](https://docs.google.com/forms/d/e/1FAIpQLSe5KyzXgpnEOjS-Y6Gb8TTKiWxh4_qLuPL-NGiqKCyF41ALlg/viewform)
<b>Labels</b>:
0 -> Negative;
1 -> Positive
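A minimal usage sketch with the standard `transformers` text-classification pipeline (the returned label strings may appear as `LABEL_0`/`LABEL_1` depending on the model config; the mapping to Negative/Positive follows the list above):
```python
from transformers import pipeline

# Load the PCL detector from the Hub
pcl_detector = pipeline("text-classification", model="cardiffnlp/pcl_robertabase")

# 0 -> Negative (no PCL), 1 -> Positive (contains PCL)
print(pcl_detector("These poor souls need our help to get through the winter."))
```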
To know more about our work on PCL detection, the PCL detection model and the dataset, please refer to:
## Reference Papers:
```
@inproceedings{perez2020don,
title={Don't Patronize Me! An Annotated Dataset with Patronizing and Condescending Language towards Vulnerable Communities},
author={P{\'e}rez-Almendros, Carla and Anke, Luis Espinosa and Schockaert, Steven},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={5891--5902},
year={2020}
}
```
```
@inproceedings{perez2022semeval,
title={SemEval-2022 task 4: Patronizing and condescending language detection},
author={P{\'e}rez-Almendros, Carla and Anke, Luis Espinosa and Schockaert, Steven},
booktitle={Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)},
pages={298--307},
year={2022}
}
```
```
@inproceedings{perez2022identifying,
title={Identifying condescending language: A tale of two distinct phenomena?},
author={Perez-Almendros, Carla and Schockaert, Steven},
booktitle={Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)},
pages={130--141},
year={2022}
}
```
|
Pierre-Arthur/distilbert-base-uncased-finetuned-imdb
|
Pierre-Arthur
| 2023-07-27T14:55:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-27T14:51:24Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7026 | 1.0 | 157 | 2.4957 |
| 2.581 | 2.0 | 314 | 2.4286 |
| 2.5363 | 3.0 | 471 | 2.4515 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
aayushi08/segformer-b0-scene-parse-150_pretrained
|
aayushi08
| 2023-07-27T14:52:11Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-07-27T11:52:06Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150_pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150_pretrained
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2284
- Mean Iou: 0.0767
- Mean Accuracy: 0.1574
- Overall Accuracy: 0.5622
- Per Category Iou: [0.5148203561012767, 0.724040099091574, 0.6958825927435793, 0.38401244431532056, 0.29543194795602395, 0.29389807778274474, 0.0, 0.12126925156299818, 0.20467349613092675, 0.04878431281437682, 0.0, 0.1679011093073593, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan]
- Per Category Accuracy: [0.8140876905468601, 0.8295938962384349, 0.867831101268203, 0.8547256107829203, 0.39126018171899396, 0.31410348287229467, 0.0, 0.16157810162353853, 0.7849884441835724, 0.9576966932725199, nan, 0.3186048004107303, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.7729 | 1.0 | 20 | 4.8806 | 0.0109 | 0.0500 | 0.2075 | [0.0325297525314704, 0.24495480446129927, 0.5035687103968282, 0.07590179316096747, 0.0208204321411237, 0.11755765952640118, 0.0012824676676576644, 0.11501857578251874, 0.004708489128929511, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0013707195075028857, nan, 0.0, 0.0, 0.0, 0.10670559106194026, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.012752466783029957, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.038409172339663206, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.039392859389085724, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0] | [0.032714193506590106, 0.2835194865505293, 0.7925572293142232, 0.09808227298140203, 0.023401493632310616, 0.13673498638383258, 0.0016606280193236715, 0.2387377403446556, 0.004989177886202722, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.003921838447777625, nan, nan, nan, nan, 0.1382100892304974, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.11718494271685762, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.038891307502539545, nan, nan, nan, nan, nan, nan, nan, 0.09062118191756158, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.6133 | 2.0 | 40 | 4.5556 | 0.0240 | 0.0928 | 0.4200 | [0.3414124883027797, 0.5189284526020218, 0.511476355875916, 0.1606769579990087, 0.2191685362703107, 0.2398429986223389, 0.015511382795680331, 0.11331394590160879, 0.15028358081340668, 0.01438743301769067, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02806674579347902, 0.0, 0.0, 0.0, 0.0006765899864682003, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.02215046624619006, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03344654459539279, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.011403657777022819, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0] | [0.3974436187647117, 0.6709077053142973, 0.9814366002966801, 0.30133978188970545, 0.24257416429955417, 0.3673578265093243, 0.019345238095238096, 0.2245433220664561, 0.19344069848490406, 0.04469783352337514, nan, 0.0, 0.0, nan, 0.0, 0.07707055214723926, 0.0, nan, 0.0, 0.0013357079252003562, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.02593868716317696, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.14828150572831425, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0161886695389364, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.0018 | 3.0 | 60 | 4.0966 | 0.0381 | 0.1065 | 0.5018 | [0.4579418950126497, 0.5478506343770332, 0.6281485983096435, 0.187622528313154, 0.12857750191310263, 0.2648201387568903, 0.0, 0.17438167563464907, 0.2715138857161505, 0.007824522617422025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0932277924362357, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0007550050195388662, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.015868077162414437, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0001977246456165967, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0] | [0.6575663269835709, 0.750747192817423, 0.9717910146320401, 0.5460234276591875, 0.14223367632950207, 0.35499976111987, 0.0, 0.37980458432611147, 0.3052202942147548, 0.0411630558722919, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0943900267141585, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0008039579468150897, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.01669394435351882, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0002089897755771333, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.7532 | 4.0 | 80 | 3.6052 | 0.0483 | 0.1219 | 0.5263 | [0.5050829619688341, 0.5167095890300885, 0.7748590774250136, 0.18315437529917458, 0.11704024897716543, 0.13685460073575936, 0.0, 0.2130983716844216, 0.29945226721356577, 0.057599769744830505, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan] | [0.8452302398736926, 0.7656999797573261, 0.96594446649813, 0.5077362468593599, 0.1259241144491055, 0.19461564187090918, 0.0, 0.3013058495410133, 0.4392310796434205, 0.7302166476624857, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.1001 | 5.0 | 100 | 3.4344 | 0.0660 | 0.1428 | 0.5519 | [0.5740286466908133, 0.5748238736928366, 0.770694415068295, 0.27976119037100783, 0.13865646665072914, 0.2115060410227592, 0.0, 0.2072166229048963, 0.2555005183734593, 0.047472124273325075, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.044365572315882874, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8366065922675107, 0.764253223943511, 0.9643113330408318, 0.7379712644065285, 0.15274929927199535, 0.28770722851273234, 0.0, 0.4467686226704346, 0.5695733519204667, 0.9087799315849487, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.04452359750667854, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.0427 | 6.0 | 120 | 3.2186 | 0.0650 | 0.1438 | 0.5559 | [0.5735339218911698, 0.6239798677665012, 0.7511513782853694, 0.2645688931826179, 0.12649460613253502, 0.24923481054964644, 0.0, 0.1969366951854885, 0.2184281686899488, 0.051422466461522716, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0623342175066313, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8391044267577821, 0.7624790131101082, 0.9706001156077415, 0.855931347124083, 0.1343328505050885, 0.33846129345627696, 0.0, 0.31312683548512216, 0.5571004072049598, 0.9165336374002281, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.06277827248441674, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.3803 | 7.0 | 140 | 3.0637 | 0.0701 | 0.1427 | 0.5502 | [0.5643446236009608, 0.6478939919910137, 0.7641745041997519, 0.26411100972559143, 0.19549661801352794, 0.1911980999487945, 0.0, 0.16826734984918662, 0.17217137814442804, 0.042858021905894904, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.003116651825467498, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8478744412265063, 0.7597745323346947, 0.9738489717178892, 0.7993762503620577, 0.20660659530841438, 0.27948178937142676, 0.0, 0.2462643837387562, 0.6370006236472358, 0.9530216647662486, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.003116651825467498, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.1859 | 8.0 | 160 | 3.0093 | 0.0584 | 0.1345 | 0.5279 | [0.5304954925773289, 0.630905617211838, 0.7114010240766968, 0.2654748809451504, 0.1130690161527166, 0.18241986166623642, 0.0, 0.1141937010923749, 0.150315689365187, 0.04692530210423179, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0008904719501335708, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7662953107416481, 0.7716819875924317, 0.9095334600839897, 0.7949439905157722, 0.12022423296149289, 0.3263500708677719, 0.0, 0.1470327478251233, 0.626215194981474, 0.9174458380843785, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0008904719501335708, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.5901 | 9.0 | 180 | 2.8961 | 0.0623 | 0.1320 | 0.5360 | [0.5195164676654499, 0.6543788786036646, 0.6849384372869802, 0.30794058237074823, 0.1333599486209231, 0.15503567223107292, 0.0, 0.08126954631008769, 0.22258699934340118, 0.04523293026052965, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8745683977147574, 0.768743823007585, 0.8957162456734151, 0.7660669419427848, 0.14452867811659362, 0.2138693803449429, 0.0, 0.15671118006686247, 0.4974503833596243, 0.9605473204104903, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.6568 | 10.0 | 200 | 2.6655 | 0.0635 | 0.1318 | 0.5376 | [0.5089110011356949, 0.5947639210280143, 0.7501099711752571, 0.2960618158114864, 0.09897355720209366, 0.13247966647434348, 0.0, 0.04938747761057435, 0.25216933229927274, 0.049225711566744525, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8569419883338412, 0.791147700075017, 0.95600637931875, 0.7128461440012933, 0.10955811809853458, 0.1700348764989728, 0.0, 0.06977152250604902, 0.6321948714186141, 0.9732041049030786, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.0827 | 11.0 | 220 | 2.5540 | 0.0664 | 0.1328 | 0.5525 | [0.5477061884254247, 0.6076672388749504, 0.6988056319090914, 0.32100561831494234, 0.0796511455145158, 0.1849044459508501, 0.0, 0.06290754292548194, 0.2377632194665419, 0.04846934405354012, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8413215248067838, 0.8561951512842191, 0.8976592914499022, 0.7920475289140964, 0.10084839820162154, 0.21502396763970505, 0.0, 0.08161097874069559, 0.5591914597013831, 0.9678449258836944, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7122 | 12.0 | 240 | 2.6093 | 0.0663 | 0.1347 | 0.5440 | [0.48626172067111173, 0.6938522126174008, 0.6745183497862148, 0.32800975475961913, 0.13442052689527517, 0.1590950988912591, 0.0, 0.03191117986488059, 0.28731424271802514, 0.055913045911087485, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8683034161065935, 0.8176626260701826, 0.8805792922856208, 0.7411102204678796, 0.1584679922496661, 0.24051247750545443, 0.0, 0.04305424724330914, 0.7284199713855974, 0.9115165336374003, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7849 | 13.0 | 260 | 2.5046 | 0.0657 | 0.1399 | 0.5480 | [0.4882436604761502, 0.6822540965256525, 0.7004956509062636, 0.3247556811491817, 0.13196717267240105, 0.11096064594061923, 0.0, 0.02708401300129288, 0.3101351020607959, 0.04951936249885834, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.858285684121115, 0.8391704671294697, 0.9035476255144893, 0.7431512155034791, 0.1509433962264151, 0.13897249693437166, 0.0, 0.041401156240187656, 0.9297112880149675, 0.9891676168757126, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.9403 | 14.0 | 280 | 2.6340 | 0.0659 | 0.1387 | 0.5293 | [0.48312476897435797, 0.6488606361658413, 0.6648547679594857, 0.3053698024726054, 0.20489118952038876, 0.12576909929926508, 0.0, 0.013371640156689207, 0.34921209139450415, 0.05279407025459233, 0.0, 0.05094082693736073, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8812484853429183, 0.765260892344697, 0.8539546901225025, 0.7465461379389318, 0.21890930980642978, 0.18750497666937396, 0.0, 0.021878059141870302, 0.8853222788803697, 0.9339794754846066, nan, 0.05281735335643691, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3078 | 15.0 | 300 | 2.5251 | 0.0644 | 0.1402 | 0.5368 | [0.5032657249511785, 0.6702271327640467, 0.6718064850372001, 0.30504826506652755, 0.1492842535787321, 0.16564926971140018, 0.0, 0.016966269440517066, 0.18991325708144624, 0.048684350697773514, nan, 0.05048798798798799, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7896420250455297, 0.8229019063835867, 0.8759932863938046, 0.8191058690395199, 0.16890836923192687, 0.20635261892249135, 0.0, 0.026375574887792984, 0.9075901537107011, 0.9379703534777651, nan, 0.051790527531767425, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1205 | 16.0 | 320 | 2.4200 | 0.0671 | 0.1408 | 0.5424 | [0.4851497985135246, 0.6844669447293905, 0.6787579124670596, 0.3294613919560565, 0.20455656925074622, 0.08834832285596292, 0.0, 0.026740147090214036, 0.2962578442229605, 0.05154904633008221, nan, 0.04160365166222124, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8799191862962226, 0.8123995308462628, 0.8699970053416348, 0.7538950672585327, 0.22818337440508663, 0.1226490213877343, 0.0, 0.038178090541364215, 0.9421475475989581, 0.9359179019384265, nan, 0.048549608522654344, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7337 | 17.0 | 340 | 2.4611 | 0.0715 | 0.1478 | 0.5546 | [0.491625569615171, 0.7171170389611052, 0.6864015302376366, 0.3032877086334042, 0.21901611424079653, 0.1455949153673077, 0.0, 0.015259275152876733, 0.3620399802217984, 0.052233755188337394, 0.0, 0.15355001924874884, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8814404418839574, 0.8366475750467368, 0.8821915327775804, 0.7332965101005678, 0.25158486803739727, 0.19115984265762107, 0.0, 0.02233058126004322, 0.9401298653655673, 0.9944127708095781, nan, 0.17918110640482607, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2253 | 18.0 | 360 | 2.5361 | 0.0683 | 0.1453 | 0.5332 | [0.4819568731474902, 0.680265368149286, 0.6843025301041807, 0.2899856590091187, 0.3087323785295647, 0.14888743830235568, 0.0, 0.024825875282443104, 0.17798215487023333, 0.06359447004608294, 0.0, 0.14556064830128054, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8169910332300767, 0.7745591264690823, 0.8805827744465106, 0.7940817879924826, 0.41321319061682876, 0.20704537129934866, 0.0, 0.03936942428104394, 0.7376279393961628, 0.9519954389965792, nan, 0.19769605955589784, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3414 | 19.0 | 380 | 2.4640 | 0.0683 | 0.1439 | 0.5352 | [0.4849713967261648, 0.7036371871576641, 0.6922972055523594, 0.3356658592901123, 0.2402872807341619, 0.1596577580552716, 0.0, 0.047547589564925385, 0.2061945641719802, 0.04166013276880456, 0.0, 0.025974025974025976, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7955566859662973, 0.7954239649444517, 0.8841519893585164, 0.7990663963302506, 0.3058748283451532, 0.21515137037567883, 0.0, 0.08368888642618348, 0.8494075351260134, 0.9996579247434435, nan, 0.026761648055448596, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.508 | 20.0 | 400 | 2.4162 | 0.0730 | 0.1498 | 0.5541 | [0.4861101723555255, 0.7257792619019059, 0.699673591241319, 0.33684785322016975, 0.2880978687290836, 0.1881996877887158, 0.0, 0.04428891975638423, 0.2535444554403875, 0.05175622381069756, 0.0, 0.1379656130528339, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8455877589313779, 0.8375540300782319, 0.8830133227475643, 0.7271667890365561, 0.33845632912583007, 0.23519341327855015, 0.0, 0.061247483422914244, 0.8928794159727063, 0.9576966932725199, nan, 0.2119111795661661, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9607 | 21.0 | 420 | 2.3918 | 0.0702 | 0.1504 | 0.5396 | [0.507695581246897, 0.6985780592300636, 0.6698981931830353, 0.3268301579730071, 0.3054300659810973, 0.21641804793868566, 0.0, 0.019582922325922552, 0.18294713323002632, 0.04517401704445434, 0.0, 0.1149816335083697, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.80723964094529, 0.8035641990450221, 0.8493826128742452, 0.7845975602362973, 0.3866325551646946, 0.2806045259821955, 0.0, 0.02860124489758224, 0.9311786932756154, 0.9948688711516533, nan, 0.14965986394557823, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.9236 | 22.0 | 440 | 2.3403 | 0.0712 | 0.1488 | 0.5518 | [0.5364709094748005, 0.7157040135965486, 0.6919605889395992, 0.3555111122884162, 0.2598097326773754, 0.2303148717750308, 0.0, 0.01760396975425331, 0.2036683013326684, 0.04360612209112998, 0.0, 0.07802606547602146, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8108484239168252, 0.8148301401507484, 0.8874809351691285, 0.9040260816263295, 0.3298218551891495, 0.28440272004841305, 0.0, 0.030272806191241387, 0.8090172053266811, 0.9924743443557583, nan, 0.08817866769349249, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9482 | 23.0 | 460 | 2.3411 | 0.0720 | 0.1514 | 0.5668 | [0.49056244853922376, 0.7266009942762334, 0.7052889865732858, 0.3548955744562617, 0.22703973358581736, 0.19574884192344205, 0.0, 0.05695680486216627, 0.23538302848330728, 0.049893043654919395, 0.0, 0.19959628089062884, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8291610779319563, 0.8825776068396423, 0.8928016770086845, 0.7743386974005941, 0.2616302037284373, 0.2183762521300145, 0.0, 0.08481557414898136, 0.8455189111852967, 0.9521094640820981, nan, 0.3141124374278013, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2529 | 24.0 | 480 | 2.3104 | 0.0725 | 0.1497 | 0.5836 | [0.5113420655011527, 0.8045573464173746, 0.6962598456991187, 0.3590822991203078, 0.27860642520466383, 0.1485592640462252, 0.0, 0.023266297678379122, 0.2656858185022889, 0.046793389845020975, 0.0, 0.1291229211186472, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8680610709735316, 0.9369812814803349, 0.9006539498150973, 0.7750998605656857, 0.33802366485449314, 0.17114965043874317, 0.0, 0.025738349864243365, 0.7962874646905609, 0.9970353477765108, nan, 0.17837889872930304, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.0998 | 25.0 | 500 | 2.4495 | 0.0716 | 0.1541 | 0.5391 | [0.43688653221954715, 0.733287497831381, 0.6921168863095847, 0.3378376929961361, 0.28901953901953903, 0.25230697522202, 0.0, 0.0300467152913023, 0.14611836498363812, 0.051168724933002056, 0.0, 0.1828301028913999, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.6452835078138309, 0.8493051999857111, 0.872507643343153, 0.8492627494830153, 0.37175266652871575, 0.4051645884095361, 0.0, 0.04359912081417041, 0.823948053853773, 0.9557582668187001, nan, 0.34838274932614555, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.5943 | 26.0 | 520 | 2.3525 | 0.0733 | 0.1537 | 0.5399 | [0.4964431098359949, 0.6897805528483147, 0.7012728391399409, 0.3524738700399841, 0.32639010699877213, 0.2303413215501499, 0.0, 0.05761208001790838, 0.1800597813262015, 0.0469448823964735, 0.0, 0.14580612004539711, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8009074745477623, 0.7669056096021719, 0.8890339789259623, 0.8123160241686145, 0.42004176150792905, 0.3254264010319622, 0.0, 0.07843408876821632, 0.8397593455372537, 0.998175598631699, nan, 0.21848928250545502, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6678 | 27.0 | 540 | 2.2825 | 0.0787 | 0.1577 | 0.5861 | [0.5126736049233109, 0.772986405128431, 0.7114639216183581, 0.38754642455125743, 0.29371878188946776, 0.2553816111517934, 0.0, 0.13962258581117482, 0.20097674111646413, 0.04935255174150135, 0.0, 0.1400173193495622, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.828335664805488, 0.9144300496540885, 0.8986586716252638, 0.7745811918602693, 0.4115013450215392, 0.30824295701750193, 0.0, 0.17471971334109085, 0.7820169485307605, 0.9834663625997719, nan, 0.23347452188422538, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1303 | 28.0 | 560 | 2.3182 | 0.0758 | 0.1610 | 0.5611 | [0.5244217877873839, 0.7196797467040382, 0.7154193001600868, 0.3697657853229992, 0.29826594815907514, 0.2688369361764598, 0.0, 0.1257064600856439, 0.16174030561725586, 0.05200386136455641, 0.0, 0.1740916271721959, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.767178310830428, 0.8255051737893094, 0.899459568629909, 0.8811642428447295, 0.3727496755017965, 0.2916328253149236, 0.0, 0.22718457361334293, 0.88686305440405, 0.9521094640820981, nan, 0.335932486202028, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.0753 | 29.0 | 580 | 2.3862 | 0.0727 | 0.1559 | 0.5459 | [0.5125531180422713, 0.6913985422892837, 0.7135806413409606, 0.354653629423774, 0.33716292636466455, 0.23221484314434854, 0.0, 0.07243706665192746, 0.19018123761937145, 0.043853324272872043, 0.0, 0.12527584076264453, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8158584896379459, 0.768236267727224, 0.9021895827674822, 0.8389904147328856, 0.4261931187569367, 0.28621820903603906, 0.0, 0.10892853844590976, 0.9439084339117356, 0.9639680729760547, nan, 0.1821653189577718, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1803 | 30.0 | 600 | 2.3013 | 0.0740 | 0.1545 | 0.5533 | [0.5097797474754986, 0.7108171722194596, 0.7005830611824793, 0.35823921708559114, 0.32186401376318347, 0.26049934774566624, 0.0, 0.05290972927345461, 0.2489013269204167, 0.04425081424655228, 0.0, 0.12392266480316795, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8224881886740842, 0.797866481704195, 0.8769717736038276, 0.8723805546387169, 0.40472920860061323, 0.3196056885321612, 0.0, 0.07988400657542342, 0.858101911295352, 0.9589509692132269, nan, 0.18778077268643306, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.7699 | 31.0 | 620 | 2.3166 | 0.0734 | 0.1549 | 0.5524 | [0.4792127260216525, 0.7026433433542578, 0.7059736466564126, 0.3817381108982241, 0.3173152259075477, 0.20516239705695122, 0.0, 0.13988002699771732, 0.18151654002499318, 0.04790945097194372, 0.0, 0.14376245178245942, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8472337862707883, 0.8057000988318787, 0.8854403888877281, 0.7193463427120311, 0.3885513271506236, 0.24379309795677861, 0.0, 0.17034225448366302, 0.92718001394035, 0.988939566704675, nan, 0.21765498652291104, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.7281 | 32.0 | 640 | 2.3447 | 0.0759 | 0.1618 | 0.5498 | [0.5100269861160145, 0.6908621861959551, 0.7038890577210998, 0.36374877913651393, 0.3435310328652262, 0.27837409064273155, 0.0, 0.1098173826075146, 0.19403438199688816, 0.04861480541801367, 0.0, 0.17298451681793914, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.807661945335576, 0.7712979721603697, 0.8791272311945901, 0.8579656062024694, 0.4408472695122181, 0.3101778860701034, 0.0, 0.1642747640420384, 0.9378553872115631, 0.9482326111744583, nan, 0.3534847901424721, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.8111 | 33.0 | 660 | 2.3018 | 0.0769 | 0.1575 | 0.5579 | [0.48684512810972097, 0.7188739727624192, 0.6881599572448257, 0.38048206300778636, 0.298146582950978, 0.24303529909110125, 0.0, 0.13649151841125362, 0.21524011803518422, 0.049986482197642026, 0.0, 0.1652892561983471, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8288227545283747, 0.8261377573498767, 0.8698437902624853, 0.757714354998417, 0.37160217460825073, 0.2746643734174191, 0.0, 0.18280046545132156, 0.9071132470009905, 0.9697833523375142, nan, 0.3112565781029393, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2645 | 34.0 | 680 | 2.2879 | 0.0764 | 0.1586 | 0.5647 | [0.523132132565937, 0.7332575089854492, 0.6917468223029607, 0.3768701209447672, 0.3189359143399452, 0.2538414921554104, 0.0, 0.11942452590998628, 0.23668586179507545, 0.04691834451901566, 0.0, 0.1351541120553075, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8254851101710573, 0.8271662637977638, 0.8724867503778144, 0.8643512936405828, 0.4140785191595026, 0.3001767712961636, 0.0, 0.1696496185885004, 0.874536850214608, 0.9565564424173318, nan, 0.24088692080605828, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3385 | 35.0 | 700 | 2.2476 | 0.0758 | 0.1573 | 0.5636 | [0.5046784645767062, 0.730853100421637, 0.6906982792843356, 0.3974850939489274, 0.3106473345049795, 0.2400536151853843, 0.0, 0.1487451411188102, 0.2111970669754061, 0.046564458308630825, 0.0, 0.1293910893957243, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8192153296493674, 0.8335992665007561, 0.8784830314299842, 0.8365856780077733, 0.3990105156229425, 0.2738044049495963, 0.0, 0.19507397351360337, 0.8833412817784951, 0.958266818700114, nan, 0.21499165704017456, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3315 | 36.0 | 720 | 2.2589 | 0.0760 | 0.1589 | 0.5651 | [0.5223323082506124, 0.7408631116141162, 0.6837550061879653, 0.3814286554522997, 0.31727334903868076, 0.27626367677228175, 0.0, 0.11901900163268142, 0.19294514689905456, 0.04649480322961883, 0.0, 0.13994002024794178, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8012409990378179, 0.8457299865445755, 0.8638092054405282, 0.8736603865092248, 0.39268985496341163, 0.30881626932938383, 0.0, 0.15820727360041373, 0.9186323782970762, 0.9599771949828962, nan, 0.23507893723527146, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2562 | 37.0 | 740 | 2.2604 | 0.0766 | 0.1562 | 0.5559 | [0.49336952101419806, 0.7146194028978202, 0.6916788345482708, 0.3836707024845504, 0.3132657940350248, 0.27580309286465676, 0.0, 0.13351638033194538, 0.1759989723129005, 0.046297154256623806, 0.0, 0.1402667526292461, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8034389014327157, 0.8187551350900799, 0.8676465467410456, 0.8196649534882154, 0.3943828890686431, 0.30688930294778083, 0.0, 0.19041022515284164, 0.8544333981437323, 0.9685290763968073, nan, 0.22339879347965602, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.67 | 38.0 | 760 | 2.2727 | 0.0739 | 0.1558 | 0.5591 | [0.4765426711244061, 0.7216823770278837, 0.6934404914710587, 0.3876700969962654, 0.30409929078014186, 0.2594024527502083, 0.0, 0.1448988355027462, 0.2239744052840704, 0.0478213699439367, 0.0, 0.14076731509378101, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8122737012340406, 0.8268656005525059, 0.8710033498387759, 0.8125046309705841, 0.40329953535619556, 0.28261908174478045, 0.0, 0.2087973993830923, 0.7961407241644961, 0.9599771949828962, nan, 0.2593697856501091, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.5761 | 39.0 | 780 | 2.2565 | 0.0749 | 0.1581 | 0.5552 | [0.49867026315457086, 0.6929315525434497, 0.6960117538985977, 0.3931985791879709, 0.3232063734899765, 0.24912395255196432, 0.0, 0.13885954321360297, 0.25672207215790005, 0.04745549809317958, 0.0, 0.14958776967762108, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8525893737657795, 0.779177730677177, 0.8759271253368991, 0.8313989909536095, 0.4212645083617073, 0.273422196741675, 0.0, 0.2039304778264162, 0.8402729373784805, 0.9563283922462942, nan, 0.2916827108201771, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6946 | 40.0 | 800 | 2.2601 | 0.0750 | 0.1576 | 0.5534 | [0.4944224623891422, 0.6888145453780341, 0.7037414414835037, 0.3911177333985678, 0.31352842930796604, 0.2932870405087212, 0.0, 0.1482581406124942, 0.21531963361966683, 0.04624139613029104, 0.0, 0.15333549531676235, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8260225884859668, 0.779637656136507, 0.892268906392551, 0.8292569565598119, 0.40610244737485657, 0.31142802541684583, 0.0, 0.19501856264199036, 0.8322022084449173, 0.9728620296465222, nan, 0.25792581183416763, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6296 | 41.0 | 820 | 2.2467 | 0.0751 | 0.1579 | 0.5549 | [0.4864490946546677, 0.7077958079871842, 0.6997543297983504, 0.38735074938756614, 0.3056326068497028, 0.2837938054384179, 0.0, 0.13901094040281914, 0.15548142780975296, 0.05017387576219512, 0.0, 0.16343096368023266, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8073908067213583, 0.8075680808754361, 0.8767872190766702, 0.8115952767468021, 0.4062529392953216, 0.3108069370789738, 0.0, 0.17413789918915423, 0.8344400014674053, 0.960775370581528, nan, 0.3281992042099859, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.8938 | 42.0 | 840 | 2.2589 | 0.0760 | 0.1583 | 0.5496 | [0.47144056301698833, 0.6844253220677846, 0.7050681830729311, 0.38940204180845894, 0.3170161841805334, 0.2829081766277303, 0.0, 0.14849037976661433, 0.19469181838122396, 0.05107984749389309, 0.0, 0.1739419420657248, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8139581198816588, 0.777559805194032, 0.8894065701411668, 0.8039297574381807, 0.4185932767734532, 0.30104470243498477, 0.0, 0.193079182135535, 0.8140430683444, 0.9608893956670468, nan, 0.3610897189064305, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 0.8033 | 43.0 | 860 | 2.2384 | 0.0757 | 0.1567 | 0.5482 | [0.47514768777939764, 0.6799096805891669, 0.7066319360377584, 0.37413968719359536, 0.3190275365914165, 0.28839185669174466, 0.0, 0.12979580141159439, 0.21690195696621556, 0.04958852948626566, 0.0, 0.1692121050969704, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8250364117563783, 0.7744668436908348, 0.8841868109674139, 0.7942299790511731, 0.4112567956507835, 0.3156084276909847, 0.0, 0.15896455551245822, 0.8172713599178253, 0.9598631698973774, nan, 0.32672314208702347, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9207 | 44.0 | 880 | 2.2248 | 0.0778 | 0.1569 | 0.5629 | [0.5048158614958531, 0.7323207951853928, 0.6968780658627216, 0.3717431595955756, 0.30453854251959295, 0.2582376063809487, 0.0, 0.1284858912594632, 0.2154746927320236, 0.04812190423775454, 0.0, 0.16130029364793158, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8147979297487049, 0.8380288398566342, 0.8731274679815306, 0.843052196932445, 0.39180571493067967, 0.2694010478875034, 0.0, 0.17241092702388208, 0.8167944532081147, 0.9571265678449259, nan, 0.3014054678475164, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9397 | 45.0 | 900 | 2.2397 | 0.0739 | 0.1556 | 0.5609 | [0.5104979289382761, 0.7335484971743466, 0.6945526654100276, 0.38850570478760144, 0.30587837075482344, 0.28333104638650364, 0.0, 0.1343887001712975, 0.17689109754822732, 0.04549396448174262, 0.0, 0.12715929031000195, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7987455640043094, 0.8382982460318406, 0.8635097396040087, 0.8525498966030568, 0.40798359638066933, 0.31239150860764736, 0.0, 0.17098871465248147, 0.8069995230932903, 0.9630558722919043, nan, 0.20927993839045053, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9628 | 46.0 | 920 | 2.2622 | 0.0746 | 0.1568 | 0.5597 | [0.5243409717367976, 0.7234668087061871, 0.6961363883642491, 0.3877068557919622, 0.3073133918770582, 0.3141209752305267, 0.0, 0.12196637124161351, 0.16876002030244194, 0.04680424142652056, 0.0, 0.14262956861751236, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7937450961102407, 0.8248353794310618, 0.86889664250047, 0.86718713162734, 0.4213209428318817, 0.34747503702642013, 0.0, 0.16318501690031584, 0.7684434498697678, 0.9628278221208666, nan, 0.2528237710178411, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.3656 | 47.0 | 940 | 2.2377 | 0.0741 | 0.1562 | 0.5470 | [0.4763894452883486, 0.6767419874917132, 0.7003469975886608, 0.3897728739192154, 0.3134054542013075, 0.2600953343490263, 0.0, 0.13592144099973935, 0.24371247768943696, 0.04822769497637688, 0.0, 0.1622893246626674, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8360331221011563, 0.7703438873078434, 0.8707770093809415, 0.8137979347555184, 0.4049173235011945, 0.2745927093784339, 0.0, 0.17819212796217285, 0.8265160130599069, 0.9637400228050171, nan, 0.3099088692080606, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 0.901 | 48.0 | 960 | 2.2366 | 0.0771 | 0.1589 | 0.5660 | [0.5126108756723662, 0.7402906082099798, 0.6984062665053278, 0.3948832465096225, 0.29967275107264923, 0.30957178465350227, 0.0, 0.14647758400448116, 0.1584974262887711, 0.048483525823995954, 0.0, 0.16160454458326798, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7955350908554303, 0.8506849763637013, 0.8702512030865874, 0.8266568770755168, 0.3875919411576591, 0.33030751835395666, 0.0, 0.1980292199996306, 0.834770167651051, 0.9625997719498289, nan, 0.2975869593120267, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.4665 | 49.0 | 980 | 2.2347 | 0.0757 | 0.1564 | 0.5550 | [0.502378498083232, 0.7047110323622909, 0.6973560251743418, 0.39012057813622936, 0.30475148618887915, 0.28088014418744367, 0.0, 0.13636174463126352, 0.19038196980247626, 0.04744125986020268, 0.0, 0.15255730337078652, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8197576068778029, 0.8051047260689917, 0.871222725974831, 0.8227096061485818, 0.39827686751067554, 0.2978198206806491, 0.0, 0.18155372084002883, 0.822443963461609, 0.9635119726339795, nan, 0.27230137337954047, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6342 | 50.0 | 1000 | 2.2284 | 0.0767 | 0.1574 | 0.5622 | [0.5148203561012767, 0.724040099091574, 0.6958825927435793, 0.38401244431532056, 0.29543194795602395, 0.29389807778274474, 0.0, 0.12126925156299818, 0.20467349613092675, 0.04878431281437682, 0.0, 0.1679011093073593, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8140876905468601, 0.8295938962384349, 0.867831101268203, 0.8547256107829203, 0.39126018171899396, 0.31410348287229467, 0.0, 0.16157810162353853, 0.7849884441835724, 0.9576966932725199, nan, 0.3186048004107303, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
bskang/trained_cvpr2023_data_300
|
bskang
| 2023-07-27T14:43:16Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T14:43:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
epsilonai/SargeRVB
|
epsilonai
| 2023-07-27T14:42:04Z | 0 | 0 | null |
[
"rvb",
"red vs blue",
"music",
"rvc",
"text-to-speech",
"en",
"region:us"
] |
text-to-speech
| 2023-07-27T14:38:06Z |
---
language:
- en
pipeline_tag: text-to-speech
tags:
- rvb
- red vs blue
- music
- rvc
---
|
nakcnx/wangchang-math-v2
|
nakcnx
| 2023-07-27T14:29:04Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T10:25:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Varshitha/flan-t5-small-finetuned-medicine
|
Varshitha
| 2023-07-27T14:27:11Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"text2textgeneration",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-27T14:10:18Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- text2textgeneration
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-small-finetuned-medicine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-finetuned-medicine
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9066
- Rouge1: 9.3596
- Rouge2: 2.6144
- Rougel: 8.94
- Rougelsum: 8.94
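For quick reference, here is a minimal inference sketch for this checkpoint; the input text, instruction prefix and generation settings are illustrative assumptions, since the exact task format used during fine-tuning is not documented above.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Varshitha/flan-t5-small-finetuned-medicine"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative prompt; adjust the instruction prefix to match the fine-tuning data.
inputs = tokenizer("summarize: Aspirin is commonly used to reduce fever and relieve mild pain.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```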
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.1417 | 1.0 | 5 | 2.9168 | 9.5238 | 2.6144 | 8.9947 | 8.9947 |
| 3.1069 | 2.0 | 10 | 2.9066 | 9.3596 | 2.6144 | 8.94 | 8.94 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
flavioloss/gpt2-joker
|
flavioloss
| 2023-07-27T14:16:21Z | 157 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"jokes",
"en",
"dataset:Fraser/short-jokes",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-27T00:05:22Z |
---
license: afl-3.0
datasets:
- Fraser/short-jokes
language:
- en
library_name: transformers
tags:
- jokes
pipeline_tag: text-generation
---
Model trained to tell jokes
Example Prompt:
You are a comedian at a comedy club. The audience is going to ask you to tell jokes about a specific topic. Tell the joke in one output as clear as possible.
Audience: Tell me a joke about dogs
Comedian:
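A minimal sketch of sampling from this checkpoint with the prompt format above (the generation parameters are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="flavioloss/gpt2-joker")

prompt = (
    "You are a comedian at a comedy club. The audience is going to ask you to tell jokes "
    "about a specific topic. Tell the joke in one output as clear as possible.\n"
    "Audience: Tell me a joke about dogs\n"
    "Comedian:"
)
print(generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```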
|
oljike/nurtas-lora
|
oljike
| 2023-07-27T14:14:38Z | 12 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:dreamlike-art/dreamlike-photoreal-2.0",
"base_model:adapter:dreamlike-art/dreamlike-photoreal-2.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-25T18:24:29Z |
---
license: creativeml-openrail-m
base_model: dreamlike-art/dreamlike-photoreal-2.0
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - oljike/nurtas-lora
These are LoRA adaption weights for dreamlike-art/dreamlike-photoreal-2.0. The weights were fine-tuned on the ../../../data/people/nurtas dataset. You can find some example images in the following.

|
Andreaa4/Llama-2-7b-chat-hf
|
Andreaa4
| 2023-07-27T14:14:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T14:09:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
prajwalJumde/MRR-Latest-27-7
|
prajwalJumde
| 2023-07-27T14:08:11Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:am-infoweb/MRR-Latest-21-7",
"base_model:finetune:am-infoweb/MRR-Latest-21-7",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-27T12:27:28Z |
---
license: apache-2.0
base_model: am-infoweb/MRR-Latest-21-7
tags:
- generated_from_trainer
model-index:
- name: MRR-Latest-27-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MRR-Latest-27-7
This model is a fine-tuned version of [am-infoweb/MRR-Latest-21-7](https://huggingface.co/am-infoweb/MRR-Latest-21-7) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1198
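For reference, a minimal extractive question-answering sketch with this checkpoint; the context and question are illustrative, since the fine-tuning domain is not documented above.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="prajwalJumde/MRR-Latest-27-7")

result = qa(
    question="When was the invoice issued?",
    context="The invoice was issued on 12 March 2023 and is payable within 30 days.",
)
print(result["answer"], round(result["score"], 3))
```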
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9033 | 1.0 | 630 | 0.6143 |
| 0.6184 | 2.0 | 1260 | 0.8534 |
| 0.5676 | 3.0 | 1890 | 0.6799 |
| 0.4571 | 4.0 | 2520 | 0.7548 |
| 0.4373 | 5.0 | 3150 | 0.9901 |
| 0.4133 | 6.0 | 3780 | 0.7865 |
| 0.3761 | 7.0 | 4410 | 0.8389 |
| 0.367 | 8.0 | 5040 | 0.8556 |
| 0.3665 | 9.0 | 5670 | 1.0920 |
| 0.3377 | 10.0 | 6300 | 1.0847 |
| 0.2857 | 11.0 | 6930 | 1.1071 |
| 0.2991 | 12.0 | 7560 | 1.0964 |
| 0.2647 | 13.0 | 8190 | 1.3036 |
| 0.2518 | 14.0 | 8820 | 1.3547 |
| 0.2543 | 15.0 | 9450 | 1.5333 |
| 0.2156 | 16.0 | 10080 | 1.4622 |
| 0.1856 | 17.0 | 10710 | 1.4964 |
| 0.2144 | 18.0 | 11340 | 1.7252 |
| 0.1993 | 19.0 | 11970 | 1.7526 |
| 0.1723 | 20.0 | 12600 | 1.8491 |
| 0.1257 | 21.0 | 13230 | 2.0100 |
| 0.1555 | 22.0 | 13860 | 1.9707 |
| 0.1276 | 23.0 | 14490 | 2.0484 |
| 0.1216 | 24.0 | 15120 | 2.1069 |
| 0.1252 | 25.0 | 15750 | 2.1198 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
deinon-daemon/superllama-7b-dollybricks-cqa-lora
|
deinon-daemon
| 2023-07-27T14:06:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T14:05:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
reach-vb/musicgen-large-endpoint
|
reach-vb
| 2023-07-27T14:04:06Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-27T11:46:07Z |
---
inference: false
tags:
- musicgen
license: cc-by-nc-4.0
duplicated_from: facebook/musicgen-large
---
# MusicGen - Large - 3.3B
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre DΓ©fossez*.
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [**large** (this checkpoint)](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## π€ Transformers Usage
You can run MusicGen locally with the π€ Transformers library from version 4.31.0 onwards.
1. First install the π€ [Transformers library](https://github.com/huggingface/transformers) from main:
```
pip install git+https://github.com/huggingface/transformers.git
```
2. Run the following Python code to generate text-conditional audio samples:
```py
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-large")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
3. Listen to the audio samples either in an ipynb notebook:
```py
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```py
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the π€ Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("large")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details**:
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre DΓ©fossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Experimental Setup section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
**Mitigations:** All vocals have been removed from the data source using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). The model is therefore not able to produce vocals.
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
|
NasimB/gutenberg-no-merge-rarity-6p5k
|
NasimB
| 2023-07-27T13:41:34Z | 157 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-27T11:02:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gutenberg-no-merge-rarity-6p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gutenberg-no-merge-rarity-6p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0791
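A minimal generation sketch for this checkpoint (the prompt and decoding settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NasimB/gutenberg-no-merge-rarity-6p5k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```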
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.2206 | 0.58 | 500 | 5.1052 |
| 4.7954 | 1.16 | 1000 | 4.6615 |
| 4.4101 | 1.74 | 1500 | 4.3895 |
| 4.1133 | 2.33 | 2000 | 4.2397 |
| 3.9572 | 2.91 | 2500 | 4.1257 |
| 3.7283 | 3.49 | 3000 | 4.0785 |
| 3.6356 | 4.07 | 3500 | 4.0313 |
| 3.4289 | 4.65 | 4000 | 4.0056 |
| 3.3393 | 5.23 | 4500 | 3.9986 |
| 3.2334 | 5.81 | 5000 | 3.9949 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
grays-ai/table-detection
|
grays-ai
| 2023-07-27T13:41:05Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"table-transformer",
"object-detection",
"arxiv:2110.00061",
"license:mit",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-27T13:38:56Z |
---
license: mit
widget:
- src: https://www.invoicesimple.com/wp-content/uploads/2018/06/Sample-Invoice-printable.png
example_title: Invoice
---
# Table Transformer (fine-tuned for Table Detection)
Table Transformer (DETR) model trained on PubTables1M. It was introduced in the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Smock et al. and first released in [this repository](https://github.com/microsoft/table-transformer).
Disclaimer: The team releasing Table Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Table Transformer is equivalent to [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a Transformer-based object detection model. Note that the authors decided to use the "normalize before" setting of DETR, which means that layernorm is applied before self- and cross-attention.
## Usage
You can use the raw model for detecting tables in documents. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for more info.
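For example, a minimal detection sketch along those lines (the image path and score threshold are placeholders):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

processor = AutoImageProcessor.from_pretrained("grays-ai/table-detection")
model = TableTransformerForObjectDetection.from_pretrained("grays-ai/table-detection")

image = Image.open("invoice.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into labelled detections in image coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```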
|
luodian/Flamingo-Llama2-Chat7B-CC3M
|
luodian
| 2023-07-27T13:34:33Z | 4 | 10 |
transformers
|
[
"transformers",
"pytorch",
"flamingo",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-26T01:22:21Z |
---
license: mit
---
**TLDR**: We trained a Flamingo with Llama2-Chat7B as LLM on CC3M in less than 5 hours using just 4 A100s.
The model showed promising zero-shot captioning skills. High-quality captioning data really helps fast alignment.
You can test it with the following code. Be sure to visit [Otter](https://github.com/Luodian/Otter) to get the necessary Flamingo/Otter models.
```python
from flamingo.modeling_flamingo import FlamingoForConditionalGeneration
flamingo_model = FlamingoForConditionalGeneration.from_pretrained("luodian/Flamingo-Llama2-Chat7B-CC3M", device_map="auto")
prompt = "<image>an image of"
simple_prompt = "<image>"
```
|
SaferChat/falcon-7b-test
|
SaferChat
| 2023-07-27T13:33:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T13:19:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
grays-ai/table-transformer-structure-recognition
|
grays-ai
| 2023-07-27T13:19:00Z | 182 | 0 |
transformers
|
[
"transformers",
"pytorch",
"table-transformer",
"object-detection",
"arxiv:2110.00061",
"license:mit",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-26T20:42:23Z |
---
license: mit
widget:
- src: https://documentation.tricentis.com/tosca/1420/en/content/tbox/images/table.png
example_title: Table
---
# Table Transformer (fine-tuned for Table Structure Recognition)
Table Transformer (DETR) model trained on PubTables1M. It was introduced in the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Smock et al. and first released in [this repository](https://github.com/microsoft/table-transformer).
Disclaimer: The team releasing Table Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Table Transformer is equivalent to [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a Transformer-based object detection model. Note that the authors decided to use the "normalize before" setting of DETR, which means that layernorm is applied before self- and cross-attention.
## Usage
You can use the raw model for detecting the structure (like rows, columns) in tables. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for more info.
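A minimal sketch, analogous to the detection checkpoint but run on an image already cropped to a single table (the path and threshold are placeholders):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

processor = AutoImageProcessor.from_pretrained("grays-ai/table-transformer-structure-recognition")
model = TableTransformerForObjectDetection.from_pretrained("grays-ai/table-transformer-structure-recognition")

table_image = Image.open("cropped_table.png").convert("RGB")
inputs = processor(images=table_image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([table_image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label in zip(results["scores"], results["labels"]):
    print(model.config.id2label[label.item()], round(score.item(), 3))  # e.g. "table row", "table column"
```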
|
winterbro/distilbert-base-uncased-finetuned-cola
|
winterbro
| 2023-07-27T13:15:59Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-27T11:28:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5425688103069501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5259
- Matthews Correlation: 0.5426
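A minimal classification sketch for this checkpoint; note that the label names depend on the saved config and are often the generic LABEL_0/LABEL_1 for CoLA fine-tunes.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="winterbro/distilbert-base-uncased-finetuned-cola")

# CoLA is a grammatical-acceptability task, so one label corresponds to "acceptable".
print(classifier("The book was written by the author."))
print(classifier("The book were writed by author the."))
```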
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5361 | 0.4307 |
| 0.3492 | 2.0 | 1070 | 0.5128 | 0.4921 |
| 0.2382 | 3.0 | 1605 | 0.5259 | 0.5426 |
| 0.1758 | 4.0 | 2140 | 0.7495 | 0.5301 |
| 0.1251 | 5.0 | 2675 | 0.7982 | 0.5414 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Carloswear/llama2-qlora-finetunined-french
|
Carloswear
| 2023-07-27T13:12:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T13:12:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
xinyangli/woman_photo
|
xinyangli
| 2023-07-27T13:07:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-27T12:41:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of a sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - xinyangli/woman_photo
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of a sks person using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
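A minimal sketch of applying these DreamBooth LoRA weights on top of the base model with a recent `diffusers` release, using the instance prompt listed in the metadata above (sampler settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("xinyangli/woman_photo")

image = pipe("a photo of a sks person", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sks_person.png")
```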
|
ditwoo/distilhubert-finetuned-gtzan
|
ditwoo
| 2023-07-27T13:04:50Z | 161 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-25T19:25:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9570
- Accuracy: 0.86
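For reference, a minimal audio-classification sketch ("song.wav" is a placeholder path; GTZAN labels are music genres such as rock or jazz):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="ditwoo/distilhubert-finetuned-gtzan")

for prediction in classifier("song.wav", top_k=3):
    print(prediction["label"], round(prediction["score"], 3))
```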
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1586 | 1.0 | 112 | 2.0855 | 0.45 |
| 1.4771 | 2.0 | 225 | 1.3396 | 0.72 |
| 1.181 | 3.0 | 337 | 0.9735 | 0.76 |
| 0.8133 | 4.0 | 450 | 0.8692 | 0.76 |
| 0.5397 | 5.0 | 562 | 0.7118 | 0.81 |
| 0.3424 | 6.0 | 675 | 0.6237 | 0.81 |
| 0.2717 | 7.0 | 787 | 0.6551 | 0.83 |
| 0.2653 | 8.0 | 900 | 0.6707 | 0.83 |
| 0.0503 | 9.0 | 1012 | 0.7025 | 0.84 |
| 0.0168 | 10.0 | 1125 | 0.7643 | 0.87 |
| 0.1125 | 11.0 | 1237 | 0.8550 | 0.86 |
| 0.155 | 12.0 | 1350 | 0.9796 | 0.82 |
| 0.005 | 13.0 | 1462 | 0.9539 | 0.86 |
| 0.0038 | 14.0 | 1575 | 0.9206 | 0.86 |
| 0.0035 | 15.0 | 1687 | 0.8725 | 0.88 |
| 0.051 | 16.0 | 1800 | 0.9980 | 0.86 |
| 0.003 | 17.0 | 1912 | 0.9579 | 0.86 |
| 0.0025 | 18.0 | 2025 | 0.9735 | 0.86 |
| 0.0023 | 19.0 | 2137 | 0.9589 | 0.86 |
| 0.0022 | 19.91 | 2240 | 0.9570 | 0.86 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aronmal/a2c-AntBulletEnv-v0
|
aronmal
| 2023-07-27T13:03:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T13:02:47Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1527.35 +/- 59.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust it to the actual *.zip stored in this repository.
checkpoint = load_from_hub("aronmal/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Younes-c/rlbs
|
Younes-c
| 2023-07-27T13:03:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-27T12:47:02Z |
# Authors
Alexandre Caspers, Grace Jiyoung Yun
# Introduction
This project aims to utilize the brainpy library to imitate realistic brain behavior, in order to choose the order in which to perform a given list of tasks.
Since the brainpy components need to be heavily adapted to the available tasks, we provide a predefined list of 6 possible tasks.
The user provides the program with which of the 6 tasks he/she needs to do (possibly all of them), and also which type of work style he/she prefers. For the latter we have 3 predefined choices:
- low effort first (lowest - lower - low - high - higher - highest)
- high effort first (highest - higher - high - low - lower - lowest)
- alternating effort (highest - lowest - higher - lower - high - low)
With this information, the program will then return the most optimal order.
In order to do these predictions we train 3 different models (1 for each work style) that get called according to the user input.
# Components
* DRL - Deep Reinforcement Learning:
- N = amount of total possible tasks (10)
- States: complexities of last performed task, actions that need to be performed (1 or 0), and each their physical and mental complexity (size: 2 + 3 * N)
- Output: softmax of which task to perform (size: N)
- Reward Function: Utilizes BrainPy module to compute the brain activity. Taking this information in relation to the work style of the user, generate a list of rewards for each possible task
- Other Features: decaying epsilon, bootstrapping
* BrainPy:

- Decision Making Model: two components E and I
- E is partitioned into N + 1 components (N and the remainder)
- Each of those components (N + 2) are linked with one another (synapses), so everything has an impact on everything
- The N sub-parts of E each receive a unique input signal, generated in relation to the physical and mental complexity of its associated task
- The model computes and records the brain activity of these N sub-parts of this complex network
- These brain activities are signals, from which we take the average. This result is passed to the reward function of the DRL model
- Input signal construction: shaped like a bar graph, with up- and down-time. Physical complexity defines the amplitude, the mental complexity defines the duration of the up-time
* Other
- A frontend app designed with Streamlit, allowing the user to input the data in a cleaner fashion
- Code Generation components that create the needed brain model, synapses, and more in relation to how many tasks we have (to allow for easy flexibility)
- Section 1 of frontend: uses DRL and pre-computed BrainPy results to perform choices of predefined tasks and complexities
- Section 2 of frontend: user-defined tasks and complexities; skips DRL and, as a tradeoff, calls BrainPy directly (much slower; for complexity reasons the allowed number of tasks is restricted to 18)
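As a rough illustration of the input-signal construction described in the BrainPy bullets above — this is a sketch only; the actual amplitudes, durations and time step are defined in `stim.py`:
```python
import numpy as np

def task_stimulus(physical, mental, total_steps=1000, base_up=100):
    """Bar-graph-like stimulus: physical complexity sets the amplitude,
    mental complexity stretches the duration of the 'up' phase."""
    up_time = max(1, int(base_up * mental))   # up-time grows with mental complexity
    amplitude = float(physical)               # amplitude grows with physical complexity
    signal = np.zeros(total_steps)
    for start in range(0, total_steps, 2 * up_time):
        signal[start:start + up_time] = amplitude
    return signal

# e.g. a physically hard but mentally light task vs. the opposite
workout = task_stimulus(physical=0.9, mental=0.3)
studying = task_stimulus(physical=0.2, mental=0.9)
```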
# Reasons for DRL & BrainPy
Research in the domain of brain and personality is not advanced, so we wanted to make our contribution, as we believe that the more knowledge we have on this topic, the more fascinating techniques and tools can be built in the future.
BrainPy is currently one of the most powerful brain modeling libraries there are. It allows for very close definition of the exact brain components we want, the synapses, the resting / reset states of neurons, number of neurons and other intrinsic behaviors of the brain.
However, BrainPy is very slow to use for multiple iterations (1 iteration with 5 tasks takes about 10 seconds, while 1 iteration with 17 can take more than 1 minute).
With DRL we want to develop a model that can mimic the decisions of the brain model but significantly faster (since running a NN is just a few matrix computations).
# Problems & Solutions
* Data and Knowledge on the domain of brain and personality is beyond scarce
- using self-build datasets (built from research and common sense, e.g. complexity of certain tasks)
- DRL allows us to avoid using existing datasets (which do not exist in the first place)
* BrainPy requires a vast amount of computation
- a DRL needs many iterations, so calling the brain model every iteration would take forever to just train 1 model
- what we do instead is to run the brain model with every possible combination in advance, and save the resulting brain activities
- the DRL then uses this cached data to compute the rewards
- this trick saves an unfathomable amount of computation time
* BrainPy documentation exists but only shows a fraction of the capabilities of BrainPy & examples (outside of the documentation) are close to non-existent
- Trial-and-Error
* Running BrainPy for many iterations slows down the PC and eventually crashes (the BrainPy model can only be called a certain number of times before it crashes)
- Calls to BrainPy have to be limited
- Cached Outputs (but you have to restrict how many tasks, because with 5 tasks there are only 2^5=32 possibilities, but with 2^17=131k it is too much to test everything)
- The maximum amount of tasks that avoid crashing is 6
- Outputs have to be cached (2^6 calls), because during DRL we would vastly surpass that number
- Cached outputs allow us to precompute the results once; then, for any following training session, one iteration (where we would call BrainPy before) yields results almost instantly, instead of waiting several minutes
* This idea restricts the choices of the user, but we gain computation speed. In order to offer the flexible option as well, we implement another section where the user can specify tasks with physical and mental complexity him/herself, up to 18 tasks (more than that and BrainPy crashes). This allows choice, but computation time takes much longer too: 3 tasks take only around 17 seconds, whereas 18 tasks may take up to 35 minutes.
# Trained Models Steps
Model low-effort first:

Model high-effort first:

Model alternating-effort first:

# Example BrainPy Iteration
Lower Graph: Input Signals (based on difficulty of the tasks, i.e. physical difficulty determines amplitude, mental difficulty determines duration of a high). Given input activities were: ['workout' 'videogame' 'studying']\
Upper Graph: Resulting Brain Activity

# Architecture
```
.
├── AdaptiveBrain                      # same structure as Brain/, used for section 2 of frontend
├── Brain
│   ├── caller.py                      # Call the brain code
│   ├── mini_brain.py                  # Core brain model with all the components and synapses
│   ├── plotter.py                     # Plots a Brain model iteration
│   ├── pre_run_brain.py               # Creates the cached brain activity
│   └── stim.py                        # Input Signal Generation
├── Code_Generation                    # Code to automatically write other python files in relation to tasks.csv (for easy scalability)
│   ├── gen_caller.py
│   ├── gen_minibrain.py
│   ├── gen_plotter.py
│   ├── gen_setup.py
│   └── main_generator.py              # Calls all the code generation functions above
├── Data
│   ├── prerun_brain_activities.csv    # Cached brain activities
│   └── tasks.csv                      # Data about tasks to use, and their complexities
├── DRL
│   ├── agent.py                       # Agent, States, Calling training functions
│   ├── model.py                       # The NN part of DRL
│   ├── plotter.py                     # Plots the performance of the trained agent
│   └── rewards.py                     # The different evaluation types of brain activity (3 work styles)
├── drl-model/                         # Saved models
│   ├── model-high_effort_first.h5
│   ├── model-low_effort_first.h5
│   └── model-alternating_effort.h5
├── Pictures/                          # Pictures used in this README
├── ResGraphs/                         # Folder in which we save the steps chosen in section 2 (and the related matplotlib images)
├── Utils/
│   └── utils.py                       # Auxiliary functions to save and access the ResGraphs folder more easily
├── app.py                             # Frontend
├── main_inference.py                  # Main Code for section 2 of frontend (user-defined inputs)
├── main_inference.ipynb               # Main Code during Prediction Phase
├── main_training.ipynb                # Main Code during Training Phase
├── README.md
├── requirements.txt
└── setup.py                           # Global variables used by some files
```
# References
Links to the BrainPy documentation & research paper:
- https://brainpy.readthedocs.io/en/latest/quickstart/simulation.html
- https://doi.org/10.1101/2022.10.28.514024
```
@article {Wang2022brainpy,
author = {Wang, Chaoming and Chen, Xiaoyu and Zhang, Tianqiu and Wu, Si},
title = {BrainPy: a flexible, integrative, efficient, and extensible framework towards general-purpose brain dynamics programming},
elocation-id = {2022.10.28.514024},
year = {2022},
doi = {10.1101/2022.10.28.514024},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2022/10/28/2022.10.28.514024},
eprint = {https://www.biorxiv.org/content/early/2022/10/28/2022.10.28.514024.full.pdf},
journal = {bioRxiv}
}
```
|
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaalan/sbert_large_nlu_ru
|
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaalan
| 2023-07-27T13:03:18Z | 46 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"jax",
"bert",
"PyTorch",
"Transformers",
"ru",
"region:us"
] | null | 2023-07-27T09:07:35Z |
---
library_name: sentence-transformers
language:
- ru
tags:
- PyTorch
- Transformers
---
# BERT large model (uncased) for Sentence Embeddings in the Russian language.
The model is described [in this article](https://habr.com/ru/company/sberdevices/blog/527576/).
For better quality, use mean token embeddings.
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['Привет! Как твои дела?',              # 'Hi! How are you doing?'
             'А правда, что 42 твое любимое число?']  # 'Is it true that 42 is your favourite number?'
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/sbert_large_nlu_ru")
model = AutoModel.from_pretrained("sberbank-ai/sbert_large_nlu_ru")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=24, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
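The resulting embeddings can then be compared, for example with cosine similarity (a small follow-up sketch building on the snippet above):

```python
import torch.nn.functional as F

# sentence_embeddings has shape (2, 1024) for the two example sentences
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(float(similarity))
```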
# Authors
- [SberDevices](https://sberdevices.ru/) Team.
- Denis Antykhov: [Github](https://github.com/gaphex);
- Aleksandr Abramov: [Github](https://github.com/Ab1992ao), [Kaggle Competitions Master](https://www.kaggle.com/andrilko)
|
LenixC/whisper-tiny-finetuned
|
LenixC
| 2023-07-27T12:54:46Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-26T22:21:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.2824655894673848
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5951
- Wer Ortho: 0.2846
- Wer: 0.2825
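A minimal usage sketch, assuming the checkpoint is loaded straight from this repository id and that a local audio file is available (the path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="LenixC/whisper-tiny-finetuned")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path to a local audio file
```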
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0009 | 17.86 | 500 | 0.5951 | 0.2846 | 0.2825 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
asenella/ms_MMVAEPlus_beta_25_scale_False_seed_1
|
asenella
| 2023-07-27T12:53:01Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:52:59Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Trong-Nghia/bert-large-uncased-detect-dep-v10
|
Trong-Nghia
| 2023-07-27T12:50:00Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-27T11:53:58Z |
---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-detect-dep-v10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-detect-dep-v10
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5321
- Accuracy: 0.74
- F1: 0.8077
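A minimal inference sketch (the example sentence is arbitrary, and the returned label names are whatever this checkpoint's config defines; both are assumptions here):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Trong-Nghia/bert-large-uncased-detect-dep-v10")
print(classifier("I haven't been able to enjoy anything for weeks."))
```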
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6275 | 1.0 | 501 | 0.5638 | 0.733 | 0.8155 |
| 0.5985 | 2.0 | 1002 | 0.5365 | 0.735 | 0.8143 |
| 0.5661 | 3.0 | 1503 | 0.5321 | 0.74 | 0.8077 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
liuyt75/t5-base_prefix_tuning_sentences_66agree_15
|
liuyt75
| 2023-07-27T12:49:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T12:18:50Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Malcolmcjj13/disbert_finetune_for_gentriple
|
Malcolmcjj13
| 2023-07-27T12:48:34Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-27T09:04:42Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: disbert_finetune_for_gentriple
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disbert_finetune_for_gentriple
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0534
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 210 | 0.0534 | 0.0 | 0.0 | 0.0 | 0.9879 |
| No log | 2.0 | 420 | 0.0534 | 0.0 | 0.0 | 0.0 | 0.9879 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
greg-szopinski/Reinforce-10_000s
|
greg-szopinski
| 2023-07-27T12:45:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T12:45:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-10_000s
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 464.40 +/- 106.80
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
iamkaikai/amazing-logos
|
iamkaikai
| 2023-07-27T12:43:06Z | 15 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:iamkaikai/amazing_logos_v2",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-11T19:33:40Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
datasets:
- iamkaikai/amazing_logos_v2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - iamkaikai/amazing-logos
This pipeline was finetuned from **runwayml/stable-diffusion-v1-5** on the **iamkaikai/amazing_logos_v2** dataset.
## Training info
These are the key hyperparameters used during training:
* Dataset size: 10k
* Epochs: 20
* Learning rate: 1e-07
* Batch size: 1
* Gradient accumulation steps: 1
* Image resolution: 512
* Mixed-precision: fp16

## Prompt Format
The prompt format is as follows:
```javascript
{template keywords} + [company name] + [concept & country] + {template keywords}
```
For example:
```text
Simple elegant logo for **[Google]**, **[G circle United states]**, successful vibe, minimalist, thought-provoking, abstract, recognizable, black and white
```
The [concept & country] section can include words such as:
- lines
- circles
- triangles
- dot
- crosses
- waves
- square
- letters (A-Z)
- 3D
- Angled
- Arrows
- cube
- Diamond
- Hexagon
- Loops
- outline
- ovals
- rectangle
- reflection
- rings
- round
- semicircle
- spiral
- woven
- stars
Here are some examples of prompts:
- Simple elegant logo for Digital Art, **D A circle**, successful vibe, minimalist, thought-provoking, abstract, recognizable, black and white
- Simple elegant logo for 3M Technology Products, **3 M square United states**, successful vibe, minimalist, thought-provoking, abstract, recognizable, black and white
- Simple elegant logo for 38Energy, **lines drop fire flame water**, successful vibe, minimalist, thought provoking, abstract, recognizable, relatable, sharp, vector art, even edges, black and white
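A minimal generation sketch with diffusers, reusing the first example prompt above (device, dtype, and inference settings are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "iamkaikai/amazing-logos", torch_dtype=torch.float16
).to("cuda")

prompt = ("Simple elegant logo for Digital Art, D A circle, successful vibe, "
          "minimalist, thought-provoking, abstract, recognizable, black and white")
image = pipe(prompt).images[0]
image.save("logo.png")
```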
|
liuyt75/t5-base_prefix_tuning_sentences_66agree_10
|
liuyt75
| 2023-07-27T12:35:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T12:05:05Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
HeinrichWirth/dqn-SpaceInvadersNoFrameskip-v4
|
HeinrichWirth
| 2023-07-27T12:33:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T12:33:11Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 538.00 +/- 145.31
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HeinrichWirth -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HeinrichWirth -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga HeinrichWirth
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 1e-05),
('learning_starts', 100000),
('n_timesteps', 30000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
bwilkie/bwilkie-whisper-small-dv
|
bwilkie
| 2023-07-27T12:32:54Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-27T09:25:35Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: bwilkie-whisper-small-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: all
split: None
metrics:
- name: Wer
type: wer
value: 0.23270055113288426
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bwilkie-whisper-small-dv
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7358
- Wer Ortho: 0.2389
- Wer: 0.2327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0001 | 17.86 | 500 | 0.7358 | 0.2389 | 0.2327 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
asenella/ms_MMVAEPlus_beta_25_scale_True_seed_1
|
asenella
| 2023-07-27T12:28:12Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:28:10Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
liuyt75/t5-base_prefix_tuning_sentences_50agree_15
|
liuyt75
| 2023-07-27T12:25:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T11:48:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
asenella/ms_MMVAEPlus_beta_5_scale_True_seed_0
|
asenella
| 2023-07-27T12:17:13Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:17:11Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
rehanhaider/DBSD-1.5-9-vectors-lr-5e-6
|
rehanhaider
| 2023-07-27T12:17:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-27T11:59:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: in the style of wlat_mntn
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - rehanhaider/DBSD-1.5-9-vectors-lr-5e-6
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "in the style of wlat_mntn" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
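A minimal inference sketch using the instance prompt (the subject wording, device, dtype, and output path are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "rehanhaider/DBSD-1.5-9-vectors-lr-5e-6", torch_dtype=torch.float16
).to("cuda")

image = pipe("a snowy mountain valley in the style of wlat_mntn").images[0]
image.save("wlat_mntn_sample.png")
```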
|
asenella/ms_MMVAEPlus_beta_10_scale_False_seed_3
|
asenella
| 2023-07-27T12:15:36Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:15:34Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
jordyvl/rvlcdip-tiny_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
|
jordyvl
| 2023-07-27T12:14:48Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-27T06:53:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rvlcdip-tiny_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rvlcdip-tiny_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6215
- Accuracy: 0.7963
- Brier Loss: 0.3076
- Nll: 1.6291
- F1 Micro: 0.7963
- F1 Macro: 0.7978
- Ece: 0.0919
- Aurc: 0.0682
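A minimal inference sketch (the image path is a placeholder; the predicted labels are the RVL-CDIP document classes this checkpoint was trained on):

```python
from transformers import pipeline

doc_classifier = pipeline("image-classification", model="jordyvl/rvlcdip-tiny_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5")
print(doc_classifier("scanned_page.png")[:3])  # top predictions for a scanned document image
```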
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 125 | 1.3808 | 0.541 | 0.5996 | 3.3159 | 0.541 | 0.5235 | 0.1039 | 0.2209 |
| No log | 2.0 | 250 | 1.0577 | 0.6525 | 0.4662 | 2.6310 | 0.6525 | 0.6396 | 0.0871 | 0.1302 |
| No log | 3.0 | 375 | 0.9165 | 0.7075 | 0.4104 | 2.2685 | 0.7075 | 0.7041 | 0.0788 | 0.1048 |
| 1.3004 | 4.0 | 500 | 0.8505 | 0.7298 | 0.3804 | 2.1171 | 0.7298 | 0.7380 | 0.0622 | 0.0934 |
| 1.3004 | 5.0 | 625 | 0.8063 | 0.745 | 0.3603 | 2.1178 | 0.745 | 0.7359 | 0.0588 | 0.0814 |
| 1.3004 | 6.0 | 750 | 0.7441 | 0.7662 | 0.3348 | 1.9219 | 0.7663 | 0.7636 | 0.0545 | 0.0741 |
| 1.3004 | 7.0 | 875 | 0.6987 | 0.7732 | 0.3193 | 1.8601 | 0.7732 | 0.7741 | 0.0509 | 0.0697 |
| 0.4682 | 8.0 | 1000 | 0.7033 | 0.773 | 0.3240 | 1.8889 | 0.7730 | 0.7733 | 0.0516 | 0.0776 |
| 0.4682 | 9.0 | 1125 | 0.6973 | 0.7865 | 0.3151 | 1.9589 | 0.7865 | 0.7838 | 0.0441 | 0.0760 |
| 0.4682 | 10.0 | 1250 | 0.7068 | 0.7748 | 0.3252 | 2.0362 | 0.7748 | 0.7749 | 0.0515 | 0.0791 |
| 0.4682 | 11.0 | 1375 | 0.6988 | 0.7768 | 0.3285 | 1.9227 | 0.7768 | 0.7801 | 0.0555 | 0.0840 |
| 0.1899 | 12.0 | 1500 | 0.7048 | 0.7762 | 0.3303 | 1.9777 | 0.7762 | 0.7719 | 0.0627 | 0.0809 |
| 0.1899 | 13.0 | 1625 | 0.6842 | 0.7785 | 0.3240 | 1.9360 | 0.7785 | 0.7784 | 0.0614 | 0.0808 |
| 0.1899 | 14.0 | 1750 | 0.6993 | 0.7742 | 0.3319 | 1.9508 | 0.7742 | 0.7727 | 0.0731 | 0.0759 |
| 0.1899 | 15.0 | 1875 | 0.6936 | 0.7742 | 0.3333 | 1.9042 | 0.7742 | 0.7760 | 0.0717 | 0.0853 |
| 0.1304 | 16.0 | 2000 | 0.6818 | 0.7837 | 0.3233 | 1.9541 | 0.7837 | 0.7855 | 0.0713 | 0.0853 |
| 0.1304 | 17.0 | 2125 | 0.6757 | 0.78 | 0.3255 | 1.8818 | 0.78 | 0.7829 | 0.0755 | 0.0834 |
| 0.1304 | 18.0 | 2250 | 0.7018 | 0.781 | 0.3348 | 2.0078 | 0.7810 | 0.7829 | 0.0786 | 0.0876 |
| 0.1304 | 19.0 | 2375 | 0.6872 | 0.7775 | 0.3340 | 1.8345 | 0.7775 | 0.7786 | 0.0864 | 0.0787 |
| 0.11 | 20.0 | 2500 | 0.7054 | 0.7758 | 0.3379 | 1.9542 | 0.7758 | 0.7747 | 0.0731 | 0.0847 |
| 0.11 | 21.0 | 2625 | 0.7006 | 0.782 | 0.3371 | 1.8610 | 0.782 | 0.7813 | 0.0821 | 0.0891 |
| 0.11 | 22.0 | 2750 | 0.7046 | 0.775 | 0.3428 | 1.8464 | 0.775 | 0.7772 | 0.0833 | 0.0814 |
| 0.11 | 23.0 | 2875 | 0.6620 | 0.789 | 0.3201 | 1.8174 | 0.7890 | 0.7908 | 0.0761 | 0.0799 |
| 0.0979 | 24.0 | 3000 | 0.6886 | 0.783 | 0.3324 | 1.8706 | 0.7830 | 0.7848 | 0.0807 | 0.0773 |
| 0.0979 | 25.0 | 3125 | 0.6600 | 0.7847 | 0.3236 | 1.8218 | 0.7847 | 0.7863 | 0.0833 | 0.0749 |
| 0.0979 | 26.0 | 3250 | 0.6777 | 0.7798 | 0.3349 | 1.7189 | 0.7798 | 0.7812 | 0.0951 | 0.0752 |
| 0.0979 | 27.0 | 3375 | 0.6554 | 0.7857 | 0.3212 | 1.7356 | 0.7857 | 0.7888 | 0.0871 | 0.0709 |
| 0.087 | 28.0 | 3500 | 0.6460 | 0.7955 | 0.3140 | 1.7680 | 0.7955 | 0.7970 | 0.0761 | 0.0696 |
| 0.087 | 29.0 | 3625 | 0.6371 | 0.7935 | 0.3136 | 1.6350 | 0.7935 | 0.7946 | 0.0830 | 0.0706 |
| 0.087 | 30.0 | 3750 | 0.6334 | 0.7915 | 0.3127 | 1.7187 | 0.7915 | 0.7933 | 0.0857 | 0.0712 |
| 0.087 | 31.0 | 3875 | 0.6293 | 0.7977 | 0.3075 | 1.7781 | 0.7977 | 0.7999 | 0.0799 | 0.0661 |
| 0.0793 | 32.0 | 4000 | 0.6273 | 0.7973 | 0.3076 | 1.6439 | 0.7973 | 0.7976 | 0.0782 | 0.0695 |
| 0.0793 | 33.0 | 4125 | 0.6320 | 0.7933 | 0.3123 | 1.6486 | 0.7932 | 0.7954 | 0.0899 | 0.0679 |
| 0.0793 | 34.0 | 4250 | 0.6345 | 0.79 | 0.3154 | 1.6402 | 0.79 | 0.7903 | 0.0922 | 0.0675 |
| 0.0793 | 35.0 | 4375 | 0.6209 | 0.793 | 0.3098 | 1.6026 | 0.793 | 0.7943 | 0.0863 | 0.0630 |
| 0.0733 | 36.0 | 4500 | 0.6187 | 0.7947 | 0.3076 | 1.6282 | 0.7947 | 0.7967 | 0.0880 | 0.0666 |
| 0.0733 | 37.0 | 4625 | 0.6146 | 0.7957 | 0.3051 | 1.6186 | 0.7957 | 0.7971 | 0.0885 | 0.0623 |
| 0.0733 | 38.0 | 4750 | 0.6169 | 0.7983 | 0.3062 | 1.6182 | 0.7983 | 0.7996 | 0.0835 | 0.0650 |
| 0.0733 | 39.0 | 4875 | 0.6180 | 0.7953 | 0.3074 | 1.6241 | 0.7953 | 0.7975 | 0.0889 | 0.0655 |
| 0.0693 | 40.0 | 5000 | 0.6204 | 0.7977 | 0.3069 | 1.6048 | 0.7977 | 0.7987 | 0.0824 | 0.0659 |
| 0.0693 | 41.0 | 5125 | 0.6140 | 0.7967 | 0.3055 | 1.6065 | 0.7967 | 0.7986 | 0.0911 | 0.0662 |
| 0.0693 | 42.0 | 5250 | 0.6162 | 0.7957 | 0.3062 | 1.6182 | 0.7957 | 0.7971 | 0.0883 | 0.0655 |
| 0.0693 | 43.0 | 5375 | 0.6169 | 0.796 | 0.3058 | 1.6212 | 0.796 | 0.7976 | 0.0879 | 0.0662 |
| 0.0673 | 44.0 | 5500 | 0.6173 | 0.7973 | 0.3063 | 1.6161 | 0.7973 | 0.7990 | 0.0877 | 0.0666 |
| 0.0673 | 45.0 | 5625 | 0.6193 | 0.797 | 0.3070 | 1.6151 | 0.797 | 0.7986 | 0.0881 | 0.0678 |
| 0.0673 | 46.0 | 5750 | 0.6209 | 0.7963 | 0.3076 | 1.6211 | 0.7963 | 0.7979 | 0.0894 | 0.0678 |
| 0.0673 | 47.0 | 5875 | 0.6211 | 0.7977 | 0.3075 | 1.6284 | 0.7977 | 0.7993 | 0.0905 | 0.0691 |
| 0.0662 | 48.0 | 6000 | 0.6206 | 0.7967 | 0.3072 | 1.6289 | 0.7967 | 0.7983 | 0.0892 | 0.0673 |
| 0.0662 | 49.0 | 6125 | 0.6213 | 0.7965 | 0.3075 | 1.6262 | 0.7965 | 0.7980 | 0.0886 | 0.0684 |
| 0.0662 | 50.0 | 6250 | 0.6215 | 0.7963 | 0.3076 | 1.6291 | 0.7963 | 0.7978 | 0.0919 | 0.0682 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
asenella/ms_MMVAEPlus_beta_25_scale_False_seed_3
|
asenella
| 2023-07-27T12:11:31Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:11:29Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
liuyt75/t5-base_prefix_tuning_sentences_50agree_10
|
liuyt75
| 2023-07-27T12:09:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T11:32:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
nikbhi/spaceinvador_dqn_v1
|
nikbhi
| 2023-07-27T12:08:39Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T12:08:00Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 699.00 +/- 289.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nikbhi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nikbhi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nikbhi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
asenella/ms_MMVAEPlus_beta_5_scale_False_seed_1
|
asenella
| 2023-07-27T12:05:36Z | 0 | 1 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:05:35Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_MMVAEPlus_beta_25_scale_True_seed_0
|
asenella
| 2023-07-27T12:05:35Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:05:33Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_MMVAEPlus_beta_25_scale_False_seed_2
|
asenella
| 2023-07-27T12:05:32Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:05:30Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
alesanm/blip-image-captioning-base-fashionimages-finetuned
|
alesanm
| 2023-07-27T12:05:03Z | 140 | 1 |
transformers
|
[
"transformers",
"pytorch",
"blip",
"image-text-to-text",
"image-to-text",
"dataset:alesanm/balenciaga_short_descriptions",
"region:us"
] |
image-to-text
| 2023-07-24T11:00:40Z |
---
inference: False
datasets:
- alesanm/balenciaga_short_descriptions
library_name: transformers
pipeline_tag: image-to-text
---
The BLIP model was trained on 141 photos of the Balenciaga fashion brand and on descriptions produced by GPT-3.
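A minimal captioning sketch (the image path is a placeholder):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

repo = "alesanm/blip-image-captioning-base-fashionimages-finetuned"
processor = BlipProcessor.from_pretrained(repo)
model = BlipForConditionalGeneration.from_pretrained(repo)

image = Image.open("balenciaga_look.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```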
|
asenella/ms_MMVAEPlus_beta_10_scale_True_seed_2
|
asenella
| 2023-07-27T12:01:53Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T12:01:50Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_MMVAEPlus_beta_5_scale_False_seed_0
|
asenella
| 2023-07-27T11:59:17Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T11:59:15Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
dhinman/Reinforce-Pixelcopter-200000
|
dhinman
| 2023-07-27T11:58:35Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-27T11:58:23Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-200000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 182.70 +/- 200.09
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
snob/TagMyBookmark-KoAlpaca-QLoRA-v1.0_ALLDATA
|
snob
| 2023-07-27T11:58:28Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T11:58:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
asenella/ms_MMVAEPlus_beta_5_scale_True_seed_3
|
asenella
| 2023-07-27T11:58:19Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T11:58:17Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Babon182/tibool
|
Babon182
| 2023-07-27T11:57:51Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-23T16:39:44Z |
---
license: creativeml-openrail-m
---
|