Columns: modelId (string, 5 to 139 chars) · author (string, 2 to 42 chars) · last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 to 2025-08-30 00:39:23) · downloads (int64, 0 to 223M) · likes (int64, 0 to 11.7k) · library_name (string, 526 classes) · tags (list, 1 to 4.05k entries) · pipeline_tag (string, 55 classes) · createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2025-08-30 00:39:08) · card (string, 11 to 1.01M chars)
lmqg/t5-small-squad-qa
lmqg
2023-05-17T07:46:26Z
111
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question answering", "en", "dataset:lmqg/qg_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-17T07:45:33Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_squad pipeline_tag: text2text-generation tags: - question answering widget: - text: "question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things." example_title: "Question Answering Example 1" - text: "question: who created the post as we know it today?, context: 'So much of The Post is Ben,' Mrs. Graham said in 1994, three years after Bradlee retired as editor. 'He created it as we know it today.'— Ed O'Keefe (@edatpost) October 21, 2014" example_title: "Question Answering Example 2" model-index: - name: lmqg/t5-small-squad-qa results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squad type: default args: default metrics: - name: BLEU4 (Question Answering) type: bleu4_question_answering value: 59.66 - name: ROUGE-L (Question Answering) type: rouge_l_question_answering value: 79.65 - name: METEOR (Question Answering) type: meteor_question_answering value: 48.74 - name: BERTScore (Question Answering) type: bertscore_question_answering value: 93.34 - name: MoverScore (Question Answering) type: moverscore_question_answering value: 86.84 - name: AnswerF1Score (Question Answering) type: answer_f1_score__question_answering value: 81.84 - name: AnswerExactMatch (Question Answering) type: answer_exact_match_question_answering value: 69.95 --- # Model Card of `lmqg/t5-small-squad-qa` This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question answering task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) 
(dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [t5-small](https://huggingface.co/t5-small) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-small-squad-qa") # model prediction answers = model.answer_q(list_question="What is a person called is practicing heresy?", list_context=" Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qa") output = pipe("question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. 
Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.") ``` ## Evaluation - ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:---------------------------------------------------------------| | AnswerExactMatch | 69.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | AnswerF1Score | 81.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | BERTScore | 93.34 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 71.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 67.5 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 63.3 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 59.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 48.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 86.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 79.65 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_question'] - output_types: ['answer'] - prefix_types: None - model: t5-small - max_length: 512 - max_length_output: 32 - epoch: 13 - batch: 32 - lr: 0.0005 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config 
file](https://huggingface.co/lmqg/t5-small-squad-qa/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
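Both usage examples above feed the model a single flat string of the form `question: ..., context: ...`; that format is visible in the card's widget examples. A minimal helper that builds such an input (the helper name is my own, not part of the `lmqg` API):

```python
def build_qa_input(question: str, context: str) -> str:
    """Format a question/context pair the way the card's widget examples do:
    'question: <q>, context: <c>'."""
    return f"question: {question}, context: {context}"

# Example mirroring the card's first widget entry
prompt = build_qa_input(
    "What is a person called is practicing heresy?",
    "Heresy is any provocative belief or theory that is strongly at variance "
    "with established beliefs or customs.",
)
print(prompt)
```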
sofa566/layoutlmv2-base-uncased_finetuned_docvqa
sofa566
2023-05-17T07:34:39Z
61
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "document-question-answering", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
document-question-answering
2023-05-17T07:00:32Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-base-uncased_finetuned_docvqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-base-uncased_finetuned_docvqa This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.0085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.3352 | 0.22 | 50 | 4.5120 | | 4.3566 | 0.44 | 100 | 4.0171 | | 3.9989 | 0.66 | 150 | 3.9234 | | 3.8014 | 0.88 | 200 | 3.5051 | | 3.5509 | 1.11 | 250 | 3.5408 | | 3.1372 | 1.33 | 300 | 3.2247 | | 2.9307 | 1.55 | 350 | 3.1225 | | 2.928 | 1.77 | 400 | 2.9461 | | 2.7004 | 1.99 | 450 | 2.5206 | | 2.1271 | 2.21 | 500 | 2.6079 | | 2.1387 | 2.43 | 550 | 2.8524 | | 1.9593 | 2.65 | 600 | 2.8749 | | 2.0105 | 2.88 | 650 | 2.6666 | | 1.84 | 3.1 | 700 | 3.0599 | | 1.9359 | 3.32 | 750 | 3.0472 | | 1.547 | 3.54 | 800 | 2.2308 | | 1.4161 | 3.76 | 850 | 2.2889 | | 2.1804 | 3.98 | 900 | 2.1462 | | 1.0261 | 4.2 | 950 | 2.9056 | | 1.392 | 4.42 | 1000 | 3.0021 | | 1.3816 | 4.65 | 1050 | 2.6913 | | 1.0117 | 4.87 | 1100 | 2.8484 | | 1.0094 | 5.09 | 1150 | 2.6936 | | 0.7316 | 5.31 | 1200 | 2.9901 | | 0.9172 | 5.53 | 1250 | 2.6366 | | 0.8608 | 5.75 
| 1300 | 2.8584 | | 0.7116 | 5.97 | 1350 | 3.1944 | | 0.321 | 6.19 | 1400 | 3.4703 | | 0.6663 | 6.42 | 1450 | 3.0456 | | 0.6319 | 6.64 | 1500 | 3.3318 | | 0.7001 | 6.86 | 1550 | 3.1439 | | 0.5952 | 7.08 | 1600 | 3.3220 | | 0.39 | 7.3 | 1650 | 3.8266 | | 0.434 | 7.52 | 1700 | 3.8287 | | 0.7599 | 7.74 | 1750 | 3.4079 | | 0.52 | 7.96 | 1800 | 3.3982 | | 0.5257 | 8.19 | 1850 | 3.5208 | | 0.4304 | 8.41 | 1900 | 3.8404 | | 0.4213 | 8.63 | 1950 | 3.9974 | | 0.3033 | 8.85 | 2000 | 3.9492 | | 0.2947 | 9.07 | 2050 | 3.9279 | | 0.2285 | 9.29 | 2100 | 3.5652 | | 0.3472 | 9.51 | 2150 | 3.5741 | | 0.2644 | 9.73 | 2200 | 3.8685 | | 0.3667 | 9.96 | 2250 | 3.5242 | | 0.1528 | 10.18 | 2300 | 3.5848 | | 0.1489 | 10.4 | 2350 | 3.8603 | | 0.1984 | 10.62 | 2400 | 3.6773 | | 0.3131 | 10.84 | 2450 | 3.7021 | | 0.1866 | 11.06 | 2500 | 3.8918 | | 0.1908 | 11.28 | 2550 | 3.9479 | | 0.1955 | 11.5 | 2600 | 3.9596 | | 0.1382 | 11.73 | 2650 | 4.1168 | | 0.2528 | 11.95 | 2700 | 4.1007 | | 0.0538 | 12.17 | 2750 | 4.2003 | | 0.1354 | 12.39 | 2800 | 4.3118 | | 0.1218 | 12.61 | 2850 | 4.1494 | | 0.1956 | 12.83 | 2900 | 4.1475 | | 0.0691 | 13.05 | 2950 | 4.4141 | | 0.0526 | 13.27 | 3000 | 4.7115 | | 0.0984 | 13.5 | 3050 | 4.6013 | | 0.1828 | 13.72 | 3100 | 4.2457 | | 0.0906 | 13.94 | 3150 | 4.4969 | | 0.025 | 14.16 | 3200 | 4.6981 | | 0.0149 | 14.38 | 3250 | 4.8642 | | 0.123 | 14.6 | 3300 | 4.5326 | | 0.0876 | 14.82 | 3350 | 4.5953 | | 0.0771 | 15.04 | 3400 | 4.4175 | | 0.066 | 15.27 | 3450 | 4.6324 | | 0.0542 | 15.49 | 3500 | 4.5058 | | 0.0293 | 15.71 | 3550 | 4.7244 | | 0.0428 | 15.93 | 3600 | 4.9415 | | 0.009 | 16.15 | 3650 | 4.9592 | | 0.0715 | 16.37 | 3700 | 4.9211 | | 0.0044 | 16.59 | 3750 | 4.9854 | | 0.0767 | 16.81 | 3800 | 4.7985 | | 0.0356 | 17.04 | 3850 | 4.7618 | | 0.0562 | 17.26 | 3900 | 4.9239 | | 0.0085 | 17.48 | 3950 | 4.9837 | | 0.0114 | 17.7 | 4000 | 5.0808 | | 0.0057 | 17.92 | 4050 | 5.0377 | | 0.0306 | 18.14 | 4100 | 5.0137 | | 0.0426 | 18.36 | 4150 | 4.9367 | | 0.0429 | 18.58 | 
4200 | 5.0050 | | 0.0081 | 18.81 | 4250 | 4.9806 | | 0.0168 | 19.03 | 4300 | 4.9902 | | 0.0074 | 19.25 | 4350 | 4.9939 | | 0.0075 | 19.47 | 4400 | 4.9986 | | 0.0307 | 19.69 | 4450 | 5.0095 | | 0.02 | 19.91 | 4500 | 5.0085 | ### Framework versions - Transformers 4.29.1 - Pytorch 1.12.1 - Datasets 2.11.0 - Tokenizers 0.11.0
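The validation loss in the table above bottoms out mid-training and then climbs steadily to the final 5.0085, a typical overfitting curve. A quick sketch for locating the best checkpoint from such a log (the step/loss pairs are a small sample copied from the table):

```python
# (step, validation_loss) pairs sampled from the training log above
log = [(50, 4.5120), (450, 2.5206), (800, 2.2308), (900, 2.1462),
       (1000, 3.0021), (2250, 3.5242), (3600, 4.9415), (4500, 5.0085)]

# The checkpoint worth keeping is the one with the lowest validation loss
best_step, best_loss = min(log, key=lambda pair: pair[1])
print(best_step, best_loss)
```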
Bainbridge/gpt2-kl_01_04-hs_cn-loto_lgbt
Bainbridge
2023-05-17T07:34:37Z
8
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T20:17:15Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-kl_01_04-hs_cn-loto_lgbt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-kl_01_04-hs_cn-loto_lgbt This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 21 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 72.2334 | 0.03 | 10 | 64.5474 | | 30.2905 | 0.06 | 20 | 17.8612 | | 8.886 | 0.08 | 30 | 6.3404 | | 3.4861 | 0.11 | 40 | 2.6744 | | 1.6441 | 0.14 | 50 | 1.1987 | | 0.9746 | 0.17 | 60 | 0.8715 | | 1.0152 | 0.2 | 70 | 0.7307 | | 0.7374 | 0.23 | 80 | 0.6868 | | 0.6436 | 0.25 | 90 | 0.6203 | | 0.7525 | 0.28 | 100 | 0.6001 | | 0.6146 | 0.31 | 110 | 0.5946 | | 0.5676 | 0.34 | 120 | 0.5914 | | 0.563 | 0.37 | 130 | 0.5716 | | 0.6439 | 0.4 | 140 | 0.5743 | | 0.5706 | 0.42 | 150 | 0.5702 | | 0.689 | 0.45 | 160 | 0.5696 | | 0.5986 | 0.48 | 170 | 0.5557 | | 0.6159 | 0.51 | 180 | 0.5606 | | 0.5925 | 0.54 | 190 | 0.5498 | | 0.6124 | 0.57 | 200 | 0.5496 | | 0.559 | 0.59 | 210 | 0.5501 | | 0.6202 | 0.62 | 220 | 0.5544 | | 0.6504 | 0.65 | 230 | 0.5486 | | 0.697 | 0.68 | 240 | 0.5528 | | 0.5171 | 0.71 | 250 | 0.5522 | | 0.6247 | 0.74 | 260 | 0.5390 | | 0.5882 | 0.76 | 270 | 0.5350 | | 
0.5941 | 0.79 | 280 | 0.5339 | | 0.5673 | 0.82 | 290 | 0.5321 | | 0.6336 | 0.85 | 300 | 0.5307 | | 0.5581 | 0.88 | 310 | 0.5264 | | 0.5499 | 0.91 | 320 | 0.5251 | | 0.5626 | 0.93 | 330 | 0.5227 | | 0.5443 | 0.96 | 340 | 0.5205 | | 0.6252 | 0.99 | 350 | 0.5215 | | 0.5427 | 1.02 | 360 | 0.5241 | | 0.5231 | 1.05 | 370 | 0.5297 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.11.0+cu113 - Datasets 2.11.0 - Tokenizers 0.13.3
bastienm/poca-SoccerTwos
bastienm
2023-05-17T07:09:41Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-05-17T07:09:27Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial for training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: bastienm/poca-SoccerTwos 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
yogesh0502/distilbert-base-uncased-finetuned-cuad_smaller_2
yogesh0502
2023-05-17T07:09:10Z
120
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:cuad_v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-17T07:01:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cuad_v1 model-index: - name: distilbert-base-uncased-finetuned-cuad_smaller_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cuad_smaller_2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the cuad_v1 dataset. It achieves the following results on the evaluation set: - Loss: 0.0929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 40 - eval_batch_size: 40 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 100 | 0.0827 | | No log | 2.0 | 200 | 0.0897 | | No log | 3.0 | 300 | 0.0862 | | No log | 4.0 | 400 | 0.0929 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
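Extractive QA heads like this DistilBERT model score every token as a possible answer start and end; the standard decoding step then picks the span maximizing start score + end score with start ≤ end. A toy sketch of that decoding (the logits here are made up for illustration):

```python
def best_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) token pair with the highest combined score,
    subject to start <= end and a maximum span length."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# toy logits: token 2 is the likely start, token 4 the likely end
span = best_span([0.1, 0.2, 5.0, 0.3, 0.1], [0.1, 0.1, 0.2, 0.4, 4.8])
print(span)
```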
rlucasz93/poca-SoccerTwos
rlucasz93
2023-05-17T07:00:10Z
30
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-05-17T07:00:04Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial for training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: rlucasz93/poca-SoccerTwos 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
askmyteapot/GPT4-X-Alpasta-30b-4bit
askmyteapot
2023-05-17T06:51:01Z
9
4
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T04:05:45Z
## This is a 4bit quant of https://huggingface.co/MetaIX/GPT4-X-Alpasta-30b # My secret sauce: * Using commit <a href="https://github.com/0cc4m/GPTQ-for-LLaMa/tree/3c16fd9c7946ebe85df8d951cb742adbc1966ec7">3c16fd9</a> of 0cc4m's GPTQ fork * Using C4 as the calibration dataset * Act-order, True-sequential, percdamp 0.1 (<i>the default percdamp is 0.01</i>) * No groupsize * Will run with CUDA, does not need triton. * Quant completed on a 'Premium GPU' and 'High Memory' Google Colab. ## Benchmark results |<b>Model</b>|<b>C4</b>|<b>WikiText2</b>|<b>PTB</b>| |:---:|---|---|---| |MetaIX's FP16|6.98400259|4.607768536|9.414786339| |This Quant|7.292364597|4.954069614|9.754593849|
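The benchmark table shows the 4-bit quant trailing the FP16 original by a modest margin on all three perplexity sets. A quick sketch computing the relative perplexity increase directly from the numbers above:

```python
# Perplexities copied from the benchmark table in the card
fp16 = {"C4": 6.98400259, "WikiText2": 4.607768536, "PTB": 9.414786339}
quant = {"C4": 7.292364597, "WikiText2": 4.954069614, "PTB": 9.754593849}

# Relative perplexity increase of the 4-bit quant versus FP16, in percent
for name in fp16:
    delta = 100 * (quant[name] - fp16[name]) / fp16[name]
    print(f"{name}: +{delta:.1f}%")
```

The degradation works out to roughly 4 to 8 percent across the three sets, which is in line with what act-order GPTQ quantization without groupsize typically costs.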
douglch/LunarLander-ppo
douglch
2023-05-17T06:45:03Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-17T06:44:44Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 250.63 +/- 14.90 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list): ```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename below is assumed,
# verify it against the files in the repository
checkpoint = load_from_hub(repo_id="douglch/LunarLander-ppo", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
vnktrmnb/fine_tune_bert_output
vnktrmnb
2023-05-17T06:42:34Z
107
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-17T06:40:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikiann model-index: - name: fine_tune_bert_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine_tune_bert_output This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
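Token-classification models fine-tuned on wikiann emit BIO tags per token; turning those into entity spans is a standard postprocessing step. A minimal sketch (tag names follow the usual wikiann PER/ORG/LOC scheme; the model's actual `id2label` mapping lives in its config):

```python
def bio_to_entities(tokens, tags):
    """Group B-/I- tagged tokens into (entity_type, text) spans."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = (tag[2:], [token])          # start a new entity
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)              # continue the entity
        else:
            if current:
                entities.append(current)          # flush on O or mismatch
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]

entities = bio_to_entities(
    ["John", "Smith", "works", "at", "Acme"],
    ["B-PER", "I-PER", "O", "O", "B-ORG"],
)
print(entities)
```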
kinshuk-h/flan-t5-cbp-lkg-corpus-small-finetuned
kinshuk-h
2023-05-17T06:17:56Z
106
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "legal", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-17T06:14:19Z
--- license: mit language: - en pipeline_tag: text2text-generation tags: - legal --- # flan-t5-cbp-lkg-corpus-small-finetuned [flan-t5-cbp-lkg-corpus-mlm-small](https://huggingface.co/kinshuk-h/flan-t5-cbp-lkg-corpus-mlm-small) finetuned over a cleaned version of the Legal Knowledge Graph using the training method for [KGT-5](https://github.com/apoorvumang/kgt5/).
yuanzhoulvpi/chinese_bloom_560m
yuanzhoulvpi
2023-05-17T06:16:31Z
200
9
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "zh", "dataset:BelleGroup/train_1M_CN", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-17T05:35:51Z
--- license: bigscience-bloom-rail-1.0 datasets: - BelleGroup/train_1M_CN language: - zh library_name: transformers --- ## Introduction 1. ✅ We performed SFT on the `bloom-560m` model; at this data scale and model size, the results are very good! 2. 🚀 The training and inference code is fully shared; see [https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom](https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom) ## Personal impressions 1. 🎯 The `bloom` family of models has great potential for Chinese; after supervised fine-tuning, the results are striking! 2. 🔄 The `bloom` family covers Chinese, English, code, French, Spanish, and more. They also work for translation and code generation! (tutorials to follow) 3. 😛 This `bloom-560m` model mainly serves to validate the full training pipeline; you can seamlessly switch to other sizes such as `3b` or `7b`! ## How to use ```python from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "yuanzhoulvpi/chinese_bloom_560m"#"bigscience/bloomz-3b" #"bigscience/bloom-7b1"# "output_dir/checkpoint-8260"# tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint) PROMPT_DICT = { "prompt_input": ( "Below is an instruction that describes a task, paired with an input that provides further context. " "Write a response that appropriately completes the request.\n\n" "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" ), "prompt_no_input": ( "Below is an instruction that describes a task. " "Write a response that appropriately completes the request.\n\n" "### Instruction:\n{instruction}\n\n### Response:" ), } from typing import Optional def generate_input(instruction:Optional[str]= None, input_str:Optional[str] = None) -> str: if input_str is None: return PROMPT_DICT['prompt_no_input'].format_map({'instruction':instruction}) else: return PROMPT_DICT['prompt_input'].format_map({'instruction':instruction, 'input':input_str}) for i in range(5): print("*"*80) inputs = tokenizer.encode(generate_input(instruction="你是谁"), return_tensors="pt") outputs = model.generate(inputs,num_beams=3, max_new_tokens=512, do_sample=False, top_k=10, penalty_alpha=0.6, temperature=0.8, repetition_penalty=1.2) print(tokenizer.decode(outputs[0])) ``` ### Example 1 1. 
Output before fine-tuning (Chinese model output kept verbatim) ```bash Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: 写一篇关于水杯的文章 ### Response: 杯子是用来喝水用的</s> ``` 2. Output after fine-tuning ```bash Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: 写一篇关于水杯的文章 ### Response: 水杯是人们日常生活中必不可少的物品之一,它能够帮助人们盛放、储存和运输水。以下是一些有关水杯的详细信息: 1. 形状:水杯通常有圆形、方形、长方形等不同形状。 2. 容量:水杯通常有10毫升、20毫升、30毫升、40毫升等不同容量。 3. 材质:水杯通常有玻璃、陶瓷、金属等不同材质。 4. 颜色:水杯通常有红色、蓝色、黄色、绿色等不同颜色。 5. 用途:水杯通常用于盛放、储存和运输水,如饮料、果汁、茶、咖啡等。 6. 功能:水杯可以用来盛放、储存和运输各种液体,如饮料、果汁、茶、咖啡等。 7. 用途广泛:水杯不仅用于盛放、储存和运输水,还可以用于制作各种饮料、果汁、茶、咖啡等。 总之,水杯是一个不可或缺的物品,它能够帮助人们盛放、储存和运输水,同时还可以用于制作各种饮料、果汁、茶、咖啡等。</s> ``` ### Example 2 1. Output before fine-tuning ```bash Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: 你是谁 ### Response: I am a student.</s> ``` 2. Output after fine-tuning ```bash Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: 你是谁 ### Response:我是一个AI语言模型,没有个人身份。</s> ```
SergeyKazulin/Taxi-v3-var2
SergeyKazulin
2023-05-17T05:57:41Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-17T05:57:39Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-var2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="SergeyKazulin/Taxi-v3-var2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
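At play time a Q-Learning agent like this one just acts greedily: look up the current state's row in the Q-table and take the highest-valued action. A minimal sketch with a toy table (the real table is inside `q-learning.pkl`; its exact key name in the pickled dict is an assumption):

```python
def greedy_action(q_table, state):
    """Return the index of the highest-valued action for a state."""
    row = q_table[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy 3-state x 4-action table; the real Taxi-v3 table is 500 states x 6 actions
q_table = [
    [0.0, 1.2, -0.5, 0.3],
    [2.1, 0.0, 0.4, 0.9],
    [-1.0, -0.2, 0.0, 3.7],
]
print(greedy_action(q_table, 0))
```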
shrusti333/language_translation
shrusti333
2023-05-17T05:55:43Z
0
0
keras
[ "keras", "tf-keras", "translation", "en", "dataset:shrusti333/konkani_translation", "region:us" ]
translation
2023-05-17T04:54:02Z
--- datasets: - shrusti333/konkani_translation language: - en metrics: - accuracy pipeline_tag: translation library_name: keras ---
ryanwang0601/llm_week01
ryanwang0601
2023-05-17T05:44:06Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-17T05:22:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: llm_week01 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # llm_week01 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
Tomispipat/imneko
Tomispipat
2023-05-17T05:37:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-17T05:31:52Z
--- license: creativeml-openrail-m ---
amqdn/distilbert-clf-20newsgroups
amqdn
2023-05-17T05:28:23Z
62
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-17T05:16:21Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilbert-clf-20newsgroups results: [] --- # distilbert-clf-20newsgroups This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on 20newsgroups. It achieves the following results on the evaluation set: * loss: 0.5506 * accuracy: 0.8401 ## Model description ## Intended uses & limitations ## Training and evaluation data ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results * loss: 0.2480 * accuracy: 0.9422 * val_loss: 0.3633 * val_accuracy: 0.8940 ### Framework versions - Transformers 4.28.0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
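The optimizer config above uses a `PolynomialDecay` learning-rate schedule; with `power: 1.0` and `end_learning_rate: 0.0` this is simply a linear ramp from 2e-05 down to zero over 1908 steps. A sketch of the schedule formula, mirroring the Keras definition for the non-cycling case:

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0,
                     decay_steps=1908, power=1.0):
    """Keras-style PolynomialDecay for cycle=False: the step is clamped to
    decay_steps, then the rate decays polynomially toward end_lr."""
    step = min(step, decay_steps)
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))      # initial rate
print(polynomial_decay(954))    # halfway through the schedule
print(polynomial_decay(1908))   # fully decayed
```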
nitzansaar/dog_cat_classifier
nitzansaar
2023-05-17T05:27:30Z
63
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-17T05:11:39Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nitzansaar/food_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nitzansaar/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4991 - Validation Loss: 0.5181 - Train Accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6452 | 0.6047 | 1.0 | 0 | | 0.6149 | 0.5802 | 1.0 | 1 | | 0.5612 | 0.5573 | 1.0 | 2 | | 0.5321 | 0.5360 | 1.0 | 3 | | 0.4991 | 0.5181 | 1.0 | 4 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
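At inference, an image-classification head like this ViT outputs one logit per class; softmax turns those into probabilities and the top index selects the label. A toy sketch for the binary dog/cat case (the label ordering here is an assumption; the real mapping is in the model's `id2label` config):

```python
import math

def softmax(logits):
    m = max(logits)                                # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

id2label = {0: "cat", 1: "dog"}                    # assumed ordering
logits = [0.3, 2.1]                                # toy model output
probs = softmax(logits)
label = id2label[max(range(len(probs)), key=lambda i: probs[i])]
print(label, round(max(probs), 3))
```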
MDOWNLOAD/BEELPINKLORA
MDOWNLOAD
2023-05-17T05:18:38Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-17T05:16:12Z
--- license: creativeml-openrail-m ---
SergeyKazulin/Taxi-v3-var1
SergeyKazulin
2023-05-17T05:17:17Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-17T05:17:16Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-var1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="SergeyKazulin/Taxi-v3-var1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
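The usage snippet above only loads the pickled checkpoint and recreates the environment; acting with a tabular Q-learning agent is then a greedy argmax over the current state's row of the Q-table. A minimal sketch, assuming the loaded pickle is a dict with a `qtable` array (that key name is an assumption about the checkpoint layout):

```python
import numpy as np

# Minimal sketch of greedy action selection from a loaded Q-table.
# The "qtable" key is an assumption about the pickle layout; adjust to
# whatever load_from_hub actually returns.
def greedy_action(model, state):
    return int(np.argmax(model["qtable"][state]))

# Toy 3-state, 2-action table standing in for the real Taxi-v3 table
# (which is 500 states x 6 actions).
toy_model = {"qtable": np.array([[0.1, 0.9],
                                 [0.7, 0.2],
                                 [0.0, 0.0]])}
print(greedy_action(toy_model, 0))  # 1
print(greedy_action(toy_model, 1))  # 0
```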
SergeyKazulin/FrozenLake-1v-ver1
SergeyKazulin
2023-05-17T05:11:09Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-17T05:11:07Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: FrozenLake-1v-ver1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="SergeyKazulin/FrozenLake-1v-ver1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
XiaominDLUT/textual_inversion_style
XiaominDLUT
2023-05-17T05:10:14Z
4
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-17T02:39:21Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - XiaominDLUT/textual_inversion_style These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
kinshuk-h/flan-t5-cbp-lkg-corpus-w-context-small-finetuned
kinshuk-h
2023-05-17T05:09:29Z
105
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "legal", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-17T04:51:57Z
--- license: mit language: - en pipeline_tag: text2text-generation tags: - legal --- # flan-t5-cbp-lkg-corpus-w-context-small-finetuned [flan-t5-cbp-lkg-corpus-mlm-small](https://huggingface.co/kinshuk-h/flan-t5-cbp-lkg-corpus-mlm-small) finetuned over a cleaned version of the Legal Knowledge Graph using the training method for [KGT-5](https://github.com/apoorvumang/kgt5/) with additional context supplied alongside the prompts.
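The card does not spell out the KGT-5-style prompt format, so the following is a purely hypothetical sketch of how a link-prediction query with extra context might be verbalized; the `predict tail:` wording, the `|` separator, and the `context:` field are all illustrative assumptions, not this model's actual format:

```python
# Hypothetical sketch of a KGT-5-style link-prediction prompt with added
# context. Every literal here is an illustrative assumption.
def build_prompt(head, relation, context=None):
    prompt = f"predict tail: {head} | {relation}"
    if context:
        prompt += f" | context: {context}"
    return prompt

print(build_prompt("Article 21", "guarantees",
                   context="Constitution of India, Part III"))
```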
knat/electra_eli5_mlm_model_kt
knat
2023-05-17T05:03:57Z
61
0
transformers
[ "transformers", "tf", "electra", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-05-17T02:05:01Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: knat/electra_eli5_mlm_model_kt results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # knat/electra_eli5_mlm_model_kt This model is a fine-tuned version of [knat/electra_eli5_mlm_model_kt](https://huggingface.co/knat/electra_eli5_mlm_model_kt) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.4028 - Validation Loss: 2.2653 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.4828 | 2.3100 | 0 | | 2.4366 | 2.2712 | 1 | | 2.4028 | 2.2653 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
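Since the validation loss reported above is a mean cross-entropy per masked token, a rough pseudo-perplexity for the MLM can be read off as its exponential:

```python
import math

# The final validation loss above (2.2653) is mean cross-entropy per
# masked token, so a rough pseudo-perplexity is just its exponential.
val_loss = 2.2653
perplexity = math.exp(val_loss)
print(round(perplexity, 2))  # ~9.63
```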
sofa566/git-base-pokemon
sofa566
2023-05-17T04:48:39Z
61
0
transformers
[ "transformers", "pytorch", "tensorboard", "git", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-05-17T04:10:35Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: git-base-pokemon results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0371 - Wer Score: 2.4731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 7.3268 | 2.13 | 50 | 4.4847 | 21.4974 | | 2.2464 | 4.26 | 100 | 0.3519 | 11.4118 | | 0.1049 | 6.38 | 150 | 0.0302 | 0.7468 | | 0.0223 | 8.51 | 200 | 0.0270 | 0.4668 | | 0.0137 | 10.64 | 250 | 0.0280 | 3.5742 | | 0.0073 | 12.77 | 300 | 0.0304 | 7.1240 | | 0.0034 | 14.89 | 350 | 0.0309 | 6.4885 | | 0.0018 | 17.02 | 400 | 0.0326 | 5.0499 | | 0.0011 | 19.15 | 450 | 0.0335 | 5.2302 | | 0.0009 | 21.28 | 500 | 0.0342 | 4.3645 | | 0.0007 | 23.4 | 550 | 0.0346 | 5.1445 | | 0.0006 | 25.53 | 600 | 0.0351 | 4.0639 | | 0.0006 | 27.66 | 650 | 0.0355 | 3.8862 | | 0.0006 | 29.79 | 700 | 0.0359 | 3.4514 | | 0.0006 | 31.91 | 750 | 0.0363 | 3.0486 | | 0.0006 | 34.04 | 800 | 0.0363 | 2.8645 | | 0.0006 | 36.17 | 850 | 0.0366 | 2.7199 | | 0.0006 | 38.3 | 900 | 
0.0369 | 2.6675 | | 0.0006 | 40.43 | 950 | 0.0369 | 2.6304 | | 0.0006 | 42.55 | 1000 | 0.0370 | 2.4910 | | 0.0006 | 44.68 | 1050 | 0.0370 | 2.4834 | | 0.0006 | 46.81 | 1100 | 0.0371 | 2.4629 | | 0.0006 | 48.94 | 1150 | 0.0371 | 2.4731 | ### Framework versions - Transformers 4.29.1 - Pytorch 1.12.1 - Datasets 2.11.0 - Tokenizers 0.11.0
LecJackS/gpt2-finetuned-folk-mythology-tales
LecJackS
2023-05-17T04:45:34Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-17T03:04:23Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-finetuned-folk-mythology-tales results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-folk-mythology-tales This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3083 | 1.0 | 2844 | 3.1913 | | 3.1865 | 2.0 | 5688 | 3.1404 | | 3.1166 | 3.0 | 8532 | 3.1063 | | 3.0783 | 4.0 | 11376 | 3.0944 | | 3.0474 | 5.0 | 14220 | 3.0895 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Homogoriso/Anime
Homogoriso
2023-05-17T04:35:28Z
0
0
null
[ "license:cc-by-nc-nd-3.0", "region:us" ]
null
2023-05-17T04:35:28Z
--- license: cc-by-nc-nd-3.0 ---
rishabhjain16/whisper_tiny_to_myst_cmu_pf
rishabhjain16
2023-05-17T04:27:32Z
88
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-16T15:24:40Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: openai/whisper-tiny.en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-tiny.en This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3883 - Wer: 14.9814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.4727 | 2.04 | 500 | 0.4062 | 16.9369 | | 0.2717 | 5.04 | 1000 | 0.3271 | 15.3713 | | 0.2022 | 8.04 | 1500 | 0.3251 | 15.2362 | | 0.1483 | 11.03 | 2000 | 0.3438 | 14.1539 | | 0.0833 | 14.03 | 2500 | 0.3583 | 15.5601 | | 0.0848 | 17.03 | 3000 | 0.3755 | 14.8514 | | 0.0475 | 20.02 | 3500 | 0.3849 | 15.0281 | | 0.0424 | 23.02 | 4000 | 0.3883 | 14.9814 | ### Framework versions - Transformers 4.29.1 - Pytorch 1.14.0a0+44dac51 - Datasets 2.12.0 - Tokenizers 0.13.3
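The `wer` metric reported above is word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the number of reference words. A minimal sketch (evaluation scripts typically use the `jiwer` or `evaluate` packages instead):

```python
# Minimal sketch of the WER metric reported above: word-level Levenshtein
# distance divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```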
jekQ11/borutouzumakiv1
jekQ11
2023-05-17T04:22:14Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-17T04:19:00Z
--- license: creativeml-openrail-m ---
TimTL/distilgpt2-finetuned-wikitext2
TimTL
2023-05-17T03:27:23Z
198
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T13:43:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikitext model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the wikitext dataset. It achieves the following results on the evaluation set: - Loss: 3.6436 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5784 | 1.0 | 2334 | 3.6436 | | 3.5882 | 2.0 | 4668 | 3.6436 | | 3.5858 | 3.0 | 7002 | 3.6436 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.10.0 - Tokenizers 0.13.2
Sylvia-my/0517trial
Sylvia-my
2023-05-17T03:20:42Z
0
0
open_clip
[ "open_clip", "finance", "medical", "text-generation-inference", "text-classification", "aa", "ar", "dataset:QingyiSi/Alpaca-CoT", "license:openrail", "region:us" ]
text-classification
2023-05-17T03:17:08Z
--- license: openrail datasets: - QingyiSi/Alpaca-CoT language: - aa - ar metrics: - bleu - character library_name: open_clip pipeline_tag: text-classification tags: - finance - medical - text-generation-inference ---
tKah/Textclassification-Bert
tKah
2023-05-17T03:01:48Z
68
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-07T16:39:11Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Textclassification-Bert results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Textclassification-Bert This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1439 - Validation Loss: 0.5583 - Train Matthews Correlation: 0.5803 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.4792 | 0.4276 | 0.5446 | 0 | | 0.2664 | 0.4445 | 0.5602 | 1 | | 0.1439 | 0.5583 | 0.5803 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
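The Matthews correlation metric tracked above can be computed directly from binary confusion-matrix counts; a small sketch:

```python
import math

# Sketch of the Matthews correlation coefficient reported above, computed
# from binary confusion-matrix counts (tp, tn, fp, fn).
def matthews_corrcoef(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Perfect predictions give 1.0; chance-level predictions give ~0.0.
print(matthews_corrcoef(tp=50, tn=50, fp=0, fn=0))   # 1.0
print(matthews_corrcoef(tp=25, tn=25, fp=25, fn=25)) # 0.0
```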
WangCo/distilbert-base-uncased_emotion_ft_0416
WangCo
2023-05-17T03:01:18Z
106
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-17T02:50:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion model-index: - name: distilbert-base-uncased_emotion_ft_0416 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0416 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Framework versions - Transformers 4.28.1 - Pytorch 1.13.1 - Datasets 2.12.0 - Tokenizers 0.11.0
tKah/DistilBERT-TxtClassification
tKah
2023-05-17T02:52:15Z
68
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-07T16:13:34Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: DistilBERT-TxtClassification results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT-TxtClassification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1885 - Validation Loss: 0.5463 - Train Matthews Correlation: 0.5220 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5236 | 0.4886 | 0.3976 | 0 | | 0.3248 | 0.4883 | 0.5074 | 1 | | 0.1885 | 0.5463 | 0.5220 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
cmpatino/Taxi-v3
cmpatino
2023-05-17T02:49:52Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-17T02:49:50Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="cmpatino/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
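The `7.56 +/- 2.71` mean_reward above is conventionally the mean and standard deviation of per-episode returns over the evaluation episodes. A sketch with toy returns (population std here; some evaluation scripts use the sample std instead):

```python
import statistics

# Sketch of how a "mean_reward +/- std" metric like the one above is
# typically computed: mean and population std of per-episode returns.
def summarize_returns(returns):
    mean = statistics.mean(returns)
    std = statistics.pstdev(returns)
    return round(mean, 2), round(std, 2)

# Toy returns standing in for real evaluation episodes of Taxi-v3.
print(summarize_returns([8, 5, 11, 7, 6, 9]))
```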
lferncastro/distilbert_classifier
lferncastro
2023-05-17T02:48:40Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-17T02:48:08Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilbert_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
cmpatino/q-FrozenLake-v1-4x4-noSlippery
cmpatino
2023-05-17T02:48:14Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-17T02:48:11Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="cmpatino/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
kinshuk-h/flan-t5-cbp-lkg-alt-mlm-w-context-small
kinshuk-h
2023-05-17T02:46:29Z
105
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "legal", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-17T02:45:27Z
--- license: mit language: - en pipeline_tag: text2text-generation tags: - legal --- # flan-t5-cbp-lkg-alt-mlm-w-context-small Google's Flan T5 model ([flan-t5-small](https://huggingface.co/google/flan-t5-small)) trained over a cleaned version of the Legal Knowledge Graph using the span-masking MLM objective with additional sentences alongside the prompts.
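The span-masking MLM objective mentioned above works by replacing contiguous token spans with sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...) in the input and asking the model to emit each sentinel followed by the hidden span. A minimal sketch of the input/target construction (span selection and tokenization details are simplified):

```python
# Sketch of T5-style span masking: chosen spans are replaced by sentinel
# tokens in the input, and the target lists each sentinel followed by the
# span it hides, closed by a final sentinel.
def span_mask(tokens, spans):
    """spans: list of (start, end) index pairs, non-overlapping, sorted."""
    inp, tgt, prev = [], [], 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inp.extend(tokens[prev:start])
        inp.append(sentinel)
        tgt.append(sentinel)
        tgt.extend(tokens[start:end])
        prev = end
    inp.extend(tokens[prev:])
    tgt.append(f"<extra_id_{len(spans)}>")
    return " ".join(inp), " ".join(tgt)

tokens = "the court held the statute was invalid".split()
print(span_mask(tokens, [(1, 2), (5, 7)]))
```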
akmalartsai/AmisaMiyazaki.v1
akmalartsai
2023-05-17T02:45:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-16T16:51:54Z
--- license: creativeml-openrail-m ---
Ktang2k/rl_course_vizdoom_health_gathering_supreme
Ktang2k
2023-05-17T02:24:42Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-17T02:24:33Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.11 +/- 5.23 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Ktang2k/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
zhen182/jokowi-so-vits-svc-modelv1
zhen182
2023-05-17T02:15:54Z
5
0
transformers
[ "transformers", "license:creativeml-openrail-m", "endpoints_compatible", "region:us" ]
null
2023-05-16T07:25:07Z
--- license: creativeml-openrail-m ---
chaoyi-wu/PMC_LLAMA_7B
chaoyi-wu
2023-05-17T02:14:59Z
1,477
62
transformers
[ "transformers", "pytorch", "llama", "text-generation", "medical", "dataset:allenai/s2orc", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-12T12:59:34Z
--- license: apache-2.0 tags: - medical datasets: - allenai/s2orc --- This repo contains PMC_LLaMA_7B, which is LLaMA-7B finetuned on the PMC papers in the S2ORC dataset. The model was trained with the following hyperparameters: * Epochs: 5 * Batch size: 128 * Cutoff length: 512 * Learning rate: 2e-5 In each epoch we sample 512 tokens per paper for training. The model can be loaded as follows: ```python import transformers import torch tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B') model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B') sentence = 'Hello, doctor' batch = tokenizer( sentence, return_tensors="pt", add_special_tokens=False ) with torch.no_grad(): generated = model.generate(inputs=batch["input_ids"], max_length=200, do_sample=True, top_k=50) print('model predict: ', tokenizer.decode(generated[0])) ```
AZZLI/ghibli_background
AZZLI
2023-05-17T02:13:28Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-17T02:12:15Z
--- license: creativeml-openrail-m ---
yuuyassssui/imneko
yuuyassssui
2023-05-17T01:54:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-17T01:49:56Z
--- license: creativeml-openrail-m ---
nelson2424/gptj6b-FAQ-NelsMarketplace
nelson2424
2023-05-17T01:53:12Z
7
0
transformers
[ "transformers", "gptj", "text-generation", "dataset:nelson2424/FAQ_NelsMarketplace", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-17T01:27:07Z
--- license: mit datasets: - nelson2424/FAQ_NelsMarketplace --- This is a fine-tuning of GPT-J-6B using LoRA - https://huggingface.co/EleutherAI/gpt-j-6B
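LoRA keeps each pretrained weight W frozen and learns a low-rank update (alpha / r) * B A on top of it; since B is zero-initialized, the adapted weight starts out identical to the original. A small numeric sketch (rank and alpha here are illustrative, not this model's actual settings):

```python
import numpy as np

# Sketch of the LoRA idea used for this fine-tune: the frozen weight W is
# augmented with a low-rank update (alpha / r) * B @ A. Shapes follow the
# LoRA formulation; d, k, r, alpha here are illustrative values.
d, k, r, alpha = 8, 8, 2, 16
rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))   # frozen pretrained weight
A = rng.normal(size=(r, k))   # trainable down-projection
B = np.zeros((d, r))          # trainable up-projection, zero-initialized
W_adapted = W + (alpha / r) * (B @ A)
print(np.allclose(W, W_adapted))  # True: B starts at zero, so no initial change
```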
coding-alt/stable-diffusion-webui
coding-alt
2023-05-17T01:41:10Z
0
0
null
[ "arxiv:2211.06679", "region:us" ]
null
2023-05-17T01:02:19Z
# Stable Diffusion web UI A browser interface based on Gradio library for Stable Diffusion. ![](screenshot.png) ## Features [Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features): - Original txt2img and img2img modes - One click install and run script (but you still must install python and git) - Outpainting - Inpainting - Color Sketch - Prompt Matrix - Stable Diffusion Upscale - Attention, specify parts of text that the model should pay more attention to - a man in a `((tuxedo))` - will pay more attention to tuxedo - a man in a `(tuxedo:1.21)` - alternative syntax - select text and press `Ctrl+Up` or `Ctrl+Down` to automatically adjust attention to selected text (code contributed by anonymous user) - Loopback, run img2img processing multiple times - X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters - Textual Inversion - have as many embeddings as you want and use any names you like for them - use multiple embeddings with different numbers of vectors per token - works with half precision floating point numbers - train embeddings on 8GB (also reports of 6GB working) - Extras tab with: - GFPGAN, neural network that fixes faces - CodeFormer, face restoration tool as an alternative to GFPGAN - RealESRGAN, neural network upscaler - ESRGAN, neural network upscaler with a lot of third party models - SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers - LDSR, Latent diffusion super resolution upscaling - Resizing aspect ratio options - Sampling method selection - Adjust sampler eta values (noise multiplier) - More advanced noise setting options - Interrupt processing at any time - 4GB video card support (also reports of 2GB working) - Correct seeds for batches - Live prompt token length validation - Generation parameters - parameters you used to generate images are saved with that image - in PNG chunks for PNG, in 
EXIF for JPEG - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI - can be disabled in settings - drag and drop an image/text-parameters to promptbox - Read Generation Parameters Button, loads parameters in promptbox to UI - Settings page - Running arbitrary python code from UI (must run with `--allow-code` to enable) - Mouseover hints for most UI elements - Possible to change defaults/min/max/step values for UI elements via text config - Tiling support, a checkbox to create images that can be tiled like textures - Progress bar and live image generation preview - Can use a separate neural network to produce previews with almost no VRAM or compute requirement - Negative prompt, an extra text field that allows you to list what you don't want to see in generated image - Styles, a way to save part of a prompt and easily apply it via dropdown later - Variations, a way to generate the same image but with tiny differences - Seed resizing, a way to generate the same image but at a slightly different resolution - CLIP interrogator, a button that tries to guess prompt from an image - Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway - Batch Processing, process a group of files using img2img - Img2img Alternative, reverse Euler method of cross attention control - Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions - Reloading checkpoints on the fly - Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one - [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community - [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once - separate prompts using uppercase `AND` - also supports weights for prompts: `a cat 
:1.2 AND a dog AND a penguin :2.2` - No token limit for prompts (original stable diffusion lets you use up to 75 tokens) - DeepDanbooru integration, creates danbooru style tags for anime prompts - [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args) - via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI - Generate forever option - Training tab - hypernetworks and embeddings options - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime) - Clip skip - Hypernetworks - Loras (same as Hypernetworks but more pretty) - A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt - Can select to load a different VAE from settings screen - Estimated completion time in progress bar - API - Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML - via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)) - [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions - [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions - Now without any bad letters!
- Load checkpoints in safetensors format - Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64 - Now with a license! - Reorder elements in the UI from settings screen ## Installation and Running Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs. Alternatively, use online services (like Google Colab): - [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services) ### Automatic Installation on Windows 1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH". 2. Install [git](https://git-scm.com/download/win). 3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`. 4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user. ### Automatic Installation on Linux 1. Install the dependencies: ```bash # Debian-based: sudo apt install wget git python3 python3-venv # Red Hat-based: sudo dnf install wget git python3 # Arch-based: sudo pacman -S wget git python3 ``` 2. Navigate to the directory you would like the webui to be installed and execute the following command: ```bash bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh) ``` 3. Run `webui.sh`. 4. Check `webui-user.sh` for options. ### Installation on Apple Silicon Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) ## Documentation The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki). ## Credits Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file. - Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers - k-diffusion - https://github.com/crowsonkb/k-diffusion.git - GFPGAN - https://github.com/TencentARC/GFPGAN.git - CodeFormer - https://github.com/sczhou/CodeFormer - ESRGAN - https://github.com/xinntao/ESRGAN - SwinIR - https://github.com/JingyunLiang/SwinIR - Swin2SR - https://github.com/mv-lab/swin2sr - LDSR - https://github.com/Hafiidz/latent-diffusion - MiDaS - https://github.com/isl-org/MiDaS - Ideas for optimizations - https://github.com/basujindal/stable-diffusion - Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing. - Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion) - Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention) - Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas). 
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd - Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot - CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator - Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch - xformers - https://github.com/facebookresearch/xformers - DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru - Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6) - Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix - Security advice - RyotaK - UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC - Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user. - (You)
sofa566/my_awesome_food_model
sofa566
2023-05-17T01:33:38Z
189
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:food101", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-17T01:26:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - food101 metrics: - accuracy model-index: - name: my_awesome_food_model results: - task: name: Image Classification type: image-classification dataset: name: food101 type: food101 config: default split: train[:5000] args: default metrics: - name: Accuracy type: accuracy value: 0.895 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 1.5900 - Accuracy: 0.895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6922 | 0.99 | 62 | 2.4807 | 0.855 | | 1.8224 | 2.0 | 125 | 1.7453 | 0.9 | | 1.5629 | 2.98 | 186 | 1.5900 | 0.895 | ### Framework versions - Transformers 4.29.1 - Pytorch 1.12.1 - Datasets 2.11.0 - Tokenizers 0.11.0
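The hyperparameters above give an effective batch of 64 (16 per device × 4 gradient-accumulation steps), which is why one pass over the training split yields only ~62 optimizer steps in the results table. A minimal sketch of how gradient accumulation defers the optimizer step (the names and the 248-micro-batch count are illustrative, not the Trainer's internals):

```python
# Sketch of gradient accumulation: gradients from several micro-batches
# are summed before one optimizer step, so the effective batch size is
# train_batch_size * gradient_accumulation_steps = 16 * 4 = 64.
TRAIN_BATCH_SIZE = 16
GRAD_ACCUM_STEPS = 4

def run_epoch(num_micro_batches: int) -> tuple:
    """Return (optimizer_steps, effective_batch_size) for one epoch."""
    optimizer_steps = 0
    for i in range(1, num_micro_batches + 1):
        # loss.backward() would accumulate gradients here
        if i % GRAD_ACCUM_STEPS == 0:
            # optimizer.step(); optimizer.zero_grad()
            optimizer_steps += 1
    return optimizer_steps, TRAIN_BATCH_SIZE * GRAD_ACCUM_STEPS

# Assuming ~248 micro-batches per epoch, this matches the 62 steps in the table.
steps, effective_batch = run_epoch(248)
```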
Elluna/Elise-LoRA
Elluna
2023-05-17T01:19:41Z
0
1
null
[ "en", "region:us" ]
null
2023-05-17T00:31:08Z
--- language: - en --- For @xwhisperhxx thank you for the request ! Use trigger word Elise_league_of_legends I recommend keeping the weight at (1-0.95) and using the negative prompts as needed !! Lower weights tend to lose her likeness ! I would recommend lowering the weight when using this in combination with other loras ! Please use Trigger words ((naked)), (nude), nipples if you're having issues with getting good NSFW generations (specifically with her breasts not having nipples) <3 Please click this link to view the example photos, I was too dumb to figure out how to upload them here :() https://imgur.com/a/AHGNqh7
Seshouan/Simple_Classifier
Seshouan
2023-05-17T01:17:17Z
62
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-17T01:16:46Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Simple_Classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Simple_Classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1590, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
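The `PolynomialDecay` config above (power 1.0, `cycle=False`) is simply a linear ramp from 2e-05 down to 0 over 1590 steps; a minimal sketch of the formula that schedule applies (written from the config values, not copied from Keras source):

```python
def polynomial_decay(step: int,
                     initial_lr: float = 2e-05,
                     end_lr: float = 0.0,
                     decay_steps: int = 1590,
                     power: float = 1.0) -> float:
    """Learning rate after `step` updates, matching the config above."""
    step = min(step, decay_steps)          # cycle=False: clamp past the end
    remaining = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * remaining ** power + end_lr

lr_start = polynomial_decay(0)      # 2e-05 at the first step
lr_mid = polynomial_decay(795)      # 1e-05 halfway through
lr_end = polynomial_decay(1590)     # 0.0 at the end
```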
Birchlabs/wd-1-5-beta3-unofficial
Birchlabs
2023-05-17T01:15:46Z
7
5
diffusers
[ "diffusers", "safetensors", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-16T23:54:27Z
--- license: other --- # WD 1.5 Beta 3 (Diffusers-compatible) <img width="582px" height="256px" src="https://birchlabs.co.uk/share/radiance0triptych.jpg" title="Triptych of Reimu, Sanae and Flandre in 'radiance' aesthetic"> This unofficial repository hosts diffusers-compatible float16 checkpoints of WD 1.5 beta 3. Float16 is [all you need](https://twitter.com/Birchlabs/status/1599903883278663681) for inference. ## Usage (via diffusers) ```python # make sure you're logged in with `huggingface-cli login` from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler from diffusers.models.autoencoder_kl import AutoencoderKL from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput import torch from torch import Generator, compile from PIL import Image from typing import List vae: AutoencoderKL = AutoencoderKL.from_pretrained('hakurei/waifu-diffusion', subfolder='vae', torch_dtype=torch.float16) # scheduler args documented here: # https://github.com/huggingface/diffusers/blob/0392eceba8d42b24fcecc56b2cc1f4582dbefcc4/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L83 scheduler: DPMSolverMultistepScheduler = DPMSolverMultistepScheduler.from_pretrained( 'Birchlabs/wd-1-5-beta3-unofficial', subfolder='scheduler', # sde-dpmsolver++ is very new. if your diffusers version doesn't have it: use 'dpmsolver++' instead. algorithm_type='sde-dpmsolver++', solver_order=2, # solver_type='heun' may give a sharper image. Cheng Lu reckons midpoint is better. 
solver_type='midpoint', use_karras_sigmas=True, ) # variant=None # variant='ink' # variant='mofu' variant='radiance' # variant='illusion' pipe: StableDiffusionPipeline = StableDiffusionPipeline.from_pretrained( 'Birchlabs/wd-1-5-beta3-unofficial', torch_dtype=torch.float16, vae=vae, scheduler=scheduler, variant=variant, ) pipe.to('cuda') # torch.compile returns the compiled module, so assign it back pipe.unet = compile(pipe.unet, mode='reduce-overhead') # WD1.5 was trained on area=896**2 and no side longer than 1152 sqrt_area=896 # note: pipeline requires width and height to be multiples of 8 height = 1024 width = sqrt_area**2//height prompt = 'artoria pendragon (fate), reddizen, 1girl, best aesthetic, best quality, blue dress, full body, white shirt, blonde hair, looking at viewer, hair between eyes, floating hair, green eyes, blue ribbon, long sleeves, juliet sleeves, light smile, hair ribbon, outdoors, painting (medium), traditional media' negative_prompt = 'lowres, bad anatomy, bad hands, missing fingers, extra fingers, blurry, mutation, deformed face, ugly, bad proportions, monster, cropped, worst quality, jpeg, bad posture, long body, long neck, jpeg artifacts, deleted, bad aesthetic, realistic, real life, instagram' # pipeline invocation args documented here: # https://github.com/huggingface/diffusers/blob/0392eceba8d42b24fcecc56b2cc1f4582dbefcc4/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#LL544C18-L544C18 out: StableDiffusionPipelineOutput = pipe.__call__( prompt, negative_prompt=negative_prompt, height=height, width=width, num_inference_steps=22, generator=Generator().manual_seed(1234) ) images: List[Image.Image] = out.images img, *_ = images img.save('out_pipe/saber.png') ``` Should output the following image: <img height="256px" src="https://birchlabs.co.uk/share/saber-radiance.smol.jpg" title="Saber in 'radiance' aesthetic"> ## How WD1.5b3 CompVis checkpoint was converted I converted the official [CompVis-style checkpoints](https://huggingface.co/waifu-diffusion/wd-1-5-beta3) using [kohya's converter
script](https://github.com/bmaltais/kohya_ss/blob/master/tools/convert_diffusers20_original_sd.py). To convert the five aesthetics: I added [converter support](https://github.com/Birch-san/diffusers-play/commit/b8b3cd31081e18a898d888efa7e13dc2a08908be) for [checkpoint variants](https://huggingface.co/docs/diffusers/using-diffusers/loading#checkpoint-variants). I [commented-out](https://github.com/Birch-san/diffusers-play/blob/b8b3cd31081e18a898d888efa7e13dc2a08908be/src/kohya/library/model_util.py#L869-L874) vae-conversion, because WD 1.5 b3 does not distribute a VAE. Instead it re-uses WD1.4's VAE (checkpoints: [CompVis](https://huggingface.co/hakurei/waifu-diffusion-v1-4) [diffusers](https://huggingface.co/hakurei/waifu-diffusion/tree/main/vae)). I told the converter to [load WD 1.4's VAE](https://github.com/Birch-san/diffusers-play/blob/b8b3cd31081e18a898d888efa7e13dc2a08908be/src/kohya/library/model_util.py#L1065-L1066). I invoked my modified [`scripts/convert_diffusers20_original_sd.py`](https://github.com/Birch-san/diffusers-play/blob/b8b3cd31081e18a898d888efa7e13dc2a08908be/scripts/convert_diffusers20_original_sd.py) like so: ```bash python scripts/convert_diffusers20_original_sd.py \ --fp16 \ --v2 \ --unet_use_linear_projection \ --use_safetensors \ --reference_model stabilityai/stable-diffusion-2-1 \ --variant illusion \ in/wd-1-5-beta3/wd-beta3-base-fp16.safetensors \ out/wd1-5-b3 ``` Except the "base" aesthetic was a special case, where I didn't pass any `--variant <whatever>` option. ### Why is there a `vae` folder The `vae` folder contains copies of WD 1.4's VAE, to make it easier to load stable-diffusion via diffusers [pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines#readme). I saved a duplicate of the VAE for each variant. 
So you _can_ skip the `vae` arg, and load the pipeline like this: ```python pipe: StableDiffusionPipeline = StableDiffusionPipeline.from_pretrained( 'Birchlabs/wd-1-5-beta3-unofficial', torch_dtype=torch.float16, variant='radiance', ) ``` But I recommend supplying the WD1.4 `vae` explicitly, to save disk space (i.e. because you already had WD1.4, or because you intend to try multiple variants of WD1.5 and don't want to download VAE duplicates for each variant): ```python vae: AutoencoderKL = AutoencoderKL.from_pretrained('hakurei/waifu-diffusion', subfolder='vae', torch_dtype=torch.float16) pipe: StableDiffusionPipeline = StableDiffusionPipeline.from_pretrained( 'Birchlabs/wd-1-5-beta3-unofficial', torch_dtype=torch.float16, variant='radiance', vae=vae, ) ``` ## Original model card ![WD 1.5 Radiance](https://i.ibb.co/hYjgvGZ/00160-2195473148.png) For this release, we release five versions of the model: - WD 1.5 Beta3 Base - WD 1.5 Radiance - WD 1.5 Ink - WD 1.5 Mofu - WD 1.5 Illusion The WD 1.5 Base model is only intended for training use. For generation, it is recommended to create your own finetunes and loras on top of WD 1.5 Base or use one of the aesthetic models. More information and sample generations for the aesthetic models are in the release notes. ### Release Notes https://saltacc.notion.site/WD-1-5-Beta-3-Release-Notes-1e35a0ed1bb24c5b93ec79c45c217f63 # VAE WD 1.5 uses the same VAE as WD 1.4, which can be found here https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt ## License WD 1.5 is released under the Fair AI Public License 1.0-SD (https://freedevproject.org/faipl-1.0-sd/). If any derivative of this model is made, please share your changes accordingly. Special thanks to ronsor/undeleted (https://undeleted.ronsor.com/) for help with the license.
EarthnDusk/Poltergeist-Mix
EarthnDusk
2023-05-17T00:58:31Z
8
4
diffusers
[ "diffusers", "stable diffusion", "comic book", "illustration", "text-to-image", "en", "dataset:gsdf/EasyNegative", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-16T23:51:37Z
--- license: creativeml-openrail-m datasets: - gsdf/EasyNegative language: - en library_name: diffusers pipeline_tag: text-to-image tags: - stable diffusion - comic book - illustration --- # POODA-BEEP! This is censored language for POLTERBITCH aka Poltergeist. It's an in-house nod to some of our alter's truths, and it's kind of a joke for Beetlejuice fans. THIS IS AN ILLUSTRATION/COMIC MIX; there are several versions of this, and we will be hosting each version separately for demo spaces. ### MODEL DOWNLOADS: Since this is just the space for the DIFFUSERS you need the following files for A1111: CKPT: https://huggingface.co/Duskfallcrew/nuri_made_me_do_it/resolve/main/Pooda-BeepOGfp16.ckpt SafeTensors: https://huggingface.co/Duskfallcrew/nuri_made_me_do_it/resolve/main/Pooda-beepMix.safetensors ## VAE: We use KF-L-Anime, but it is entirely up to you. We can provide one if you need it. ## DOES THIS DO NSFW: IT MAY, WE ARE NOT SURE. THERE IS BERRY MIX IN IT AND OTHER MODELS - BUT IT WAS NOT INTENDED FOR THIS. WE MAY MAKE A MORE NSFW UPDATE TO THIS IN THE FUTURE IF YOU NEED IT. **Will you help us with our target market research?**: https://forms.gle/N1EQwZmZzdHMzP8H8 **Join our Reddit**: https://www.reddit.com/r/earthndusk/ **Funding for a HUGE ART PROJECT THIS YEAR**: https://www.buymeacoffee.com/duskfallxcrew / any chance you can spare a coffee or three? https://ko-fi.com/DUSKFALLcrew **If you have requests or concerns, we're still looking for beta testers: JOIN THE DISCORD AND DEMAND THINGS OF US**: https://discord.gg/Da7s8d3KJ7 **Listen to the music that we've made that goes with our art**: https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38
hamishivi/hypertask_T0_11B
hamishivi
2023-05-17T00:17:50Z
11
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "en", "dataset:bigscience/P3", "arxiv:2212.10315", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-16T17:57:28Z
--- datasets: - bigscience/P3 language: - en --- An 11B T5 model trained on the [P3](https://huggingface.co/datasets/bigscience/P3) (T0 split) dataset for 20,000 steps with a batch size of 2048, a maximum input sequence length of 1024, a maximum output sequence length of 256, and the Adafactor optimizer with a constant learning rate of 0.001. The model is trained from the [T5 v1.1 lm-adapt checkpoint](https://huggingface.co/google/t5-xxl-lm-adapt) and fully finetuned. For more details, see [HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation](https://arxiv.org/abs/2212.10315). Performance on T0 held-out tasks (average accuracy across prompts using rank classification): | Model | ANLI (avg) | HellaSwag | StoryCloze | CB | COPA | RTE | WiC | WSC | WinoGrande | Average | |--|--|--|--|--|--|--|--|--|--|--| | [T0-11B](https://huggingface.co/bigscience/T0) | 41.0 | 33.6 | 92.4 | 70.1 | 91.5 | 81.0 | 56.1 | 61.1 | 59.9 | 65.2 | | hypertask_T0_11B (this model) | 46.8 | 34.1 | 98.2 | 81.2 | 96.6 | 84.0 | 52.1 | 62.6 | 64.8 | 68.9 |
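"Rank classification" here means every candidate answer is scored by the model's likelihood and the highest-scoring option becomes the prediction; no free-form generation is involved. A minimal sketch with made-up scores (the length normalization shown is one common choice, not necessarily the exact one used in this evaluation):

```python
def rank_classify(option_logprobs: dict) -> str:
    """Pick the answer option whose tokens score highest under the model.

    option_logprobs maps each candidate answer string to the list of
    log-probabilities the model assigns its tokens.
    """
    def score(token_logprobs):
        # Length-normalized log-likelihood of the option.
        return sum(token_logprobs) / len(token_logprobs)
    return max(option_logprobs, key=lambda opt: score(option_logprobs[opt]))

# Toy example: "no" has the higher average log-prob, so it is predicted.
prediction = rank_classify({"yes": [-2.0, -3.0], "no": [-0.5]})
```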
hamishivi/hypertask_T0_3B
hamishivi
2023-05-17T00:17:20Z
4
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "en", "dataset:bigscience/P3", "arxiv:2212.10315", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-16T23:09:23Z
--- datasets: - bigscience/P3 language: - en --- A 3B T5 model trained on the [P3](https://huggingface.co/datasets/bigscience/P3) (T0 split) dataset for 20,000 steps with a batch size of 2048, a maximum input sequence length of 1024, a maximum output sequence length of 256, and the Adafactor optimizer with a constant learning rate of 0.001. The model is trained from the [T5 v1.1 lm-adapt checkpoint](https://huggingface.co/google/t5-xl-lm-adapt) and fully finetuned. For more details, see [HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation](https://arxiv.org/abs/2212.10315). Performance on T0 held-out tasks (average accuracy across prompts using rank classification): | Model | ANLI (avg) | HellaSwag | StoryCloze | CB | COPA | RTE | WiC | WSC | WinoGrande | Average | |--|--|--|--|--|--|--|--|--|--|--| | [T0-3B](https://huggingface.co/bigscience/T0_3B) | 33.4 | 27.2 | 84.0 | 45.4 | 75.9 | 64.6 | 50.7 | 65.1 | 51.0 | 55.2 | | hypertask_T0_3B (this model) | 41.7 | 30.1 | 96.9 | 72.7 | 89.1 | 81.2 | 51.7 | 57.2 | 59.2 | 64.4 |
kreepy/rl_course_vizdoom_health_gathering_supreme
kreepy
2023-05-17T00:15:23Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T16:05:41Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.93 +/- 5.75 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r kreepy/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .home.kevin..local.lib.python3.10.site-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .home.kevin..local.lib.python3.10.site-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
ZavGeorge/SD_1.4_simpson_tune_lora
ZavGeorge
2023-05-17T00:06:04Z
2
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-16T23:26:18Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - ZavGeorge/SD_1.4_simpson_tune_lora These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the Norod78/simpsons-blip-captions dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_4](./image_4.png) ![img_5](./image_5.png) ![img_8](./image_8.png) ![img_9](./image_9.png)
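LoRA (Low-Rank Adaptation) keeps the base weights frozen and learns a low-rank update, so the effective weight at inference is W + (alpha/r)·B·A, where A and B are the small trained matrices. A tiny numeric sketch of that composition (the shapes and the alpha/rank values are illustrative, not read from this checkpoint):

```python
def matmul(X, Y):
    """Plain-Python matrix multiply, just for the sketch."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def apply_lora(W, A, B, alpha: float, rank: int):
    """Effective weight W' = W + (alpha / rank) * B @ A."""
    scale = alpha / rank
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 base weight, rank-1 update (B: 2x1, A: 1x2), alpha = 1.
W_eff = apply_lora(W=[[1.0, 0.0], [0.0, 1.0]],
                   A=[[1.0, 2.0]],
                   B=[[1.0], [3.0]],
                   alpha=1.0, rank=1)
```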
irow/poca-SoccerTwos
irow
2023-05-17T00:03:05Z
31
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-05-17T00:02:59Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: irow/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Kardbord/Protogen_x3.4_Official_Release-unsafe
Kardbord
2023-05-16T23:48:54Z
29
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "art", "artistic", "protogen", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-16T21:06:40Z
--- language: - en license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - art - artistic - diffusers - protogen inference: true --- # Overview This is simply darkstorm2150/Protogen_x3.4_Official_Release with the safety checker disabled. **DO NOT** attempt to use this model to generate harmful or illegal content. <center><img src="https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/Protogen_x3.4-512.png" style="height:690px; border-radius: 8%; border: 10px solid #663380; padding-top:0px;" span title="Protogen x3.4 Raw Output"></center> <center><h1>Protogen x3.4 (Photorealism) Official Release</h1></center> <center><p><em>Research Model by <a href="https://instagram.com/officialvictorespinoza">darkstorm2150</a></em></p></center> </div> ## Table of contents * [General info](#general-info) * [Granular Adaptive Learning](#granular-adaptive-learning) * [Trigger Words](#trigger-words) * [Setup](#setup) * [Space](#space) * [CompVis](#compvis) * [Diffusers](#🧨-diffusers) * [Checkpoint Merging Data Reference](#checkpoint-merging-data-reference) * [License](#license) ## General info Protogen x3.4 Protogen was warm-started with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) and fine-tuned on various high quality image datasets. Version 3.4 continued training from [ProtoGen v2.2](https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release) with added photorealism. ## Granular Adaptive Learning Granular adaptive learning is a machine learning technique that focuses on adjusting the learning process at a fine-grained level, rather than making global adjustments to the model. This approach allows the model to adapt to specific patterns or features in the data, rather than making assumptions based on general trends. 
Granular adaptive learning can be achieved through techniques such as active learning, which allows the model to select the data it wants to learn from, or through the use of reinforcement learning, where the model receives feedback on its performance and adapts based on that feedback. It can also be achieved through techniques such as online learning, where the model adjusts itself as it receives more data. Granular adaptive learning is often used in situations where the data is highly diverse or non-stationary and where the model needs to adapt quickly to changing patterns. This is often the case in dynamic environments such as robotics, financial markets, and natural language processing. ## Trigger Words modelshoot style, analog style, mdjrny-v4 style, nousr robot Trigger words are available for the hassan1.4 and f222; you might have to google them :) ## Setup To run this model, download the model.ckpt or model.safetensors file and install it in your "stable-diffusion-webui\models\Stable-diffusion" directory ## Space We support a [Gradio](https://github.com/gradio-app/gradio) Web UI: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/darkstorm2150/Stable-Diffusion-Protogen-webui) ### CompVis ## CKPT [Download ProtoGen x3.4.ckpt (5.98GB)](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4.ckpt) [Download ProtoGen X3.4-pruned-fp16.ckpt (1.89 GB)](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4-pruned-fp16.ckpt) ## Safetensors [Download ProtoGen x3.4.safetensors (5.98GB)](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4-pruned-fp16.safetensors
(1.89GB)](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4-pruned-fp16.safetensors) ### 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). ```python from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler import torch prompt = ( "modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, " "english medieval witch, black silk vale, pale skin, black silk robe, black cat, necromancy magic, medieval era, " "photorealistic painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, " "trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski" ) model_id = "darkstorm2150/Protogen_x3.4_Official_Release" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") image = pipe(prompt, num_inference_steps=25).images[0] image.save("./result.jpg") ``` ![img](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/protogen/rswf5qk9be9a1.jpg) ## PENDING DATA FOR MERGE, RPGv2 not accounted.. 
## Checkpoint Merging Data Reference <style> .myTable { border-collapse:collapse; } .myTable th { background-color:#663380; color:white; } .myTable td, .myTable th { padding:5px; border:1px solid #663380; } </style> <table class="myTable"> <tr> <th>Models</th> <th>Protogen v2.2 (Anime)</th> <th>Protogen x3.4 (Photo)</th> <th>Protogen x5.3 (Photo)</th> <th>Protogen x5.8 (Sci-fi/Anime)</th> <th>Protogen x5.9 (Dragon)</th> <th>Protogen x7.4 (Eclipse)</th> <th>Protogen x8.0 (Nova)</th> <th>Protogen x8.6 (Infinity)</th> </tr> <tr> <td>seek_art_mega v1</td> <td>52.50%</td> <td>42.76%</td> <td>42.63%</td> <td></td> <td></td> <td></td> <td>25.21%</td> <td>14.83%</td> </tr> <tr> <td>modelshoot v1</td> <td>30.00%</td> <td>24.44%</td> <td>24.37%</td> <td>2.56%</td> <td>2.05%</td> <td>3.48%</td> <td>22.91%</td> <td>13.48%</td> </tr> <tr> <td>elldreth v1</td> <td>12.64%</td> <td>10.30%</td> <td>10.23%</td> <td></td> <td></td> <td></td> <td>6.06%</td> <td>3.57%</td> </tr> <tr> <td>photoreal v2</td> <td></td> <td></td> <td>10.00%</td> <td>48.64%</td> <td>38.91%</td> <td>66.33%</td> <td>20.49%</td> <td>12.06%</td> </tr> <tr> <td>analogdiffusion v1</td> <td></td> <td>4.75%</td> <td>4.50%</td> <td></td> <td></td> <td></td> <td>1.75%</td> <td>1.03%</td> </tr> <tr> <td>openjourney v2</td> <td></td> <td>4.51%</td> <td>4.28%</td> <td></td> <td></td> <td>4.75%</td> <td>2.26%</td> <td>1.33%</td> </tr> <tr> <td>hassan1.4</td> <td>2.63%</td> <td>2.14%</td> <td>2.13%</td> <td></td> <td></td> <td></td> <td>1.26%</td> <td>0.74%</td> </tr> <tr> <td>f222</td> <td>2.23%</td> <td>1.82%</td> <td>1.81%</td> <td></td> <td></td> <td></td> <td>1.07%</td> <td>0.63%</td> </tr> <tr> <td>hasdx</td> <td></td> <td></td> <td></td> <td>20.00%</td> <td>16.00%</td> <td>4.07%</td> <td>5.01%</td> <td>2.95%</td> </tr> <tr> <td>moistmix</td> <td></td> <td></td> <td></td> <td>16.00%</td> <td>12.80%</td> <td>3.86%</td> <td>4.08%</td> <td>2.40%</td> </tr> <tr> <td>roboDiffusion v1</td> <td></td> <td>4.29%</td> 
<td></td> <td>12.80%</td> <td>10.24%</td> <td>3.67%</td> <td>4.41%</td> <td>2.60%</td> </tr> <tr> <td>RPG v3</td> <td></td> <td>5.00%</td> <td></td> <td></td> <td>20.00%</td> <td>4.29%</td> <td>4.29%</td> <td>2.52%</td> </tr> <tr> <td>anything&everything</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.51%</td> <td>0.56%</td> <td>0.33%</td> </tr> <tr> <td>dreamlikediff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>5.0%</td> <td>0.63%</td> <td>0.37%</td> </tr> <tr> <td>sci-fidiff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.10%</td> </tr> <tr> <td>synthwavepunk v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.26%</td> </tr> <tr> <td>mashupv2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>11.51%</td> </tr> <tr> <td>dreamshaper 252</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.04%</td> </tr> <tr> <td>comicdiff v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.25%</td> </tr> <tr> <td>artEros</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>15.00%</td> </tr> </table> ## License By downloading you agree to the terms of these licenses <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">CreativeML Open RAIL-M</a> <a href="https://huggingface.co/coreco/seek.art_MEGA/blob/main/LICENSE.txt">Seek Art Mega License</a>
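The percentages in each column of the merging table above are the fractions each source model contributes to a weighted checkpoint merge. As a toy illustration of the idea (tensors reduced to single floats; this is not the actual merging script), note that each column's listed fractions total 100%:

```python
def merge(contributions):
    """Weighted sum of per-model parameter values; fractions should total 1.0."""
    return sum(fraction * value for fraction, value in contributions)

# Protogen v2.2 column: 52.50% + 30.00% + 12.64% + 2.63% + 2.23% = 100.00%
fractions = [0.5250, 0.3000, 0.1264, 0.0263, 0.0223]
total = sum(fractions)  # ~1.0, i.e. the column is a complete merge recipe

# merging two equal-weight "parameters" averages them
merged = merge([(0.5, 2.0), (0.5, 4.0)])  # 3.0
```

In the real merge, the same fractions are applied elementwise across entire model state dicts rather than single floats.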
yuuyassssui/anatasya
yuuyassssui
2023-05-16T23:28:06Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-16T23:24:04Z
--- license: creativeml-openrail-m ---
lsimon/bert-finetuned-squad
lsimon
2023-05-16T23:24:24Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-16T22:53:17Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
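For intuition, the `linear` scheduler named in the hyperparameters above decays the learning rate from its base value (2e-05 here) to zero over training. A simplified sketch of that idea — not the exact `transformers` implementation, and the step counts are illustrative:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-5) -> float:
    # linear decay from base_lr at step 0 to 0 at total_steps (no warmup configured above)
    return base_lr * max(0.0, 1.0 - step / total_steps)

lr_start = linear_lr(0, 1000)    # full base rate at the start
lr_mid = linear_lr(500, 1000)    # halfway through: half the base rate
lr_end = linear_lr(1000, 1000)   # decayed to zero at the end
```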
rohitp1/dgx1_whisper_base_libri360_noisy_teacher_distil_epochs_50_batch_8
rohitp1
2023-05-16T22:54:23Z
75
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-03T08:15:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: dgx1_whisper_base_libri360_noisy_teacher_distil_epochs_50_batch_8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dgx1_whisper_base_libri360_noisy_teacher_distil_epochs_50_batch_8 This model is a fine-tuned version of [rohitp1/subhadeep_whisper_base_finetune_teacher_babble_noise_libri_360_hours_100_epochs_batch_8](https://huggingface.co/rohitp1/subhadeep_whisper_base_finetune_teacher_babble_noise_libri_360_hours_100_epochs_batch_8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5761 - Wer: 10.6733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 256 - total_train_batch_size: 2048 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0423 | 1.48 | 150 | 0.1620 | 10.8902 | | 0.0999 | 2.96 | 300 | 0.2030 | 10.7882 | | 0.1577 | 4.45 | 450 | 0.2511 | 10.7937 | | 0.2078 | 5.94 | 600 | 0.2966 | 10.7827 | | 0.252 | 7.42 | 750 | 0.3321 | 10.7524 | | 0.2841 | 8.91 | 900 | 0.3625 | 10.7588 | | 0.3189 | 10.39 | 1050 | 0.3858 | 10.7772 | | 0.341 | 11.88 | 1200 | 0.4090 | 10.7505 | | 0.5277 | 13.36 | 1350 | 0.5461 | 11.1926 | | 0.8342 | 14.85 | 1500 | 0.5250 | 10.8415 | | 0.8278 | 16.33 | 1650 | 0.5543 | 
10.7478 | | 0.8255 | 17.82 | 1800 | 0.5481 | 10.6761 | | 0.822 | 19.31 | 1950 | 0.5504 | 10.6650 | | 0.8204 | 20.79 | 2100 | 0.5556 | 10.6650 | | 0.8246 | 22.28 | 2250 | 0.5598 | 10.6586 | | 0.8228 | 23.76 | 2400 | 0.5634 | 10.6770 | | 0.8282 | 25.25 | 2550 | 0.5670 | 10.6706 | | 0.8264 | 26.73 | 2700 | 0.5702 | 10.6752 | | 0.8298 | 28.22 | 2850 | 0.5731 | 10.6908 | | 0.8273 | 29.7 | 3000 | 0.5761 | 10.6733 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.8.0 - Tokenizers 0.13.2
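The Wer values in the table above are word error rates reported as percentages; conceptually, WER is the word-level Levenshtein distance between reference and hypothesis divided by the number of reference words. A minimal, generic implementation (not this repo's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, one substituted word out of three gives `wer("the cat sat", "the bat sat") == 1/3`, i.e. 33.3%.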
richardllz/Qtable_taxi-v1
richardllz
2023-05-16T22:52:53Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T22:52:51Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Qtable_taxi-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="richardllz/Qtable_taxi-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
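The Q-table loaded above is just a state-by-action array of values. The tabular Q-learning update that produces such a table — a generic sketch, not this repo's exact training code, with illustrative `alpha`/`gamma` values — looks like:

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q(s, a) toward the TD target."""
    td_target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (td_target - q[state][action])

# toy 2-state, 2-action table
q = [[0.0, 0.0], [0.0, 0.0]]
q_update(q, state=0, action=1, reward=1.0, next_state=1)
# Q(0, 1) moves from 0 toward the reward: 0 + 0.1 * (1.0 + 0.99 * 0 - 0) = 0.1
```

At play time the agent simply acts greedily, picking `argmax` over `q[state]`.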
futuredatascience/welcome_video_model
futuredatascience
2023-05-16T22:51:17Z
104
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:futuredatascience/autotrain-data-welcome_message_2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-16T22:49:40Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - futuredatascience/autotrain-data-welcome_message_2 co2_eq_emissions: emissions: 0.5524527127969758 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 59180133582 - CO2 Emissions (in grams): 0.5525 ## Validation Metrics - Loss: 0.347 - Accuracy: 0.865 - Precision: 0.852 - Recall: 0.958 - AUC: 0.814 - F1: 0.902 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/futuredatascience/autotrain-welcome_message_2-59180133582 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("futuredatascience/autotrain-welcome_message_2-59180133582", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("futuredatascience/autotrain-welcome_message_2-59180133582", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
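The F1 reported above is the harmonic mean of the reported precision and recall, which you can verify directly from the validation metrics:

```python
def f1_score(precision: float, recall: float) -> float:
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# using the validation metrics reported above
f1 = round(f1_score(0.852, 0.958), 3)  # 0.902, matching the reported F1
```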
PhilSad/ppo-SnowballTarget
PhilSad
2023-05-16T22:41:30Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-05-16T22:41:25Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Find your model_id: PhilSad/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
nolanaatama/models
nolanaatama
2023-05-16T22:33:27Z
0
5
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-04-20T04:27:14Z
--- license: creativeml-openrail-m ---
Eldund/q-Taxi-v3
Eldund
2023-05-16T22:32:32Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T22:32:30Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Eldund/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
JdThe65th/GPT2-Glitchfur-Zenith-JD
JdThe65th
2023-05-16T22:23:46Z
197
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-06-22T23:45:52Z
--- language: en thumbnail: http://www.huggingtweets.com/glitchfur-jdthe65th-zenitho_o/1655941045991/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1536036266818555907/0Mq-Q1NY_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1516278022135029761/snP1qGDO_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1516504502442016773/iEfei2hf_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Glitch 💻😺 & The 65th JD & zenith</div> <div style="text-align: center; font-size: 14px;">@glitchfur-jdthe65th-zenitho_o</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Glitch 💻😺 & The 65th JD & zenith. 
| Data | Glitch 💻😺 | The 65th JD | zenith | | --- | --- | --- | --- | | Tweets downloaded | 3206 | 3231 | 3245 | | Retweets | 663 | 328 | 205 | | Short tweets | 551 | 645 | 717 | | Tweets kept | 1992 | 2258 | 2323 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pavbr60/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glitchfur-jdthe65th-zenitho_o's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pg3exi1g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pg3exi1g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='JdThe65th/GPT2-Glitchfur-Zenith-JD') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
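The "Tweets kept" row in the table above is simply the downloaded count minus the filtered retweets and short tweets — an assumption the reported numbers are consistent with:

```python
counts = {
    "Glitch":      {"downloaded": 3206, "retweets": 663, "short": 551},
    "The 65th JD": {"downloaded": 3231, "retweets": 328, "short": 645},
    "zenith":      {"downloaded": 3245, "retweets": 205, "short": 717},
}

# tweets kept for training = downloaded - retweets - short tweets
kept = {name: c["downloaded"] - c["retweets"] - c["short"] for name, c in counts.items()}
# {'Glitch': 1992, 'The 65th JD': 2258, 'zenith': 2323}, matching the table
```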
nolanaatama/pvsqttngcwgrl
nolanaatama
2023-05-16T22:17:34Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-16T22:15:38Z
--- license: creativeml-openrail-m ---
Eldund/q-FrozenLake-v1-4x4-noSlippery
Eldund
2023-05-16T22:15:56Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T22:15:54Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Eldund/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
radames/stable-diffusion-x4-upscaler-img2img
radames
2023-05-16T22:01:10Z
43
18
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "image-to-image", "arxiv:2112.10752", "arxiv:2202.00512", "arxiv:1910.09700", "license:openrail++", "diffusers:StableDiffusionUpscalePipeline", "region:us" ]
image-to-image
2023-05-16T19:43:42Z
--- license: openrail++ tags: - stable-diffusion - image-to-image duplicated_from: stabilityai/stable-diffusion-x4-upscaler pipeline_tag: image-to-image --- # Stable Diffusion x4 upscaler model card This model card focuses on the model associated with the Stable Diffusion Upscaler, available [here](https://github.com/Stability-AI/stablediffusion). This model is trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). ![Image](https://github.com/Stability-AI/stablediffusion/raw/main/assets/stable-samples/upscaling/merged-dog.png) - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `x4-upscaler-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.ckpt). - Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). 
- **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. ```bash pip install diffusers transformers accelerate scipy safetensors ``` ```python import requests from PIL import Image from io import BytesIO from diffusers import StableDiffusionUpscalePipeline import torch # load model and scheduler model_id = "stabilityai/stable-diffusion-x4-upscaler" pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipeline = pipeline.to("cuda") # let's download an image url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" response = requests.get(url) low_res_img = Image.open(BytesIO(response.content)).convert("RGB") low_res_img = low_res_img.resize((128, 128)) prompt = "a white cat" upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] upscaled_image.save("upsampled_cat.png") ``` **Notes**: - Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance) - If you have low GPU RAM available, make sure to add `pipeline.enable_attention_slicing()` after sending the pipeline to `cuda` for less VRAM usage (at the cost of speed) # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. 
## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints: - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. - `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. 
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama). - `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints: ![pareto](model-variants.jpg) Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. 
- **Hardware Type:** A100 PCIe 40GB - **Hours used:** 200000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq. ## Citation @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
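The autoencoder mapping stated under Training Procedure above — images of shape H x W x 3 mapped to latents of shape H/f x W/f x 4, with downsampling factor f = 8 — can be sanity-checked with a small helper (illustrative only):

```python
def latent_shape(height: int, width: int, f: int = 8, channels: int = 4):
    # H x W x 3 image -> H/f x W/f x 4 latent, per the autoencoder described above
    return (height // f, width // f, channels)

shape_512 = latent_shape(512, 512)  # (64, 64, 4) for the 512x512 checkpoints
shape_768 = latent_shape(768, 768)  # (96, 96, 4) for the 768-v checkpoint
```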
Lendalf/dqn-SpaceInvadersNoFrameskip-v4-7
Lendalf
2023-05-16T21:28:18Z
3
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T21:28:00Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 545.00 +/- 178.82 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Lendalf -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Lendalf -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Lendalf ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), 
('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0003), ('learning_starts', 50000), ('n_timesteps', 2000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
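As a rough sketch (not part of the original card): with `exploration_fraction` 0.1 and `n_timesteps` 2,000,000, SB3's DQN anneals epsilon linearly from 1.0 down to `exploration_final_eps` (0.01) over the first 200,000 steps, then holds it constant. A minimal reimplementation of that schedule, assuming SB3's standard linear interpolation:

```python
def epsilon_at(step, n_timesteps=2_000_000, fraction=0.1,
               initial_eps=1.0, final_eps=0.01):
    """Linear epsilon-greedy schedule implied by the hyperparameters above."""
    end = fraction * n_timesteps  # exploration ends after 200k steps here
    if step >= end:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * (step / end)

print(epsilon_at(0))          # 1.0
print(epsilon_at(200_000))    # 0.01 from here on
```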
Gaivoronsky/Taxi
Gaivoronsky
2023-05-16T21:24:13Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T21:24:10Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Gaivoronsky/Taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
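The pickled file above holds a tabular Q-function; once loaded, acting is just an argmax over the row for the current state. A minimal sketch (using a tiny hypothetical Q-table in place of the real 500×6 Taxi-v3 table):

```python
import numpy as np

# Hypothetical 3-state, 2-action Q-table standing in for the
# real Taxi-v3 table stored in q-learning.pkl.
qtable = np.array([
    [0.0, 1.0],
    [2.0, 0.5],
    [-1.0, -2.0],
])

def greedy_action(qtable, state):
    """Act greedily: pick the action with the highest Q-value for this state."""
    return int(np.argmax(qtable[state]))

actions = [greedy_action(qtable, s) for s in range(len(qtable))]
print(actions)  # [1, 0, 0]
```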
lint/anime_control
lint
2023-05-16T21:07:20Z
0
0
diffusers
[ "diffusers", "safetensors", "license:openrail", "region:us" ]
null
2023-04-28T21:00:25Z
--- license: openrail --- ## Source The controlnets `canny_control` and `lineart_anime_control` were converted to `diffusers` format directly from the v1.1 originals in https://huggingface.co/lllyasviel ## [Try Style Controlnet with A1111 WebUI](https://github.com/1lint/style_controlnet) Use the anime styling controlnet with the A1111 Stable Diffusion WebUI by downloading the weights from the A1111_webui_weights folder inside this repository. These weights can be used directly with the existing [A1111 Webui Controlnet Extension](https://github.com/Mikubill/sd-webui-controlnet); see this Reddit post for [instructions](https://www.reddit.com/r/StableDiffusion/comments/119o71b/a1111_controlnet_extension_explained_like_youre_5/) on using the controlnet extension. For each anime controlnet there is a standard variant and a no-hint variant. ### Pass a black square as the controlnet conditioning image if you only want to add anime style guidance to image generation, or pass an anime image with canny preprocessing if you want to add both anime style and canny guidance to the image. See the `assets` folder for example hints.
_________________________________________________ ### Generated using `anime_control_dreamshaper.safetensors` controlnet with canny hint ![](./assets/hint_grid.png) _________________________________________________ ### Generated using `anime_control_dreamshaper.safetensors` controlnet with black square (numpy array of zeros) as hint ![](./assets/zerohint_grid.png) _________________________________________________ ### Generated using `anime_styler_dreamshaper.safetensors` controlnet with no controlnet conditioning hint ![](./assets/nohint_grid.png) _________________________________________________ ### Grid from left to right: Controlnet weight 0.0 (base model output), Controlnet weight 0.5, Controlnet weight 1.0, Controlnet hint (white means no controlnet hint passed) Generation settings for examples: Prompt: "1girl, blue eyes", Seed: 2048, all other settings are A1111 Webui defaults Base model used for examples: [Dreamshaper](https://civitai.com/models/4384/dreamshaper) _________________________________________________ ## Details These controlnets were initialized from a distinct UNet (`andite/anything-v4.5`), and predominantly trained without any controlnet conditioning image on a synthetically generated anime image dataset from the base model. (see `lint/anybooru` for a subset example of the training set). Then the main controlnet weights were frozen, the input hint block weights added back in and trained on the same dataset using canny image processing to generate the controlnet conditioning image.
Ziyu23/Taxi-v3
Ziyu23
2023-05-16T21:02:39Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T21:02:34Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Ziyu23/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Kardbord/Counterfeit-V2.5-unsafe
Kardbord
2023-05-16T21:01:45Z
29
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-16T16:56:55Z
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Overview This is simply gsdf/Counterfeit-V2.5 with the safety checker disabled. **DO NOT** attempt to use this model to generate harmful or illegal content. # Update V2.5 has been updated for ease of use as anime-style model. I use this embedding for negative prompts. https://huggingface.co/datasets/gsdf/EasyNegative Share by-products V2.1…Feeling of use similar to V2.0 V2.2…NSFW model # Counterfeit-V2.5 e.g. ![sample1](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample01.png) ``` ((masterpiece,best quality)),1girl, solo, animal ears, rabbit, barefoot, knees up, dress, sitting, rabbit ears, short sleeves, looking at viewer, grass, short hair, smile, white hair, puffy sleeves, outdoors, puffy short sleeves, bangs, on ground, full body, animal, white dress, sunlight, brown eyes, dappled sunlight, day, depth of field Negative prompt: EasyNegative, extra fingers,fewer fingers, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 448x768, Denoising strength: 0.6, Hires upscale: 1.8, Hires upscaler: Latent ``` ![sample2](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample02.png) ``` ((masterpiece,best quality)),1girl, from below, solo, school uniform, serafuku, sky, cloud, black hair, skirt, sailor collar, looking at viewer, short hair, building, bangs, neckerchief, long sleeves, cloudy sky, power lines, shirt, cityscape, pleated skirt, scenery, blunt bangs, city, night, black sailor collar, closed mouth, black skirt, medium hair, school bag , holding bag Negative prompt: EasyNegative, extra fingers,fewer fingers, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 832x512, Denoising strength: 0.6, Hires upscale: 1.8, Hires upscaler: Latent ``` ![sample3](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample03.png) ``` ((masterpiece,best 
quality)),2girls, black kimono, black legwear, black ribbon, black hair, cherry blossoms, day, flower, hair bun, hair ribbon, japanese clothes, kimono, long hair, looking at viewer, looking back, multiple girls, obi, outdoors, red eyes, red hair, ribbon, sandals, single hair bun, stairs, standing, statue, torii, tree, white kimono, yellow eyes Negative prompt: EasyNegative, extra fingers,fewer fingers, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 640x960, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent ``` ![sample4](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample04.png) ``` ((masterpiece,best quality)),1girl, bangs, blue eyes, blurry background, branch, brown hair, dappled sunlight, flower, from side, hair flower, hair ornament, japanese clothes, kimono, leaf, (maple leaf:1.9), obi, outdoors, sash, solo, sunlight, upper body Negative prompt: EasyNegative, extra fingers,fewer fingers, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent ``` ![sample5](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample05.png) ``` ((masterpiece,best quality))1girl, solo, black skirt, blue eyes, electric guitar, guitar, headphones, holding, holding plectrum, instrument, long hair, , music, one side up, pink hair, playing guiter, pleated skirt, black shirt, indoors Negative prompt: EasyNegative, extra fingers,fewer fingers, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent ``` ![sample6](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample06.png) ``` ((masterpiece,best quality)), 1girl, food, fruit, solo, skirt, shop, indoors, jacket, shopping, basket, jewelry, shirt, shelf, short hair, black hair, plaid skirt, black jacket, dutch angle, yellow eyes, looking at viewer Negative prompt: EasyNegative, extra 
fingers,fewer fingers, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent ```
wa976/ast_10-finetuned-ICBHI
wa976
2023-05-16T20:56:49Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "audio-spectrogram-transformer", "audio-classification", "generated_from_trainer", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
audio-classification
2023-05-16T18:07:39Z
--- license: bsd-3-clause tags: - generated_from_trainer metrics: - accuracy model-index: - name: ast_10-finetuned-ICBHI results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ast_10-finetuned-ICBHI This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2536 - Accuracy: 0.6514 - Sensitivity: 0.4559 - Specificity: 0.8264 - Score: 0.6411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Sensitivity | Specificity | Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:-----------:|:------:| | 0.9062 | 1.0 | 258 | 1.0048 | 0.5906 | 0.5257 | 0.6486 | 0.5872 | | 0.8257 | 2.0 | 517 | 0.8676 | 0.6435 | 0.3645 | 0.8929 | 0.6287 | | 0.6324 | 3.0 | 776 | 0.9607 | 0.6409 | 0.4912 | 0.7749 | 0.6330 | | 0.1494 | 4.0 | 1035 | 1.2536 | 0.6514 | 0.4559 | 0.8264 | 0.6411 | | 0.0472 | 4.99 | 1290 | 1.5260 | 0.6486 | 0.5035 | 0.7783 | 0.6409 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
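The "Score" column above is consistent with the ICBHI challenge convention, where the score is (assuming that convention here) the arithmetic mean of sensitivity and specificity. A quick sanity check against the reported numbers:

```python
def icbhi_score(sensitivity, specificity):
    """ICBHI-style score: arithmetic mean of sensitivity and specificity."""
    return (sensitivity + specificity) / 2

# Reported eval metrics: sensitivity 0.4559, specificity 0.8264, score 0.6411
score = icbhi_score(0.4559, 0.8264)
```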
Ziyu23/q-FrozenLake-v1-4x4-noSlippery
Ziyu23
2023-05-16T20:52:54Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T20:52:50Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Ziyu23/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Bainbridge/gpt2-kl_01_04-hs_cn-loto_muslims
Bainbridge
2023-05-16T20:48:04Z
11
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T20:19:04Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-kl_01_04-hs_cn-loto_muslim results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-kl_01_04-hs_cn-loto_muslim This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5380 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 21 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 73.5693 | 0.03 | 10 | 65.1086 | | 31.2617 | 0.06 | 20 | 18.3949 | | 12.0113 | 0.08 | 30 | 7.2956 | | 3.702 | 0.11 | 40 | 2.9472 | | 1.8413 | 0.14 | 50 | 1.2727 | | 1.3358 | 0.17 | 60 | 0.9255 | | 0.8787 | 0.2 | 70 | 0.7903 | | 0.7065 | 0.23 | 80 | 0.7346 | | 0.6537 | 0.25 | 90 | 0.6680 | | 0.8109 | 0.28 | 100 | 0.6131 | | 0.6697 | 0.31 | 110 | 0.5983 | | 0.6555 | 0.34 | 120 | 0.5935 | | 0.6505 | 0.37 | 130 | 0.5838 | | 0.684 | 0.4 | 140 | 0.5768 | | 0.6723 | 0.42 | 150 | 0.5736 | | 0.687 | 0.45 | 160 | 0.5709 | | 0.6504 | 0.48 | 170 | 0.5710 | | 0.711 | 0.51 | 180 | 0.5685 | | 0.7001 | 0.54 | 190 | 0.5695 | | 0.5758 | 0.57 | 200 | 0.5651 | | 0.6491 | 0.59 | 210 | 0.5652 | | 0.6248 | 0.62 | 220 | 0.5617 | | 0.579 | 0.65 | 230 | 0.5515 | | 0.5784 | 0.68 | 240 | 0.5500 | | 0.5178 | 0.71 | 250 | 0.5550 | | 0.6129 | 0.74 | 260 | 0.5530 | | 0.5729 | 0.76 | 270 | 
0.5467 | | 0.5687 | 0.79 | 280 | 0.5429 | | 0.6217 | 0.82 | 290 | 0.5413 | | 0.5902 | 0.85 | 300 | 0.5402 | | 0.6314 | 0.88 | 310 | 0.5362 | | 0.5481 | 0.91 | 320 | 0.5354 | | 0.6007 | 0.93 | 330 | 0.5333 | | 0.5496 | 0.96 | 340 | 0.5326 | | 0.6287 | 0.99 | 350 | 0.5329 | | 0.5383 | 1.02 | 360 | 0.5366 | | 0.5227 | 1.05 | 370 | 0.5380 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.11.0+cu113 - Datasets 2.11.0 - Tokenizers 0.13.3
2rtl3/mn-xlm-roberta-base-named-entity
2rtl3
2023-05-16T20:43:17Z
534
2
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "mn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-16T18:48:38Z
--- language: - mn license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: mn-xlm-roberta-base-named-entity results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mn-xlm-roberta-base-named-entity This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1224 - Precision: 0.9275 - Recall: 0.9364 - F1: 0.9319 - Accuracy: 0.9783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2015 | 1.0 | 477 | 0.0915 | 0.8830 | 0.9076 | 0.8951 | 0.9724 | | 0.0837 | 2.0 | 954 | 0.0872 | 0.9089 | 0.9202 | 0.9145 | 0.9757 | | 0.0605 | 3.0 | 1431 | 0.0814 | 0.9134 | 0.9275 | 0.9204 | 0.9768 | | 0.0447 | 4.0 | 1908 | 0.0906 | 0.9219 | 0.9316 | 0.9267 | 0.9774 | | 0.0317 | 5.0 | 2385 | 0.0969 | 0.9229 | 0.9330 | 0.9280 | 0.9782 | | 0.0254 | 6.0 | 2862 | 0.1121 | 0.9216 | 0.9343 | 0.9279 | 0.9777 | | 0.0195 | 7.0 | 3339 | 0.1143 | 0.9298 | 0.9364 | 0.9331 | 0.9790 | | 0.0145 | 8.0 | 3816 | 0.1175 | 0.9229 | 0.9337 | 0.9283 | 0.9773 | | 0.0114 | 9.0 | 4293 | 0.1205 | 0.9233 | 0.9332 | 0.9282 | 0.9774 | | 0.0091 | 10.0 | 4770 | 0.1224 | 0.9275 | 0.9364 | 0.9319 | 0.9783 | 
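The F1 column above is the harmonic mean of precision and recall, which can be verified directly against the reported eval metrics:

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported eval metrics: precision 0.9275, recall 0.9364, F1 0.9319
value = f1(0.9275, 0.9364)
```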
### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
radames/stable-diffusion-2-depth-img2img
radames
2023-05-16T20:29:18Z
45
8
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "image-to-image", "arxiv:2112.10752", "arxiv:2202.00512", "arxiv:1910.09700", "license:openrail++", "diffusers:StableDiffusionDepth2ImgPipeline", "region:us" ]
image-to-image
2023-05-16T20:19:33Z
--- license: openrail++ tags: - stable-diffusion - image-to-image duplicated_from: stabilityai/stable-diffusion-2-depth pipeline_tag: image-to-image --- # Stable Diffusion v2 Model Card This model card focuses on the model associated with the Stable Diffusion v2 model, available [here](https://github.com/Stability-AI/stablediffusion). This `stable-diffusion-2-depth` model is resumed from [stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) (`512-base-ema.ckpt`) and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. ![image](https://huggingface.co/stabilityai/stable-diffusion-2-depth/resolve/main/depth2image.png) - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-depth-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-depth/resolve/main/512-depth-ema.ckpt). - Use it with 🧨 [`diffusers`](#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). 
- **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. ```bash pip install -U git+https://github.com/huggingface/transformers.git pip install diffusers transformers accelerate scipy safetensors ``` Running the pipeline: ```python import torch import requests from PIL import Image from diffusers import StableDiffusionDepth2ImgPipeline pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16, ).to("cuda") url = "http://images.cocodataset.org/val2017/000000039769.jpg" init_image = Image.open(requests.get(url, stream=True).raw) prompt = "two tigers" n_prompt = "bad, deformed, ugly, bad anatomy" image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] ``` **Notes**: - Although it is not a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory efficient attention (better performance) - If you have low GPU RAM available, make sure to add a `pipe.enable_attention_slicing()` after sending it to `cuda` for less VRAM usage (at the cost of speed) # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. 
## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints: - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. - `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. 
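The relative downsampling factor of 8 described above determines the latent shape directly: an H x W x 3 image maps to an H/8 x W/8 x 4 latent. A quick sketch of that mapping:

```python
def latent_shape(h, w, f=8, latent_channels=4):
    """Shape of the VAE latent for an H x W x 3 image, downsampling factor f."""
    assert h % f == 0 and w % f == 0, "H and W must be divisible by f"
    return (h // f, w // f, latent_channels)

print(latent_shape(512, 512))  # (64, 64, 4)
print(latent_shape(768, 768))  # (96, 96, 4)
```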
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama). - `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints: ![pareto](model-variants.jpg) Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. 
- **Hardware Type:** A100 PCIe 40GB - **Hours used:** 200000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq. ## Citation @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
seokochin/Symptom2Disease
seokochin
2023-05-16T20:24:41Z
165
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-16T20:09:23Z
--- license: openrail --- A BERT model fine-tuned on the Symptom2Disease dataset.
mxalmeida/mt5-small-finetuned-amazon-en-es
mxalmeida
2023-05-16T20:11:18Z
62
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-16T19:27:16Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: mxalmeida/mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mxalmeida/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.4556 - Validation Loss: 3.3695 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 17608, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 8.4019 | 3.9033 | 0 | | 5.1452 | 3.5258 | 1 | | 4.4556 | 3.3695 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
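The `PolynomialDecay` schedule in the optimizer config above (power 1.0, `cycle=False`) is just a linear decay from 5.6e-05 to 0 over 17,608 steps. A minimal sketch of the Keras formula with those values plugged in:

```python
def polynomial_decay(step, initial_lr=5.6e-05, decay_steps=17608,
                     end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay (cycle=False); linear when power == 1.0."""
    step = min(step, decay_steps)           # clamp past the end of the schedule
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

lr_start = polynomial_decay(0)      # 5.6e-05
lr_half = polynomial_decay(8804)    # halfway through the schedule
```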
skuan/cat_dog_classifier
skuan
2023-05-16T20:06:58Z
63
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-16T20:00:30Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: skuan/cat_dog_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # skuan/cat_dog_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5070 - Validation Loss: 0.5766 - Train Accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 10, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6654 | 0.6772 | 1.0 | 0 | | 0.6124 | 0.6468 | 1.0 | 1 | | 0.5732 | 0.6197 | 1.0 | 2 | | 0.5481 | 0.5959 | 1.0 | 3 | | 0.5070 | 0.5766 | 1.0 | 4 | ### Framework versions - Transformers 4.29.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
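The ViT classifier above outputs one logit per class. As a generic illustration of how such image-classification logits are turned into a prediction — the two labels and the logit values below are hypothetical, not taken from this model — a softmax plus argmax looks like:

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["cat", "dog"]                  # hypothetical id2label ordering
logits = [1.3, -0.8]                     # hypothetical model output for one image
probs = softmax(logits)
pred = labels[probs.index(max(probs))]
print(pred)   # cat
```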
gufte/ppo-LunarLander-v2-2
gufte
2023-05-16T19:46:46Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T19:46:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 281.07 +/- 18.27 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
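The `mean_reward` metric above (281.07 +/- 18.27) is reported as the mean and standard deviation of per-episode returns collected during evaluation. A minimal sketch of that computation — the three episode returns below are made up for illustration, not this agent's actual evaluation data:

```python
import math

def reward_stats(episode_rewards):
    """Mean and population standard deviation of per-episode returns,
    in the 'mean +/- std' form used on the card above."""
    n = len(episode_rewards)
    mean = sum(episode_rewards) / n
    var = sum((r - mean) ** 2 for r in episode_rewards) / n
    return mean, math.sqrt(var)

mean, std = reward_stats([260.0, 300.0, 283.0])   # hypothetical episode returns
print(f"{mean:.2f} +/- {std:.2f}")                # 281.00 +/- 16.39
```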
ShipItMind/starcoder-gptq-4bit-128g
ShipItMind
2023-05-16T19:39:44Z
8
5
transformers
[ "transformers", "gpt_bigcode", "text-generation", "license:bigcode-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T19:17:31Z
--- license: bigcode-openrail-m ---
PIE666/segformer-b0-scene-parse-150
PIE666
2023-05-16T18:55:49Z
37
0
transformers
[ "transformers", "pytorch", "tensorboard", "segformer", "generated_from_trainer", "dataset:scene_parse_150", "license:other", "endpoints_compatible", "region:us" ]
null
2023-05-16T18:27:24Z
--- license: other tags: - generated_from_trainer datasets: - scene_parse_150 model-index: - name: segformer-b0-scene-parse-150 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-scene-parse-150 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. It achieves the following results on the evaluation set: - Loss: 2.2558 - Mean Iou: 0.1047 - Mean Accuracy: 0.1682 - Overall Accuracy: 0.5909 - Per Category Iou: [0.3967343788921505, 0.5770491690370748, 0.7963295576349257, 0.43501455272564443, 0.8318786278934446, 0.10981793225949699, 0.6076017945730807, nan, 0.0, 0.0, 0.0, 0.0, 0.48939227048513, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4669454430589191, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] - Per Category Accuracy: [0.4947546235856163, 0.9332973159633069, 0.91608103210541, 0.6720029576860083, 0.9739501767533055, 0.5394102833574587, 0.6625135135135135, nan, 0.0, 0.0, 0.0, 0.0, 0.6784173139480847, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.691088403570872, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | 
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 4.0275 | 1.0 | 20 | 3.6902 | 0.0528 | 0.1101 | 0.4364 | 
[0.35771810946969523, 0.2785957707095421, 0.6546127996415503, 0.3896280028715816, 0.39066375222386096, 0.04831494184312556, 0.0007294790593193845, nan, 0.0, 0.0, 0.0, 0.0, 0.4530879499892218, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01337963890558561, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.47300768076600563, 0.39066685974947507, 0.912915064589903, 0.8759799322594495, 0.9814897125837553, 0.08673236699649026, 0.0007297297297297297, nan, 0.0, 0.0, 0.0, 0.0, 0.5581557458673571, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.01337963890558561, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.3855 | 2.0 | 40 | 3.5181 | 0.0613 | 0.1037 | 0.4927 | [0.4514849468237105, 0.558609657315111, 0.5913604484088032, 0.2739746294691534, 0.6703838864786129, 0.00239279539141052, 0.002620107497508782, nan, 0.0, 
0.002514688601645123, 0.0, 0.0, 0.06930645394282779, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.014559540506244573, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6479862099490589, 0.8411828921205505, 0.6926553719386969, 0.8760276368724855, 0.8890005857160067, 0.0057900894291209775, 0.0026293436293436294, nan, 0.0, 0.002514688601645123, 0.0, 0.0, 0.07166567084910044, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.014559540506244573, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.3577 | 3.0 | 60 | 3.1700 | 0.0697 | 0.1235 | 0.5136 | [0.37770061549540196, 0.4983254837328516, 0.6789617622859351, 0.3696766131887887, 0.6701310753516471, 0.056922334780511334, 0.0531323399176004, nan, 0.0, 0.0, 0.0, 0.0, 0.4977316723559575, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0032725572697522207, nan, 0.0, 0.0, nan, 
0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4962509821756688, 0.9464362297611157, 0.8490833274318588, 0.8632587021164947, 0.9178917546971516, 0.12635085562451667, 0.05327799227799228, nan, 0.0, 0.0, 0.0, 0.0, 0.5590187877580827, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0032725572697522207, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.8311 | 4.0 | 80 | 3.2201 | 0.0711 | 0.1234 | 0.5012 | [0.4154244233552964, 0.5123775723798121, 0.705617271362714, 0.4164726040365027, 0.5541755365391755, 0.06951253308869203, 0.026212081404983732, nan, 0.0, 0.005186310010512791, 0.0, 0.0, 0.4246301722652012, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0008008898776418243, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 
nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5512142432222266, 0.8282333480002005, 0.8792724939909103, 0.6995801994052825, 0.9797491806206928, 0.144236679819952, 0.030046332046332048, nan, 0.0, 0.0052173913043478265, 0.0, 0.0, 0.6950806612228639, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0008014425966740132, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.7916 | 5.0 | 100 | 3.0167 | 0.0753 | 0.1231 | 0.5123 | [0.39202477808981157, 0.5595321660553452, 0.7088930732927575, 0.35445364205630525, 0.697718654547019, 0.05059953594781847, 0.011660625991500596, nan, 0.0, 0.004841363102232667, 0.0, 0.0, 0.45763881787503835, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0014693114272356909, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5288703812364348, 0.945650902569301, 0.8641300046239788, 0.7828605276130202, 0.9431813933394536, 0.12367393071721758, 0.011664092664092664, nan, 0.0, 0.004841363102232667, 0.0, 0.0, 0.5942043417645887, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0014693114272356909, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.1321 | 6.0 | 120 | 2.9472 | 0.0748 | 0.1285 | 0.5070 | [0.4010326427580704, 0.5457090767026956, 0.6806887940901051, 0.3695574936842199, 0.6411717018279561, 0.1176204446643809, 0.03472632065648459, nan, 0.0, 0.0003172737955346651, 0.0, 0.0, 0.41663907726811555, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.009131003302231776, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5183552643600682, 0.8402332588850582, 0.8175819506525642, 0.8228687964126131, 0.9889046279872555, 0.3299161230195713, 0.03566795366795367, nan, 0.0, 0.0003172737955346651, 0.0, 0.0, 0.6474805815574587, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.00917206527304704, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.1047 | 7.0 | 140 | 2.8380 | 0.0740 | 0.1287 | 0.5297 | [0.4536325193340699, 0.5206640113722101, 0.7111692935337857, 0.31132489823671566, 0.7728195657317198, 0.11542959964012596, 0.10328305303216302, nan, 0.0, 0.006874265569917744, 0.0, 0.0, 0.3305169222005981, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0047412353923205345, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6044009591313405, 0.9781890689138535, 0.8554277597030656, 0.673465899152448, 0.9091226306541326, 0.30528841387241973, 0.10394980694980695, nan, 0.0, 0.006874265569917744, 0.0, 0.0, 0.5778231428002389, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.004741868696987912, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.804 | 8.0 | 160 | 2.8786 | 0.0820 | 0.1318 | 0.5309 | [0.38668904033111334, 0.5698320402421349, 0.7425273485843187, 0.33707742023334675, 0.762005489336024, 0.11132311389691793, 0.15495613581226358, nan, 0.0, 0.007473560517038778, 0.0, 0.0, 0.35143104943625325, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.02022871100255357, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan] | [0.49641746983426355, 0.9306350010860908, 0.9022715816923763, 0.8069473818118212, 0.9756782466736731, 0.2718764252146497, 0.1561042471042471, nan, 0.0, 0.007473560517038778, 0.0, 0.0, 0.5716324769302263, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.020280950154722945, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.3287 | 9.0 | 180 | 2.9391 | 0.0785 | 0.1382 | 0.5456 | [0.4041112867939692, 0.5642736291943612, 0.7828118941202882, 0.34189381643350353, 0.7097520391615656, 0.10455548014930438, 0.39567120712559717, nan, 0.0, 0.017896592244418332, 0.0, 0.0, 0.28782929925404005, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.2230620456616944e-05, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5329269951617062, 0.9482853689645377, 0.7989277367914586, 0.6422829837645301, 0.9763096568368843, 
0.39102932719954, 0.41283783783783784, nan, 0.0, 0.017896592244418332, 0.0, 0.0, 0.6698532828785766, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 2.226229435205592e-05, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.6804 | 10.0 | 200 | 2.7155 | 0.0801 | 0.1365 | 0.5479 | [0.39415762500721097, 0.5625280336320829, 0.7581174553954939, 0.41228677126783375, 0.7017040123668229, 0.09985108440197331, 0.19462930275222654, nan, 0.0, 0.004030552291421857, 0.0, 0.0, 0.39035154933030924, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0046973441082837995, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.513281451447529, 0.9464891418768763, 0.9227378953814366, 0.8338011035667149, 0.9786359048066099, 0.2805417302849438, 0.20562548262548264, nan, 0.0, 0.004030552291421857, 0.0, 0.0, 0.6341366261700856, nan, 0.0, 0.0, 
0.0, 0.0, nan, 0.0, 0.0046973441082837995, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.1016 | 11.0 | 220 | 2.7363 | 0.0877 | 0.1409 | 0.5655 | [0.42005750031692474, 0.5737626427850931, 0.797948595355578, 0.34482063940034663, 0.7266471435933358, 0.1010609801225015, 0.4020008799832659, nan, 0.0, 0.001527614571092832, 0.0, 0.0, 0.3876454988293517, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.013952781876573214, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5449364402908256, 0.937766997320976, 0.8899118111417063, 0.7552872612781656, 0.9781789632411281, 0.3628323848426563, 0.43037065637065636, nan, 0.0, 0.001527614571092832, 0.0, 0.0, 0.5798147779326828, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.014314655268371959, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 
| Training Loss | Epoch | Step | Validation Loss | Mean IoU | Mean Accuracy | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|
| 2.0697 | 12.0 | 240 | 2.7157 | 0.0849 | 0.1377 | 0.5497 |
| 1.6941 | 13.0 | 260 | 2.6572 | 0.0928 | 0.1391 | 0.5652 |
| 1.9651 | 14.0 | 280 | 2.6005 | 0.0853 | 0.1441 | 0.5698 |
| 1.92   | 15.0 | 300 | 2.5568 | 0.0973 | 0.1456 | 0.5783 |
| 1.6495 | 16.0 | 320 | 2.6344 | 0.0836 | 0.1402 | 0.5262 |
| 1.8256 | 17.0 | 340 | 2.5547 | 0.0994 | 0.1514 | 0.5874 |
| 1.5233 | 18.0 | 360 | 2.4925 | 0.0992 | 0.1516 | 0.5710 |
| 1.4739 | 19.0 | 380 | 2.5494 | 0.0995 | 0.1487 | 0.5811 |
| 2.3316 | 20.0 | 400 | 2.4590 | 0.0987 | 0.1527 | 0.5670 |
| 1.4074 | 21.0 | 420 | 2.4728 | 0.1015 | 0.1542 | 0.5812 |
| 2.2391 | 22.0 | 440 | 2.4467 | 0.1006 | 0.1557 | 0.5893 |
| 2.9425 | 23.0 | 460 | 2.3630 | 0.1039 | 0.1539 | 0.6001 |
| 1.9718 | 24.0 | 480 | 2.4326 | 0.1046 | 0.1616 | 0.5961 |
| 1.3063 | 25.0 | 500 | 2.4125 | 0.1048 | 0.1572 | 0.5945 |
| 1.6461 | 26.0 | 520 | 2.3511 | 0.1034 | 0.1582 | 0.5863 |
| 1.6924 | 27.0 | 540 | 2.4398 | 0.1066 | 0.1639 | 0.5924 |
| 1.4609 | 28.0 | 560 | 2.4223 | 0.0946 | 0.1627 | 0.5786 |
| 1.3102 | 29.0 | 580 | 2.3546 | 0.1059 | 0.1626 | 0.5869 |
| 1.8873 | 30.0 | 600 | 2.3563 | 0.1088 | 0.1713 | 0.5915 |
| 0.8044 | 31.0 | 620 | 2.3226 | 0.1082 | 0.1625 | 0.6016 |
| 0.7663 | 32.0 | 640 | 2.2912 | 0.1060 | 0.1655 | 0.5824 |
| 1.1438 | 33.0 | 660 | 2.3414 | 0.1054 | 0.1628 | 0.5774 |

Per-category IoU and per-category accuracy remained almost entirely `nan` or 0.0 throughout these epochs, with non-zero values confined to the first few categories.
0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.476404435068798, 0.9500788112040013, 0.9081869420502973, 0.6948773196368089, 0.954912329529641, 0.5018936765084968, 0.5682393822393822, nan, 0.0, 0.0, 0.0, 0.0, 0.6641937197105491, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6313141432356019, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.429 | 34.0 | 680 | 2.3328 | 0.0993 | 0.1653 | 0.5873 | [0.3858811553304856, 0.5392610429583562, 0.783429323559783, 0.4442381816893192, 0.8403611728330174, 0.10540054659197182, 0.6237087806413261, nan, 0.0, 0.0, 0.0, 0.0, 0.47919343450127905, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.465145241134531, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 
nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.48641196762018074, 0.9504519808625231, 0.897572619379887, 0.6478564727209121, 0.9630583514379121, 0.4603715968352799, 0.6886563706563706, nan, 0.0, 0.0, 0.0, 0.0, 0.687080926774215, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.665197355239431, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.258 | 35.0 | 700 | 2.3367 | 0.1076 | 0.1641 | 0.5913 | [0.4257915724982548, 0.5727099621711168, 0.802090944451109, 0.4149440205722881, 0.834135800264314, 0.10527472148746801, 0.5503598817919756, nan, 0.0, 0.0, 0.0, 0.0, 0.4736129256511369, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4457585514361312, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5424066400151057, 0.9406158970274531, 0.9191636846862984, 0.6799577019097747, 
0.9700994055605634, 0.48456306636790863, 0.586027027027027, nan, 0.0, 0.0, 0.0, 0.0, 0.6558122551948483, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6225873238495959, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.4854 | 36.0 | 720 | 2.3196 | 0.1054 | 0.1666 | 0.5908 | [0.3992855425522928, 0.5793354386688501, 0.771859474435221, 0.44725521169489013, 0.8233898898023438, 0.09923499754207919, 0.5717298112263686, nan, 0.0, 0.0, 0.0, 0.0, 0.48194347004054844, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.46173336199748166, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4990345746138806, 0.9231604685228608, 0.9261329789671447, 0.7079881374528917, 0.977713713647183, 0.4843449465606472, 0.6354324324324324, nan, 0.0, 0.0, 0.0, 0.0, 0.6726747659828719, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6694049288719696, 
nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.4238 | 37.0 | 740 | 2.3053 | 0.1029 | 0.1649 | 0.5921 | [0.4222531184649964, 0.5785986812093061, 0.767094602930895, 0.43757602483333374, 0.8217713817388768, 0.0963897972531066, 0.5495871771962496, nan, 0.0, 0.0, 0.0, 0.0, 0.4924034324330894, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4639497438012596, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5259832415959264, 0.9324841402895128, 0.9324149250373875, 0.7021641992780702, 0.9829810036929186, 0.43836132537526523, 0.5913706563706563, nan, 0.0, 0.0, 0.0, 0.0, 0.6723760207130054, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6510830606202275, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.7195 | 38.0 | 760 | 2.3691 | 0.1053 | 0.1642 | 0.5889 | [0.39693867634196117, 0.5568268459889724, 0.7973380702486511, 0.4365703970247878, 0.8324985015840397, 0.09383062860600298, 0.5766985565445832, nan, 0.0, 0.0, 0.0, 0.0, 0.4782216177556835, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4653177322386709, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4975585193968274, 0.94775903265513, 0.8994013821947653, 0.7191311399812362, 0.9693226048992444, 0.4305090123138546, 0.6221196911196911, nan, 0.0, 0.0, 0.0, 0.0, 0.676956781517626, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6409759789843942, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 
0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.2559 | 39.0 | 780 | 2.2857 | 0.1108 | 0.1700 | 0.6012 | [0.40930741065784954, 0.5718378288951689, 0.8158719044125041, 0.4549673300556352, 0.8315914321118782, 0.11864365353201647, 0.6035027775015273, nan, 0.0, 0.0, 0.0, 0.0, 0.5024426295219961, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.45678441624689153, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5009187276282209, 0.9433255543240338, 0.9125109871570029, 0.7603956302574459, 0.9757073247732947, 0.5775217624080426, 0.656042471042471, nan, 0.0, 0.0, 0.0, 0.0, 0.6657206399787559, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6379260446581625, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.3177 | 40.0 | 800 | 2.3700 | 0.1083 | 0.1679 | 0.5890 | [0.3925093431830425, 0.5427540246522963, 0.8096709992328134, 0.43425042923260676, 0.8321904612398676, 0.11816001355291432, 0.5803838871031883, nan, 0.0, 0.0, 0.0, 0.0, 0.4717476853179748, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.474692118226601, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.47487965175654634, 0.9439632845613586, 0.9100615280792158, 0.7400217851066198, 0.9689362815757007, 0.5255497610596657, 0.6142007722007722, nan, 0.0, 0.0, 0.0, 0.0, 0.6824337781318462, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6864801086399964, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | 
| 1.4803 | 41.0 | 820 | 2.2936 | 0.1097 | 0.1693 | 0.5954 | [0.3914792942573023, 0.5711356903426855, 0.8057361110602504, 0.4483536331656391, 0.8252060259303974, 0.11079010122233895, 0.6043386010601685, nan, 0.0, 0.0, 0.0, 0.0, 0.48778896233369184, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.47384295053355174, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.47941745562190247, 0.9309636131734459, 0.9165809217131217, 0.753931655191057, 0.979578866037195, 0.5269576252701711, 0.6576370656370656, nan, 0.0, 0.0, 0.0, 0.0, 0.6699528646351989, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6890180101961308, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.1043 | 42.0 | 840 | 2.3383 | 0.1092 | 0.1695 | 0.5912 | [0.3927433542564336, 0.5607850464817099, 0.8050951244359285, 0.4412874216069104, 0.8358976732264404, 
0.12641542991522242, 0.5681801814777474, nan, 0.0, 0.0, 0.0, 0.0, 0.5037432621281693, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4596238850425387, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4772774801077703, 0.9444450632995995, 0.9119277826146726, 0.741274031198817, 0.9753916196916891, 0.6342329122960084, 0.6092471042471043, nan, 0.0, 0.0, 0.0, 0.0, 0.670052446391821, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6458514214474944, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.0691 | 43.0 | 860 | 2.3504 | 0.1063 | 0.1701 | 0.5895 | [0.39644566885695864, 0.5678878003422858, 0.8018330182757583, 0.43328388414671715, 0.8324305989467107, 0.1288126769784116, 0.5587162370558394, nan, 0.0, 0.0, 0.0, 0.0, 0.49329068784235874, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4646632598581344, nan, 0.0, 0.0, nan, 
nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4756593012797216, 0.9388558729663646, 0.9206050330552004, 0.7268513365242419, 0.9684793400102188, 0.6476571949792786, 0.6084980694980695, nan, 0.0, 0.0, 0.0, 0.0, 0.6680940051782513, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6781317482579755, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 0.5684 | 44.0 | 880 | 2.3429 | 0.1014 | 0.1670 | 0.5904 | [0.4043913376533286, 0.562379276319647, 0.8028939978104622, 0.4340475207071469, 0.8263926275453652, 0.10600550546758593, 0.5770733825445071, nan, 0.0, 0.0, 0.0, 0.0, 0.4859626943993894, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4648218839114625, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4994731274706667, 0.9348624006505405, 0.9134816061453096, 0.6976720148838392, 0.9755910123748084, 0.4993952132616843, 0.6311467181467182, nan, 0.0, 0.0, 0.0, 0.0, 0.6762763061807077, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6858345021037868, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.4391 | 45.0 | 900 | 2.2992 | 0.1046 | 0.1668 | 0.5908 | [0.39562823702132865, 0.5522411685702769, 0.8086844050258685, 0.44139030826529374, 0.8383672536995924, 0.10806558244083035, 0.6096033477210375, nan, 0.0, 0.0, 0.0, 0.0, 0.4892085473548725, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4652919277496055, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 
0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.49241770535339036, 0.9424037695705207, 0.9115903571294672, 0.6789042250385612, 0.975703170759063, 0.4861890503856755, 0.6614478764478764, nan, 0.0, 0.0, 0.0, 0.0, 0.6812885879306911, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6761281417662904, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.4742 | 46.0 | 920 | 2.3211 | 0.1017 | 0.1689 | 0.5901 | [0.3946552382031496, 0.5684616535715067, 0.7963743263106321, 0.4378010329419502, 0.8375652198893282, 0.11700780241460781, 0.5767992096637862, nan, 0.0, 0.0, 0.0, 0.0, 0.48337846947102026, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4655084081731059, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.48088132881515605, 0.9415348969327682, 0.9140856394212945, 0.7197990045637414, 0.9689072034760791, 0.5715333822450477, 0.6244285714285714, nan, 0.0, 0.0, 0.0, 0.0, 0.6783841200292107, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6877490594180636, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 0.8597 | 47.0 | 940 | 2.2974 | 0.1069 | 0.1688 | 0.5919 | [0.39275316555100176, 0.5575040569544051, 0.808067854239042, 0.4411409891284807, 0.8456804707917895, 0.12342948077051256, 0.5705226281547151, nan, 0.0, 0.0, 0.0, 0.0, 0.5011138906520305, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4617493147271368, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | 
[0.48233708065920994, 0.9490762658527484, 0.910828025477707, 0.7413893173469874, 0.966456335079404, 0.5875156153952926, 0.6162084942084942, nan, 0.0, 0.0, 0.0, 0.0, 0.6682599747726217, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.660032502949754, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 0.8673 | 48.0 | 960 | 2.3572 | 0.1049 | 0.1713 | 0.5936 | [0.39307686618345145, 0.550272711088093, 0.7999977848498497, 0.454064119643326, 0.8407673791885094, 0.13695119940125078, 0.5848812924940139, nan, 0.0, 0.0, 0.0, 0.0, 0.48768643359980907, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.47367211205443344, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.47478016522884947, 0.9468400327498148, 0.9026714933785456, 0.7425302526754337, 0.9706975836099214, 0.6422240288711308, 0.6347181467181467, nan, 0.0, 0.0, 0.0, 
0.0, 0.6783675230697737, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6881052561276966, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.5564 | 49.0 | 980 | 2.3697 | 0.1052 | 0.1712 | 0.5919 | [0.3943045129667074, 0.5415540000031681, 0.8002179279058907, 0.44582630059824024, 0.8455276568348229, 0.14713080731395733, 0.5961510625002207, nan, 0.0, 0.0, 0.0, 0.0, 0.4922660806356183, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4687791529323963, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.47234579080622663, 0.9520727625137154, 0.8994263766751509, 0.7098327158236202, 0.9656213782188418, 0.6782732842894251, 0.65196138996139, nan, 0.0, 0.0, 0.0, 0.0, 0.6766248423288853, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6712081747144861, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 
(truncated training log from a preceding record; the per-class IoU and per-class accuracy arrays, consisting almost entirely of `nan` and 0.0 entries, are omitted)

| 1.2436 | 50.0 | 1000 | 2.2558 | 0.1047 | 0.1682 | 0.5909 | (per-class IoU array omitted) | (per-class accuracy array omitted) |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
lowrollr/rl_course_vizdoom_health_gathering_supreme
lowrollr
2023-05-16T18:45:43Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T03:06:49Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 8.23 +/- 2.87
      name: mean_reward
      verified: false
---

An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation on how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r lowrollr/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
Bainbridge/gpt2-no_ear-loto_women
Bainbridge
2023-05-16T18:33:41Z
125
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T18:20:46Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-no_ear-loto_women results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-no_ear-loto_women This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5473 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 21 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 72.3557 | 0.03 | 10 | 64.4036 | | 30.9725 | 0.06 | 20 | 18.3129 | | 9.4309 | 0.08 | 30 | 6.6428 | | 3.7579 | 0.11 | 40 | 2.9526 | | 1.6919 | 0.14 | 50 | 1.2236 | | 1.09 | 0.17 | 60 | 0.8718 | | 0.8348 | 0.2 | 70 | 0.7829 | | 0.6574 | 0.23 | 80 | 0.6525 | | 0.6218 | 0.25 | 90 | 0.6155 | | 0.6654 | 0.28 | 100 | 0.6041 | | 0.6137 | 0.31 | 110 | 0.5944 | | 0.5564 | 0.34 | 120 | 0.5821 | | 0.5956 | 0.37 | 130 | 0.5755 | | 0.5782 | 0.4 | 140 | 0.5706 | | 0.6667 | 0.42 | 150 | 0.5674 | | 0.5486 | 0.45 | 160 | 0.5663 | | 0.6661 | 0.48 | 170 | 0.5636 | | 0.6784 | 0.51 | 180 | 0.5622 | | 0.6032 | 0.54 | 190 | 0.5580 | | 0.5356 | 0.57 | 200 | 0.5538 | | 0.5262 | 0.59 | 210 | 0.5511 | | 0.5859 | 0.62 | 220 | 0.5503 | | 0.5721 | 0.65 | 230 | 0.5459 | | 0.5463 | 0.68 | 240 | 0.5513 | | 0.4686 | 0.71 | 250 | 0.5461 | | 0.5127 | 0.74 | 260 | 0.5473 | ### Framework versions - Transformers 4.28.0 
- Pytorch 1.11.0+cu113 - Datasets 2.11.0 - Tokenizers 0.13.3
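The card reports only raw cross-entropy losses. Since this is a causal language model, a rough perplexity can be derived from the final evaluation loss in the table above. A minimal sketch (the perplexity reading assumes the standard per-token cross-entropy used by the Trainer):

```python
import math

# Convert the reported evaluation loss (0.5473, from the table above) to
# perplexity, assuming it is a per-token cross-entropy: ppl = exp(loss).
final_eval_loss = 0.5473

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity is the exponential of the per-token cross-entropy."""
    return math.exp(cross_entropy_loss)

print(f"eval perplexity: {perplexity(final_eval_loss):.3f}")  # ~1.729
```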
Schnitzl/marian-finetuned-kde4-en-to-fr
Schnitzl
2023-05-16T18:25:32Z
61
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-14T22:21:46Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Schnitzl/marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Schnitzl/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6848 - Validation Loss: 0.8038 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0608 | 0.8769 | 0 | | 0.7975 | 0.8197 | 1 | | 0.6848 | 0.8038 | 2 | ### Framework versions - Transformers 4.30.0.dev0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
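The optimizer configuration above uses a `PolynomialDecay` schedule with `power: 1.0`, which is plain linear decay from `5e-05` to `0.0` over 17733 steps. A small sketch of that schedule (the formula mirrors the Keras `PolynomialDecay` definition; the step values are illustrative):

```python
# Sketch of the PolynomialDecay schedule from the optimizer config above
# (initial_learning_rate=5e-05, decay_steps=17733, end_learning_rate=0.0,
# power=1.0, cycle=False). With power=1.0 this reduces to linear decay.
def polynomial_decay(step: int,
                     initial_lr: float = 5e-05,
                     decay_steps: int = 17733,
                     end_lr: float = 0.0,
                     power: float = 1.0) -> float:
    step = min(step, decay_steps)  # cycle=False clamps at decay_steps
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))       # full learning rate at the start
print(polynomial_decay(17733))   # decayed to 0.0 at the end of training
```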
lemoniada/kicerobot
lemoniada
2023-05-16T18:23:45Z
127
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-04T20:52:26Z
--- license: mit language: - en tags: - conversational ---
2rtl3/mn-bert-base-demo-named-entity
2rtl3
2023-05-16T18:16:21Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "mn", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-16T16:12:22Z
--- language: - mn license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: mn-bert-base-demo-named-entity results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mn-bert-base-demo-named-entity This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1468 - Precision: 0.9092 - Recall: 0.9187 - F1: 0.9139 - Accuracy: 0.9757 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.172 | 1.0 | 477 | 0.1117 | 0.8614 | 0.8811 | 0.8711 | 0.9662 | | 0.0846 | 2.0 | 954 | 0.1033 | 0.8748 | 0.8957 | 0.8852 | 0.9698 | | 0.0562 | 3.0 | 1431 | 0.1005 | 0.8808 | 0.9024 | 0.8915 | 0.9716 | | 0.0398 | 4.0 | 1908 | 0.1105 | 0.8978 | 0.9073 | 0.9025 | 0.9731 | | 0.0276 | 5.0 | 2385 | 0.1181 | 0.9031 | 0.9121 | 0.9076 | 0.9740 | | 0.0204 | 6.0 | 2862 | 0.1309 | 0.9039 | 0.9153 | 0.9096 | 0.9747 | | 0.0138 | 7.0 | 3339 | 0.1322 | 0.9023 | 0.9132 | 0.9077 | 0.9745 | | 0.0103 | 8.0 | 3816 | 0.1434 | 0.9071 | 0.9153 | 0.9112 | 0.9748 | | 0.0071 | 9.0 | 4293 | 0.1458 | 0.9041 | 0.9156 | 0.9098 | 0.9750 | | 0.0052 | 10.0 | 4770 | 0.1468 | 0.9092 | 
0.9187 | 0.9139 | 0.9757 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
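The F1 column in the table above is the harmonic mean of precision and recall. A quick sanity check against the final-epoch row:

```python
# Sanity check: the F1 column is the harmonic mean of precision and recall.
# The values below are copied from the final-epoch row of the table above.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

precision, recall = 0.9092, 0.9187
print(round(f1_score(precision, recall), 4))  # matches the reported 0.9139
```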
pvduy/gptj-soda-chai
pvduy
2023-05-16T18:14:18Z
15
1
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T17:36:10Z
# Training

This is the 10k-step English supervised fine-tuning (SFT) model of GPT-J, trained on the SODA dataset for the Chai Competition.

- **Language:** English
- **Finetuned from:** [EleutherAI/GPT-J](https://huggingface.co/EleutherAI/gpt-j-6b)
- **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
- **Dataset:** 10 percent of the [SODA dataset](https://huggingface.co/datasets/allenai/soda)

# Why the OpenAssistant framework:

- Training is easy to set up: changing the dataset and model in the config is all you need
- Data processing is available for most popular conversation datasets: SODA, Vicuna, OpenAssistant, ...

# Configuration:

You need to add this to the default config file `configs/config.yaml`:
```
data:
  soda-only:
    datasets:
      - soda:
          fraction: 0.1
          input_max_length: 1024
```
```
gptj-chai:
  dtype: fp16
  log_dir: gptj-soda
  model_name: EleutherAI/gpt-j-6b
  output_dir: output/gptj-soda-chai
  max_length: 1024
  warmup_steps: 100
  gradient_checkpointing: true
  gradient_accumulation_steps: 1
  per_device_train_batch_size: 8
  per_device_eval_batch_size: 8
  eval_steps: 5000
  save_steps: 5000
  num_train_epochs: 1
  save_total_limit: 1
  use_flash_attention: false
```

# Command to train:

```bash
deepspeed trainer_sft.py --local_rank=0 --configs defaults gptj-chai soda-only --cache_dir data_cache --deepspeed
```

# Interactive Demo Code:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

class ChatBot():
    def __init__(self, path="/mnt/hdd/duyphung/gptj-soda-chai/checkpoint-10000/"):
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForCausalLM.from_pretrained(path).half().cuda().eval()
        self.model.pad_token_id = self.tokenizer.eos_token_id
        self.tokenizer.pad_token_id = self.tokenizer.eos_token_id

    def chat(self, message):
        enc_dict = self.tokenizer(
            message,
            return_tensors='pt'
        )
        for x in enc_dict:
            enc_dict[x] = enc_dict[x].cuda()
        chat_history_ids = self.model.generate(
            input_ids=enc_dict['input_ids'],
            attention_mask=enc_dict['attention_mask'],
            max_new_tokens=64,
            temperature=0.7,
            do_sample=True,
            top_k=0,
            top_p=0.95,
        )
        out = chat_history_ids[:, enc_dict['input_ids'].shape[-1]:][0]
        return self.tokenizer.decode(out, skip_special_tokens=True)

if __name__ == "__main__":
    bot_name = 'Bot:'
    prompt = "<|prompter|>"
    chat_history = []
    bot = ChatBot()
    while True:
        message = input("Me: ")
        chat_history.append(f'Me: {message}')
        prompt = prompt + message + "<|endoftext|><|assistant|>"
        response = bot.chat(prompt)
        print(f'{bot_name} {response}')
        prompt = prompt + response + "<|endoftext|><|prompter|>"
```
XuYipei/chinese-llama-7b-ift
XuYipei
2023-05-16T18:10:34Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T10:54:41Z
chinese-llama-7b-ift is trained from Llama-7B on a Chinese and English corpus together with an instruction fine-tuning (IFT) dataset. The tokenizer is taken from ziqingyang/chinese-llama-lora-7b.
Bainbridge/gpt2-no_ear-loto_lgbt
Bainbridge
2023-05-16T17:44:36Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-16T17:26:24Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-no_ear-loto_lgbt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-no_ear-loto_lgbt This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 21 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 72.2061 | 0.03 | 10 | 64.5518 | | 30.2689 | 0.06 | 20 | 17.8618 | | 8.8488 | 0.08 | 30 | 6.3771 | | 3.4282 | 0.11 | 40 | 2.6682 | | 1.6187 | 0.14 | 50 | 1.1887 | | 0.9184 | 0.17 | 60 | 0.9858 | | 1.0938 | 0.2 | 70 | 0.7108 | | 0.7254 | 0.23 | 80 | 0.6445 | | 0.5847 | 0.25 | 90 | 0.6457 | | 0.7175 | 0.28 | 100 | 0.6043 | | 0.5929 | 0.31 | 110 | 0.5937 | | 0.5411 | 0.34 | 120 | 0.5871 | | 0.5367 | 0.37 | 130 | 0.5809 | | 0.6046 | 0.4 | 140 | 0.5761 | | 0.5401 | 0.42 | 150 | 0.5660 | | 0.6756 | 0.45 | 160 | 0.5609 | | 0.5675 | 0.48 | 170 | 0.5586 | | 0.5903 | 0.51 | 180 | 0.5584 | | 0.5611 | 0.54 | 190 | 0.5530 | | 0.5807 | 0.57 | 200 | 0.5526 | | 0.538 | 0.59 | 210 | 0.5481 | | 0.5831 | 0.62 | 220 | 0.5438 | | 0.6179 | 0.65 | 230 | 0.5417 | | 0.6591 | 0.68 | 240 | 0.5424 | | 0.4662 | 0.71 | 250 | 0.5494 | | 0.5973 | 0.74 | 260 | 0.5390 | | 0.5606 | 0.76 | 270 | 0.5359 | | 0.5623 | 
0.79 | 280 | 0.5429 | | 0.5374 | 0.82 | 290 | 0.5337 | | 0.6014 | 0.85 | 300 | 0.5343 | | 0.5345 | 0.88 | 310 | 0.5328 | | 0.5215 | 0.91 | 320 | 0.5304 | | 0.5398 | 0.93 | 330 | 0.5274 | | 0.5192 | 0.96 | 340 | 0.5256 | | 0.591 | 0.99 | 350 | 0.5252 | | 0.5141 | 1.02 | 360 | 0.5301 | | 0.4937 | 1.05 | 370 | 0.5254 | | 0.4945 | 1.08 | 380 | 0.5225 | | 0.4774 | 1.1 | 390 | 0.5252 | | 0.4898 | 1.13 | 400 | 0.5278 | | 0.4856 | 1.16 | 410 | 0.5240 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.11.0+cu113 - Datasets 2.11.0 - Tokenizers 0.13.3
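The hyperparameters above combine a linear scheduler with `lr_scheduler_warmup_ratio: 0.1`, i.e. the learning rate ramps from 0 up to `5e-05` over the first 10% of optimizer steps and then decays linearly back to 0. A minimal sketch of that schedule (the total step count below is illustrative, not the actual run length):

```python
# Sketch of the LR schedule implied by the hyperparameters above:
# linear warmup over the first warmup_ratio of steps, then linear decay.
def linear_schedule_with_warmup(step, total_steps, base_lr=5e-05, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps          # warmup ramp
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)  # decay

total = 1000  # illustrative
print(linear_schedule_with_warmup(0, total))     # 0.0 at the first step
print(linear_schedule_with_warmup(100, total))   # peak of 5e-05 after warmup
print(linear_schedule_with_warmup(1000, total))  # 0.0 at the last step
```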
SharkGaming/SroomCalc
SharkGaming
2023-05-16T17:42:38Z
0
0
null
[ "arxiv:1910.09700", "license:mit", "region:us" ]
null
2023-05-16T17:20:24Z
--- license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K
laion
2023-05-16T16:59:39Z
176,011
110
open_clip
[ "open_clip", "pytorch", "clip", "zero-shot-image-classification", "dataset:mlfoundations/datacomp_pools", "arxiv:2304.14108", "license:mit", "region:us" ]
zero-shot-image-classification
2023-04-26T01:41:18Z
---
license: mit
widget:
- src: >-
    https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
  candidate_labels: playing music, playing sports
  example_title: Cat & Dog
library_name: open_clip
datasets:
- mlfoundations/datacomp_pools
pipeline_tag: zero-shot-image-classification
---

# Model card for CLIP ViT-L-14 trained on DataComp-1B

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

A CLIP ViT-L/14 model trained on DataComp-1B (https://github.com/mlfoundations/datacomp) using OpenCLIP (https://github.com/mlfoundations/open_clip). Model training was done on the [stability.ai](https://stability.ai/) cluster.

# Uses

As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the DataComp paper (https://arxiv.org/abs/2304.14108) includes additional discussion as it relates specifically to the training dataset.

## Direct Use

Zero-shot image classification, image and text retrieval, among others.

## Downstream Use

Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use

As per the OpenAI models,

**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.

Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently, given the lack of testing norms and checks to ensure its fair use.

# Training Details

## Training Data

This model was trained with the 1.4 billion samples of the DataComp-1B dataset (https://arxiv.org/abs/2304.14108).

**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a "safe" subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built).
While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility that harmful content is still present in safe mode, so the warning also applies there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide our dataset openly, we do not recommend using it to create ready-to-go industrial products, as the basic research about the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.

## Training Procedure

Please see https://arxiv.org/abs/2304.14108.

# Evaluation

Evaluation done on 38 datasets, using the [DataComp repo](https://github.com/mlfoundations/datacomp) and the [LAION CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark).

## Testing Data, Factors & Metrics

### Testing Data

The testing is performed on a suite of 38 datasets. See our paper for more details (https://arxiv.org/abs/2304.14108).

## Results

The model achieves a 79.2% zero-shot top-1 accuracy on ImageNet-1k. See our paper for more details and results (https://arxiv.org/abs/2304.14108).

# Acknowledgements

Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.
# Citation **BibTeX:** DataComp ```bibtex @article{datacomp, title={DataComp: In search of the next generation of multimodal datasets}, author={Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt}, journal={arXiv preprint arXiv:2304.14108}, year={2023} } ``` OpenAI CLIP paper ``` @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` OpenCLIP software ``` @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` # How to Get Started with the Model See https://github.com/mlfoundations/open_clip
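The zero-shot classification mechanics described in this card can be illustrated without downloading the model: image and text ("a photo of a &lt;class&gt;") embeddings are L2-normalized, their dot products are scaled by a temperature, and a softmax over the class prompts yields per-class probabilities. A minimal sketch with dummy vectors (not real CLIP outputs):

```python
import math

# Toy illustration of CLIP-style zero-shot classification: normalize
# embeddings, compute scaled cosine similarities, softmax over prompts.
def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

image_emb = l2_normalize([0.9, 0.1, 0.2])          # dummy image embedding
text_embs = [l2_normalize(t) for t in ([1.0, 0.0, 0.1],   # "a photo of a cat"
                                       [0.0, 1.0, 0.3])]  # "a photo of a dog"
logit_scale = 100.0  # CLIP's learned temperature is roughly this magnitude
logits = [logit_scale * sum(a * b for a, b in zip(image_emb, t)) for t in text_embs]
probs = softmax(logits)
print(probs)  # the first prompt dominates for this image embedding
```

For real usage with this checkpoint, see the OpenCLIP repository linked in "How to Get Started with the Model" above.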
akmalartsai/MomokoArata.v1
akmalartsai
2023-05-16T16:53:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-16T16:50:36Z
--- license: creativeml-openrail-m ---
Kardbord/Dungeons-and-Diffusion-unsafe
Kardbord
2023-05-16T16:49:26Z
8
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-16T14:01:59Z
---
license: creativeml-openrail-m
tags:
- text-to-image
---

# Overview

This is simply 0xJustin/Dungeons-and-Diffusion with the safety checker disabled. **DO NOT** attempt to use this model to generate harmful or illegal content.

FOR THE NEW VERSION DOWNLOAD 'D&Diffusion3.0_Protogen.ckpt'

The newest version is finetuned from Protogen to great effect. Also works great at resolutions greater than 512x512!

Species in new version: aarakocra, aasimar, air_genasi, centaur, dragonborn, drow, dwarf, earth_genasi, elf, firbolg, fire_genasi, gith, gnome, goblin, goliath, halfling, human, illithid, kenku, kobold, lizardfolk, minotaur, orc, tabaxi, thrikreen, tiefling, tortle, warforged, water_genasi

Classes in new version: Artificer, Bard, Barbarian, Cleric, Fighter, Druid, Monk, Paladin, Rogue, Ranger, Sorcerer, Warlock, Wizard, Noble, Townsperson

See the training dataset here for a list of races: https://huggingface.co/datasets/0xJustin/Dungeons-and-Diffusion

Model16000 is trained using `D&D character` as the class prompt, and for whatever reason it ~ seems ~ to work better for centaurs and aarakocra. Model30000 is trained using all of the images as the class images, and I think it emulates the commissioned DnD character style better. It works VERY well for most races, though sometimes I have to fight to get aarakocra to not be birds or centaurs to not be horses. Tieflings work great, but reining in their horns can be trouble. There is some bleed-through between classes, especially for elf ears and horns. Including `elf ears` and `horns` as negative prompts seems to help.
Good prompts to try things out: modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, english medieval pink (dragonborn druid) witch, black silk robe, nature magic, medieval era, painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, painting art by midjourney and greg rutkowski, teal and gold, petals, countryside, action pose, casting a spell, green swirling magic Negative prompt: canvas frame, cartoon, 3d, photorealistic Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Seed: 2603924688, Size: 512x768, Batch size: 4, Batch pos: 1, Clip skip: 2 `[natural colors], full body tiefling (knight), [watercolor digital 2D painting], (strong shading), hard shadows, blurry, elegant, wearing robes, style of (saga comic) Lois van Baarle and charlie bowater and Sui Ishida, messy, disheveled, thick brushwork, detailed face and eyes, concept art` `portrait (painting) of tabaxi, de Rivia closeup, suit, collar, formal attire, D&D, fantasy, intricate, elegant, highly detailed, artstation, concept art, matte, sharp focus, (brush strokes), (oil on canvas), hearthstone, art by Titian and Greg Rutkowski and Rembrandt van Rijn and Alphonse Mucha` (inspired by Reddit post)
google/mobilenet_v2_0.75_160
google
2023-05-16T16:38:37Z
358
2
transformers
[ "transformers", "pytorch", "mobilenet_v2", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1801.04381", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-10T16:05:04Z
---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# MobileNet V2

MobileNet V2 model pre-trained on ImageNet-1k at resolution 160x160. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).

Disclaimer: The team releasing MobileNet V2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):

> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.

The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_0.75\_160**, where **0.75** is the depth multiplier and **160** is the resolution of the input images the model was trained on.

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
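As a side note, the checkpoint naming scheme described above can be parsed mechanically. The helper below is purely illustrative (it is not part of the `transformers` library) and assumes every checkpoint follows the `mobilenet_v<N>_<depth>_<size>` pattern:

```python
import re

def parse_mobilenet_name(checkpoint):
    """Extract (depth multiplier, input resolution) from a MobileNet
    checkpoint name such as "google/mobilenet_v2_0.75_160"."""
    match = re.search(r"mobilenet_v\d+_([\d.]+)_(\d+)$", checkpoint)
    if match is None:
        raise ValueError(f"unrecognised MobileNet checkpoint name: {checkpoint!r}")
    depth, size = match.groups()
    return float(depth), int(size)

depth, size = parse_mobilenet_name("google/mobilenet_v2_0.75_160")
print(depth, size)  # 0.75 160
```

The same pattern covers the V1 checkpoints as well (e.g. `mobilenet_v1_1.0_224`).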
### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_0.75_160")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_0.75_160")

inputs = preprocessor(images=image, return_tensors="pt")

outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).

Currently, both the feature extractor and model support PyTorch.

### BibTeX entry and citation info

```bibtex
@inproceedings{mobilenetv22018,
  title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
  author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
  booktitle={CVPR},
  year={2018}
}
```
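If you want class probabilities rather than only the argmax, you can apply a softmax to the logits. With real model outputs you would use `torch.nn.functional.softmax` on `outputs.logits`; the pure-Python sketch below just illustrates the computation on a toy logit vector:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)  # subtract the max to avoid overflow in exp()
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: three logits -> three probabilities summing to 1
probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])
```

Sorting these probabilities (or using `torch.topk` on the real logits) gives you a top-k prediction list instead of a single class.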
google/mobilenet_v2_1.4_224
google
2023-05-16T16:38:32Z
2,215
1
transformers
[ "transformers", "pytorch", "mobilenet_v2", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1801.04381", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-10T16:03:54Z
---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# MobileNet V2

MobileNet V2 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).

Disclaimer: The team releasing MobileNet V2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):

> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.

The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.4\_224**, where **1.4** is the depth multiplier and **224** is the resolution of the input images the model was trained on.

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.4_224")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.4_224")

inputs = preprocessor(images=image, return_tensors="pt")

outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).

Currently, both the feature extractor and model support PyTorch.

### BibTeX entry and citation info

```bibtex
@inproceedings{mobilenetv22018,
  title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
  author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
  booktitle={CVPR},
  year={2018}
}
```
google/mobilenet_v2_0.35_96
google
2023-05-16T16:38:30Z
502
0
transformers
[ "transformers", "pytorch", "mobilenet_v2", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1801.04381", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-10T16:05:31Z
---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# MobileNet V2

MobileNet V2 model pre-trained on ImageNet-1k at resolution 96x96. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).

Disclaimer: The team releasing MobileNet V2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):

> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.

The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_0.35\_96**, where **0.35** is the depth multiplier and **96** is the resolution of the input images the model was trained on.

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_0.35_96")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_0.35_96")

inputs = preprocessor(images=image, return_tensors="pt")

outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).

Currently, both the feature extractor and model support PyTorch.

### BibTeX entry and citation info

```bibtex
@inproceedings{mobilenetv22018,
  title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
  author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
  booktitle={CVPR},
  year={2018}
}
```
google/mobilenet_v1_1.0_224
google
2023-05-16T16:38:27Z
2,443
1
transformers
[ "transformers", "pytorch", "mobilenet_v1", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1704.04861", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-10T16:06:24Z
---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# MobileNet V1

MobileNet V1 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Howard et al., and first released in [this repository](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md).

Disclaimer: The team releasing MobileNet V1 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):

> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v1) to look for fine-tuned versions on a task that interests you.
### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

inputs = preprocessor(images=image, return_tensors="pt")

outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).

Currently, both the feature extractor and model support PyTorch.
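Because of the extra "background" class noted above, a predicted index from these checkpoints is offset by one relative to a plain 1000-entry ImageNet label list. The helper below is a hypothetical illustration of that offset (it is not a `transformers` API; in practice `model.config.id2label` already accounts for all 1001 classes):

```python
def to_imagenet_index(predicted_idx):
    """Map a 1001-class prediction index to a 0-based ImageNet index,
    assuming index 0 is the extra "background" class.
    Returns None when the prediction is the background class."""
    if predicted_idx == 0:
        return None  # background, has no ImageNet counterpart
    return predicted_idx - 1

print(to_imagenet_index(0))    # background -> None
print(to_imagenet_index(282))  # shifted down by one -> 281
```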