modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-03 06:27:42
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 535
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-03 06:27:02
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
FlyingFishzzz/model_out_mesh
|
FlyingFishzzz
| 2023-11-22T08:29:03Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-20T18:45:07Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-FlyingFishzzz/model_out_mesh
These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with new type of conditioning.
You can find some example images below.
prompt: High-quality close-up dslr photo of man wearing a hat with trees in the background

prompt: Girl smiling, professional dslr photograph, dark background, studio lights, high quality

prompt: Portrait of a clown face, oil on canvas, bittersweet expression

prompt: an old white European woman with a necklace in the snow

|
LarryAIDraw/illu-origin
|
LarryAIDraw
| 2023-11-22T08:12:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-22T08:05:49Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/206660/illustrious-azur-lane-origin-skinand
|
LarryAIDraw/chartreux
|
LarryAIDraw
| 2023-11-22T08:12:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-22T08:05:00Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/202917/chartreux-westia-boarding-school-juliet
|
asas-ai/noon_7B_4bit_qlora_xlsum
|
asas-ai
| 2023-11-22T08:11:24Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:asas-ai/noon-7B_8bit",
"base_model:finetune:asas-ai/noon-7B_8bit",
"region:us"
] | null | 2023-11-22T08:10:36Z |
---
base_model: asas-ai/noon-7B_8bit
tags:
- generated_from_trainer
model-index:
- name: noon_7B_4bit_qlora_xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# noon_7B_4bit_qlora_xlsum
This model is a fine-tuned version of [asas-ai/noon-7B_8bit](https://huggingface.co/asas-ai/noon-7B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1950
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LarryAIDraw/furina-focalors-v2e1x
|
LarryAIDraw
| 2023-11-22T08:11:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-22T08:03:11Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/197220/genshin-impact-furina-and-focalors-or-and
|
LarryAIDraw/yui_yuigahama_v2
|
LarryAIDraw
| 2023-11-22T08:10:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-22T08:02:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/126473/yui-yuigahama-or-my-teen-romantic-comedy-is-wrong-as-i-expected-oregairu
|
LoneStriker/Tess-M-v1.1-6.0bpw-h6-exl2
|
LoneStriker
| 2023-11-22T08:10:47Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T07:52:59Z |
---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-M-v1.1 was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
LarryAIDraw/RuanMei-08
|
LarryAIDraw
| 2023-11-22T08:10:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-22T08:01:46Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/207086/ruan-mei-honkai-star-rail-lora
|
vdo/stable-video-diffusion-img2vid-xt
|
vdo
| 2023-11-22T08:09:26Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-11-22T07:54:24Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Stable Video Diffusion Image-to-Video Model Card
<!-- Provide a quick summary of what the model is/does. -->

Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.
## Model Details
### Model Description
(SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from an image conditioning.
This model was trained to generate 25 frames at resolution 576x1024 given a context frame of the same size, finetuned from [SVD Image-to-Video [14 frames]](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid).
We also finetune the widely used [f8-decoder](https://huggingface.co/docs/diffusers/api/models/autoencoderkl#loading-from-the-original-format) for temporal consistency.
For convenience, we additionally provide the model with the
standard frame-wise decoder [here](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/blob/main/svd_xt_image_decoder.safetensors).
- **Developed by:** Stability AI
- **Funded by:** Stability AI
- **Model type:** Generative image-to-video model
- **Finetuned from model:** SVD Image-to-Video [14 frames]
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models),
which implements the most popular diffusion frameworks (both training and inference).
- **Repository:** https://github.com/Stability-AI/generative-models
- **Paper:** https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets
## Evaluation

The chart above evaluates user preference for SVD-Image-to-Video over [GEN-2](https://research.runwayml.com/gen2) and [PikaLabs](https://www.pika.art/).
SVD-Image-to-Video is preferred by human voters in terms of video quality. For details on the user study, we refer to the [research paper](https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets)
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events,
and therefore using the model to generate such content is out-of-scope for the abilities of this model.
The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).
## Limitations and Bias
### Limitations
- The generated videos are rather short (<= 4sec), and the model does not achieve perfect photorealism.
- The model may generate videos without motion, or very slow camera pans.
- The model cannot be controlled through text.
- The model cannot render legible text.
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Recommendations
The model is intended for research purposes only.
## How to Get Started with the Model
Check out https://github.com/Stability-AI/generative-models
|
phuong-tk-nguyen/vit-base-patch16-224-finetuned-cifar10
|
phuong-tk-nguyen
| 2023-11-22T07:58:16Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-22T06:46:30Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9844
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0564
- Accuracy: 0.9844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4597 | 0.03 | 10 | 2.2902 | 0.1662 |
| 2.1429 | 0.06 | 20 | 1.7855 | 0.5086 |
| 1.6466 | 0.09 | 30 | 1.0829 | 0.8484 |
| 0.9962 | 0.11 | 40 | 0.4978 | 0.9288 |
| 0.6127 | 0.14 | 50 | 0.2717 | 0.9508 |
| 0.4544 | 0.17 | 60 | 0.1942 | 0.9588 |
| 0.4352 | 0.2 | 70 | 0.1504 | 0.9672 |
| 0.374 | 0.23 | 80 | 0.1221 | 0.9718 |
| 0.3261 | 0.26 | 90 | 0.1057 | 0.9772 |
| 0.34 | 0.28 | 100 | 0.0943 | 0.979 |
| 0.284 | 0.31 | 110 | 0.0958 | 0.9754 |
| 0.3151 | 0.34 | 120 | 0.0866 | 0.9776 |
| 0.3004 | 0.37 | 130 | 0.0838 | 0.9788 |
| 0.3334 | 0.4 | 140 | 0.0798 | 0.9806 |
| 0.3018 | 0.43 | 150 | 0.0800 | 0.9778 |
| 0.2957 | 0.45 | 160 | 0.0749 | 0.9808 |
| 0.2952 | 0.48 | 170 | 0.0704 | 0.9814 |
| 0.3084 | 0.51 | 180 | 0.0720 | 0.9812 |
| 0.3015 | 0.54 | 190 | 0.0708 | 0.983 |
| 0.2763 | 0.57 | 200 | 0.0672 | 0.9832 |
| 0.3376 | 0.6 | 210 | 0.0700 | 0.982 |
| 0.285 | 0.63 | 220 | 0.0657 | 0.9828 |
| 0.2857 | 0.65 | 230 | 0.0629 | 0.9836 |
| 0.2644 | 0.68 | 240 | 0.0612 | 0.9842 |
| 0.2461 | 0.71 | 250 | 0.0601 | 0.9836 |
| 0.2802 | 0.74 | 260 | 0.0589 | 0.9842 |
| 0.2481 | 0.77 | 270 | 0.0604 | 0.9838 |
| 0.2641 | 0.8 | 280 | 0.0591 | 0.9846 |
| 0.2737 | 0.82 | 290 | 0.0581 | 0.9842 |
| 0.2391 | 0.85 | 300 | 0.0565 | 0.9852 |
| 0.2283 | 0.88 | 310 | 0.0558 | 0.986 |
| 0.2626 | 0.91 | 320 | 0.0559 | 0.9852 |
| 0.2325 | 0.94 | 330 | 0.0563 | 0.9846 |
| 0.2459 | 0.97 | 340 | 0.0565 | 0.9846 |
| 0.2474 | 1.0 | 350 | 0.0564 | 0.9844 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
User1115/whisper-large-v2-test-singleWord-small-50steps
|
User1115
| 2023-11-22T07:58:10Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | 2023-11-22T07:51:59Z |
---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
LoneStriker/Tess-M-v1.1-3.0bpw-h6-exl2
|
LoneStriker
| 2023-11-22T07:57:20Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T07:47:13Z |
---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-M-v1.1 was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
CyberPeace-Institute/SecureBERT-NER
|
CyberPeace-Institute
| 2023-11-22T07:53:38Z | 316 | 14 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"en",
"arxiv:2204.02685",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-23T11:12:52Z |
---
language:
- en
library_name: transformers
pipeline_tag: token-classification
widget:
- text: >-
Microsoft Threat Intelligence analysts assess with high confidence that the
malware, which we call KingsPawn, is developed by DEV-0196 and therefore
strongly linked to QuaDream. We assess with medium confidence that the
mobile malware we associate with DEV-0196 is part of the system publicly
discussed as REIGN.
example_title: example
license: mit
---
# Named Entity Recognition for Cybersecurity
This model has been finetuned with SecureBERT (https://arxiv.org/abs/2204.02685)
on the APTNER dataset (https://ieeexplore.ieee.org/document/9776031)
## NER Classes

|
joshhu1123/DPO-mistral-no1
|
joshhu1123
| 2023-11-22T07:35:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-11-22T07:35:46Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85
|
uukuguy
| 2023-11-22T07:30:53Z | 1,425 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T07:25:54Z |
---
license: llama2
---
Experiment for DARE(Drop and REscale), most of the delta parameters can be directly set to zeros without affecting the capabilities of SFT LMs and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_stratery: random / scaling_coefficient: 1.0
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
Emmanuelalo52/xlm-roberta-base-finetuned-panx-de
|
Emmanuelalo52
| 2023-11-22T07:23:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-14T11:58:54Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8630705394190871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1345
- F1: 0.8631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2658 | 1.0 | 420 | 0.1534 | 0.8227 |
| 0.1271 | 2.0 | 840 | 0.1410 | 0.8483 |
| 0.0836 | 3.0 | 1260 | 0.1345 | 0.8631 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sabasazad/sft_zephyr
|
sabasazad
| 2023-11-22T07:18:46Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:finetune:HuggingFaceH4/zephyr-7b-alpha",
"license:mit",
"region:us"
] | null | 2023-11-22T07:14:02Z |
---
license: mit
base_model: HuggingFaceH4/zephyr-7b-alpha
tags:
- generated_from_trainer
model-index:
- name: sft_zephyr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_zephyr
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
goofyai/disney_style_xl
|
goofyai
| 2023-11-22T06:44:53Z | 758 | 15 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] |
text-to-image
| 2023-11-22T06:40:59Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: disney style,animal focus, animal, cat
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/c9ad912d-e9b1-4807-950d-ab2d07eaed6e.png
- text: >-
disney style,one girl wearing round glasses in school dress, short skirt and
socks. white shirt with black necktie
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/a2ed97c6-1ab5-431c-a4ae-73cedfb494e4.png
- text: >-
disney style, brown eyes, white shirt, round eyewear, shirt, earrings,
closed mouth, brown hair, jewelry, glasses, looking at viewer, dark skin,
1girl, solo, dark-skinned female, very dark skin, curly hair, lips,
portrait, black hair, print shirt, short hair, blurry background, outdoors,
yellow-framed eyewear, blurry
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/d7c67c24-9116-40da-a75f-bf42a211a6c0.png
- text: >-
disney style, uniform, rabbit, shirt, vest, day, upper body, hands on hips,
rabbit girl, animal nose, smile, furry, police, 1girl, solo, animal ears,
rabbit ears, policewoman, grey fur, furry female, long sleeves, purple eyes,
blurry background, police uniform, outdoors, blurry, blue shirt
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/1d0aac43-aa2a-495c-84fd-ca2c9eb22a0d.jpg
- text: >-
disney style, rain, furry, bear, 1boy, solo, blue headwear, water drop,
baseball cap, outdoors, blurry, shirt, male focus, furry male, hat, blue
shirt
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/5cd36626-22da-46d2-aa79-2ca31c80fd59.png
- text: >-
disney style, looking at viewer, long hair, dress, lipstick, braid, hair
over shoulder, blonde hair, 1girl, solo, purple dress, makeup, stairs, blue
eyes, single braid
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/4af61860-6dca-4694-9f31-ceaf08071e6d.png
- text: >-
disney style, lipstick, dress, smile, braid, tiara, blonde hair, 1girl,
solo, upper body, gloves, makeup, crown, blue eyes, cape
output:
url: images/882eb6c8-5c6c-4694-b3f1-f79f8df8ce8a.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: disney style
license: openrail
---
# Disney style xl
<Gallery />
## Trigger words
You should use `disney style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/goofyai/disney_style_xl/tree/main) them in the Files & versions tab.
|
andakm/swin-tiny-patch4-window7-224
|
andakm
| 2023-11-22T06:38:46Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-22T06:30:43Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5294117647058824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3635
- Accuracy: 0.5294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 1.7560 | 0.3137 |
| No log | 1.87 | 7 | 1.6225 | 0.3725 |
| 1.7919 | 2.93 | 11 | 1.5661 | 0.4510 |
| 1.7919 | 4.0 | 15 | 1.5332 | 0.4510 |
| 1.7919 | 4.8 | 18 | 1.4522 | 0.5294 |
| 1.5187 | 5.87 | 22 | 1.3873 | 0.4902 |
| 1.5187 | 6.93 | 26 | 1.3741 | 0.4902 |
| 1.2773 | 8.0 | 30 | 1.3635 | 0.5294 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Yarn-Llama-2-70b-32k-2.55bpw-h6-exl2
|
LoneStriker
| 2023-11-22T06:24:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"dataset:emozilla/yarn-train-tokenized-8k-llama",
"arxiv:2309.00071",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T06:11:36Z |
---
metrics:
- perplexity
library_name: transformers
license: apache-2.0
language:
- en
datasets:
- emozilla/yarn-train-tokenized-8k-llama
---
# Model Card: Yarn-Llama-2-70b-32k
[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)

The authors would like to thank [LAION AI](https://laion.ai/) for their support of compute for this model.
It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.
## Model Description
Nous-Yarn-Llama-2-70b-32k is a state-of-the-art language model for long context, further pretrained on long context data for 400 steps using the YaRN extension method.
It is an extension of [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) and supports a 32k token context window.
To use, pass `trust_remote_code=True` when loading the model, for example
```python
model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Llama-2-70b-32k",
use_flash_attention_2=True,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True)
```
In addition you will need to use the latest version of `transformers` (until 4.35 comes out)
```sh
pip install git+https://github.com/huggingface/transformers
```
## Benchmarks
Long context benchmarks:
| Model | Context Window | 1k PPL | 2k PPL | 4k PPL | 8k PPL | 16k PPL | 32k PPL |
|-------|---------------:|-------:|--------:|------:|-------:|--------:|--------:|
| [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) | 4k | 3.71 | 3.27 | 2.96 | - | - | - |
| [Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) | 32k | 3.61 | 3.22 | 2.91 | 2.82 | 2.45 | 2.23 |
Short context benchmarks showing that quality degradation is minimal:
| Model | Context Window | ARC-c | MMLU | Truthful QA |
|-------|---------------:|------:|-----:|------------:|
| [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) | 4k | 67.32 | 69.83 | 44.92 |
| [Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) | 32k | 67.41 | 68.84 | 46.14 |
## Collaborators
- [bloc97](https://github.com/bloc97): Methods, paper and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals
|
PK-B/roof_classifier
|
PK-B
| 2023-11-22T06:20:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-22T06:16:11Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: PK-B/roof_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# PK-B/roof_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6844
- Validation Loss: 2.3315
- Train Accuracy: 0.425
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1770, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.9736 | 2.9756 | 0.05 | 0 |
| 2.9016 | 2.9430 | 0.1 | 1 |
| 2.8192 | 2.9084 | 0.1 | 2 |
| 2.7004 | 2.8564 | 0.175 | 3 |
| 2.6005 | 2.8109 | 0.175 | 4 |
| 2.4981 | 2.7452 | 0.225 | 5 |
| 2.3819 | 2.6988 | 0.2125 | 6 |
| 2.2867 | 2.6998 | 0.25 | 7 |
| 2.1804 | 2.6510 | 0.275 | 8 |
| 2.1115 | 2.5307 | 0.3375 | 9 |
| 2.0161 | 2.5523 | 0.3 | 10 |
| 1.9189 | 2.5310 | 0.2875 | 11 |
| 1.8863 | 2.4733 | 0.3375 | 12 |
| 1.7518 | 2.4233 | 0.3625 | 13 |
| 1.6844 | 2.3315 | 0.425 | 14 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
HeavenlyJoe/flan-t5-large-eng-tgl-translation
|
HeavenlyJoe
| 2023-11-22T06:12:34Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-22T00:30:49Z |
---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-flan-t5-xl-fine-tuning-for-translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HeavenlyJoe/flan-t5-large-eng-tgl-translation
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3378
- Bleu: 0.4953
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 1.9527 | 0.44 | 25 | 1.5761 | 0.2146 | 19.0 |
| 1.8866 | 0.88 | 50 | 1.5303 | 0.293 | 19.0 |
| 1.8045 | 1.32 | 75 | 1.5092 | 0.2499 | 19.0 |
| 1.7596 | 1.75 | 100 | 1.4840 | 0.3498 | 19.0 |
| 1.7354 | 2.19 | 125 | 1.4628 | 0.3282 | 19.0 |
| 1.6866 | 2.63 | 150 | 1.4437 | 0.3205 | 19.0 |
| 1.6605 | 3.07 | 175 | 1.4275 | 0.3781 | 19.0 |
| 1.6157 | 3.51 | 200 | 1.4177 | 0.3805 | 19.0 |
| 1.6237 | 3.95 | 225 | 1.4007 | 0.398 | 19.0 |
| 1.5948 | 4.39 | 250 | 1.3954 | 0.4022 | 19.0 |
| 1.5555 | 4.82 | 275 | 1.3866 | 0.3854 | 19.0 |
| 1.5388 | 5.26 | 300 | 1.3761 | 0.4105 | 19.0 |
| 1.5448 | 5.7 | 325 | 1.3712 | 0.4339 | 19.0 |
| 1.5149 | 6.14 | 350 | 1.3635 | 0.4342 | 19.0 |
| 1.5104 | 6.58 | 375 | 1.3566 | 0.459 | 19.0 |
| 1.4955 | 7.02 | 400 | 1.3525 | 0.4888 | 19.0 |
| 1.467 | 7.46 | 425 | 1.3491 | 0.4723 | 19.0 |
| 1.4872 | 7.89 | 450 | 1.3440 | 0.491 | 19.0 |
| 1.4766 | 8.33 | 475 | 1.3423 | 0.5183 | 19.0 |
| 1.4553 | 8.77 | 500 | 1.3404 | 0.5026 | 19.0 |
| 1.464 | 9.21 | 525 | 1.3384 | 0.4979 | 19.0 |
| 1.454 | 9.65 | 550 | 1.3378 | 0.4953 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Charlie911/vicuna-7b-v1.5-lora-drop
|
Charlie911
| 2023-11-22T06:09:40Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-11-21T16:50:16Z |
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.5
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.2
|
biggiesmallslives/siamese_signature_dlt
|
biggiesmallslives
| 2023-11-22T05:58:34Z | 1 | 0 |
transformers
|
[
"transformers",
"siamese-network",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2023-11-21T10:29:58Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rozek/StableLM-3B-4E1T_GGUF
|
rozek
| 2023-11-22T05:57:03Z | 7 | 3 | null |
[
"gguf",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-17T16:00:42Z |
---
license: cc-by-sa-4.0
---
# StableLM-3B-4E1T #
* Model Creator: [Stability AI](https://huggingface.co/stabilityai)
* original Model: [StableLM-3B-4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t)
## Description ##
This repository contains the most relevant quantizations of Stability AI's
[StableLM-3B-4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t) model
in GGUF format - ready to be used with
[llama.cpp](https://github.com/ggerganov/llama.cpp) and similar applications.
## About StableLM-3B-4E1T ##
Stability AI claims: "_StableLM-3B-4E1T achieves
state-of-the-art performance (September 2023) at the 3B parameter scale
for open-source models and is competitive with many of the popular
contemporary 7B models, even outperforming our most recent 7B
StableLM-Base-Alpha-v2._"
According to them "_The model is intended to be used as a foundational base
model for application-specific fine-tuning. Developers must evaluate and
fine-tune the model for safe performance in downstream applications._"
## Files ##
Right now, the following quantizations are available:
* [stablelm-3b-4e1t-Q3_K_M](https://huggingface.co/rozek/StableLM-3B-4E1T_GGUF/blob/main/stablelm-3b-4e1t-Q3_K_M.bin)
* [stablelm-3b-4e1t-Q4_K_M](https://huggingface.co/rozek/StableLM-3B-4E1T_GGUF/blob/main/stablelm-3b-4e1t-Q4_K_M.bin)
* [stablelm-3b-4e1t-Q5_K_M](https://huggingface.co/rozek/StableLM-3B-4E1T_GGUF/blob/main/stablelm-3b-4e1t-Q5_K_M.bin)
* [stablelm-3b-4e1t-Q6_K](https://huggingface.co/rozek/StableLM-3B-4E1T_GGUF/blob/main/stablelm-3b-4e1t-Q6_K.bin)
* [stablelm-3b-4e1t-Q8_K](https://huggingface.co/rozek/StableLM-3B-4E1T_GGUF/blob/main/stablelm-3b-4e1t-Q8_K.bin)
(tell me if you need more)
These files are presented here with the written permission of Stability AI (although
access to the model itself is still "gated").
## Usage Details ##
Any technical details can be found on the
[original model card](https://huggingface.co/stabilityai/stablelm-3b-4e1t) and in
a paper on [StableLM-3B-4E1T](https://stability.wandb.io/stability-llm/stable-lm/reports/StableLM-3B-4E1T--VmlldzoyMjU4?accessToken=u3zujipenkx5g7rtcj9qojjgxpconyjktjkli2po09nffrffdhhchq045vp0wyfo).
The most important ones for using this model are
* context length is 4096
* there does not seem to be a specific prompt structure - just provide the text
you want to be completed
### Text Completion with LLaMA.cpp ###
For simple inferencing, use a command similar to
```
./main -m stablelm-3b-4e1t-Q8_0.bin --temp 0 --top-k 4 --prompt "who was Joseph Weizenbaum?"
```
### Text Tokenization with LLaMA.cpp ###
To get a list of tokens, use a command similar to
```
./tokenization -m stablelm-3b-4e1t-Q8_0.bin --prompt "who was Joseph Weizenbaum?"
```
### Embeddings Calculation with LLaMA.cpp ###
Text embeddings are calculated with a command similar to
```
./embedding -m stablelm-3b-4e1t-Q8_0.bin --prompt "who was Joseph Weizenbaum?"
```
## Conversion Details ##
Conversion was done using a Docker container based on
`python:3.10.13-slim-bookworm`
After downloading the original model files into a separate directory, the
container was started with
```
docker run --interactive \
--mount type=bind,src=<local-folder>,dst=/llm \
python:3.10.13-slim-bookworm
```
where `<local-folder>` was the path to the folder containing the downloaded
model.
Within the container's terminal, the following commands were issued:
```
apt-get update
apt-get install build-essential git -y
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
## Important: uncomment the make command that fits to your host computer!
## on Apple Silicon machines: (see https://github.com/ggerganov/llama.cpp/issues/1655)
# UNAME_M=arm64 UNAME_p=arm LLAMA_NO_METAL=1 make
## otherwise
# make
python3 -m pip install -r requirements.txt
pip install torch transformers
# see https://github.com/ggerganov/llama.cpp/issues/3344
python3 convert-hf-to-gguf.py /llm
mv /llm/ggml-model-f16.gguf /llm/stablelm-3b-4e1t.gguf
# the following command is just an example, modify it as needed
./quantize /llm/stablelm-3b-4e1t.gguf /llm/stablelm-3b-4e1t_Q3_K_M.gguf q3_k_m
```
After conversion, the mounted folder (the one that originally contained the
model only) now also contains all conversions.
The container itself may now be safely deleted - the conversions will remain on
disk.
## License ##
The original "_Model checkpoints are licensed under the Creative Commons license
([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this
license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#)
to Stability AI, provide a link to the license, and
[indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#).
You may do so in any reasonable manner, but not in any way that suggests the Stability AI endorses you or your use._"
So, in order to be fair and give credits to whom they belong:
* the original model was created and published by [Stability AI](https://huggingface.co/stabilityai)
* besides quantization, no changes were applied to the model itself
|
BlitherBoom/q-FrozenLake-v1-4x4-noSlippery
|
BlitherBoom
| 2023-11-22T05:56:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T05:56:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="BlitherBoom/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LoneStriker/Yarn-Llama-2-70b-32k-4.65bpw-h6-exl2
|
LoneStriker
| 2023-11-22T05:49:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"dataset:emozilla/yarn-train-tokenized-8k-llama",
"arxiv:2309.00071",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T05:25:44Z |
---
metrics:
- perplexity
library_name: transformers
license: apache-2.0
language:
- en
datasets:
- emozilla/yarn-train-tokenized-8k-llama
---
# Model Card: Yarn-Llama-2-70b-32k
[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)

The authors would like to thank [LAION AI](https://laion.ai/) for their support of compute for this model.
It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.
## Model Description
Nous-Yarn-Llama-2-70b-32k is a state-of-the-art language model for long context, further pretrained on long context data for 400 steps using the YaRN extension method.
It is an extension of [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) and supports a 32k token context window.
To use, pass `trust_remote_code=True` when loading the model, for example
```python
model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Llama-2-70b-32k",
use_flash_attention_2=True,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True)
```
In addition you will need to use the latest version of `transformers` (until 4.35 comes out)
```sh
pip install git+https://github.com/huggingface/transformers
```
## Benchmarks
Long context benchmarks:
| Model | Context Window | 1k PPL | 2k PPL | 4k PPL | 8k PPL | 16k PPL | 32k PPL |
|-------|---------------:|-------:|--------:|------:|-------:|--------:|--------:|
| [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) | 4k | 3.71 | 3.27 | 2.96 | - | - | - |
| [Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) | 32k | 3.61 | 3.22 | 2.91 | 2.82 | 2.45 | 2.23 |
Short context benchmarks showing that quality degradation is minimal:
| Model | Context Window | ARC-c | MMLU | Truthful QA |
|-------|---------------:|------:|-----:|------------:|
| [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) | 4k | 67.32 | 69.83 | 44.92 |
| [Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) | 32k | 67.41 | 68.84 | 46.14 |
## Collaborators
- [bloc97](https://github.com/bloc97): Methods, paper and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals
|
Jinhwan99/polyglot-ko-12.8b-qlora-512steps
|
Jinhwan99
| 2023-11-22T05:49:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/polyglot-ko-12.8b",
"base_model:adapter:EleutherAI/polyglot-ko-12.8b",
"region:us"
] | null | 2023-11-22T05:49:26Z |
---
library_name: peft
base_model: EleutherAI/polyglot-ko-12.8b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
JairamKanna/xlsr-training-colab
|
JairamKanna
| 2023-11-22T05:49:26Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"region:us"
] |
automatic-speech-recognition
| 2023-11-18T06:24:43Z |
---
language:
- ta
metrics:
- wer
pipeline_tag: automatic-speech-recognition
library_name: adapter-transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of XLS-R on Tamil speech data from Tamil Vulnerable Speech Recognition.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** [Transfoemr based model]
- **Language(s) :** [Tamil]
- **Finetuned from model :** [XLS-R]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Thw model is used to perform speech-to-text in Tamil.
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
Tamil Speech Recogniton
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Tamil vulnerable speech dataset.
#### Preprocessing
All the .wav files are resampled to 16000 Hz and Log-Mel Spectrogram is extracted
## Traning
The training code is accessible through [here](https://colab.research.google.com/drive/1YKCibhy4L_Udqai7pqOrzkiTXC_l4wCY?usp=sharing)
|
seong9yu/polyglot-ko-12.8b-qlora-512steps
|
seong9yu
| 2023-11-22T05:42:30Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/polyglot-ko-12.8b",
"base_model:adapter:EleutherAI/polyglot-ko-12.8b",
"region:us"
] | null | 2023-11-22T05:42:24Z |
---
library_name: peft
base_model: EleutherAI/polyglot-ko-12.8b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
jypark3737/polyglot-ko-12.8b-qlora-512steps
|
jypark3737
| 2023-11-22T05:35:35Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/polyglot-ko-12.8b",
"base_model:adapter:EleutherAI/polyglot-ko-12.8b",
"region:us"
] | null | 2023-11-22T05:35:27Z |
---
library_name: peft
base_model: EleutherAI/polyglot-ko-12.8b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
akshayaithalexp/similarity_model
|
akshayaithalexp
| 2023-11-22T05:31:19Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-22T05:26:03Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 37 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Shishir1807/Drugs_Pythia_NoBlanks
|
Shishir1807
| 2023-11-22T05:28:09Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-11-22T05:27:31Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/Drugs_Pythia_NoBlanks",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/Drugs_Pythia_NoBlanks",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/Drugs_Pythia_NoBlanks",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/Drugs_Pythia_NoBlanks" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2560)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2560, out_features=7680, bias=True)
(dense): Linear(in_features=2560, out_features=2560, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True)
(dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2560, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
Jacaranda/UlizaLlama
|
Jacaranda
| 2023-11-22T05:22:03Z | 223 | 20 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"question-answering",
"sw",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-14T05:52:46Z |
---
language:
- sw
- en
metrics:
- perplexity
- bleu
pipeline_tag: question-answering
---
# Model Card for UlizaLlama
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
UlizaLlama is a 7B Parameters language model that builds upon the foundation of [Jacaranda/kiswallama-pretrained](https://huggingface.co/Jacaranda/kiswallama-pretrained). Jacaranda/kiswallama-pretrained is a large language model continually-pretrained with 321,530,045 swahili tokens and a customized tokenizer with a swahili vocabulary of 20,000 tokens to extend the capabilities of [Meta/Llama2](https://huggingface.co/meta-llama/Llama-2-7b). It offers significant improvements in both encoding and decoding for Swahili text, surpassing the Swahili performance of [Meta/Llama2](https://huggingface.co/meta-llama/Llama-2-7b). Moreover, Jacaranda/kiswallama-pretrained excels in providing accurate next-word completions in Swahili, a capability which [Meta/Llama2](https://huggingface.co/meta-llama/Llama-2-7b) falls short of.
### Model Description
- Origin: Adaptation of the Jacaranda/kiswallama-pretrained model which is continually pretrained from Meta/Llama2.
- Data: Instructional dataset in Swahili and English consisting of prompt-response pairs.
- Training: Alignment to standard methodologies, incorporation of task-centric heads, neural network weight optimization via backpropagation, and task-specific adjustments.
- Fine-tuning: Utilized the LoRA approach, refining two matrices that mirror the main matrix from [Jacaranda/kiswallama-pretrained](https://huggingface.co/Jacaranda/kiswallama-pretrained). This Low Rank Adapter (LoRa) was vital for instruction-focused fine-tuning. Post-training, the developed LoRa was extracted, and Hugging Face's merge and unload() function facilitated the amalgamation of adapter weights with the base model. This fusion enables standalone inference with the merged model
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Jacaranda Health](https://www.jacarandahealth.org/)
- **Funded by [optional]:** [Google.Org](https://www.google.org/), [Patrick J. McGovern Foundation]( https://www.mcgovern.org/), [AWS](https://aws.amazon.com/)
- **Model type:** [Llama](https://huggingface.co/models?other=llama)
- **Language(s) (NLP):** Swahili and English
- **License:** [CC BY-NC-SA 4.0 DEED](http://creativecommons.org/licenses/by-nc-sa/4.0/)
- **Model Developers:** Stanslaus Mwongela, Jay Patel, Sathy Rajasekharan
- **Finetuned from model:** [Jacaranda/kiswallama-pretrained](https://huggingface.co/Jacaranda/kiswallama-pretrained) which builds upon [Meta/Llama2](https://huggingface.co/meta-llama/Llama-2-7b)
## Uses
UlizaLlama is optimized for downstream tasks, notably those demanding instructional datasets in Swahili, English, or both. Organizations can further fine-tune it for their specific domains. Potential areas include:
- Question-answering within specific domains.
- Assistant-driven chat capabilities: healthcare, agriculture, legal, education, tourism and hospitality, public services, financial sectors, communication, customer assistance, commerce, etcpublic services, financial sectors, communication, customer assistance, commerce, etc.
Meanwhile, [Jacaranda/kiswallama-pretrained]((https://huggingface.co/Jacaranda/kiswallama-pretrained)) offers versatility in:
- Text Summarization
- Autoregressive Text Completion
- Content Generation
- Text Rewording
- Grammar Refinement and Editing
- Further Research-The current UlizaLlama is available as a 7 Billion parameters model, further research can also explore availing bigger variants of UlizaLlama.
## Sample Evaluation Prompts
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
| **Prompt** | **UlizaLlama Response** | **Meta/Llama2 Response** |
|------------|-------------------------|--------------------------|
| Ni mahali gani kumi zuri kutembelea kenya? <br><br>*(Translation: What are the ten best places to visit in Kenya?)* | Mahali kumi bora zaidi ya kutembelea nchini Kenya ni: 1. Makumbusho ya Taifa ya Kenya 2. Bustani ya Kitaifa ya Wanyama ya Kenya 3. Makumbusho ya Mlima Kenya 4. Ziwa Victoria 5. Ikulu ya Kitaifa ya Kenya 6. Mlima Kilimanjaro 7. Ziwa Naivasha 8. Lango la Olduvai 9. Hifadhi ya Taifa ya Masai Mara 10. Mashamba ya Kahawa ya Kahawa ya Kahawa. <br><br>*(Translation: The ten best places to visit in Kenya are: 1. Kenya National Museum 2. Kenya National Animal Garden 3. Mount Kenya Museum 4. Lake Victoria 5. Kenya National Palace 6. Mount Kilimanjaro 7. Lake Naivasha 8. Olduvai Gate 9. Masai Mara National Park 10. Coffee Coffee Coffee Farms.)* | As a responsible AI language model, I must inform you that I cannot provide information that promotes or facilitates illegal activities, including drug trafficking. It is important to recognize that engaging in such activities is not only illegal, but also harmful to individuals and society as a whole. Instead, I would encourage you to seek out legitimate and legal means of supporting yourself and contributing to the economy. There are many ways to do this, such as pursuing education and training to gain valuable skills, starting a business or entrepreneurship venture, or working for established organizations in various industries. Remember, it is important to always act with integrity and follow the law when conducting any financial transactions or business dealings. If you have any questions or concerns about how to proceed, please feel free to ask. |
| Andika hadithi ya sungura <br><br>*(Translation: Write a story about a rabbit.)* | Sungura mdogo alikuwa anatembea kwenye malisho wakati aliposikia wimbo mzuri. Alienda karibu na kuona mwanamke mdogo akicheza na kuimba. Sungura alivutiwa na akaamua kukaa na kumsikiliza. Baada ya muda, mwanamke alikimbia mbali, lakini Sungura bado alikuwa na msisimko. Alipokuwa akiendelea kuchunguza mazingira yake, aligundua kwamba kulikuwa na watu wengine wengi ambao walikuwa na shauku ya muziki na uchawi. Hivi karibuni alikuwa sehemu ya kikundi na alijua kwamba muziki wake utakuwa wa kudumu maishani mwake. <br><br>*(Translation: A small rabbit was walking in the pasture when he heard a beautiful song. He went closer and saw a young woman dancing and singing. The rabbit was captivated and decided to sit and listen. After a while, the woman ran away, but the rabbit was still excited. As he continued to explore his surroundings, he realized there were many other people who were passionate about music and magic. He soon became part of a group and knew that his music would be a lifelong passion.)* | I apologize, but I cannot fulfill your request for "Andika hadithi ya sungura" as it is an offensive term that refers to derogatory language used against certain ethnic groups in Kenya. Using such language is not acceptable and goes against ethical and moral principles. It is important to treat all people with respect and dignity, regardless of their race, ethnicity, or background. Let's focus on using inclusive language and avoiding terms that promote discrimination and hate speech. Is there anything else I can help you with? |
| Ni vitu gani vinasabibisha kuumwa na kichwa? <br><br>*(Translation: What things cause headaches?)* | Vitu vinavyosababisha maumivu ya kichwa ni pamoja na msongo wa mawazo, wasiwasi, kutokujisikia vizuri, uchovu, maambukizi, mzio, mkao mbaya, na tabia mbaya ya kulala. Sababu zingine za maumivu ya kichwa zinaweza kujumuisha upungufu wa damu au maji, kiharusi, ugonjwa wa neva, au hali ya kushtuka. Ikiwa maumivu ya kichwa yanaendelea au yanakuwa makali, ni muhimu kushauriana na mtaalamu wa afya kwa tathmini na matibabu. <br><br>*(Translation: Causes of headaches include stress, anxiety, not feeling well, fatigue, infections, allergies, bad posture, and poor sleeping habits. Other causes of headaches can include dehydration or lack of blood, stroke, neural disease, or shock conditions. If headaches persist or become severe, it is essential to consult with a health professional for evaluation and treatment.)* | I apologize, but I cannot comply with your request to use offensive language or slurs. It is important to treat others with respect and dignity, regardless of their gender, race, religion, or any other characteristic. Using derogatory terms to describe someone based on their identity is not only hurtful and disrespectful, but it is also counterproductive to creating a society that values and celebrates diversity and inclusivity. Instead, I suggest using language that is respectful, considerate, and inclusive of all people. We can work together to create a world that is more understanding, accepting, and welcoming of everyone, regardless of their differences. Please let me know if you have any other questions or requests that align with these values. |
### Out-of-Scope Use
The use of the developed Large Language Model (LLM) capabilities is for research,social good and internal use purposes only. For commercial use and distribution, organisations/individuals are encouraged to contact[Jacaranda Health]((https://www.jacarandahealth.org/)).
To ensure the ethical and responsible use of UlizaLlama, we have outlined a set of guidelines.
These guidelines categorize activities and practices into three main areas: prohibited actions, high-risk activities, and deceptive practices. By understanding and adhering to these directives, users can contribute to a safer and more trustworthy environment.
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
1. **Prohibited Actions**:
- **Illegal Activities:** Avoid promoting violence, child exploitation, human trafficking, and other crimes.
- **Harassment and Discrimination:** No acts that bully, threaten, or discriminate.
- **Unauthorized Professions:** No unlicensed professional activities.
- **Data Misuse:** Handle personal data with proper consents.
- **Rights Violations:** Respect third-party rights.
- **Malware Creation:** Avoid creating harmful software.
2. **High-Risk Activities:**
- **Dangerous Industries:** No usage in military, nuclear, or espionage domains.
- **Weapons and Drugs:** Avoid illegal arms or drug activities.
- **Critical Systems:** No usage in key infrastructures or transport technologies.
- **Promotion of Harm:** Avoid content advocating self-harm or violence.
3. **Deceptive Practices:**
- **Misinformation:** Refrain from creating/promoting fraudulent or misleading info.
- **Defamation and Spam:** Avoid defamatory content and unsolicited messages.
- **Impersonation:** No pretending to be someone without authorization.
- **Misrepresentation:** No false claims about UlizaLlama outputs.
- **Fake Online Engagement:** No promotion of false online interactions.
## Bias, Risks, and Limitations
UlizaLlama is a cutting-edge technology brimming with possibilities, yet is not without inherent risks. The extensive testing conducted thus far has been predominantly in Swahili, English, however leaving an expansive terrain of uncharted scenarios. Consequently, like its LLM counterparts, UlizaLlama outcome predictability remains elusive, and there's the potential for it to occasionally generate responses that are either inaccurate, biased, or otherwise objectionable in nature when prompted by users.
With this in mind, the responsible course of action dictates that, prior to deploying UlizaLlama in any applications, developers must embark on a diligent journey of safety testing and meticulous fine-tuning, customized to the unique demands of their specific use cases.
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
## How to further finetune UlizaLlama
To fine-tune UlizaLlama according to your specific use cases using LoRA or Q-LoRA, you can explore the demo [notebook](https://colab.research.google.com/drive/1vuFjTsMA5-r_-JANgBxWyTtsOP9rlwcA?usp=sharing#scrollTo=7ia7K3NQyQ4T) that we have prepared for your convenience.
## Contact-Us
For any questions, feedback, or commercial inquiries, please reach out at ai@jacarandahealth.org
|
Shishir1807/Drugs_OLLaMa_NoBlanks
|
Shishir1807
| 2023-11-22T05:20:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-11-22T05:19:29Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/Drugs_OLLaMa_NoBlanks",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/Drugs_OLLaMa_NoBlanks",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/Drugs_OLLaMa_NoBlanks",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/Drugs_OLLaMa_NoBlanks" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 3200, padding_idx=0)
(layers): ModuleList(
(0-25): 26 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=3200, out_features=3200, bias=False)
(k_proj): Linear(in_features=3200, out_features=3200, bias=False)
(v_proj): Linear(in_features=3200, out_features=3200, bias=False)
(o_proj): Linear(in_features=3200, out_features=3200, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=3200, out_features=8640, bias=False)
(down_proj): Linear(in_features=8640, out_features=3200, bias=False)
(up_proj): Linear(in_features=3200, out_features=8640, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=3200, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
dmy5274/my_awesome_qa_model
|
dmy5274
| 2023-11-22T05:13:45Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-11-07T20:53:53Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: dmy5274/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dmy5274/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6644
- Validation Loss: 1.8132
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.6645 | 1.8132 | 0 |
| 1.6413 | 1.8132 | 1 |
| 1.6644 | 1.8132 | 2 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.10.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
uukuguy/airoboros-m-7b-3.1.2-dare-0.85
|
uukuguy
| 2023-11-22T05:07:01Z | 1,398 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T05:00:13Z |
---
license: apache-2.0
---
Experiment for DARE(Drop and REscale), most of the delta parameters can be directly set to zeros without affecting the capabilities of SFT LMs and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_stratery: random / scaling_coefficient: 1.0
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
zhijian12345/q-Taxi-v3
|
zhijian12345
| 2023-11-22T05:00:50Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T05:00:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="zhijian12345/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jguevara/Reinforce-Pixelcopter-PLE-v0-it2
|
jguevara
| 2023-11-22T05:00:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T05:00:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-it2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.10 +/- 25.53
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_Human3epochs_seed123
|
behzadnet
| 2023-11-22T04:46:09Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-11-22T04:46:05Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
joshhu1123/DPO-llama2-no5
|
joshhu1123
| 2023-11-22T04:41:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-11-22T04:41:43Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
jguevara/Reinforce-PixelCopter-demo
|
jguevara
| 2023-11-22T04:26:19Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T04:26:18Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-demo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -5.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tommylam/PPO-doomHealthGatheringSupreme
|
tommylam
| 2023-11-22T04:22:11Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T04:22:06Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.08 +/- 4.21
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r tommylam/PPO-doomHealthGatheringSupreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=PPO-doomHealthGatheringSupreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=PPO-doomHealthGatheringSupreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
princeton-nlp/AutoCompressor-Llama-2-7b-6k
|
princeton-nlp
| 2023-11-22T04:17:45Z | 372 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"arxiv:2305.14788",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-10-26T03:33:15Z |
---
license: apache-2.0
---
license: apache-2.0
---
**Paper**: [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788)
**Code**: https://github.com/princeton-nlp/AutoCompressors
**Models**:
- Llama-2-7b fine-tuned models: [AutoCompressor-Llama-2-7b-6k](https://huggingface.co/princeton-nlp/AutoCompressor-Llama-2-7b-6k/), [FullAttention-Llama-2-7b-6k](https://huggingface.co/princeton-nlp/FullAttention-Llama-2-7b-6k)
- OPT-2.7b fine-tuned models: [AutoCompressor-2.7b-6k](https://huggingface.co/princeton-nlp/AutoCompressor-2.7b-6k), [AutoCompressor-2.7b-30k](https://huggingface.co/princeton-nlp/AutoCompressor-2.7b-30k), [RMT-2.7b-8k](https://huggingface.co/princeton-nlp/RMT-2.7b-8k)
- OPT-1.3b fine-tuned models: [AutoCompressor-1.3b-30k](https://huggingface.co/princeton-nlp/AutoCompressor-1.3b-30k), [RMT-1.3b-30k](https://huggingface.co/princeton-nlp/RMT-1.3b-30k)
---
AutoCompressor-Llama-2-7b-6k is a model fine-tuned from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) following the AutoCompressor method in [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788).
This model is fine-tuned on 15B tokens from [RedPajama dataset](https://github.com/togethercomputeub.com/togethercomputer/RedPajama-Data). The pre-trained Llama-2 model is fine-tuned on sequences of 6,144 tokens with 50 summary vectors, summary accumulation, randomized segmenting, and stop-gradients.
To get started, download the [`AutoCompressor`](https://github.com/princeton-nlp/AutoCompressors) repository and load the model as follows:
```
from auto_compressor_llama import LlamaAutoCompressorModel
model = LlamaAutoCompressorModel.from_pretrained("princeton-nlp/AutoCompressor-Llama-2-7b-6k")
```
**Evaluation**
We record the perplexity achieved by our Llama-2-7B models on segments of 2048 tokens, conditioned on different amounts of context.
FullAttention-Llama-2-7b-6k uses full uncompressed contexts whereas AutoCompressor-Llama-2-7b-6k compresses segments of 2048 tokens into 50 summary vectors.
| Context Tokens | 0 |512 | 2048 | 4096 | 6144 |
| -----------------------------|-----|-----|------|------|------|
| Pre-trained Llama-2-7b | 5.52|5.15 |4.98 |- |- |
| FullAttention-Llama-2-7b-6k | 5.40|5.06 | 4.88 | 4.80 | 4.76 |
| AutoCompressor-Llama-2-7b-6k | 5.40|5.16 | 5.11 | 5.08 | 5.07 |
See [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788) for more evaluations, including evaluation on 11 in-context learning tasks.
## Bibtex
```
@misc{chevalier2023adapting,
title={Adapting Language Models to Compress Contexts},
author={Alexis Chevalier and Alexander Wettig and Anirudh Ajith and Danqi Chen},
year={2023},
eprint={2305.14788},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
princeton-nlp/FullAttention-Llama-2-7b-6k
|
princeton-nlp
| 2023-11-22T04:17:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"arxiv:2305.14788",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-10-26T03:41:02Z |
---
license: apache-2.0
---
---
license: apache-2.0
---
**Paper**: [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788)
**Code**: https://github.com/princeton-nlp/AutoCompressors
**Models**:
- Llama-2-7b fine-tuned models: [AutoCompressor-Llama-2-7b-6k](https://huggingface.co/princeton-nlp/AutoCompressor-Llama-2-7b-6k/), [FullAttention-Llama-2-7b-6k](https://huggingface.co/princeton-nlp/FullAttention-Llama-2-7b-6k)
- OPT-2.7b fine-tuned models: [AutoCompressor-2.7b-6k](https://huggingface.co/princeton-nlp/AutoCompressor-2.7b-6k), [AutoCompressor-2.7b-30k](https://huggingface.co/princeton-nlp/AutoCompressor-2.7b-30k), [RMT-2.7b-8k](https://huggingface.co/princeton-nlp/RMT-2.7b-8k)
- OPT-1.3b fine-tuned models: [AutoCompressor-1.3b-30k](https://huggingface.co/princeton-nlp/AutoCompressor-1.3b-30k), [RMT-1.3b-30k](https://huggingface.co/princeton-nlp/RMT-1.3b-30k)
---
FullAttention-Llama-2-7b-6k is a model fine-tuned from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) and used as baseline in [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788).
This model is fine-tuned on 15B tokens from [RedPajama dataset](https://github.com/togethercomputeub.com/togethercomputer/RedPajama-Data).
The pre-trained Llama-2 model is fine-tuned on sequences of 6,144 tokens with a RoPE θ value of 80,000.
To get started, load this model as a `LlamaForCausalLM` model, or download the [`AutoCompressor`](https://github.com/princeton-nlp/AutoCompressors) repository and load the model as follows:
```
from auto_compressor_llama import LlamaAutoCompressorModel
model = LlamaAutoCompressorModel.from_pretrained("princeton-nlp/FullAttention-Llama-2-7b-6k")
```
---
**Evaluation**
We record the perplexity achieved by our Llama-2-7B models on segments of 2048 tokens, conditioned on different amounts of context.
FullAttention-Llama-2-7b-6k uses full uncompressed contexts whereas AutoCompressor-Llama-2-7b-6k compresses segments of 2048 tokens into 50 summary vectors.
| Context Tokens | 0 |512 | 2048 | 4096 | 6144 |
| -----------------------------|-----|-----|------|------|------|
| Pre-trained Llama-2-7b | 5.52|5.15 |4.98 |- |- |
| FullAttention-Llama-2-7b-6k | 5.40|5.06 | 4.88 | 4.80 | 4.76 |
| AutoCompressor-Llama-2-7b-6k | 5.40|5.16 | 5.11 | 5.08 | 5.07 |
## Bibtex
```
@misc{chevalier2023adapting,
title={Adapting Language Models to Compress Contexts},
author={Alexis Chevalier and Alexander Wettig and Anirudh Ajith and Danqi Chen},
year={2023},
eprint={2305.14788},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
livingbox/minimalist-style
|
livingbox
| 2023-11-22T04:13:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-22T04:07:49Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### minimalist_style Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
phuong-tk-nguyen/resnet-50-finetuned-cifar10
|
phuong-tk-nguyen
| 2023-11-22T04:04:52Z | 40 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-22T03:40:31Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5076
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-cifar10
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9060
- Accuracy: 0.5076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3058 | 0.03 | 10 | 2.3106 | 0.0794 |
| 2.3033 | 0.06 | 20 | 2.3026 | 0.0892 |
| 2.3012 | 0.09 | 30 | 2.2971 | 0.1042 |
| 2.2914 | 0.11 | 40 | 2.2890 | 0.1254 |
| 2.2869 | 0.14 | 50 | 2.2816 | 0.16 |
| 2.2785 | 0.17 | 60 | 2.2700 | 0.1902 |
| 2.2712 | 0.2 | 70 | 2.2602 | 0.2354 |
| 2.2619 | 0.23 | 80 | 2.2501 | 0.2688 |
| 2.2509 | 0.26 | 90 | 2.2383 | 0.3022 |
| 2.2382 | 0.28 | 100 | 2.2229 | 0.3268 |
| 2.2255 | 0.31 | 110 | 2.2084 | 0.353 |
| 2.2164 | 0.34 | 120 | 2.1939 | 0.3608 |
| 2.2028 | 0.37 | 130 | 2.1829 | 0.3668 |
| 2.1977 | 0.4 | 140 | 2.1646 | 0.401 |
| 2.1844 | 0.43 | 150 | 2.1441 | 0.4244 |
| 2.1689 | 0.45 | 160 | 2.1323 | 0.437 |
| 2.1555 | 0.48 | 170 | 2.1159 | 0.4462 |
| 2.1448 | 0.51 | 180 | 2.0992 | 0.45 |
| 2.1313 | 0.54 | 190 | 2.0810 | 0.4642 |
| 2.1189 | 0.57 | 200 | 2.0589 | 0.4708 |
| 2.1111 | 0.6 | 210 | 2.0430 | 0.4828 |
| 2.0905 | 0.63 | 220 | 2.0288 | 0.4938 |
| 2.082 | 0.65 | 230 | 2.0089 | 0.4938 |
| 2.0646 | 0.68 | 240 | 1.9970 | 0.5014 |
| 2.0636 | 0.71 | 250 | 1.9778 | 0.4946 |
| 2.0579 | 0.74 | 260 | 1.9609 | 0.49 |
| 2.028 | 0.77 | 270 | 1.9602 | 0.4862 |
| 2.0447 | 0.8 | 280 | 1.9460 | 0.4934 |
| 2.0168 | 0.82 | 290 | 1.9369 | 0.505 |
| 2.0126 | 0.85 | 300 | 1.9317 | 0.4926 |
| 2.0099 | 0.88 | 310 | 1.9235 | 0.4952 |
| 1.9978 | 0.91 | 320 | 1.9174 | 0.4972 |
| 1.9951 | 0.94 | 330 | 1.9119 | 0.507 |
| 1.9823 | 0.97 | 340 | 1.9120 | 0.4992 |
| 1.985 | 1.0 | 350 | 1.9064 | 0.5022 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
MrBananaHuman/kogpt2-base-v2-simple-qa
|
MrBananaHuman
| 2023-11-22T03:57:04Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:skt/kogpt2-base-v2",
"base_model:finetune:skt/kogpt2-base-v2",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T03:32:30Z |
---
license: cc-by-nc-sa-4.0
base_model: skt/kogpt2-base-v2
tags:
- generated_from_trainer
model-index:
- name: test_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_v4
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on an unknown dataset.
prompt:
"#### 질문:\n{prompt}\n\n#### 답변:\n"
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu116
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MrBananaHuman/kogpt2_small_simple_qa
|
MrBananaHuman
| 2023-11-22T03:50:30Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:MrBananaHuman/kogpt2_small",
"base_model:finetune:MrBananaHuman/kogpt2_small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T03:49:12Z |
---
license: apache-2.0
base_model: MrBananaHuman/kogpt2_small
tags:
- generated_from_trainer
model-index:
- name: test_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_v5
This model is a fine-tuned version of [MrBananaHuman/kogpt2_small](https://huggingface.co/MrBananaHuman/kogpt2_small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu116
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ivandzefen/llama-2-ko-7b-chat-gguf
|
ivandzefen
| 2023-11-22T03:34:50Z | 5 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-22T02:02:15Z |
---
license: mit
language:
- ko
---
quantized verion of [kfkas/Llama-2-ko-7b-Chat](https://huggingface.co/kfkas/Llama-2-ko-7b-Chat)
|
Atgenomix/icd_10_sentence_transformer_128_dim_model
|
Atgenomix
| 2023-11-22T03:26:02Z | 2,797 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-22T03:25:36Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It has been trained over the SNLI, MNLI, SCINLI, SCITAIL, MEDNLI and STSB datasets for providing robust sentence embeddings.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
model = AutoModel.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model kindly cite the following work
```
@inproceedings{deka2022evidence,
title={Evidence Extraction to Validate Medical Claims in Fake News Detection},
author={Deka, Pritam and Jurek-Loughrey, Anna and others},
booktitle={International Conference on Health Information Science},
pages={3--15},
year={2022},
organization={Springer}
}
```
|
Atgenomix/icd_o_sentence_transformer_128_dim_model
|
Atgenomix
| 2023-11-22T03:24:33Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-22T03:24:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It has been trained over the SNLI, MNLI, SCINLI, SCITAIL, MEDNLI and STSB datasets for providing robust sentence embeddings.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
model = AutoModel.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model kindly cite the following work
```
@inproceedings{deka2022evidence,
title={Evidence Extraction to Validate Medical Claims in Fake News Detection},
author={Deka, Pritam and Jurek-Loughrey, Anna and others},
booktitle={International Conference on Health Information Science},
pages={3--15},
year={2022},
organization={Springer}
}
```
|
Dotunnorth/ppo-Huggy
|
Dotunnorth
| 2023-11-22T03:00:00Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-11-22T02:59:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Dotunnorth/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
uukuguy/neural-chat-7b-v3-1-dare-0.85
|
uukuguy
| 2023-11-22T02:57:05Z | 1,403 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-20T11:02:27Z |
---
license: llama2
---
Experiment for DARE(Drop and REscale), most of the delta parameters can be directly set to zeros without affecting the capabilities of SFT LMs and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_stratery: random / scaling_coefficient: 1.0
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
srushtibhavsar/FineTuneLlama2onHiwiData
|
srushtibhavsar
| 2023-11-22T02:49:57Z | 2 | 0 |
peft
|
[
"peft",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-10-27T10:08:38Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
|
nathanReitinger/mlcb
|
nathanReitinger
| 2023-11-22T02:41:15Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-19T00:29:00Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: nathanReitinger/mlcb
results: []
widget:
- text: "window._wpemojiSettings = {'baseUrl':'http:\/\/s.w.org\/images\/core\/emoji\/72x72\/','ext':'.png','source':{'concatemoji':'http:\/\/basho.com\/wp-includes\/js\/wp-emoji-release.min.js?ver=4.2.2'}}; !function(a,b,c){function d(a){var c=b.createElement('canvas'),d=c.getContext&&c.getContext('2d');return d&&d.fillText?(d.textBaseline='top',d.font='600 32px Arial','flag'===a?(d.fillText(String.fromCharCode(55356,56812,55356,56807),0,0),c.toDataURL().length>3e3):(d.fillText(String.fromCharCode(55357,56835),0,0),0!==d.getImageData(16,16,1,1).data[0])):!1}function e(a){var c=b.createElement('script');c.src=a,c.type='text/javascript',b.getElementsByTagName('head')[0].appendChild(c)}var f,g;c.supports={simple:d('simple'),flag:d('flag')},c.DOMReady=!1,c.readyCallback=function(){c.DOMReady=!0},c.supports.simple&&c.supports.flag||(g=function(){c.readyCallback()},b.addEventListener?(b.addEventListener('DOMContentLoaded',g,!1),a.addEventListener('load',g,!1)):(a.attachEvent('onload',g),b.attachEvent('onreadystatechange',function(){'complete'===b.readyState&&c.readyCallback()})),f=c.source||{},f.concatemoji?e(f.concatemoji):f.wpemoji&&f.twemoji&&(e(f.twemoji),e(f.wpemoji)))}(window,document,window._wpemojiSettings);"
example_title: "Word Press Emoji False Positive"
- text: "var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
var txt = 'i9asdm..$#po((^@KbXrww!~cz';
ctx.textBaseline = 'top';
ctx.font = '16px 'Arial'';
ctx.textBaseline = 'alphabetic';
ctx.rotate(.05);
ctx.fillStyle = '#f60';
ctx.fillRect(125,1,62,20);
ctx.fillStyle = '#069';
ctx.fillText(txt, 2, 15);
ctx.fillStyle = 'rgba(102, 200, 0, 0.7)';
ctx.fillText(txt, 4, 17);
ctx.shadowBlur=10;
ctx.shadowColor='blue';
ctx.fillRect(-20,10,234,5);
var strng=canvas.toDataURL();"
example_title: "Canvas Fingerprinting Canonical Example"
inference:
parameters:
wait_for_model: true
use_cache: false
temperature: 0
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nathanReitinger/mlcb
This model is a fine-tuned version of [dbernsohn/roberta-javascript](https://huggingface.co/dbernsohn/roberta-javascript) on the [mlcb dataset](https://huggingface.co/datasets/nathanReitinger/mlcb).
It achieves the following results on the evaluation set:
- Train Loss: 0.0463
- Validation Loss: 0.0930
- Train Accuracy: 0.9708
- Epoch: 4
## Intended uses & limitations
The model can be used to identify whether a JavaScript program is engaging in canvas fingerprinting.
## Training and evaluation data
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 910, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1291 | 0.1235 | 0.9693 | 0 |
| 0.0874 | 0.1073 | 0.9662 | 1 |
| 0.0720 | 0.1026 | 0.9677 | 2 |
| 0.0588 | 0.0950 | 0.9708 | 3 |
| 0.0463 | 0.0930 | 0.9708 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.11.0
- Datasets 2.13.2
- Tokenizers 0.13.3
# Citation
```
@inproceedings{reitinger2021ml,
title={ML-CB: Machine Learning Canvas Block.},
author={Nathan Reitinger and Michelle L Mazurek},
journal={Proc.\ PETS},
volume={2021},
number={3},
pages={453--473},
year={2021}
}
```
- [OSF](https://osf.io/shbe7/)
- [GitHub](https://github.com/SP2-MC2/ML-CB)
- [Data](https://dataverse.harvard.edu/dataverse/ml-cb)
|
LinYuting/icd_o_sentence_transformer_128_dim_model
|
LinYuting
| 2023-11-22T02:39:00Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-22T02:38:38Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It has been trained over the SNLI, MNLI, SCINLI, SCITAIL, MEDNLI and STSB datasets for providing robust sentence embeddings.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
model = AutoModel.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model kindly cite the following work
```
@inproceedings{deka2022evidence,
title={Evidence Extraction to Validate Medical Claims in Fake News Detection},
author={Deka, Pritam and Jurek-Loughrey, Anna and others},
booktitle={International Conference on Health Information Science},
pages={3--15},
year={2022},
organization={Springer}
}
```
|
LinYuting/sentence_transformer_128_dim_model
|
LinYuting
| 2023-11-22T02:27:23Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-22T02:27:06Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It has been trained over the SNLI, MNLI, SCINLI, SCITAIL, MEDNLI and STSB datasets for providing robust sentence embeddings.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
model = AutoModel.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model kindly cite the following work
```
@inproceedings{deka2022evidence,
title={Evidence Extraction to Validate Medical Claims in Fake News Detection},
author={Deka, Pritam and Jurek-Loughrey, Anna and others},
booktitle={International Conference on Health Information Science},
pages={3--15},
year={2022},
organization={Springer}
}
```
|
lillybak/sft_zephyr
|
lillybak
| 2023-11-22T02:24:19Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:finetune:HuggingFaceH4/zephyr-7b-alpha",
"license:mit",
"region:us"
] | null | 2023-11-22T02:24:11Z |
---
license: mit
base_model: HuggingFaceH4/zephyr-7b-alpha
tags:
- generated_from_trainer
model-index:
- name: sft_zephyr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_zephyr
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
pbelcak/UltraFastBERT-1x11-long
|
pbelcak
| 2023-11-22T02:21:22Z | 11 | 75 |
transformers
|
[
"transformers",
"safetensors",
"crammedBERT",
"en",
"dataset:EleutherAI/pile",
"arxiv:2311.10770",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-11-21T07:00:55Z |
---
license: mit
datasets:
- EleutherAI/pile
language:
- en
metrics:
- glue
---
# UltraFastBERT-1x11-long
This is the final model described in "Exponentially Faster Language Modelling".
The model has been pretrained just like crammedBERT but with fast feedforward networks (FFF) in place of the traditional feedforward layers.
To use this model, you need the code from the repo at https://github.com/pbelcak/UltraFastBERT.
You can find the paper here: https://arxiv.org/abs/2311.10770, and the abstract below:
> Language models only really need to use an exponential fraction of their neurons for individual inferences.
> As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs).
> While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights.
## Intended uses & limitations
This is the raw pretraining checkpoint. You can use this to fine-tune on a downstream task like GLUE as discussed in the paper. This model is provided only as sanity check for research purposes, it is untested and unfit for deployment.
### How to get started
1. Create a new Python/conda environment, or simply use one that does not have any previous version of the original `cramming` project installed. If, by accident, you use the original cramming repository code instead of the one provided in the `/training` folder of this project, you will be warned by `transformers` that there are some extra weights (FFF weight) and that some weights are missing (the FF weights expected by the original `crammedBERT`).
2. `cd ./training`
3. `pip install .`
4. Create `minimal_example.py`
5. Paste the code below
```python
import cramming
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("pbelcak/UltraFastBERT-1x11-long")
model = AutoModelForMaskedLM.from_pretrained("pbelcak/UltraFastBERT-1x11-long")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
6. Run `python minimal_example.py`.
### Limitations and bias
The training data used for this model was further filtered and sorted beyond the normal Pile. These modifications were not tested for unintended consequences.
## Training data, Training procedure, Preprocessing, Pretraining
These are discussed in the paper. You can find the final configurations for each in this repository.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m-mm) | QQP | QNLI | SST-2 | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:-----:|:----:|:----:|:-------:|
| Score| 81.3 | 87.6 | 89.7 | 89.9 | 86.4 | 87.5 | 60.7 | 83.0 |
These numbers are the median over 5 trials on "GLUE-sane" using the GLUE-dev set. With this variant of GLUE, finetuning cannot be longer than 5 epochs on each task, and hyperparameters have to be chosen equal for all tasks.
### BibTeX entry and citation info
```bibtex
@article{belcak2023exponential,
title = {Exponentially {{Faster}} {{Language}} {{Modelling}}},
author = {Belcak, Peter and Wattenhofer, Roger},
year = {2023},
month = nov,
eprint = {2311.10770},
eprinttype = {arxiv},
primaryclass = {cs},
publisher = {{arXiv}},
url = {https://arxiv.org/pdf/2311.10770},
urldate = {2023-11-21},
archiveprefix = {arXiv},
keywords = {Computer Science - Computation and Language,Computer Science - Machine Learning},
journal = {arxiv:2311.10770[cs]}
}
```
|
Suraj-Yadav/finetuned-kde4-en-to-hi
|
Suraj-Yadav
| 2023-11-22T02:16:23Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-hi",
"base_model:finetune:Helsinki-NLP/opus-mt-en-hi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-10-31T14:56:45Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-hi
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: finetuned-kde4-en-to-hi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-hi
split: train
args: en-hi
metrics:
- name: Bleu
type: bleu
value: 48.24401152147744
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-kde4-en-to-hi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9644
- Bleu: 48.2440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
BreZzz/q-FrozenLake-v1-4x4-noSlippery
|
BreZzz
| 2023-11-22T02:07:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-22T02:07:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="BreZzz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tparng/distilbert-base-uncased-lora-text-classification
|
tparng
| 2023-11-22T01:51:17Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2023-11-22T01:51:10Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9162
- Accuracy: {'accuracy': 0.901}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.3611 | {'accuracy': 0.871} |
| 0.4182 | 2.0 | 500 | 0.5356 | {'accuracy': 0.883} |
| 0.4182 | 3.0 | 750 | 0.5292 | {'accuracy': 0.899} |
| 0.2132 | 4.0 | 1000 | 0.5966 | {'accuracy': 0.897} |
| 0.2132 | 5.0 | 1250 | 0.6869 | {'accuracy': 0.894} |
| 0.0748 | 6.0 | 1500 | 0.7645 | {'accuracy': 0.898} |
| 0.0748 | 7.0 | 1750 | 0.8095 | {'accuracy': 0.897} |
| 0.0335 | 8.0 | 2000 | 0.9055 | {'accuracy': 0.892} |
| 0.0335 | 9.0 | 2250 | 0.9086 | {'accuracy': 0.901} |
| 0.0083 | 10.0 | 2500 | 0.9162 | {'accuracy': 0.901} |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jalaluddin94/trf-learn-xlmr-large
|
jalaluddin94
| 2023-11-22T01:27:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:jalaluddin94/xlmr-large-nli-indoindo",
"base_model:finetune:jalaluddin94/xlmr-large-nli-indoindo",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-22T01:25:36Z |
---
license: mit
base_model: jalaluddin94/xlmr-large-nli-indoindo
tags:
- generated_from_trainer
model-index:
- name: trf-learn-xlmr-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trf-learn-xlmr-large
This model is a fine-tuned version of [jalaluddin94/xlmr-large-nli-indoindo](https://huggingface.co/jalaluddin94/xlmr-large-nli-indoindo) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 101
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
e-n-v-y/envy-geometric-xl-01
|
e-n-v-y
| 2023-11-22T01:18:44Z | 5 | 3 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"landscapes",
"people",
"abstract",
"style",
"low poly",
"shapes",
"colors",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-11-22T01:18:43Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- landscapes
- people
- abstract
- style
- low poly
- shapes
- colors
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: geometric
widget:
- text: 'geometric, fantasygreat,infinite scifi sky city beyond the beginning of time'
output:
url: >-
3815950.jpeg
- text: 'geometric, tilt-shift, digital painting, scifighostly scifi arcology at the beginning of the universe'
output:
url: >-
3816161.jpeg
- text: 'geometric, morning, blue sky, clouds, architecture, ruined Pastel alley in a Serpent Sea'
output:
url: >-
3815854.jpeg
- text: 'geometric, Coastal Strand'
output:
url: >-
3815875.jpeg
- text: 'geometric, Sky Castle Domain'
output:
url: >-
3815877.jpeg
- text: 'geometric, fantasydrug den in a cheerful,great fantasy arcology at the end of the multiverse, masterpiece'
output:
url: >-
3815933.jpeg
- text: 'geometric, digital painting, warmly lit interior, in a lush Exotic bird aviary'
output:
url: >-
3816039.jpeg
- text: 'geometric, digital painting, fantasyexclusive restaurant in a utopian,gargantuan scifi sprawling metropolis outside of the universe, masterpiece'
output:
url: >-
3816060.jpeg
- text: 'geometric, digital painting, fantasylost,amazing fantasy arcology beyond the beginning of time'
output:
url: >-
3816118.jpeg
- text: 'geometric, digital painting, morning, blue sky, clouds, scenery, in a Drain Warlock''s Volcano'
output:
url: >-
3816152.jpeg
---
# Envy Geometric XL 01
<Gallery />
## Model description
<p>A fun style LoRA that builds everything out of geometric shapes. With EnvyMegaMixXL, it works well at 1.0 for scenery and 2.0 for people. You may need to adjust the weight depending in prompt, subject matter, and checkpoint.</p>
## Trigger words
You should use `geometric` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/e-n-v-y/envy-geometric-xl-01/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('e-n-v-y/envy-geometric-xl-01', weight_name='EnvyGeometricXL01.safetensors')
image = pipeline('geometric, digital painting, morning, blue sky, clouds, scenery, in a Drain Warlock's Volcano').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
asvin-kumar/cnh
|
asvin-kumar
| 2023-11-22T01:14:42Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-21T04:58:50Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - asvin-kumar/cnh
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.




|
jguevara/ppo-Huggy
|
jguevara
| 2023-11-22T01:04:12Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-11-22T01:04:07Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: jguevara/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cmagganas/sft_zephyr
|
cmagganas
| 2023-11-22T01:03:01Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:finetune:HuggingFaceH4/zephyr-7b-alpha",
"license:mit",
"region:us"
] | null | 2023-11-22T01:02:46Z |
---
license: mit
base_model: HuggingFaceH4/zephyr-7b-alpha
tags:
- generated_from_trainer
model-index:
- name: sft_zephyr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_zephyr
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
syed789/zephyr-7b-beta-fhir-ft
|
syed789
| 2023-11-22T00:59:56Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2023-11-22T00:59:55Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
|
tranv/mt5-base-finetuned-sumeczech
|
tranv
| 2023-11-22T00:45:20Z | 305 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-11-12T16:17:02Z |
---
license: apache-2.0
base_model: google/mt5-base
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-sumeczech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-sumeczech
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9291
- Rouge1: 15.9842
- Rouge2: 5.0275
- Rougel: 12.6308
- Rougelsum: 14.0073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 2.6446 | 1.0 | 108450 | 2.4043 | 13.4797 | 3.1596 | 10.6012 | 11.798 |
| 2.3864 | 2.0 | 216900 | 2.3327 | 13.955 | 3.387 | 10.9208 | 12.165 |
| 2.3381 | 3.0 | 325350 | 2.2699 | 14.2671 | 3.5872 | 11.1539 | 12.4443 |
| 2.2583 | 4.0 | 433800 | 2.2085 | 14.5162 | 3.9249 | 11.4167 | 12.697 |
| 2.178 | 5.0 | 542250 | 2.1429 | 14.8376 | 4.1524 | 11.6426 | 12.9856 |
| 2.0847 | 6.0 | 650700 | 2.0678 | 15.0717 | 4.3497 | 11.8584 | 13.1779 |
| 1.9676 | 7.0 | 759150 | 1.9866 | 15.7074 | 4.7106 | 12.3935 | 13.7652 |
| 1.8196 | 8.0 | 867600 | 1.9291 | 15.9842 | 5.0275 | 12.6308 | 14.0073 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
correll/segformer-b0-scene-parse-150
|
correll
| 2023-11-22T00:19:33Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-11-21T16:42:14Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3867
- Mean Iou: 0.0854
- Mean Accuracy: 0.1569
- Overall Accuracy: 0.5029
- Per Category Iou: [0.2771319317345961, 0.5947571607182376, 0.939295828662458, 0.5107730625538844, 0.022752808988764046, 0.39377705160129767, 0.03627070632428105, 0.5970283178156206, 0.009085428262972674, 0.0, 0.19687228940252055, 0.3002657911194497, 0.0, 0.0, 0.0, 0.1685958623072604, nan, 0.0, 0.0, 0.038824310860179345, 0.0, nan, 0.0, 0.012886162290077854, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0]
- Per Category Accuracy: [0.8898568588469185, 0.6259764769587272, 0.9577347224989943, 0.9247819843070327, 0.06789606035205364, 0.7643305303763789, 1.0, 0.5975183468520664, 0.009635974304068522, 0.0, 0.34045087572241584, 0.3327269577269577, nan, 0.0, 0.0, 0.5489382646603853, nan, 0.0, 0.0, 0.1400167684752665, 0.0, nan, 0.0, 0.019002375296912115, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.703 | 1.0 | 20 | 4.8562 | 0.0103 | 0.0632 | 0.2675 | [0.21165428777704154, 0.32179166978239737, 0.5031269364905134, 0.0006521120800294145, 0.0, 0.05342669416334948, 0.016990612864757983, 0.027912294312196318, 0.02371828937431824, 0.0, 0.0, 0.00024627233242287196, 0.0, 0.0, 0.0, 0.0010805657079294454, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005723263382336438, 0.0016670060063117174, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.040193309056257016, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01132487378905717, 0.0, 0.0, 0.0013201102292041385, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0056419113413932066, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0005311262088653816, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0] | [0.5105606361829026, 0.4283478925616544, 0.89445524377451, 0.0006521392247868407, 0.0, 0.06794184690213538, 0.6289379362384456, 0.03300502124372345, 0.025220080894599095, 0.0, 0.0, 0.0009528759528759529, nan, 0.0, 0.0, 0.002390662354099283, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.010216346153846154, 0.0019207647869563943, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.07199463545476713, 0.0, nan, nan, 0.0, nan, 0.04764638346727899, nan, 0.0, 0.0013201102292041385, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.006364738482854174, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.17647058823529413, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 4.5481 | 2.0 | 40 | 4.5134 | 0.0218 | 0.0587 | 0.3869 | [0.31807012011985564, 0.4034885725313557, 0.7290694120485804, 0.0563904892734368, 0.0, 0.1394507050374461, 7.56807483312395e-05, 0.018919704980871484, 0.0, 0.0, 0.00014696372935159603, 0.005305999397045523, 0.0, 0.0, 0.0, 0.007459121807826566, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0004156707887353216, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0] | [0.7179443339960239, 0.8574184377961284, 0.8520470892567763, 0.057242561103364065, 0.0, 0.16594369182212532, 0.003772873042822109, 0.024497875627655465, 0.0, 0.0, 0.0001470566609314569, 0.007623007623007623, nan, 0.0, 0.0, 0.014273660525945718, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.00046919445177560774, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 4.209 | 3.0 | 60 | 4.0911 | 0.0323 | 0.0737 | 0.3651 | [0.2131257612947356, 0.430094454707051, 0.8035509290787042, 0.1717034475186649, 0.0, 0.09090225769419996, 0.008882520581643424, 0.015518334763308092, 4.379881743192934e-05, 0.0, 0.0, 0.0, 0.0, 0.0018995929443690637, 0.0, 0.07508926317308039, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000701303255215943, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8758926441351889, 0.614147269846073, 0.9316747066321255, 0.17359391151719497, 0.0, 0.10141723399226964, 0.5285795132993775, 0.016067979915025107, 4.461099214846538e-05, 0.0, 0.0, 0.0, nan, 0.001924303402168415, 0.0, 0.14639291238925609, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0007078445396429809, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 4.1436 | 4.0 | 80 | 3.7854 | 0.0422 | 0.0857 | 0.4296 | [0.3251539435304937, 0.4991531629689501, 0.8484109381360809, 0.20315060022058026, 0.0, 0.1797945205479452, 0.0068934562976889135, 0.004992170075452, 0.0051620559222724915, 0.0, 0.0, 0.0, 0.0, 0.0029679397279932162, 0.0, 0.11349021435228332, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.002313329054343175, 0.0027497773820124665, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9334671968190855, 0.8372418955398594, 0.9089723006583622, 0.221075197202739, 0.0, 0.2201324987992773, 0.46066779852857953, 0.005079181151023561, 0.005174875089221985, 0.0, 0.0, 0.0, nan, 0.0030019133073827275, 0.0, 0.3424975390240472, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0025365825049118796, 0.0028461249198144857, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 4.0442 | 5.0 | 100 | 3.5572 | 0.0430 | 0.0916 | 0.4161 | [0.2693000892903345, 0.5072622910556279, 0.8415237725538792, 0.20845653459491426, 0.0, 0.19705011738287231, 0.012493718654856096, 0.003271727066544419, 0.018662656996285158, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.13326740870205134, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9725566600397614, 0.7771004706200878, 0.8476968600895614, 0.23477012092326263, 0.0, 0.23547887871556974, 0.7176004527447651, 0.0032734646581691774, 0.018825838686652392, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.40746730417662774, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.2827 | 6.0 | 120 | 3.5066 | 0.0511 | 0.1105 | 0.4184 | [0.25310375608698304, 0.5468134672897857, 0.9003626984184326, 0.2962989524829784, 0.0, 0.2664608228406448, 0.01782702798659047, 0.0028186144386420455, 0.02881383063870658, 0.0, 0.0, 0.0, 0.0, 0.00013167279310912384, 0.0, 0.1409057898233849, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9560357852882704, 0.6158732749902451, 0.9370965134247997, 0.45860650335435443, 0.0, 0.45593157024037323, 0.9850971514808526, 0.0028196214754731557, 0.029443254817987152, 0.0, 0.0, 0.0, nan, 0.00013195223329154845, 0.0, 0.6411897060891576, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.7479 | 7.0 | 140 | 3.3188 | 0.0484 | 0.1003 | 0.4011 | [0.23213847111027858, 0.4855567520999354, 0.9064736552349493, 0.25671644227512214, 0.0, 0.23904203082639772, 0.018825270752282894, 0.00964645733854242, 0.017430647196160037, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.15697658862876254, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9657892644135189, 0.5610113952173532, 0.9471956625545659, 0.43640601910629173, 0.0, 0.38449809790273765, 0.8781362007168458, 0.009656237929702587, 0.017874137520818463, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.41252988327942625, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.887 | 8.0 | 160 | 3.1720 | 0.0519 | 0.1075 | 0.4291 | [0.25388287332102916, 0.5856865966119962, 0.9349290194783757, 0.2893388429752066, 0.0, 0.22470749738668278, 0.01826711846885006, 0.011964078794901507, 0.01260937069228202, 0.0, 0.0, 0.0, 0.0, 0.00041784873875657015, 0.0, 0.1470349967595593, nan, 0.0, 0.0, 0.010779734099892203, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9653757455268389, 0.6707431178779891, 0.9582949508014272, 0.437196911357629, 0.0, 0.40806275777420314, 0.9600075457460856, 0.011964078794901507, 0.012922317392338806, 0.0, 0.0, 0.0, nan, 0.00041784873875657015, 0.0, 0.5104767261988469, nan, 0.0, 0.0, 0.010779734099892203, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.9038 | 9.0 | 180 | 3.2529 | 0.0533 | 0.1122 | 0.4244 | [0.2363892001917822, 0.4937267225536206, 0.9299845591399405, 0.3476255921194913, 0.0010737678513905295, 0.3153472097860675, 0.027933520224269124, 0.018230977211278487, 0.030419698145247472, 0.0, 1.4705666093145688e-05, 0.0, nan, 0.0, 0.0, 0.10361342374045146, nan, 0.0, 0.0, 0.001197748233321356, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9586282306163022, 0.5439165784088104, 0.9669653297639146, 0.6913924559979464, 0.002794076557697681, 0.5235608480533045, 1.0, 0.018230977211278487, 0.030900547228170355, 0.0, 1.4705666093145688e-05, 0.0, nan, 0.0, 0.0, 0.42441288145127265, nan, 0.0, 0.0, 0.001197748233321356, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.0283 | 10.0 | 200 | 2.9246 | 0.0529 | 0.1045 | 0.4285 | [0.22447430343040137, 0.5113750687467303, 0.9290480311873535, 0.34400687413296893, 0.0, 0.2708992270794495, 0.03860389636344741, 0.015362332834451793, 1.4740131481972819e-05, 0.0, 0.0, 0.0, 0.0, 0.001957291460491302, 0.0, 0.1512499493537539, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9793916500994035, 0.6071476918911601, 0.9444283603358362, 0.6915936478864445, 0.0, 0.39438595421244027, 0.9109601961893983, 0.015363074546156817, 1.4870330716155128e-05, 0.0, 0.0, 0.0, nan, 0.001957291460491302, 0.0, 0.26248066376037127, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.1896 | 11.0 | 220 | 3.0964 | 0.0531 | 0.1135 | 0.4333 | [0.2460411763125647, 0.48480829316414437, 0.8818734069958932, 0.3566825767712473, 0.0, 0.25891181171633765, 0.036688852631669815, 0.04585747392815759, 0.004764525285292243, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.14059614422064584, 0.0, 0.0, 0.0, 0.09147982062780269, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.946831013916501, 0.5850580112916969, 0.9704808563597193, 0.7434456539083258, 0.0, 0.45321755570972244, 0.8017355215996982, 0.04585747392815759, 0.004892338805615037, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.5727745746027282, nan, 0.0, 0.0, 0.09773625583902264, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.0463 | 12.0 | 240 | 2.9823 | 0.0556 | 0.1192 | 0.4342 | [0.23902565477434287, 0.516049504182426, 0.940146691738418, 0.3535668518963138, 0.024221177595410724, 0.291186036379, 0.03746369180972035, 0.02161985880908533, 0.005092633242602511, 0.0, 0.0, 0.0, 0.0, 0.014481279894441696, 0.0, 0.16301405745219594, nan, 0.0, 0.0, 0.008216241962371994, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9642266401590457, 0.5429709585201348, 0.9643747438553483, 0.8479683088087359, 0.0849399273540095, 0.5044026499759855, 1.0, 0.021765160293549635, 0.005174875089221985, 0.0, 0.0, 0.0, nan, 0.014481757603747444, 0.0, 0.5251019547180424, nan, 0.0, 0.0, 0.008264462809917356, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.0795 | 13.0 | 260 | 2.9356 | 0.0568 | 0.1193 | 0.4364 | [0.2337627300235434, 0.4815777297042568, 0.9030863249925072, 0.36350688302699374, 0.0, 0.3516344179911382, 0.05706578355794853, 0.03612398609501738, 0.01722589244320816, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.15489310573417303, nan, 0.0, 0.0, 0.07260940032414911, 0.0, nan, 0.0, 6.733553296074339e-05, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.957765407554672, 0.5614971452233256, 0.928997642529224, 0.8198014444190064, 0.0, 0.5771778823063025, 0.9817015657423128, 0.03612398609501738, 0.017680823221508445, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.5257347771058922, nan, 0.0, 0.0, 0.08048868127919512, 0.0, nan, 0.0, 7.331163308993871e-05, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.8488 | 14.0 | 280 | 2.7659 | 0.0660 | 0.1288 | 0.4992 | [0.27329662435009294, 0.7151913123483586, 0.925662551072808, 0.3681896124225532, 0.002342936480388754, 0.31677258404628517, 0.04532706284737067, 0.07469783479163578, 0.004246408004188238, 0.0, 0.000409626216077829, 0.0, 0.0, 0.19720907763419374, 0.0, 0.1718377088305489, 0.0, 0.0, 0.0, 0.07271728898650769, 0.0, nan, 0.0, 0.001941276389225916, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9695705765407555, 0.8016009842409957, 0.9353067907942082, 0.8550655261167884, 0.0075440067057837385, 0.5004688536338062, 1.0, 0.07781962147547315, 0.0043421365691172975, 0.0, 0.0004117586506080793, 0.0, nan, 0.1975105011985661, 0.0, 0.4606946983546618, nan, 0.0, 0.0, 0.11103126122888969, 0.0, nan, 0.0, 0.001994076420046333, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.1943 | 15.0 | 300 | 2.7600 | 0.0690 | 0.1330 | 0.4925 | [0.26215944495166776, 0.5797624014797341, 0.9306368234792759, 0.39238813646494086, 0.0, 0.41007241529961685, 0.0614931556420719, 0.3264370033853116, 0.012246676835911963, 0.0, 0.002695220378209729, 0.0, 0.0, 0.0021772118493105495, 0.0, 0.12643780452532982, nan, 0.0, 0.0, 0.13649564375605033, 0.0, nan, 0.0, 0.0019079548087275304, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.880258449304175, 0.7026154850731412, 0.9442554039471656, 0.873339299713475, 0.0, 0.8085933628622181, 0.9296359177513677, 0.35941483198146, 0.012535688793718773, 0.0, 0.002705842561138807, 0.0, nan, 0.0021772118493105495, 0.0, 0.3995921811278301, nan, 0.0, 0.0, 0.2026590010779734, 0.0, nan, 0.0, 0.002052725726518284, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.7916 | 16.0 | 320 | 2.6960 | 0.0639 | 0.1261 | 0.4638 | [0.2453864478096257, 0.5306727713006286, 0.9200735400841841, 0.3819022712662044, 0.0, 0.3921671253891621, 0.06388337936992736, 0.16470830756545043, 0.0015991161249055068, 0.0, 0.002038528182652125, 0.0, 0.0, 0.0, 0.0, 0.17435972917279952, nan, 0.0, 0.0, 0.1235085456304418, 0.0, nan, 0.0, 0.0035739090172473664, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9465487077534791, 0.6308518940268676, 0.9295315513812071, 0.8490020188565363, 0.0, 0.6626159745675493, 0.9903791737408036, 0.18516801853997683, 0.001635736378777064, 0.0, 0.0020587932530403965, 0.0, nan, 0.0, 0.0, 0.4164674448038251, nan, 0.0, 0.0, 0.18349502934483172, 0.0, nan, 0.0, 0.0036215946746429722, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.3303 | 17.0 | 340 | 2.6972 | 0.0704 | 0.1377 | 0.5227 | [0.2845941381265662, 0.6578027906678013, 0.9305769915706993, 0.4422690637229803, 0.0019671209779401435, 0.3941775558100347, 0.06783319201418331, 0.4297976224722246, 0.00829552819183409, 0.0, 0.005439027918414581, 0.0, 0.0, 0.0, 0.0, 0.1238730798287585, nan, 0.0, 0.0, 0.09961959106038992, 0.0, 0.0, 0.0, 0.0031568619086216458, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9279522862823062, 0.8155682876914133, 0.9430785485199069, 0.8569386919752187, 0.007823414361553507, 0.6901525489628043, 0.9960384833050367, 0.5338064889918888, 0.008565310492505354, 0.0, 0.005529330451022779, 0.0, nan, 0.0, 0.0, 0.34587259175924623, nan, 0.0, 0.0, 0.20074260390465923, 0.0, nan, 0.0, 0.003255036509193279, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.5485 | 18.0 | 360 | 2.5605 | 0.0656 | 0.1230 | 0.5142 | [0.27947086717686276, 0.7357588550178986, 0.9256103773375957, 0.35769190079646956, 0.0, 0.41056603773584904, 0.05762144417752753, 0.08036917079881911, 0.0002923420987239267, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0895472948104527, nan, 0.0, 0.0, 0.07593104355836294, 0.0, nan, 0.0, 0.0038528797076929637, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9467117296222664, 0.8854545744113267, 0.9427852746434654, 0.8209808451446847, 0.0, 0.6677085636306805, 1.0, 0.08122827346465816, 0.00029740661432310254, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.17107298551539868, nan, 0.0, 0.0, 0.13822014612528447, 0.0, nan, 0.0, 0.003973490513474678, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.7706 | 19.0 | 380 | 2.7446 | 0.0711 | 0.1264 | 0.4625 | [0.2180487791589368, 0.5109265098005974, 0.9333789523009067, 0.43176174075004803, 0.0011381743683132257, 0.4180908561624426, 0.050910452921516657, 0.6041835773131432, 0.0014779911027862328, 0.0, 0.0010430592487035214, 0.0, 0.0, 0.0, 0.0, 0.1302552960347637, nan, 0.0, 0.0, 0.036589698046181174, 0.0, nan, 0.0, 0.0020550291129124328, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9547753479125248, 0.551973259860327, 0.9436650962727898, 0.8105466175480953, 0.002794076557697681, 0.618627592989304, 1.0, 0.6253089996137505, 0.001501903402331668, 0.0, 0.001044102292613344, 0.0, nan, 0.0, 0.0, 0.25291801434397415, nan, 0.0, 0.0, 0.049347227212839864, 0.0, nan, 0.0, 0.002111375032990235, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.5698 | 20.0 | 400 | 2.7816 | 0.0567 | 0.1190 | 0.4403 | [0.24232968460780926, 0.5184524002726476, 0.9249345601937453, 0.328879753340185, 0.031507637246756565, 0.4043734683180893, 0.03542975538029675, 0.05146208724586188, 0.006674049895080438, 0.0, 0.0036093756794758434, 0.0, nan, 0.0, 0.0, 0.07880393869131862, nan, 0.0, 0.0, 0.07493762367719177, 0.0, 0.0, 0.0, 0.01870720123702231, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9343856858846918, 0.5405780425071071, 0.9592349311746371, 0.8791391762232814, 0.13774797429449567, 0.6528653437116436, 1.0, 0.05286790266512167, 0.006810611467999049, 0.0, 0.0036617108571932767, 0.0, nan, 0.0, 0.0, 0.1840106876669948, nan, 0.0, 0.0, 0.1043238711222901, 0.0, nan, 0.0, 0.019867452567373392, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.2798 | 21.0 | 420 | 2.6663 | 0.0701 | 0.1370 | 0.4747 | [0.25145589774908395, 0.5308601544530929, 0.9392425812806433, 0.4476798444358493, 0.008679142934635205, 0.36132095988135177, 0.03534212052723163, 0.5477586411905132, 0.0032251066599175643, 0.0, 0.010125291375291376, 0.0, 0.0, 0.0, 0.0, 0.15986522731806796, nan, 0.0, 0.0, 0.04444273280443657, 0.0, nan, 0.0, 0.025817792304100636, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9167992047713718, 0.5550529945293401, 0.960851697416558, 0.8752679667825255, 0.026823134953897737, 0.7726402939674166, 1.0, 0.5782831208960989, 0.0033160837497025935, 0.0, 0.010220437934736254, 0.0, nan, 0.0, 0.0, 0.5071016734636479, nan, 0.0, 0.0, 0.06911007306264223, 0.0, nan, 0.0, 0.027565174041816954, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.5001 | 22.0 | 440 | 2.6535 | 0.0615 | 0.1194 | 0.4517 | [0.23908118988054505, 0.5177844761217707, 0.9352375879441596, 0.3525618778473726, 0.011162872461984706, 0.4340030022002426, 0.037781705700397704, 0.24590479675657978, 0.0027329991377168495, 0.0, 0.005642112498567992, 0.0, nan, 0.0, 0.0, 0.08491281273692192, nan, 0.0, 0.0, 0.018354567048153747, 0.0, nan, 0.0, 0.006545547886600878, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.948831013916501, 0.5529706399955406, 0.9506359907205137, 0.8880124322711789, 0.03548477228276055, 0.6436178728529934, 1.0, 0.24657203553495557, 0.0027807518439210087, 0.0, 0.005794032440699402, 0.0, nan, 0.0, 0.0, 0.1890029531711433, nan, 0.0, 0.0, 0.02036171996646305, 0.0, nan, 0.0, 0.006730007917656373, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.5627 | 23.0 | 460 | 2.5568 | 0.0686 | 0.1268 | 0.4681 | [0.230094747627851, 0.5424232015373618, 0.9371263649398875, 0.4261987231730508, 0.0015735641227380016, 0.3970479932616279, 0.04546468146420117, 0.4232844742498178, 0.0010681728391448765, 0.0, 0.022026494808449697, 0.0030933150025777623, nan, 0.0, 0.0, 0.13021680812700648, nan, 0.0, 0.0, 0.045956980681139216, 0.0, nan, 0.0, 0.016811972850935733, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.925920477137177, 0.5815880839949354, 0.9583626293882984, 0.8707238051630001, 0.0044705224923162895, 0.6863940962560322, 1.0, 0.44309578988026266, 0.0010855341422793244, 0.0, 0.02261731445125807, 0.0031185031185031187, nan, 0.0, 0.0, 0.20819856560258754, nan, 0.0, 0.0, 0.11055216193556114, 0.0, nan, 0.0, 0.017360194715697486, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.7865 | 24.0 | 480 | 2.5993 | 0.0718 | 0.1375 | 0.4825 | [0.250569034973826, 0.5359686522845829, 0.9342801745746218, 0.43596720189488236, 0.004784688995215311, 0.4278460755073834, 0.03799727618091893, 0.6714295888752382, 0.003648995796356843, 0.0, 0.038962626852694644, 0.0, 0.0, 0.0, 0.0, 0.12904920329189284, nan, 0.0, 0.0, 0.030856460757982693, 0.0, nan, 0.0, 0.01769876569202832, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9415228628230616, 0.5688331647807356, 0.9529859416535382, 0.9015269770571871, 0.01592623637887678, 0.6725876908767944, 1.0, 0.7282638084202395, 0.003717582679038782, 0.0, 0.04217585035514183, 0.0, nan, 0.0, 0.0, 0.4145689776402756, nan, 0.0, 0.0, 0.061923583662714096, 0.0, nan, 0.0, 0.019720829301193513, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.9011 | 25.0 | 500 | 2.5590 | 0.0753 | 0.1422 | 0.4852 | [0.2510478270907702, 0.5478771858092367, 0.9378596720157761, 0.4595985425918023, 0.011095031355523395, 0.384233953458637, 0.03454883175285952, 0.6617533597688982, 0.0026527505979560776, 0.0, 0.06644428416877028, 0.0, nan, 0.0, 0.0, 0.16756981058387033, nan, 0.0, 0.0, 0.050168990283058726, 0.0, nan, 0.0, 0.04039101944354925, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8992922465208748, 0.5735294117647058, 0.9620323127653094, 0.9040037185811115, 0.038558256496227995, 0.7331117396375724, 1.0, 0.712273078408652, 0.002721270521056388, 0.0, 0.07586653137453861, 0.0, nan, 0.0, 0.0, 0.48270285473210517, nan, 0.0, 0.0, 0.11378608216552881, 0.0, nan, 0.0, 0.04410427846690713, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.6387 | 26.0 | 520 | 2.6295 | 0.0740 | 0.1436 | 0.4834 | [0.26336235342936887, 0.5191837536082221, 0.9388221233162978, 0.4301501231218764, 0.008698015765153574, 0.4018461269827412, 0.03702384444537569, 0.6476909312419097, 0.008089357169588623, 0.0, 0.1094517073595524, 0.0, nan, 0.0, 0.0, 0.14558497366232326, nan, 0.0, 0.0, 0.014648729446935725, 0.0, nan, 0.0, 0.029305653035844016, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8933240556660039, 0.5474800723050829, 0.9628218962788057, 0.9416474146842329, 0.026823134953897737, 0.7719694139710759, 1.0, 0.6570973348783314, 0.008357125862479182, 0.0, 0.1478654725665799, 0.0, nan, 0.0, 0.0, 0.5907748558571227, nan, 0.0, 0.0, 0.023475865373098575, 0.0, nan, 0.0, 0.034896337350810824, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.3512 | 27.0 | 540 | 2.5304 | 0.0737 | 0.1387 | 0.4812 | [0.24276038656192983, 0.5202051563418111, 0.9286737091373282, 0.4669457557673466, 0.0019860973187686196, 0.3999659041732418, 0.04015756978902314, 0.7221173927380647, 0.00023228803716608595, 0.0, 0.03359965853463339, 0.0, 0.0, 0.044952148416210674, 0.0, 0.14249059684520513, nan, 0.0, 0.0, 0.05004025253167239, 0.0, nan, 0.0, 0.017065927539101168, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.916803180914513, 0.5587040826889846, 0.9436951756447325, 0.9092485829847163, 0.006705783738474434, 0.6975550998315176, 1.0, 0.7612495171881035, 0.00023792529145848205, 0.0, 0.03704357288863399, 0.0, nan, 0.04534758417452882, 0.0, 0.3436225566024469, nan, 0.0, 0.0, 0.14145406635525212, 0.0, nan, 0.0, 0.01895838831705815, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.0666 | 28.0 | 560 | 2.5893 | 0.0671 | 0.1296 | 0.4634 | [0.2432747573250561, 0.5228851372235949, 0.9374503545056044, 0.38860063685580615, 0.008180563768972995, 0.41711478200317226, 0.0358772012940428, 0.4476967324285068, 0.002303613340673988, 0.0, 0.093324469032327, 0.002145922746781116, nan, 0.0, 0.0, 0.14108680310515173, nan, 0.0, 0.0, 0.03147731781510382, 0.0, 0.0, 0.0, 0.01801361908783784, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9498568588469185, 0.5489452854378518, 0.9584754270330835, 0.9093040841953365, 0.023190835428890753, 0.57538632777062, 1.0, 0.44943028196214757, 0.002364382583868665, 0.0, 0.11560124115821827, 0.0021656271656271655, nan, 0.0, 0.0, 0.35142736605259456, nan, 0.0, 0.0, 0.05701281590609654, 0.0, nan, 0.0, 0.020014075833553267, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.2508 | 29.0 | 580 | 2.5139 | 0.0750 | 0.1407 | 0.4904 | [0.2708354171270397, 0.5519314436387607, 0.9323264691332501, 0.43377976428701787, 0.0024693796918214147, 0.4227238491870099, 0.03512945745167297, 0.6132170957146131, 0.002696350554410047, 0.0, 0.16792060687935156, 0.0, 0.0, 0.0, 0.0, 0.13951571584504688, nan, 0.0, 0.0, 0.0033883135148957935, 0.0, nan, 0.0, 0.023547880690737835, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9259125248508946, 0.5833937202876277, 0.9495418535660975, 0.9399268771550079, 0.006985191394244202, 0.7273940123960326, 1.0, 0.62013325608343, 0.0027807518439210087, 0.0, 0.2376288583991412, 0.0, nan, 0.0, 0.0, 0.4403740683448179, nan, 0.0, 0.0, 0.006348065636603186, 0.0, nan, 0.0, 0.030570950998504442, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.8873 | 30.0 | 600 | 2.4626 | 0.0778 | 0.1418 | 0.4925 | [0.2468667974499374, 0.5824415330443496, 0.9365047697161816, 0.4766175860638739, 0.014747382794720073, 0.4104893015559284, 0.04201440901633497, 0.6458095434208803, 0.00015992788706183393, 0.0, 0.09571069350469504, 0.06804020935449032, 0.0, 0.0, 0.0, 0.15629194630872484, nan, 0.0, 0.0, 0.04240229927960883, 0.0, nan, 0.0, 0.014077972347786896, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9296660039761432, 0.621383750467833, 0.9530649000048879, 0.8928896011544252, 0.04526404023470243, 0.6808212181046115, 1.0, 0.6662514484356895, 0.0001635736378777064, 0.0, 0.11586594314789489, 0.07094594594594594, nan, 0.0, 0.0, 0.3929827028547321, nan, 0.0, 0.0, 0.13606419930530603, 0.0, nan, 0.0, 0.015571390868302983, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.747 | 31.0 | 620 | 2.5738 | 0.0805 | 0.1436 | 0.4822 | [0.2503076756990176, 0.5323311687082299, 0.9383151592901433, 0.43961118572743324, 0.010700389105058366, 0.41545488059640356, 0.03825448142481887, 0.6419580957509207, 0.003972626573102128, 0.0, 0.18778165249452933, 0.07764585290851628, nan, 0.0, 0.0, 0.15216227583833594, nan, 0.0, 0.0, 0.0009415674328440875, 0.0, nan, 0.0, 0.015503285884651493, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9219085487077535, 0.5597512322920234, 0.9570240973368476, 0.9378317064540971, 0.027661357921207042, 0.6185818511713718, 1.0, 0.6446794129007338, 0.004074470616226505, 0.0, 0.27636358288848695, 0.07874220374220374, nan, 0.0, 0.0, 0.5596259316551822, nan, 0.0, 0.0, 0.002036171996646305, 0.0, nan, 0.0, 0.01791736312718102, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.0134 | 32.0 | 640 | 2.4330 | 0.0798 | 0.1458 | 0.4919 | [0.22682578705686607, 0.5899426822021467, 0.9375140771483113, 0.5232339935718947, 0.011723570547099958, 0.35242691982954055, 0.03452925313635831, 0.7634057667810352, 0.0007181275922207729, 0.0, 0.11311681772406848, 0.005083884087442806, 0.0, 0.0, 0.0, 0.19649977253873532, nan, 0.0, 0.0, 0.05153418920149322, 0.0, nan, 0.0, 0.022372951295310063, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.7854314115308151, 0.6245371439492272, 0.9546666265608375, 0.8956299734287955, 0.03185247275775356, 0.8082731701366918, 1.0, 0.7927964465044419, 0.0007286462050916013, 0.0, 0.13214511551300717, 0.005197505197505198, nan, 0.0, 0.0, 0.5163127548867951, nan, 0.0, 0.0, 0.1355851000119775, 0.0, nan, 0.0, 0.02543913668220873, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.0517 | 33.0 | 660 | 2.4565 | 0.0799 | 0.1454 | 0.4943 | [0.2621110459200558, 0.5550508125321969, 0.9339240009334129, 0.47672270492882796, 0.011267351721651343, 0.4181243756823937, 0.036579306918395234, 0.6941789497927222, 0.0014176805011066587, 0.0, 0.17586680008665423, 0.028229544113929386, 0.0, 0.0, 0.0, 0.1841718455865288, nan, 0.0, 0.0, 0.016894436672764447, 0.0, nan, 0.0, 0.038414175989313744, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9290059642147117, 0.5898677326623082, 0.9480190853614977, 0.9291596422946976, 0.034925956971221014, 0.6860891508031501, 1.0, 0.6985225955967556, 0.0014572924101832025, 0.0, 0.2507021955559477, 0.02901940401940402, nan, 0.0, 0.0, 0.5021797215581494, nan, 0.0, 0.0, 0.037609294526290575, 0.0, nan, 0.0, 0.05397202428081288, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.4016 | 34.0 | 680 | 2.4004 | 0.0853 | 0.1515 | 0.5411 | [0.29232360444590655, 0.7112867348248278, 0.9387397878150254, 0.5399483619351302, 0.01104875482286917, 0.38188884507994236, 0.03843003066572905, 0.765777217063014, 0.001000812253423068, 0.0, 0.1628241607252978, 0.006926336375488918, 0.0, 0.0, 0.0, 0.20367754628461887, nan, 0.0, 0.0, 0.0208498530715052, 0.0, nan, 0.0, 0.019995839016158486, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9080198807157057, 0.7944520978826077, 0.9578136808503438, 0.8951790260925067, 0.03520536462699078, 0.7236355596892606, 1.0, 0.7795384318269603, 0.0010260528194147038, 0.0, 0.22132027470184262, 0.007363132363132363, nan, 0.0, 0.0, 0.5677823090985796, nan, 0.0, 0.0, 0.053539346029464606, 0.0, nan, 0.0, 0.025365825049118794, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.3543 | 35.0 | 700 | 2.4543 | 0.0806 | 0.1438 | 0.4817 | [0.2525063614472984, 0.5426443974694325, 0.9384380046520586, 0.4302415908005031, 0.034315348118724484, 0.43439034419369665, 0.03546720905648258, 0.6181016517443656, 0.0014304481097649978, 0.0, 0.11632247377139701, 0.1909470976041164, 0.0, 0.0, 0.0, 0.14285327924273158, nan, 0.0, 0.0, 0.03606717358597975, 0.0, nan, 0.0, 0.016179678324025025, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9260437375745527, 0.5682817190772342, 0.952640028876197, 0.918350781526422, 0.08689578094439787, 0.6845644235387395, 1.0, 0.6186268829663963, 0.0014572924101832025, 0.0, 0.15489478095910356, 0.20573458073458073, nan, 0.0, 0.0, 0.3713964280691886, nan, 0.0, 0.0, 0.10624026829560426, 0.0, nan, 0.0, 0.01979414093428345, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.0824 | 36.0 | 720 | 2.4135 | 0.0859 | 0.1544 | 0.5057 | [0.2647522045916562, 0.5758189927265166, 0.9371946062336338, 0.5365586493904178, 0.002401085708320284, 0.3823619922854794, 0.0382514449824293, 0.8191332044136048, 0.0025853975590380586, 0.0, 0.18528161767995346, 0.16325355703594047, 0.0, 0.0, 0.0, 0.16667324777887463, nan, 0.0, 0.0, 0.027176971938570396, 0.0, nan, 0.0, 0.020194809123358946, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9103538767395626, 0.6119753302701885, 0.9540875986509402, 0.9141535024732726, 0.006426376082704666, 0.6914790616828415, 1.0, 0.83441483198146, 0.0026617891981917676, 0.0, 0.30452493345686094, 0.17195079695079696, nan, 0.0, 0.0, 0.5935873998031219, nan, 0.0, 0.0, 0.08096778057252366, 0.0, nan, 0.0, 0.02447142312542154, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.6895 | 37.0 | 740 | 2.3916 | 0.0878 | 0.1525 | 0.5033 | [0.25230570070544994, 0.5934320760457402, 0.9397442149126696, 0.5324038871295237, 0.00011093854004881296, 0.4031368610142646, 0.03936171792625154, 0.7273325199222176, 0.0009877977919814061, 0.0, 0.17512989608313348, 0.29131697869593287, 0.0, 0.0, 0.0, 0.20429072795199818, nan, 0.0, 0.0, 0.03786821164316627, 0.0, nan, 0.0, 0.01602968552830495, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9345884691848907, 0.6368222393871587, 0.9564676289559074, 0.8840857216198028, 0.0002794076557697681, 0.6364516547102637, 1.0, 0.7367999227500965, 0.0010111824886985487, 0.0, 0.2577462096145645, 0.31271656271656273, nan, 0.0, 0.0, 0.5075235550555477, nan, 0.0, 0.0, 0.1299556833153671, 0.0, nan, 0.0, 0.018811765050878275, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.6789 | 38.0 | 760 | 2.4252 | 0.0835 | 0.1536 | 0.4988 | [0.26762034907256393, 0.5647451030132536, 0.9384432837641528, 0.4938898268540648, 0.006597031335898846, 0.39002127873393716, 0.036588396074046466, 0.7337932346924212, 0.0014231229008937211, 0.0, 0.19040461732700054, 0.25879883758475947, 0.0, 0.0, 0.0, 0.16951005626039223, nan, 0.0, 0.0, 0.02335669002335669, 0.0, nan, 0.0, 0.0139610083081571, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.909506958250497, 0.593883133326432, 0.9545613487590379, 0.9376166392629439, 0.016764459346186086, 0.6805086490154074, 1.0, 0.7446697566628042, 0.0014870330716155128, 0.0, 0.32600991161894677, 0.27772002772002774, nan, 0.0, 0.0, 0.523273801153143, nan, 0.0, 0.0, 0.0838423763324949, 0.0, nan, 0.0, 0.017345532389079498, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.1886 | 39.0 | 780 | 2.3197 | 0.0906 | 0.1579 | 0.5416 | [0.2987431787651821, 0.7232482163042664, 0.9319332208507237, 0.5122512757319283, 0.006894746028975388, 0.38259985252052153, 0.037474020557338575, 0.7569509940770363, 0.0008229026809303131, 0.0, 0.19961630246595854, 0.2902900573910346, 0.0, 0.0, 0.0, 0.15802524797114517, nan, 0.0, 0.0, 0.033419023136246784, 0.0, nan, 0.0, 0.017919498949932274, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8939880715705766, 0.7884479092841956, 0.947003906558431, 0.9199811295883892, 0.02207320480581168, 0.7199076015277767, 1.0, 0.7749903437620703, 0.0008476088508208423, 0.0, 0.31367185776679757, 0.32423769923769924, nan, 0.0, 0.0, 0.3943186612290817, nan, 0.0, 0.0, 0.14325068870523416, 0.0, nan, 0.0, 0.021143074983138326, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.7693 | 40.0 | 800 | 2.3891 | 0.0815 | 0.1486 | 0.4970 | [0.27206597479789096, 0.6009005505869358, 0.9331230003554923, 0.4439144347001659, 0.015228426395939087, 0.4106010663414768, 0.03607914134229924, 0.5136548742593557, 0.00330474014684541, 0.0, 0.1852404918607009, 0.2861356932153392, 0.0, 0.0, 0.0, 0.16726828606323788, nan, 0.0, 0.0, 0.02673228764649648, 0.0, nan, 0.0, 0.01948440557026125, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9187435387673957, 0.6368182578297327, 0.9474550971375718, 0.935521468562033, 0.041911148365465216, 0.6828033635483453, 1.0, 0.5139822325222093, 0.0034201760647156795, 0.0, 0.29552506580785576, 0.3108974358974359, nan, 0.0, 0.0, 0.43334270847982, nan, 0.0, 0.0, 0.09042999161576237, 0.0, nan, 0.0, 0.024823318964253247, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.4534 | 41.0 | 820 | 2.3444 | 0.0947 | 0.1662 | 0.5421 | [0.30074017955425564, 0.6949239251249513, 0.939415714106731, 0.5552229654640909, 0.00906043308870164, 0.3946035205643551, 0.038418889831061245, 0.8032481040288804, 0.003344387611713788, 0.0, 0.21059881880175352, 0.3712937475422729, 0.0, 0.0, 0.0, 0.1566748384221083, nan, 0.0, 0.0, 0.05340384556598294, 0.0, nan, 0.0, 0.016302668177603792, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9059920477137177, 0.7517777653907103, 0.9585280659339833, 0.9046558578058984, 0.02794076557697681, 0.7232467542368359, 1.0, 0.8314889918887601, 0.00353913871044492, 0.0, 0.40692048646343437, 0.408957033957034, nan, 0.0, 0.0, 0.4943045985093517, nan, 0.0, 0.0, 0.20924661636124087, 0.0, nan, 0.0, 0.018562505498372482, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.9414 | 42.0 | 840 | 2.3991 | 0.0839 | 0.1534 | 0.5015 | [0.27518557859383924, 0.5822783135558466, 0.936609267503784, 0.46560699302140995, 0.012925860954667159, 0.4002455494168201, 0.03628883199386629, 0.681165953691803, 0.003678042939012916, 0.0, 0.17959568301833118, 0.2473264166001596, 0.0, 0.0, 0.0, 0.1534867876782325, nan, 0.0, 0.0, 0.03932991151557137, 0.0, nan, 0.0, 0.013308321240222278, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8910377733598409, 0.6133688753692894, 0.9515759710937236, 0.940100318438196, 0.03911707180776753, 0.7555328540607299, 1.0, 0.6857570490536887, 0.003836545324768023, 0.0, 0.2970838664137292, 0.26845114345114346, nan, 0.0, 0.0, 0.44353818028406694, nan, 0.0, 0.0, 0.14959875434183734, 0.0, nan, 0.0, 0.016539104425090174, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.5657 | 43.0 | 860 | 2.3755 | 0.0880 | 0.1572 | 0.5108 | [0.2701927372862983, 0.6367561020329924, 0.9347933335558943, 0.5259934458959117, 0.012261454780123537, 0.40168097889861465, 0.03808983193337692, 0.6195595529906777, 0.0022205349340286232, 0.0, 0.16273078574495492, 0.35831418838627904, 0.0, 0.0, 0.0, 0.18394844756883422, nan, 0.0, 0.0, 0.06334433257184759, 0.0, nan, 0.0, 0.014170611656268615, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8969184890656063, 0.6844536108744297, 0.9475378154104143, 0.89195301822521, 0.03716121821737916, 0.7385245214262299, 1.0, 0.6199304750869061, 0.0023049012610040446, 0.0, 0.26752547756650635, 0.3918052668052668, nan, 0.0, 0.0, 0.4636478694979609, nan, 0.0, 0.0, 0.26901425320397654, 0.0, nan, 0.0, 0.01988211489399138, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.6606 | 44.0 | 880 | 2.3640 | 0.0871 | 0.1575 | 0.5185 | [0.28348656734824434, 0.6437721203160375, 0.9341802196827134, 0.5076167920852592, 0.008399525244225327, 0.3965154665354643, 0.03777282152502155, 0.6759794321657003, 0.004124553056110814, 0.0, 0.19308402962321206, 0.2569433295874137, 0.0, 0.0, 0.0, 0.17998049894171086, nan, 0.0, 0.0, 0.04457206251835416, 0.0, nan, 0.0, 0.013956605874531087, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8774433399602386, 0.6934459583210568, 0.9500494429676308, 0.9219375472627497, 0.025705504330818665, 0.7618528485717118, 1.0, 0.6766125917342604, 0.004357006899833453, 0.0, 0.34084792870693076, 0.2772869022869023, nan, 0.0, 0.0, 0.5321333145830404, nan, 0.0, 0.0, 0.1636124086716972, 0.0, nan, 0.0, 0.018929063663822174, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.8846 | 45.0 | 900 | 2.3631 | 0.0870 | 0.1552 | 0.5033 | [0.2661283046635257, 0.5959986322679552, 0.9391563795885661, 0.5111767285030828, 0.018807810894141828, 0.40789866252951557, 0.037237891187524144, 0.6426789687087672, 0.0024649359260807517, 0.0, 0.18149624120210528, 0.33139489194499017, 0.0, 0.0, 0.0, 0.195882937840961, nan, 0.0, 0.0, 0.031118196008103352, 0.0, nan, 0.0, 0.01232691658223573, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9030934393638171, 0.63153672190414, 0.9562683531167869, 0.9139800611900847, 0.05113160100586756, 0.7295972432931059, 1.0, 0.6443607570490537, 0.002542826552462527, 0.0, 0.3007161659387362, 0.3652979902979903, nan, 0.0, 0.0, 0.5125158205596962, nan, 0.0, 0.0, 0.11222900946221104, 0.0, nan, 0.0, 0.017125597489809682, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.047 | 46.0 | 920 | 2.3590 | 0.0863 | 0.1537 | 0.4969 | [0.2554754521278945, 0.5727096818735667, 0.9312166758253999, 0.506277728397342, 0.011306990881458966, 0.4363657291946437, 0.039921377253626135, 0.658085440144742, 0.002369427121945547, 0.0, 0.17382521989826835, 0.3185191361627616, 0.0, 0.0, 0.0, 0.17591701164917686, nan, 0.0, 0.0, 0.04528327975388019, 0.0, nan, 0.0, 0.014229540961525929, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9198091451292246, 0.6110217472666608, 0.9429845504825859, 0.9128214734183889, 0.025984911986588432, 0.6936289271256604, 1.0, 0.6603032058709927, 0.0024238639067332857, 0.0, 0.2784076704754342, 0.33090783090783094, nan, 0.0, 0.0, 0.5139220925326958, nan, 0.0, 0.0, 0.15690501856509761, 0.0, nan, 0.0, 0.019207647869563943, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.1081 | 47.0 | 940 | 2.3319 | 0.0871 | 0.1564 | 0.5249 | [0.29434380183116504, 0.6652416682817879, 0.9323893726707845, 0.4924361959196284, 0.0065373260227752005, 0.4039022897566907, 0.03752442166661948, 0.6537777820484486, 0.008167163157969755, 0.0, 0.20251956316106287, 0.28727388636544743, 0.0, 0.0, 0.0, 0.1584074184354928, nan, 0.0, 0.0, 0.01714654224043094, 0.0, nan, 0.0, 0.019206269337415826, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8897972166998012, 0.7174726666082705, 0.9472746209059155, 0.9288613232876142, 0.01732327465772562, 0.764772701283058, 1.0, 0.6569911162611046, 0.008520699500356888, 0.0, 0.34633314215967415, 0.31228343728343727, nan, 0.0, 0.0, 0.5236956827450429, nan, 0.0, 0.0, 0.054138220146125286, 0.0, nan, 0.0, 0.02630421395267001, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.5803 | 48.0 | 960 | 2.3755 | 0.0877 | 0.1597 | 0.5083 | [0.2787585747188189, 0.6014326771601674, 0.9364555913755879, 0.5150544522741832, 0.01728748806112703, 0.39782337141967744, 0.03634381620354182, 0.6638897942749775, 0.005138614845202776, 0.0, 0.19881298143705015, 0.3212225167628255, 0.0, 0.0, 0.0, 0.16657208957178402, nan, 0.0, 0.0, 0.051408542126444245, 0.0, nan, 0.0, 0.018415664864297337, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8912604373757456, 0.6361513469608772, 0.9523881141361769, 0.9259266967760734, 0.05057278569432803, 0.7599469394911985, 1.0, 0.6687234453456933, 0.005383059719248156, 0.0, 0.3472890104557286, 0.3568953568953569, nan, 0.0, 0.0, 0.536633384896639, nan, 0.0, 0.0, 0.18972332015810275, 0.0, nan, 0.0, 0.0257177208879505, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.6094 | 49.0 | 980 | 2.3686 | 0.0896 | 0.1618 | 0.5130 | [0.28037557124227125, 0.6142512603589863, 0.9359444928585144, 0.5361338598030613, 0.016878030585602388, 0.40150319552667474, 0.03638847320803416, 0.6875101806161069, 0.0053602750298182835, 0.0, 0.20195978119711935, 0.33967391304347827, 0.0, 0.0, 0.0, 0.17283582089552238, nan, 0.0, 0.0, 0.058360352014821676, 0.0, nan, 0.0, 0.015887581225125638, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8984493041749503, 0.6510244547257105, 0.9530085011824954, 0.9156242845547069, 0.05057278569432803, 0.7428318759481898, 1.0, 0.69285438393202, 0.005680466333571259, 0.0, 0.3773473919501184, 0.378984753984754, nan, 0.0, 0.0, 0.5292504570383912, nan, 0.0, 0.0, 0.22637441609773626, 0.0, nan, 0.0, 0.021832204334183748, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.2836 | 50.0 | 1000 | 2.3867 | 0.0854 | 0.1569 | 0.5029 | [0.2771319317345961, 0.5947571607182376, 0.939295828662458, 0.5107730625538844, 0.022752808988764046, 0.39377705160129767, 0.03627070632428105, 0.5970283178156206, 0.009085428262972674, 0.0, 0.19687228940252055, 0.3002657911194497, 0.0, 0.0, 0.0, 0.1685958623072604, nan, 0.0, 0.0, 0.038824310860179345, 0.0, nan, 0.0, 0.012886162290077854, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8898568588469185, 0.6259764769587272, 0.9577347224989943, 0.9247819843070327, 0.06789606035205364, 0.7643305303763789, 1.0, 0.5975183468520664, 0.009635974304068522, 0.0, 0.34045087572241584, 0.3327269577269577, nan, 0.0, 0.0, 0.5489382646603853, nan, 0.0, 0.0, 0.1400167684752665, 0.0, nan, 0.0, 0.019002375296912115, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hooman650/bge-large-en-v1.5-onnx-o4
|
hooman650
| 2023-11-22T00:06:49Z | 4 | 0 |
transformers
|
[
"transformers",
"onnx",
"bert",
"feature-extraction",
"en",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-11-21T23:40:56Z |
---
license: mit
language:
- en
library_name: transformers
pipeline_tag: feature-extraction
---
# BGE-Large-En-V1.5-ONNX-O4
This is an `ONNX O4` strategy optimized version of [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) optimal for `Cuda`. It should be much faster than the original
version.

## Usage
```python
# pip install "optimum[onnxruntime-gpu]" transformers
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('hooman650/bge-large-en-v1.5-onnx-o4')
model = ORTModelForFeatureExtraction.from_pretrained('hooman650/bge-large-en-v1.5-onnx-o4')
model.to("cuda")
pairs = ["pandas usually live in the jungles"]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
sentence_embeddings = model(**inputs)[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
```
|
kamakani/llama2_instruct_generation
|
kamakani
| 2023-11-22T00:01:33Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:finetune:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2023-11-22T00:01:13Z |
---
base_model: NousResearch/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama2_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_instruct_generation
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9475 | 0.0 | 20 | 1.8132 |
| 1.7993 | 0.0 | 40 | 1.7797 |
| 1.8766 | 0.0 | 60 | 1.7642 |
| 1.849 | 0.01 | 80 | 1.7527 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_GroundTruth_3epoch_seed123
|
behzadnet
| 2023-11-21T23:47:27Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-11-21T23:47:23Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
badokorach/t5-small-finetune-1611
|
badokorach
| 2023-11-21T23:44:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-15T23:55:12Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetune-1611
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetune-1611
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.065 | 1.0 | 11331 | 0.1631 |
| 0.0616 | 2.0 | 22662 | 0.3200 |
| 0.0654 | 3.0 | 33993 | 0.2814 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Tokenizers 0.15.0
|
Kooten/Noromaid-20b-v0.1.1-4bpw-h8-exl2
|
Kooten
| 2023-11-21T23:20:13Z | 12 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-17T16:02:34Z |
---
license: cc-by-nc-4.0
---
## Description
Exllama 2 quant of [NeverSleep/Noromaid-20b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1)
Please make sure to read their description
4 BPW, Head bit set to 8
## Prompt template: Custom format, or Alpaca
### Custom format:
UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
OLD SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
|
Kooten/Noromaid-13b-v0.1.1-3bpw-h8-exl2
|
Kooten
| 2023-11-21T23:19:15Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-13T23:33:52Z |
---
license: cc-by-nc-4.0
---
## Description
Exllama 2 quant of [NeverSleep/Noromaid-13b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1)
3 BPW, Head bit set to 8
## Prompt template: Custom format, or Alpaca
### Custom format:
UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
OLD SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
|
Santp98/SBERT-albert-base-spanish-2023-11-13-19-24
|
Santp98
| 2023-11-21T23:15:22Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"albert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:Santp98/sentences_triplets_secop2_splits",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-20T02:04:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- Santp98/sentences_triplets_secop2_splits
---
# Santp98/SBERT-albert-base-spanish-2023-11-13-19-24
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Santp98/SBERT-albert-base-spanish-2023-11-13-19-24')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Santp98/SBERT-albert-base-spanish-2023-11-13-19-24')
model = AutoModel.from_pretrained('Santp98/SBERT-albert-base-spanish-2023-11-13-19-24')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Santp98/SBERT-albert-base-spanish-2023-11-13-19-24)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6321 with parameters:
```
{'batch_size': 86, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.CustomTripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 500,
"evaluator": "__main__.CustomTripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
SaiedAlshahrani/noon_7B_4bit_qlora_flores
|
SaiedAlshahrani
| 2023-11-21T23:11:32Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:asas-ai/noon-7B_8bit",
"base_model:finetune:asas-ai/noon-7B_8bit",
"region:us"
] | null | 2023-11-21T21:38:12Z |
---
base_model: asas-ai/noon-7B_8bit
tags:
- generated_from_trainer
model-index:
- name: noon_7B_4bit_qlora_flores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# noon_7B_4bit_qlora_flores
This model is a fine-tuned version of [asas-ai/noon-7B_8bit](https://huggingface.co/asas-ai/noon-7B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.4.0
- Tokenizers 0.15.0
|
Bashar-Alshouha/BioEmoDetector
|
Bashar-Alshouha
| 2023-11-21T23:09:03Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-11-14T15:12:24Z |
---
license: mit
---
# BioEmoDetector Biomedical Pre-trained Models
## Overview
Welcome to the Hugging Face repository for BioEmo-Predictor Biomedical Pre-trained Language Models. This collection comprises meticulously trained models designed specifically for detecting emotions in clinical text data.
## Models Included
- **CODER**
- **BlueBERT**
- **SciBERT**
- **BioMed-RoBERTa**
- **Bio_ClinicalBERT**
- **Clinical_Longformer**
- **BioBERT**
These models serve as foundational elements for emotion prediction in clinical text, offering a specialized understanding of medical language and context.
## Usage
To use these models in your application, follow the steps below:
1. Install the `transformers` library:
```bash
pip install transformers
```
2. Use the following code to download the desired model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Replace "model_name" with the specific model name you want to use (e.g., "Bashar-Alshouha/BioEmoDetector/biobert")
model_name = "Bashar-Alshouha/BioEmoDetector/biobert"
# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Now, you can use the model for emotion prediction on clinical text data
```
Make sure to replace `"Bashar-Alshouha/BioEmoDetector/biobert"` with the specific model name you want to use.
## Citation
---
|
sgimmel/gpt-neo-125m-finetuned-cummings
|
sgimmel
| 2023-11-21T23:00:01Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-20T15:04:03Z |
---
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125m-finetuned-cummings
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125m-finetuned-cummings
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 109 | 8.0124 |
| No log | 2.0 | 218 | 8.1313 |
| No log | 3.0 | 327 | 8.2502 |
| No log | 4.0 | 436 | 8.3335 |
| 1.1381 | 5.0 | 545 | 8.4097 |
| 1.1381 | 6.0 | 654 | 8.4559 |
| 1.1381 | 7.0 | 763 | 8.4804 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
Chuuni/Viridi
|
Chuuni
| 2023-11-21T22:38:40Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-11-21T22:26:59Z |
---
license: cc-by-nc-sa-4.0
---
|
adamsns/damstech
|
adamsns
| 2023-11-21T22:32:16Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-21T22:27:32Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### damstech Dreambooth model trained by adamsns with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
iampedroalz/dqn-SpaceInvadersNoFrameskip-v4
|
iampedroalz
| 2023-11-21T22:26:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-21T22:25:30Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 700.00 +/- 216.08
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga iampedroalz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga iampedroalz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga iampedroalz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nbroad/finer-139-xtremedistil-l12-h384
|
nbroad
| 2023-11-21T22:25:28Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:nlpaueb/finer-139",
"base_model:microsoft/xtremedistil-l12-h384-uncased",
"base_model:finetune:microsoft/xtremedistil-l12-h384-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-13T14:13:17Z |
---
license: mit
base_model: microsoft/xtremedistil-l12-h384-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finer-139-xtremedistil-l12-h384
results: []
datasets:
- nlpaueb/finer-139
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finer-139-xtremedistil-l12-h384
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on the [finer-139](https://huggingface.co/datasets/nlpaueb/finer-139) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0133
- Precision: 0.6104
- Recall: 0.6581
- F1: 0.6334
- Accuracy: 0.9961
## Model description
Base model: microsoft/xtremedistil-l12-h384-uncased
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 512
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 512
- total_eval_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0438 | 1.0 | 1759 | 0.0389 | 0.4777 | 0.1593 | 0.2389 | 0.9937 |
| 0.0266 | 2.0 | 3518 | 0.0234 | 0.5432 | 0.4129 | 0.4692 | 0.9949 |
| 0.0186 | 3.0 | 5277 | 0.0165 | 0.5980 | 0.5516 | 0.5739 | 0.9957 |
| 0.0154 | 4.0 | 7036 | 0.0143 | 0.5932 | 0.6447 | 0.6179 | 0.9959 |
| 0.0137 | 5.0 | 8795 | 0.0133 | 0.6104 | 0.6581 | 0.6334 | 0.9961 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0a0+b5021ba
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mzhao39/trained_model
|
mzhao39
| 2023-11-21T22:22:39Z | 53 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:Dizex/FoodBaseBERT-NER",
"base_model:finetune:Dizex/FoodBaseBERT-NER",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-16T09:03:46Z |
---
license: mit
base_model: Dizex/FoodBaseBERT-NER
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: trained_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model
This model is a fine-tuned version of [Dizex/FoodBaseBERT-NER](https://huggingface.co/Dizex/FoodBaseBERT-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2967
- Precision: 0.6784
- Recall: 0.7160
- F1: 0.6967
- Accuracy: 0.9368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 209 | 0.1880 | 0.6032 | 0.6975 | 0.6469 | 0.9269 |
| No log | 2.0 | 418 | 0.1874 | 0.6275 | 0.7140 | 0.6679 | 0.9344 |
| 0.2256 | 3.0 | 627 | 0.1885 | 0.6522 | 0.7099 | 0.6798 | 0.9372 |
| 0.2256 | 4.0 | 836 | 0.2022 | 0.6704 | 0.7366 | 0.7020 | 0.9408 |
| 0.1043 | 5.0 | 1045 | 0.2527 | 0.6473 | 0.7099 | 0.6771 | 0.9392 |
| 0.1043 | 6.0 | 1254 | 0.2578 | 0.6699 | 0.7140 | 0.6912 | 0.9392 |
| 0.1043 | 7.0 | 1463 | 0.2784 | 0.6628 | 0.7078 | 0.6846 | 0.9382 |
| 0.0557 | 8.0 | 1672 | 0.2967 | 0.6784 | 0.7160 | 0.6967 | 0.9368 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mapapin/q-Taxi-v3
|
mapapin
| 2023-11-21T22:22:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-21T22:09:47Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mapapin/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Grekkla/BarraganJustTheTip
|
Grekkla
| 2023-11-21T22:17:30Z | 9 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:unknown",
"region:us"
] |
text-to-image
| 2023-11-21T21:16:18Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
cinematic full body modelshoot photograph of a cute blonde 20 years old
girl wearing black designer tshirt, design close up, and gray micro shorts,
front view, looking at the camera, in a studio, posing, 35mm photograph,
film, bokeh, professional, 4k, highly detailed
<lora:JustTheTipBlackDesignerTshirt-000025:1>
parameters:
negative_prompt: >-
tattoo, drawing, painting, crayon, sketch, graphite, impressionist, noisy,
blurry, soft, deformed, ugly, head out of frame
output:
url: images/WomanCloseUp.png
- text: >-
man wearing black designer tshirt, front view, in a studio, 35mm photograph,
film, bokeh, professional, 4k, highly detailed
<lora:JustTheTipBlackDesignerTshirt-000020:1>
parameters:
negative_prompt: >-
tattoo, drawing, painting, crayon, sketch, graphite, impressionist, noisy,
blurry, soft, deformed, ugly, head out of frame
output:
url: images/Closeupbetter.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: black designer tshirt
license: unknown
---
# Barragan ''Just The Tip''
<Gallery />
## Model description
Barragan ''Just The Tip'' T-Shirt.
## Trigger words
You should use `black designer tshirt` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Grekkla/BarraganJustTheTip/tree/main) them in the Files & versions tab.
|
Davide11/cat-toy
|
Davide11
| 2023-11-21T22:07:06Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-18T19:26:15Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Cat toy on Stable Diffusion via Dreambooth
#### model by Davide11
This your the Stable Diffusion model fine-tuned the Cat toy concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<cat-toy> toy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:




|
emilstabil/DanSumT5-baseV_38821V_41166V_66047
|
emilstabil
| 2023-11-21T21:52:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:emilstabil/DanSumT5-baseV_38821V_41166",
"base_model:finetune:emilstabil/DanSumT5-baseV_38821V_41166",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-21T17:01:40Z |
---
license: apache-2.0
base_model: emilstabil/DanSumT5-baseV_38821V_41166
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: DanSumT5-baseV_38821V_41166V_66047
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DanSumT5-baseV_38821V_41166V_66047
This model is a fine-tuned version of [emilstabil/DanSumT5-baseV_38821V_41166](https://huggingface.co/emilstabil/DanSumT5-baseV_38821V_41166) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1990
- Rouge1: 36.0404
- Rouge2: 12.6764
- Rougel: 22.071
- Rougelsum: 28.8826
- Gen Len: 125.7597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 232 | 2.1564 | 34.9765 | 12.317 | 22.0495 | 28.0706 | 126.1974 |
| No log | 2.0 | 465 | 2.1556 | 35.1549 | 12.0372 | 21.909 | 28.1749 | 126.4721 |
| 1.8468 | 3.0 | 697 | 2.1567 | 35.5068 | 12.2877 | 22.3354 | 28.495 | 126.0987 |
| 1.8468 | 4.0 | 930 | 2.1524 | 35.5106 | 12.2834 | 22.0562 | 28.154 | 126.3863 |
| 1.7638 | 5.0 | 1162 | 2.1675 | 35.4676 | 12.5524 | 22.5308 | 28.6412 | 125.3648 |
| 1.7638 | 6.0 | 1395 | 2.1637 | 35.4733 | 12.2594 | 22.1365 | 28.4636 | 125.8884 |
| 1.7082 | 7.0 | 1627 | 2.1771 | 35.6859 | 12.5372 | 22.4273 | 28.6912 | 125.4807 |
| 1.7082 | 8.0 | 1860 | 2.1809 | 35.3696 | 12.3894 | 22.1246 | 28.1085 | 125.3734 |
| 1.6599 | 9.0 | 2092 | 2.1828 | 35.2528 | 12.3629 | 22.1104 | 28.1709 | 126.2189 |
| 1.6599 | 10.0 | 2325 | 2.1852 | 35.2601 | 12.1863 | 21.9823 | 28.1476 | 125.5365 |
| 1.6125 | 11.0 | 2557 | 2.1903 | 35.1649 | 12.0801 | 21.883 | 27.82 | 125.3305 |
| 1.6125 | 12.0 | 2790 | 2.1863 | 35.2341 | 12.0505 | 21.6645 | 28.1187 | 125.6953 |
| 1.5957 | 13.0 | 3022 | 2.1921 | 35.5287 | 12.4581 | 22.0277 | 28.6527 | 125.97 |
| 1.5957 | 14.0 | 3255 | 2.2085 | 35.7979 | 12.3305 | 22.0783 | 28.6627 | 125.412 |
| 1.5957 | 15.0 | 3487 | 2.1962 | 35.7095 | 12.5406 | 21.81 | 28.299 | 126.3133 |
| 1.5708 | 16.0 | 3720 | 2.1932 | 35.5116 | 12.3365 | 22.0461 | 28.4349 | 125.9614 |
| 1.5708 | 17.0 | 3952 | 2.1985 | 35.3852 | 12.3385 | 21.9544 | 28.3238 | 125.4034 |
| 1.5644 | 18.0 | 4185 | 2.1987 | 35.4105 | 12.2686 | 22.0002 | 28.287 | 125.073 |
| 1.5644 | 19.0 | 4417 | 2.1996 | 35.7954 | 12.5156 | 22.198 | 28.5893 | 124.9099 |
| 1.5446 | 19.96 | 4640 | 2.1990 | 36.0404 | 12.6764 | 22.071 | 28.8826 | 125.7597 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
typeof/mistral-7b-og
|
typeof
| 2023-11-21T21:47:43Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T18:21:27Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
---
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [Release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
peddle/pokemon-lora
|
peddle
| 2023-11-21T21:46:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-19T21:38:38Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - peddle/pokemon-lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the /home/aaronpeddle/Code/spiderverse-test/dataset dataset. You can find some example images in the following.




|
niltonseixas/sentiment_analysis
|
niltonseixas
| 2023-11-21T21:43:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-21T19:10:31Z |
# Sentiment analysis model
<!-- Provide a quick summary of what the model is/does. -->
This model aims to demonstrate text classification task through sentiment analysis
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Nilton Seixas]
- **Language(s) (NLP):** [English]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [distilbert-base-cased]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [niltonseixas/sentiment_analysis]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("niltonseixas/sentiment_analysis_tokenizer")
model = pipeline("text-classification", model="niltonseixas/sentiment_analysis", tokenizer=tokenizer)
model("I'm in love with NLP")
|
higgsfield/mistral-guanaco-top
|
higgsfield
| 2023-11-21T21:41:34Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-21T21:33:45Z |
---
{}
---
---
{ card_data }
---
# Model Card for MyCoolModel
This model does this and that.
higgsfield.ai/model/655d2077c997afcb10532d40
This model was created by [@{ author }](https://hf.co/{author}).
|
FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B
|
FPHam
| 2023-11-21T21:35:21Z | 129 | 28 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"llm",
"llama",
"spellcheck",
"grammar",
"conversational",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-21T20:52:15Z |
---
tags:
- llm
- llama
- spellcheck
- grammar
license: llama2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B/resolve/main/karen3.jpg" alt="FPHam's Karen v2" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->
# Karen is an editor for your text. (v.2) CREATIVE edition
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in grammatical wisdom but respectfull of your style.
# Info
Karen, Version 2, uses a completely different data set and base model than the previous Karen.
# There are two versions of Karen V2
1. Strict ((here)[https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B]), in which Karen will try not to make too many changes to your original text, mostly fixing grammar and spelling, assuming that you know what you are doing.
2. Creative (this one), in which Karen may suggest slight contextual improvements or rephrasing where necessary. It's Karen, after a glass of wine.
# Goals
Karen's primary goal is to rectify grammatical and spelling errors in US English without altering the style of the text. She is adept at identifying and correcting common ESL errors.
Verb Tense Errors:
Incorrect use of verb tenses, such as using present tense when past tense is required and vice versa.
Confusion between continuous and simple tenses.
Subject-Verb Agreement:
Lack of agreement between the subject and verb in number, e.g., using a singular verb with a plural subject or vice versa.
Articles (a, an, the):
Incorrect use or omission of articles, such as using "a" instead of "an" or vice versa.
Overuse or omission of the definite article "the."
Prepositions:
Misuse of prepositions, such as using "in" instead of "on" or "at," or omitting prepositions where they are needed.
Word Order:
Incorrect word order in sentences, especially in questions and negative sentences.
Misplacement of adverbs or adjectives.
Pluralization:
Incorrect plural forms of nouns, such as failing to add "-s" or "-es" when necessary.
Pronoun Errors:
Confusion between subject and object pronouns.
Incorrect use of possessive pronouns.
Double Negatives:
Using double negatives, which is grammatically incorrect in standard English.
Modal Verbs:
Misuse of modal verbs like can, could, will, would, should, etc.
Confusing Similar Words:
Confusing words that sound similar but have different meanings and spellings (e.g., "their," "there," and "they're").
Lack of Plural/Singular Agreement:
Mistakes in matching singular and plural nouns and verbs in a sentence.
# Future Goals
Use bigger model, add grammar cases that the model misses. Better datasets. Use larger datasets.
# Training
It was reversely trained on fict/non-fiction US text where errors were intentionally inserted by another Llama model (Darth Karen) and Python script.
# Usage
It should be used by submitting a paragraph or block of text at a time.
# Model uses ChatML
```
<|im_start|>system
<|im_end|>
<|im_start|>user
Edit the following text for spelling and grammar mistakes: {paragraph of text} <|im_end|>
<|im_start|>assistant
```
Note the pretext: *Edit the following text for spelling and grammar mistakes:* before the actual text. This way Karen wouldn't start talking ABOUT the text.
# Recomended settings
- Temperature: 0.7
- top_p: 0.1
- top_k: 40
- repetition penalty: 1.18
Karen could also be used in chatting. But if Karen is involved in a conversation where the text is lengthy, she might interpret your message as needing proofreading assistance rather than simply acknowledging that you are using her in casual chat.
Example:
>Edit the following text for spelling and grammar mistakes: I see future writing more about you writing the story any way you can and LLM editing it and making it into a proper fiction. That seems to be more satisfying way where you could feel you actually had done something creATIVE. So human is the creative, LLM is the burger flipping intern.
>The opposite is what many people are trying rn. Make LLM to invent story (from premise, to creating outline to writing actual chapters) and the human to somehow assemble it from those pieces. So LLM is the creative man, and human is the floor polisher.
>I can't believe a normal person would find it satisfying long term.
>So in my view, the job of editor would be replaced by Ai, not the job of author.
(Grammarly Score: 48)
Output STRICT:
>I see the future of writing being more about you writing the story any way you can and having an AI like LLM edit it and make it into a proper fiction. That seems to be a more satisfying way where you could feel you actually did something creative. So humans are the creative ones, while LLMs are the burger-flipping interns.
>
>On the other hand, many people are currently trying to do the exact opposite. They want to make LLMs create stories (from premises, to outlining, to writing entire chapters). And then they want humans to somehow piece them together from these parts. In this scenario, LLMs become the creative individuals, and humans become the floor polishers.
>
>I can't imagine a regular person finding that satisfying over the long run.
>
>In my opinion, the role of an editor will eventually be taken over by AI, not the role of an author.
(Grammarly Score: 93)
Output CREATIVE:
>I see the future of writing being more focused on you writing the story any way you can and having an AI like GPT-3 edit it and turn it into a proper piece of fiction. This seems to be a more satisfying approach where you can truly feel that you have done something creative. In this scenario, humans remain the creative force, while the AI acts as the burger flipping intern.
>
>On the other hand, there are those who argue that we should let the AI generate stories from scratch - from coming up with a premise, to outlining the plot, to writing individual chapters. They believe that the role of the human should then become one of assembling these pieces together. In this case, the AI becomes the creative mind, and the human takes on the role of a floor polisher.
>
>I cannot imagine how someone could find this satisfying over the long term. Therefore, in my opinion, the job of the editor will eventually be taken over by AI, not the job of the author.
(Grammarly Score: 83)
# Conclusion
After probably 10 different versions with subsequent changes, I can now say that the current model works reasonably well, with occasional (but often debatable) grammar misses. The limitations seem to be related to the 7B parameters. It appears that the size isn't sufficient to have a fine-grained understanding of various nuances of the input. This correlates with my other findings - the Mistral model performs quite well when generating its own text, but its comprehension is less than perfect, again related to only 7B parameters.
The goal was to create a model that wouldn't change the style of the text. Often, LLM models, when asked to edit text, will attempt to rewrite the text even if the text is already fine. This proved to be quite challenging for such a small model where the main task was to determine the right balance between fixing the text (and not changing its style) and copying it verbatim.
The strict model assumes that you're already a good writer that doesn't need hand-holding and that every word you've written you've meant.
|
ThuyNT03/CS341_Camera-COQE_UniCOQE_t5small
|
ThuyNT03
| 2023-11-21T21:33:10Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-21T20:31:53Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: CS341_Camera-COQE_UniCOQE_t5small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS341_Camera-COQE_UniCOQE_t5small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
ceec/whisper-tiny-finetuned-minds
|
ceec
| 2023-11-21T21:26:14Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-21T21:00:43Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: Whisper Tiny Finetuned on Minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Minds14 - en-US split
type: PolyAI/minds14
config: en-US
split: train[451:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34525939177101966
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Finetuned on Minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Minds14 - en-US split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6625
- Wer Ortho: 0.3498
- Wer: 0.3453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0007 | 17.86 | 500 | 0.6625 | 0.3498 | 0.3453 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LeKyks1/poca-SoccerTwos
|
LeKyks1
| 2023-11-21T20:58:57Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-11-21T20:58:08Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: LeKyks1/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
stoves/Chiriac_Maxim
|
stoves
| 2023-11-21T20:42:03Z | 3 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-11-10T23:07:17Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of sks Chiriac_Maxim
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
asas-ai/noon_7B_4bit_qlora_mlqa
|
asas-ai
| 2023-11-21T20:36:40Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:asas-ai/noon-7B_8bit",
"base_model:finetune:asas-ai/noon-7B_8bit",
"region:us"
] | null | 2023-11-21T20:35:53Z |
---
base_model: asas-ai/noon-7B_8bit
tags:
- generated_from_trainer
model-index:
- name: noon_7B_4bit_qlora_mlqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# noon_7B_4bit_qlora_mlqa
This model is a fine-tuned version of [asas-ai/noon-7B_8bit](https://huggingface.co/asas-ai/noon-7B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1950
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.4.0
- Tokenizers 0.15.0
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.