| Column | Type | Range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-14 00:42:58 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 558 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-14 00:36:41 |
| card | string | length 11 to 1.01M |
shre-db/bert-finetuned-ner
|
shre-db
| 2023-07-31T10:52:33Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-31T10:26:52Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9313871020946726
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9407746772178258
- name: Accuracy
type: accuracy
value: 0.9866515570730559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0577
- Precision: 0.9314
- Recall: 0.9504
- F1: 0.9408
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
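The card does not include the training script; the hyperparameters listed above map roughly onto the following `Trainer` setup for token classification on CoNLL-2003. This is a sketch for orientation only, not the exact code used for this run; the label-alignment helper is a standard pattern, not taken from the card.
```python
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer, TrainingArguments)

raw = load_dataset("conll2003")
labels = raw["train"].features["ner_tags"].feature.names
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_and_align(batch):
    # Tokenize into word pieces and label only the first piece of each word (-100 elsewhere)
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids, prev, lab = enc.word_ids(batch_index=i), None, []
        for wid in word_ids:
            lab.append(-100 if wid is None or wid == prev else tags[wid])
            prev = wid
        enc["labels"].append(lab)
    return enc

tokenized = raw.map(tokenize_and_align, batched=True, remove_columns=raw["train"].column_names)
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=len(labels))

# Hyperparameters as listed in this card
args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], eval_dataset=tokenized["validation"],
                  data_collator=DataCollatorForTokenClassification(tokenizer), tokenizer=tokenizer)
trainer.train()
```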
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0781 | 1.0 | 1756 | 0.0729 | 0.9083 | 0.9349 | 0.9214 | 0.9807 |
| 0.0413 | 2.0 | 3512 | 0.0581 | 0.9196 | 0.9465 | 0.9328 | 0.9854 |
| 0.0268 | 3.0 | 5268 | 0.0577 | 0.9314 | 0.9504 | 0.9408 | 0.9867 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
TencentARC/t2iadapter_zoedepth_sd15v1
|
TencentARC
| 2023-07-31T10:48:46Z | 1,831 | 1 |
diffusers
|
[
"diffusers",
"art",
"t2i-adapter",
"controlnet",
"stable-diffusion",
"image-to-image",
"arxiv:2302.08453",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2023-07-14T19:02:00Z |
---
license: apache-2.0
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- t2i-adapter
- controlnet
- stable-diffusion
- image-to-image
---
# T2I Adapter - Zoedepth
T2I-Adapter is a network that provides additional conditioning to Stable Diffusion. Each T2I-Adapter checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint.
This checkpoint provides conditioning on ZoeDepth depth estimation for the Stable Diffusion 1.5 checkpoint.
## Model Details
- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453).
- **Cite as:**
        @misc{
          title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
          author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie},
          year={2023},
          eprint={2302.08453},
          archivePrefix={arXiv},
          primaryClass={cs.CV}
        }
### Checkpoints
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>|
|[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>|
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>|
|[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | An [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> |
|[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)||
|[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)||
|[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)||
|[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)||
## Example
1. Dependencies
```sh
pip install diffusers transformers matplotlib
```
2. Run code:
```python
from PIL import Image
import torch
import numpy as np
import matplotlib
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline
def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None):
    """Converts a depth map to a color image.
    Args:
        value (torch.Tensor, numpy.ndarray): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed
        vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None.
        vmax (float, optional): vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None.
        cmap (str, optional): matplotlib colormap to use. Defaults to 'gray_r'.
        invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99.
        invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None.
        background_color (tuple[int], optional): 4-tuple RGB color to give to invalid pixels. Defaults to (128, 128, 128, 255).
        gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False.
        value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None.
    Returns:
        numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4)
    """
    if isinstance(value, torch.Tensor):
        value = value.detach().cpu().numpy()
    value = value.squeeze()
    if invalid_mask is None:
        invalid_mask = value == invalid_val
    mask = np.logical_not(invalid_mask)
    # normalize over the valid pixels only
    vmin = np.percentile(value[mask], 2) if vmin is None else vmin
    vmax = np.percentile(value[mask], 85) if vmax is None else vmax
    if vmin != vmax:
        value = (value - vmin) / (vmax - vmin)  # vmin..vmax
    else:
        # Avoid 0-division
        value = value * 0.
    # grey out the invalid values
    value[invalid_mask] = np.nan
    cmapper = matplotlib.cm.get_cmap(cmap)
    if value_transform:
        value = value_transform(value)
    # value = value / value.max()
    value = cmapper(value, bytes=True)  # (n, m, 4)
    img = value[...]
    img[invalid_mask] = background_color
    if gamma_corrected:
        img = img / 255
        img = np.power(img, 2.2)
        img = img * 255
    img = img.astype(np.uint8)
    return img
model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True)
img = Image.open('./images/zoedepth_in.png')
out = model.infer_pil(img)
zoedepth_image = Image.fromarray(colorize(out)).convert('RGB')
zoedepth_image.save('images/zoedepth.png')
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_zoedepth_sd15v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16"
)
pipe.to('cuda')
zoedepth_image_out = pipe(prompt="motorcycle", image=zoedepth_image).images[0]
zoedepth_image_out.save('images/zoedepth_out.png')
```



|
c4big/xlm-roberta-base-finetuned-panx-de-fr
|
c4big
| 2023-07-31T10:33:43Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-31T10:22:01Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- F1: 0.8588
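For inference, the fine-tuned checkpoint can be loaded through the standard `pipeline` API; this is a generic usage sketch (the example sentence is arbitrary and not from the card):
```python
from transformers import pipeline

# aggregation_strategy groups word pieces into whole entity spans
ner = pipeline("token-classification",
               model="c4big/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```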
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2908 | 1.0 | 715 | 0.1909 | 0.8125 |
| 0.1466 | 2.0 | 1430 | 0.1613 | 0.8492 |
| 0.0945 | 3.0 | 2145 | 0.1658 | 0.8588 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
namec/llama2-qlora-finetunined-flc
|
namec
| 2023-07-31T10:29:35Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T10:29:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
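The card does not show how the adapter is loaded; the sketch below expresses the quantization config above as a `BitsAndBytesConfig` and attaches the adapter with PEFT. The base checkpoint is not named in the card, so `base_id` below is only a placeholder guessed from the repository name.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# The quantization settings listed above, expressed as a BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder base model id: the card does not name the base checkpoint
base_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "namec/llama2-qlora-finetunined-flc")
```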
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e1_s6789_v3_l6_r4
|
KingKazma
| 2023-07-31T10:29:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T10:29:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
cpierse/wav2vec2-large-xlsr-53-esperanto
|
cpierse
| 2023-07-31T10:27:11Z | 5,661 | 6 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"eo",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: eo
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Esperanto by Charles Pierse
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eo
type: common_voice
args: eo
metrics:
- name: Test WER
type: wer
value: 12.31
---
# Wav2Vec2-Large-XLSR-53-eo
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eo", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Esperanto test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H = H + chunk_metrics["hits"]
        S = S + chunk_metrics["substitutions"]
        D = D + chunk_metrics["deletions"]
        I = I + chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)
test_dataset = load_dataset("common_voice", "eo", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize the transcriptions
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"], chunk_size=2000)))
```
**Test Result**: 12.31 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
|
runningsnake/distilbert-base-uncased-finetuned-imdb
|
runningsnake
| 2023-07-31T10:20:00Z | 70 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-31T08:39:27Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: runningsnake/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# runningsnake/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8559
- Validation Loss: 2.5783
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
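The serialized optimizer dict above is the shape that `transformers.create_optimizer` produces for Keras fine-tuning (AdamWeightDecay plus a linear `PolynomialDecay` schedule with warmup). The sketch below shows that mapping with placeholder step counts; it is not the exact code used for this run.
```python
from transformers import create_optimizer

# Placeholder step counts; the warmup steps match the config above,
# the total step count is illustrative only.
num_train_steps = 10_000
num_warmup_steps = 1_000

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=num_train_steps,
    num_warmup_steps=num_warmup_steps,
    weight_decay_rate=0.01,
)
# The optimizer is then passed to model.compile(optimizer=optimizer) as usual.
```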
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8559 | 2.5783 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.2
- Tokenizers 0.13.3
|
digiplay/OrangeChillMix_v7fix
|
digiplay
| 2023-07-31T10:08:18Z | 397 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-31T09:52:26Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/9486?modelVersionId=129974
Original Author's DEMO images :
,%20(photorealistic_1.2),%20light,%20depth%20of%20field,%20(detailed%20face,%20face%20focus_1),%20game%20cg,%20ultra%20detail.jpeg)
,__sunglasses,%20necktie,%20animal%20ears,%20gloves,%201girl,%20black%20hair,%20shirt,%20suit,.jpeg)
,%20grating,raster,(Light%20through%20hair_1.2),_1g.jpeg)
|
sattebaaz/ppo-LunarLander-v2
|
sattebaaz
| 2023-07-31T10:07:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T10:07:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.03 +/- 14.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
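A minimal loading-and-evaluation sketch, assuming the checkpoint was pushed with the usual Deep RL course tooling; the `filename` below is an assumption and should be checked against the repository's file list.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; check the repo's file list if it differs
checkpoint = load_from_hub(repo_id="sattebaaz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```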
|
himanimaheshwari3/distilbert-base-uncased-finetuned-hina
|
himanimaheshwari3
| 2023-07-31T10:01:14Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-31T09:54:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-hina
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-hina
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.1322 | 1.0 | 25 | 5.8440 |
| 4.6013 | 2.0 | 50 | 4.9180 |
| 4.5674 | 3.0 | 75 | 5.5854 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
Marco-Cheung/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
Marco-Cheung
| 2023-07-31T09:58:56Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-31T09:38:22Z |
---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned2-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.93
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned2-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3235
- Accuracy: 0.93
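For inference, the checkpoint can be used with the `pipeline` API; a generic sketch (the audio path is a placeholder):
```python
from transformers import pipeline

# "song.wav" is a placeholder path to a local audio file
classifier = pipeline("audio-classification",
                      model="Marco-Cheung/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan")
print(classifier("song.wav"))
```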
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6202 | 0.99 | 28 | 0.6153 | 0.83 |
| 0.3175 | 1.98 | 56 | 0.4811 | 0.84 |
| 0.123 | 2.97 | 84 | 0.4716 | 0.85 |
| 0.0279 | 4.0 | 113 | 0.4575 | 0.88 |
| 0.0348 | 4.99 | 141 | 0.4270 | 0.88 |
| 0.0331 | 5.98 | 169 | 0.3423 | 0.89 |
| 0.0022 | 6.97 | 197 | 0.3178 | 0.94 |
| 0.0009 | 8.0 | 226 | 0.4422 | 0.9 |
| 0.0006 | 8.99 | 254 | 0.3187 | 0.92 |
| 0.0005 | 9.91 | 280 | 0.3235 | 0.93 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.14.2
- Tokenizers 0.13.3
|
christianiscoding/LunarLander
|
christianiscoding
| 2023-07-31T09:58:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T09:56:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.58 +/- 19.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
aroot/eng-fra-sent_delfy
|
aroot
| 2023-07-31T09:58:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-31T09:21:08Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-sent_delfy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-sent_delfy
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1613
- Bleu: 31.3966
- Chrf: 53.4743
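The card gives no usage snippet; assuming the repository keeps the mBART-50 tokenizer and its language codes, English-to-French inference would look roughly like this sketch:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aroot/eng-fra-sent_delfy")
model = AutoModelForSeq2SeqLM.from_pretrained("aroot/eng-fra-sent_delfy")

# mBART-50 language codes (assumed to be preserved by this fine-tune)
tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```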
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
dfalvearg/ppo-LunarLander-v2
|
dfalvearg
| 2023-07-31T09:52:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T09:52:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.02 +/- 16.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
JinsooKim/Taxi
|
JinsooKim
| 2023-07-31T09:50:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T09:50:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
model = load_from_hub(repo_id="JinsooKim/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
JinsooKim/q-FrozenLake-v1-4x4-noSlippery
|
JinsooKim
| 2023-07-31T09:39:12Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T09:39:11Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
model = load_from_hub(repo_id="JinsooKim/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
MKT678/slotpg
|
MKT678
| 2023-07-31T09:38:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-31T09:13:16Z |
Slots
For any gambler who is interested and wants to become part of our direct-access PG slots website, you will of course need an entrance that actually works. If you are looking for the entrance to our slots site, you can use this one. Even with a small budget of just a few tens of baht you can join the fun and try your luck with the golden678 games, because on our site you can start betting with as little as 1 baht, and, importantly, our site offers higher and better payout rates than anyone else, so it is not hard at all to make money from playing online slots with us; put simply, a small investment with a high return. If you are thinking of earning extra income from online betting games, play online slots here and nowhere else.
We are a leading online slots website and we guarantee that you will be able to turn a profit playing online slot games with us, and that you can earn more than you expect, because our direct PG slots site dares to offer payout rates that are higher and more worthwhile. The longer you hesitate, the longer it may take you to get ahead of your friends. So register as a member, play online slot games with us, and compete for bonus prizes and free credit. Beginners who are interested in slots but are not yet ready to bet can try the games on our site for free, with no payment required at all; just click.
Register for slots <p>► <a href="https://golden678.com/" rel="noopener nofollow">Latest PG SlotPG entrance</a></p>
As for the steps to register to play slot games, they are certainly not difficult, because Golden678 wants anyone who is interested to be able to sign up as easily and conveniently as possible. Interested players simply visit our online casino website and follow these slot registration steps:
Go to the main Golden678 page
Press the "Register" button
Fill in all the required registration details
Verify your identity via your phone number using the OTP system
Once verification is complete, you will receive your own username, which you can use to enjoy the slot games you want
Because our site pays out for real, with no cheating, and has many slot providers to choose from, we can guarantee that if you choose to play online slots with us you will not be disappointed, since our site is packed with promotions and tips that are better than any other online slots website.
How to register as a member
Log in and choose the game provider you want to play
<p>► <a href="https://golden678.com/" rel="noopener nofollow">Latest PG SlotPG entrance</a></p>
|
jkhan447/HateXplain-weighted-agreement-labeled
|
jkhan447
| 2023-07-31T09:38:17Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-31T08:42:29Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HateXplain-weighted-agreement-labeled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HateXplain-weighted-agreement-labeled
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0701
- Accuracy: 0.6760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
golaxy/gogpt2-7b-pretrain
|
golaxy
| 2023-07-31T09:36:59Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"llama2",
"chinese-llama2",
"gogpt2-7b",
"zh",
"dataset:BelleGroup/train_0.5M_CN",
"dataset:BelleGroup/train_1M_CN",
"dataset:c-s-ale/alpaca-gpt4-data-zh",
"dataset:BAAI/COIG",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-29T11:39:04Z |
---
license: apache-2.0
datasets:
- BelleGroup/train_0.5M_CN
- BelleGroup/train_1M_CN
- c-s-ale/alpaca-gpt4-data-zh
- BAAI/COIG
language:
- zh
tags:
- llama2
- chinese-llama2
- gogpt2-7b
---
# GoGPT2-7B: A Chinese-English enhanced large model trained from Llama2-7b

<p align="center">
<img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square">
<img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca">
</p>
> ICT Chinese-English enhanced foundation models: 7 billion and 13 billion parameters
GoGPT weights are available on 🤗 Hugging Face; the gogpt-7b and gogpt2-7b weights are currently released
| Model name | Base model | Size | Download |
|-------------------------------------------------------------|-----------|------|-------------------------------------------------|
| [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) |
| [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) |
| [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) |
| [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) |
[GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT)
## 🚀 Step 1: Train the tokenizer
[🐱 How to train an LLM tokenizer from scratch](https://github.com/yanqiangmiffy/how-to-train-tokenizer)
```text
├── data
│ └── corpus.txt              training corpus
├── llama
│ ├── tokenizer_checklist.chk
│ └── tokenizer.model
├── merged_tokenizer_hf       merged result, HF format
│ ├── special_tokens_map.json
│ ├── tokenizer_config.json
│ └── tokenizer.model
├── merged_tokenizer_sp
│ └── open_llama.model #
├── merge_tokenizer
│ └── tokenizer.model
├── open_llama.model           trained SentencePiece model
├── open_llama.vocab           trained SentencePiece vocabulary
├── README.md
├── step0_step0_process_text.py    prepare the training corpus from multiple datasets
├── step1_make_corpus.py           prepare the training corpus from Chinese Wikipedia data
├── step2_train_tokenzier.py       train the tokenizer
├── step3_tokenzier_segment.py     test the trained model, including encoding and decoding examples
└── step4_merge_tokenizers.py      merge with the original llama tokenizer to obtain an HF-format tokenizer
```
The original Llama2 vocabulary has **32000** tokens; after merging with a Chinese tokenizer trained with a 40k vocabulary, the merged vocabulary has **68419** tokens, and **68420** after adding a pad token for SFT.
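The merge performed by `step4_merge_tokenizers.py` is not reproduced here; the sketch below shows the usual procedure (as in the Chinese-LLaMA-Alpaca merge script) for appending the pieces of a newly trained SentencePiece model to the original Llama tokenizer. All paths are placeholders.
```python
import sentencepiece as spm
from sentencepiece import sentencepiece_model_pb2 as sp_pb2_model
from transformers import LlamaTokenizer

# Placeholder paths: original Llama tokenizer directory and the newly trained Chinese SP model
llama_tokenizer = LlamaTokenizer.from_pretrained("llama")
chinese_sp = spm.SentencePieceProcessor(model_file="open_llama.model")

llama_proto = sp_pb2_model.ModelProto()
llama_proto.ParseFromString(llama_tokenizer.sp_model.serialized_model_proto())
chinese_proto = sp_pb2_model.ModelProto()
chinese_proto.ParseFromString(chinese_sp.serialized_model_proto())

# Append every piece that the original vocabulary does not already contain
existing = {p.piece for p in llama_proto.pieces}
for p in chinese_proto.pieces:
    if p.piece not in existing:
        new_piece = sp_pb2_model.ModelProto.SentencePiece()
        new_piece.piece, new_piece.score = p.piece, 0.0
        llama_proto.pieces.append(new_piece)

with open("merged_tokenizer_sp/open_llama.model", "wb") as f:
    f.write(llama_proto.SerializeToString())
```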
## 🚀 Step 2: Continued pre-training
> Incremental, continued pre-training of LLaMA on a Chinese pre-training corpus; the corpus is currently about 20 GB and will keep being updated iteratively
## 🚀 Step 3: Supervised fine-tuning
Fine-tuned on diverse instruction data, including the Chinese and English instruction data from BELLE and Alpaca as well as MOSS multi-turn dialogue data, for instruction tuning on more than 1.2 million examples
- BELLE data: 120k examples, v1
- stanford_alpaca: 52k examples, v2
- stanford_alpaca_gpt4_zh: 52k examples, v2
- [sharegpt](data%2Ffinetune%2Fsharegpt): 90k examples
- [fnlp/moss-003-sft-data](https://huggingface.co/datasets/fnlp/moss-003-sft-data): MOSS multi-turn dialogue data
After sampling by length (output length around 500), roughly 110k instruction examples were selected for SFT training
## Example outputs
- 西瓜和南瓜有什么区别
```
1. 外观:西瓜通常是圆形或椭圆形的,有浅黄色或棕色的外皮,内部有红色或黄色的果肉。而南瓜则是一个圆形或椭圆形的,通常有浅黄色或棕色的外皮,内部有橙黄色的果肉。
2. 品种:西瓜主要分为三种品种:小型、中等和巨型。小型西瓜通常只有一个或两个果核,中等西瓜通常有两个或三个果核,巨型西瓜则有四个或更多果核。南瓜主要分为两种品种:南 瓜和萝卜南瓜。南瓜通常有两个或三个果核,萝卜南瓜则有四个或更多果核。
3. 生长环境:西瓜需要温暖的气候,并且需要充足的阳光。它们通常生长在热带或亚热带地区。南瓜则在温带和寒带地区生长。南瓜需要更长的冬季来生长,而西瓜通常在春季和夏季生 长。
4. 营养价值:西瓜和南瓜都是富含维生素C和纤维素的水果。西瓜还富含抗氧化剂和钾,而南瓜富含叶酸和维生素E。
总的来说,西瓜和南瓜是两种完全不同的果类,它们在外观、品种、生长环境和营养价值上有很多区别。
```
- 如何提高我的时间管理技能?
```
Response: 1. 制定计划和目标:制定每周或每日的计划和目标,以帮助您更好地分配时间。
2. 优先考虑最重要的任务:优先完成最重要的任务,这有助于避免分心和浪费时间。
3. 避免拖延:尝试避免拖延,尽早开始任务,这样您可以更容易地完成它们。
4. 集中注意力:集中注意力,避免同时处理多个任务,这有助于减少分心和提高效率。5. 利用工具和技术:利用时间管理工具和技术,如日历、提醒和待办事项列表,以帮助您更好地管理时间。
6. 学会说“不”:学会拒绝不必要的事情,这有助于您更好地利用时间,并专注于最重要的任务。
7. 保持积极心态:保持积极心态,相信自己能够管理时间,并尝试不断学习和发展自己的时间管理技能。
8. 休息和放松:尝试保持工作和生活的平衡,并定期休息和放松,这有助于提高时间管理技能和提高工作效率。
```
- Python和JavaScript编程语言的主要区别是什么?
```
1. 语法不同:Python是一种面向对象的编程语言,它的语法比较严格,需要使用花括号来标识代码块。JavaScript是一种脚本语言,它的语法相对比较松散,需要使用花括号或缩进来标 识代码块。
2. 应用领域不同:Python广泛应用于数据分析、机器学习、人工智能等领域,由于其可读性好、语法清晰、功能强大等特点。JavaScript则主要应用于前端开发、后端开发、移动端开发 等领域。
3. 运行环境不同:Python的运行环境较为独立,可以在不同的操作系统上运行。JavaScript的运行环境相对较复杂,需要考虑浏览器和Web服务器的兼容性问题。
4. 应用场景不同:Python更适合进行科学计算、数据分析、机器学习等方面的应用,如Pandas、Numpy等库;而JavaScript则更适合进行前端开发、后端开发、移动端开发等领域的脚本编程,如React、Angular等框架。
5. 库和框架不同:Python拥有丰富的标准库和第三方库,如Django、Flask等;而JavaScript则拥有流行的Web开发框架,如React、Angular等。
```
- AI会如何影响农业生产?
```
AI将有助于提高农业生产效率,优化作物种植和收成过程,以及提高农产品的质量和品质。
AI技术可以应用于植物监测、灌溉管理、病虫害防治、土壤分析等方面,减少人力 成本,提高生产效率,同时也可以降低农业生产中的资源浪费和环境污染。例如,AI可以监测土壤湿度、气象状况等环境因素,帮助农民决定何时种植、灌溉和施肥,从而提高农作物的产量和质量。
另外,AI还可以通过监测植物健康状况,及时发现和预防病虫害的发生,提高农作物的产量和质量。此外,AI还可以帮助农民预测未来天气、干旱和洪涝灾害,制定更有效的灾害应对计划。
总之,AI有望成为农业生产中不可或缺的技术,在未来为农业生产带来更大的改进和效益。
```
- 翻译成英文:过去都是假的,回忆是一条没有归途的路,以往的一切春天都无法复原,即使最狂热最坚贞的爱情,归根结底也不过是一种瞬息即逝的现实,唯有孤独永恒。
```
Response: Translation into English:
The past is false, memories are a dead-end road, all previous springs are hopelessly incapable of being restored, even the most passionate and steadfast love is ultimately a fleeting reality,
and only solitude is eternal
```
- 请用python编程语言实现快速排序算法的代码
```
快速排序(Quick Sort)是一种常用的排序算法,基本思想是通过一次排序将数组分为两个部分,其中一个部分的所有元素都比另一个小,然后再分别对这两个部分进行排序 ,直到整个数组有序。
以下是使用Python语言实现快速排序的代码:
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr)//2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```
在这个实现中,首先判断数组的长度是否小于等于1,如果是,则直接返回该数组。否则,选择一个枢轴(pivot)元素,将数组分为三个部分:小于枢轴的左部分、等于枢轴的中间部分、大于枢轴的右部分。接着递归地对左部分和右部分进行排序,并将它们与中间部分连接起来。最后返回排好序的数组。
以下是使用上述代码对数组进行排序的示例:
```python
arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
sorted_arr = quick_sort(arr)
print(sorted_arr)
```
```
输出结果为:[1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9]
```
## Disclaimer
The resources in this project are for academic research only and must not be used for commercial purposes. When using parts that involve third-party code, strictly follow the corresponding open-source licenses.
Content generated by the model is affected by factors such as model computation, randomness, and precision loss from quantization; this project makes no guarantee of its accuracy.
This project accepts no legal liability for any content output by the model, nor for any loss that may arise from using the related resources and output results.
|
golaxy/gogpt2-13b-pretrain
|
golaxy
| 2023-07-31T09:36:19Z | 10 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"llama2",
"chinese-llama2",
"gogpt2-7b",
"zh",
"dataset:BelleGroup/train_0.5M_CN",
"dataset:BelleGroup/train_1M_CN",
"dataset:c-s-ale/alpaca-gpt4-data-zh",
"dataset:BAAI/COIG",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T08:24:40Z |
---
license: apache-2.0
datasets:
- BelleGroup/train_0.5M_CN
- BelleGroup/train_1M_CN
- c-s-ale/alpaca-gpt4-data-zh
- BAAI/COIG
language:
- zh
tags:
- llama2
- chinese-llama2
- gogpt2-7b
---
# GoGPT2-13B: A Chinese-English enhanced large model trained from Llama2-13b

<p align="center">
<img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square">
<img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca">
</p>
> ICT Chinese-English enhanced foundation models: 7 billion and 13 billion parameters
GoGPT weights are available on 🤗 Hugging Face; the gogpt-7b and gogpt2-7b weights are currently released
| Model name | Base model | Size | Download |
|-------------------------------------------------------------|-----------|------|-------------------------------------------------|
| [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) |
| [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) |
| [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) |
| [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) |
[GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT)
## 🚀 Step 1: Train the tokenizer
[🐱 How to train an LLM tokenizer from scratch](https://github.com/yanqiangmiffy/how-to-train-tokenizer)
```text
├── data
│ └── corpus.txt              training corpus
├── llama
│ ├── tokenizer_checklist.chk
│ └── tokenizer.model
├── merged_tokenizer_hf       merged result, HF format
│ ├── special_tokens_map.json
│ ├── tokenizer_config.json
│ └── tokenizer.model
├── merged_tokenizer_sp
│ └── open_llama.model #
├── merge_tokenizer
│ └── tokenizer.model
├── open_llama.model           trained SentencePiece model
├── open_llama.vocab           trained SentencePiece vocabulary
├── README.md
├── step0_step0_process_text.py    prepare the training corpus from multiple datasets
├── step1_make_corpus.py           prepare the training corpus from Chinese Wikipedia data
├── step2_train_tokenzier.py       train the tokenizer
├── step3_tokenzier_segment.py     test the trained model, including encoding and decoding examples
└── step4_merge_tokenizers.py      merge with the original llama tokenizer to obtain an HF-format tokenizer
```
The original Llama2 vocabulary has **32000** tokens; after merging with a Chinese tokenizer trained with a 40k vocabulary, the merged vocabulary has **68419** tokens, and **68420** after adding a pad token for SFT.
## 🚀 Step 2: Continued pre-training
> Incremental, continued pre-training of LLaMA on a Chinese pre-training corpus; the corpus is currently about 20 GB and will keep being updated iteratively
## 🚀 Step 3: Supervised fine-tuning
Fine-tuned on diverse instruction data, including the Chinese and English instruction data from BELLE and Alpaca as well as MOSS multi-turn dialogue data, for instruction tuning on more than 1.2 million examples
- BELLE data: 120k examples, v1
- stanford_alpaca: 52k examples, v2
- stanford_alpaca_gpt4_zh: 52k examples, v2
- [sharegpt](data%2Ffinetune%2Fsharegpt): 90k examples
- [fnlp/moss-003-sft-data](https://huggingface.co/datasets/fnlp/moss-003-sft-data): MOSS multi-turn dialogue data
After sampling by length (output length around 500), roughly 110k instruction examples were selected for SFT training
## Example outputs
- 西瓜和南瓜有什么区别
```
1. 外观:西瓜通常是圆形或椭圆形的,有浅黄色或棕色的外皮,内部有红色或黄色的果肉。而南瓜则是一个圆形或椭圆形的,通常有浅黄色或棕色的外皮,内部有橙黄色的果肉。
2. 品种:西瓜主要分为三种品种:小型、中等和巨型。小型西瓜通常只有一个或两个果核,中等西瓜通常有两个或三个果核,巨型西瓜则有四个或更多果核。南瓜主要分为两种品种:南 瓜和萝卜南瓜。南瓜通常有两个或三个果核,萝卜南瓜则有四个或更多果核。
3. 生长环境:西瓜需要温暖的气候,并且需要充足的阳光。它们通常生长在热带或亚热带地区。南瓜则在温带和寒带地区生长。南瓜需要更长的冬季来生长,而西瓜通常在春季和夏季生 长。
4. 营养价值:西瓜和南瓜都是富含维生素C和纤维素的水果。西瓜还富含抗氧化剂和钾,而南瓜富含叶酸和维生素E。
总的来说,西瓜和南瓜是两种完全不同的果类,它们在外观、品种、生长环境和营养价值上有很多区别。
```
- 如何提高我的时间管理技能?
```
Response: 1. 制定计划和目标:制定每周或每日的计划和目标,以帮助您更好地分配时间。
2. 优先考虑最重要的任务:优先完成最重要的任务,这有助于避免分心和浪费时间。
3. 避免拖延:尝试避免拖延,尽早开始任务,这样您可以更容易地完成它们。
4. 集中注意力:集中注意力,避免同时处理多个任务,这有助于减少分心和提高效率。5. 利用工具和技术:利用时间管理工具和技术,如日历、提醒和待办事项列表,以帮助您更好地管理时间。
6. 学会说“不”:学会拒绝不必要的事情,这有助于您更好地利用时间,并专注于最重要的任务。
7. 保持积极心态:保持积极心态,相信自己能够管理时间,并尝试不断学习和发展自己的时间管理技能。
8. 休息和放松:尝试保持工作和生活的平衡,并定期休息和放松,这有助于提高时间管理技能和提高工作效率。
```
- Python和JavaScript编程语言的主要区别是什么?
```
1. 语法不同:Python是一种面向对象的编程语言,它的语法比较严格,需要使用花括号来标识代码块。JavaScript是一种脚本语言,它的语法相对比较松散,需要使用花括号或缩进来标 识代码块。
2. 应用领域不同:Python广泛应用于数据分析、机器学习、人工智能等领域,由于其可读性好、语法清晰、功能强大等特点。JavaScript则主要应用于前端开发、后端开发、移动端开发 等领域。
3. 运行环境不同:Python的运行环境较为独立,可以在不同的操作系统上运行。JavaScript的运行环境相对较复杂,需要考虑浏览器和Web服务器的兼容性问题。
4. 应用场景不同:Python更适合进行科学计算、数据分析、机器学习等方面的应用,如Pandas、Numpy等库;而JavaScript则更适合进行前端开发、后端开发、移动端开发等领域的脚本编程,如React、Angular等框架。
5. 库和框架不同:Python拥有丰富的标准库和第三方库,如Django、Flask等;而JavaScript则拥有流行的Web开发框架,如React、Angular等。
```
- AI会如何影响农业生产?
```
AI将有助于提高农业生产效率,优化作物种植和收成过程,以及提高农产品的质量和品质。
AI技术可以应用于植物监测、灌溉管理、病虫害防治、土壤分析等方面,减少人力 成本,提高生产效率,同时也可以降低农业生产中的资源浪费和环境污染。例如,AI可以监测土壤湿度、气象状况等环境因素,帮助农民决定何时种植、灌溉和施肥,从而提高农作物的产量和质量。
另外,AI还可以通过监测植物健康状况,及时发现和预防病虫害的发生,提高农作物的产量和质量。此外,AI还可以帮助农民预测未来天气、干旱和洪涝灾害,制定更有效的灾害应对计划。
总之,AI有望成为农业生产中不可或缺的技术,在未来为农业生产带来更大的改进和效益。
```
- 翻译成英文:过去都是假的,回忆是一条没有归途的路,以往的一切春天都无法复原,即使最狂热最坚贞的爱情,归根结底也不过是一种瞬息即逝的现实,唯有孤独永恒。
```
Response: Translation into English:
The past is false, memories are a dead-end road, all previous springs are hopelessly incapable of being restored, even the most passionate and steadfast love is ultimately a fleeting reality,
and only solitude is eternal
```
- 请用python编程语言实现快速排序算法的代码
```
快速排序(Quick Sort)是一种常用的排序算法,基本思想是通过一次排序将数组分为两个部分,其中一个部分的所有元素都比另一个小,然后再分别对这两个部分进行排序 ,直到整个数组有序。
以下是使用Python语言实现快速排序的代码:
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr)//2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```
在这个实现中,首先判断数组的长度是否小于等于1,如果是,则直接返回该数组。否则,选择一个枢轴(pivot)元素,将数组分为三个部分:小于枢轴的左部分、等于枢轴的中间部分、大于枢轴的右部分。接着递归地对左部分和右部分进行排序,并将它们与中间部分连接起来。最后返回排好序的数组。
以下是使用上述代码对数组进行排序的示例:
```python
arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
sorted_arr = quick_sort(arr)
print(sorted_arr)
```
```
输出结果为:[1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9]
```
## Disclaimer
The resources in this project are for academic research only and must not be used for commercial purposes. When using parts that involve third-party code, strictly follow the corresponding open-source licenses.
Content generated by the model is affected by factors such as model computation, randomness, and precision loss from quantization; this project makes no guarantee of its accuracy.
This project accepts no legal liability for any content output by the model, nor for any loss that may arise from using the related resources and output results.
|
undrwolf/SoccerTwos-RL
|
undrwolf
| 2023-07-31T09:19:36Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-31T09:09:57Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: undrwolf/SoccerTwos-RL
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Josrf/ppo-LunarLander-v2
|
Josrf
| 2023-07-31T09:16:02Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T09:15:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.48 +/- 35.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Srivasanth23/Srivasanth_Portfolio
|
Srivasanth23
| 2023-07-31T09:04:39Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-07-30T08:18:37Z |
---
license: mit
---
# Sv_Portfolio
<br>Created my own portfolio website using <b>JavaScript</b> and <b> CSS Flexbox </b> properties.
<br>
<b> Website Link ➡️ [here](https://jsrivportfolio.ccbp.tech/)</b>
## 🛠 Tools used
| Tools |Applied |
| ----------------- | ------------------------------------------------------------------ |
| IDE | VSCode |
| Frontend | JS, CSS, HTML, Bootstrap|
## ✍ Author
[Jammula Srivasanth](https://github.com/srivasanth23)
<br>
## 🛡 License
You may use this project freely at your own risk. See [LICENSE](https://choosealicense.com/licenses/mit/).
Copyright (c) 2023 Jammula Srivasanth
<div align="center">
<h3 align="left">Connect with me </h3>
<a href="https://www.github.com/srivasanth23" target="_blank"><img alt="Github" width="40px" src="https://cdn-icons-png.flaticon.com/512/733/733553.png"></a>    
<a href="[https://www.linkedin.com/in/mahimachuri/](https://linkedin.com/in/https://www.linkedin.com/in/srivasanth-jammula)" target="_blank"><img alt="LinkedIn" width="40px" src="https://cdn-icons-png.flaticon.com/512/3536/3536505.png"></a>    
<a href="mailto:techisrivasanth23@gmail.com" target="_blank"><img alt="Gmail" width="40px" src="https://cdn-icons-png.flaticon.com/512/5968/5968534.png"></a>   
</div>
|
MichelNivard/Rchat_3b_v2
|
MichelNivard
| 2023-07-31T08:47:09Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T13:19:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
NasimB/all-base-miss-switchboard-seed
|
NasimB
| 2023-07-31T08:30:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T05:10:45Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-miss-switchboard-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-miss-switchboard-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3572 | 0.29 | 500 | 5.3445 |
| 5.0473 | 0.59 | 1000 | 4.9275 |
| 4.702 | 0.88 | 1500 | 4.6874 |
| 4.4496 | 1.18 | 2000 | 4.5481 |
| 4.291 | 1.47 | 2500 | 4.4269 |
| 4.2007 | 1.77 | 3000 | 4.3256 |
| 4.0643 | 2.06 | 3500 | 4.2553 |
| 3.8944 | 2.36 | 4000 | 4.2086 |
| 3.867 | 2.65 | 4500 | 4.1498 |
| 3.8222 | 2.95 | 5000 | 4.1040 |
| 3.6099 | 3.24 | 5500 | 4.0985 |
| 3.5862 | 3.54 | 6000 | 4.0672 |
| 3.5652 | 3.83 | 6500 | 4.0345 |
| 3.4416 | 4.12 | 7000 | 4.0403 |
| 3.3122 | 4.42 | 7500 | 4.0320 |
| 3.2986 | 4.71 | 8000 | 4.0164 |
| 3.2906 | 5.01 | 8500 | 4.0097 |
| 3.1267 | 5.3 | 9000 | 4.0182 |
| 3.1245 | 5.6 | 9500 | 4.0176 |
| 3.1197 | 5.89 | 10000 | 4.0167 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
undrwolf/rl_course_vizdoom_health_gathering_supreme
|
undrwolf
| 2023-07-31T08:11:34Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T08:11:25Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.96 +/- 4.63
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r undrwolf/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
badokorach/bert-base-cased-finetuned
|
badokorach
| 2023-07-31T08:06:22Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-31T07:55:02Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3217
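A minimal usage sketch, assuming the checkpoint works with the standard `question-answering` pipeline (the question and context below are made-up examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="badokorach/bert-base-cased-finetuned")
result = qa(
    question="Who wrote the report?",  # made-up example
    context="The report was written by the audit team in 2022.",
)
print(result["answer"], result["score"])
```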
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 66 | 2.6893 |
| No log | 2.0 | 132 | 2.4447 |
| No log | 3.0 | 198 | 2.3882 |
| No log | 4.0 | 264 | 2.3255 |
| No log | 5.0 | 330 | 2.3217 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
legendarymoney/inarowboatapcoolmuchhappydays
|
legendarymoney
| 2023-07-31T07:41:53Z | 0 | 2 | null |
[
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-31T07:16:16Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
---
This model is based on https://huggingface.co/runwayml/stable-diffusion-v1-5, with additional DreamBooth fine-tuning performed on a local machine. Each version update will feature additional DreamBooth training cycles.
This model uses the CreativeML OpenRAIL-M license - https://huggingface.co/spaces/CompVis/stable-diffusion-license
You are free to redistribute the model or distribute modifications (e.g., merges) in accordance with CreativeML OpenRAIL-M Chapter III.
This is an SD 1.5 fine-tuned model.
https://huggingface.co/stabilityai are the creators of the Stable Diffusion model used as a base.
Euler A or DPM++ 2M Karras with 25 - 35 steps
CFG Scale 6 - 8
Hires. fix with Latent upscaler
0 Hires steps and Denoising strength 0.25-0.45
Upscale by 1.1-2.0
You are free to share this model with others, use it commercially, and modify it. Please tell me if I am missing anything that should be here.
|
tds/ppo-lunarlander-v2
|
tds
| 2023-07-31T07:37:02Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T05:49:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.94 +/- 24.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is assumed; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub("tds/ppo-lunarlander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Pashtu/llama2-d
|
Pashtu
| 2023-07-31T07:22:04Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T07:22:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
saurabh2086/ppo-LunarLander-v2
|
saurabh2086
| 2023-07-31T07:14:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T07:14:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.65 +/- 16.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is assumed; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub("saurabh2086/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
schakal/OUTPUT_DIR
|
schakal
| 2023-07-31T07:12:32Z | 28 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-31T06:54:44Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - schakal/output_dir
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
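A minimal inference sketch, assuming the repository contains full `StableDiffusionPipeline` weights under the repo id listed above:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("schakal/OUTPUT_DIR", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The instance prompt used for training was "a photo of sks dog".
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```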
|
xiao12346/t5-large_PREFIX_TUNING_SEQ2SEQ_c2
|
xiao12346
| 2023-07-31T07:12:25Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T07:11:47Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
JosefJilek/AAT-JPEG-Artefact-Detection
|
JosefJilek
| 2023-07-31T07:11:56Z | 0 | 2 |
tf-keras
|
[
"tf-keras",
"image-classification",
"license:creativeml-openrail-m",
"region:us"
] |
image-classification
| 2023-07-29T12:36:59Z |
---
license: creativeml-openrail-m
tags:
- image-classification
---
# AAT JPEG Artefact Detection
Automatically detects JPEG artefacts in the categories C100 (no artefacts), C95, C90, C85, C80, C70, C60, C40 and C20 (really terrible artefacts). The input is a 1024x1024 image. The image is converted to grayscale to omit redundant information and reduce overall training and inference time. The model is a self-designed architecture and doesn't use any other model as a base.
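A minimal inference sketch, assuming the Keras weights load via `huggingface_hub.from_pretrained_keras`, that pixel values are scaled to [0, 1], and that the class order below matches the training labels (all three are assumptions, not documented here):
```python
import numpy as np
from PIL import Image
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("JosefJilek/AAT-JPEG-Artefact-Detection")

classes = ["C100", "C95", "C90", "C85", "C80", "C70", "C60", "C40", "C20"]  # assumed label order

img = Image.open("photo.jpg").convert("L").resize((1024, 1024))  # grayscale, 1024x1024 input
x = np.asarray(img, dtype=np.float32)[None, :, :, None] / 255.0  # add batch and channel dims
probs = model.predict(x)[0]
print(classes[int(np.argmax(probs))])
```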
# Training stats
Around 60k images were used during training.
<img src="https://huggingface.co/JosefJilek/AAT-JPEG-Artefact-Detection/resolve/main/v1.0-stats.png"></img>
|
AdiOO7/Azure-tickets-Classifier-llama-1
|
AdiOO7
| 2023-07-31T07:10:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T07:10:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
pradeepiisc/xlm-roberta-base-finetuned-panx-fr
|
pradeepiisc
| 2023-07-31T06:54:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-30T05:56:36Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8492640839113517
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2606
- F1: 0.8493
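A minimal usage sketch, assuming the checkpoint loads with the standard `token-classification` pipeline (the example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pradeepiisc/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron a visité Marseille avec des représentants de l'ONU."))
```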
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5508 | 1.0 | 191 | 0.3140 | 0.7874 |
| 0.2698 | 2.0 | 382 | 0.2807 | 0.8322 |
| 0.1837 | 3.0 | 573 | 0.2606 | 0.8493 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.10.1
- Tokenizers 0.13.3
|
c4big/xlm-roberta-base-finetuned-panx-de
|
c4big
| 2023-07-31T06:54:00Z | 135 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T09:03:10Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8616659101225601
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1329
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2568 | 1.0 | 525 | 0.1583 | 0.8125 |
| 0.1261 | 2.0 | 1050 | 0.1458 | 0.8473 |
| 0.0823 | 3.0 | 1575 | 0.1329 | 0.8617 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
Tahsin-Mayeesha/squad-bn-mt5-base2
|
Tahsin-Mayeesha
| 2023-07-31T06:53:25Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"doi:10.57967/hf/0940",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-11T17:49:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: squad-bn-mt5-base2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-bn-mt5-base2
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5309
- Rouge1 Precision: 37.5039
- Rouge1 Recall: 30.4476
- Rouge1 Fmeasure: 32.6695
- Rouge2 Precision: 16.2843
- Rouge2 Recall: 12.9093
- Rouge2 Fmeasure: 13.9246
- Rougel Precision: 35.2648
- Rougel Recall: 28.6919
- Rougel Fmeasure: 30.7578
- Rougelsum Precision: 35.2646
- Rougelsum Recall: 28.6829
- Rougelsum Fmeasure: 30.7527
- Bleu-1: 23.9098
- Bleu-2: 14.7458
- Bleu-3: 9.684
- Bleu-4: 6.6217
- Meteor: 0.142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Meteor |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|:-------:|:-------:|:------:|:------:|:------:|
| 0.698 | 1.0 | 6769 | 0.5654 | 35.1173 | 28.5689 | 30.6164 | 14.7565 | 11.6885 | 12.6012 | 33.0241 | 26.9309 | 28.8245 | 33.0061 | 26.9075 | 28.807 | 22.6163 | 13.6841 | 8.8346 | 5.926 | 0.1314 |
| 0.6202 | 2.0 | 13538 | 0.5437 | 36.3795 | 29.5116 | 31.6675 | 15.5398 | 12.3022 | 13.2805 | 34.3036 | 27.8749 | 29.8881 | 34.2498 | 27.8384 | 29.8439 | 23.2744 | 14.1999 | 9.2715 | 6.2908 | 0.1364 |
| 0.5878 | 3.0 | 20307 | 0.5322 | 37.2522 | 30.1185 | 32.3701 | 16.0437 | 12.6396 | 13.6664 | 35.0062 | 28.3657 | 30.4487 | 34.9742 | 28.3319 | 30.4195 | 23.7569 | 14.5781 | 9.5429 | 6.52 | 0.1407 |
| 0.5761 | 4.0 | 27076 | 0.5309 | 37.5 | 30.4513 | 32.6723 | 16.2813 | 12.9079 | 13.9284 | 35.2662 | 28.6924 | 30.755 | 35.2509 | 28.6759 | 30.7444 | 23.9098 | 14.7458 | 9.684 | 6.6217 | 0.142 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
## Citation
@misc {tahsin_mayeesha_2023,
author = { {Tahsin Mayeesha} },
title = { squad-bn-mt5-base2 (Revision 4ab9b63) },
year = 2023,
url = { https://huggingface.co/Tahsin-Mayeesha/squad-bn-mt5-base2 },
doi = { 10.57967/hf/0940 },
publisher = { Hugging Face }
}
|
runningsnake/bert-finetuned-ner
|
runningsnake
| 2023-07-31T06:45:05Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-31T04:32:34Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: runningsnake/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# runningsnake/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0279
- Validation Loss: 0.0536
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1745 | 0.0692 | 0 |
| 0.0472 | 0.0569 | 1 |
| 0.0279 | 0.0536 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.2
- Tokenizers 0.13.3
|
teftef/teftef_mix
|
teftef
| 2023-07-31T06:40:57Z | 0 | 1 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-16T03:55:02Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
---
# teftef_mix
### Fixed Prompts
<img src="https://cdn-uploads.huggingface.co/production/uploads/63056e2d99870e13d3df4e73/aTF8xXk3yvjGaz0sYYabu.jpeg" width="900" >
・Prompt: one girl , long hair , yellow eyes, black hair,summer ,sunflower , blue sky ,
・Negative prompt: bad-hands-5, EasyNegativeV2,(missing finger, extra finger fusion finger, ::1.4),simple background
・Steps: 19
・Sampler: Euler a
・CFG scale: 7
・Seed: 882503908
・Size: 512x768
・Denoising strength: 0.5
・Clip skip: 2
・Hires upscale: 2
・Hires upscaler: R-ESRGAN 4x+ Anime6B
<img src="https://cdn-uploads.huggingface.co/production/uploads/63056e2d99870e13d3df4e73/ihb6wje8llMGQtY5J28QQ.jpeg" width="900" >
・Prompt: one girl , long hair , yellow eyes, black hair,fireworks, night sky,pink yukata , furisode ,lanthanum,many many fireworks
・Negative prompt: bad-hands-5, EasyNegativeV2,(missing finger, extra finger fusion finger, ::1.4),simple background
・Steps: 19
・Sampler: Euler a
・CFG scale: 7
・Seed: 882503908
・Size: 512x768
・Denoising strength: 0.5
・Clip skip: 2
・Hires upscale: 2
・Hires upscaler: R-ESRGAN 4x+ Anime6B
### LoRA
<img src="https://cdn-uploads.huggingface.co/production/uploads/63056e2d99870e13d3df4e73/_-XwiFtg89-KcPEt4wwfu.jpeg" width="900" >
・Prompt: <lora:mafuyu:0.67>,one girl , long hair , yellow eyes, black hair,fireworks, night sky,light blue yukata , furisode ,lanthanum,many many fireworks,
・Negative prompt: bad-hands-5, EasyNegativeV2,(missing finger, extra finger fusion finger, ::1.4),simple background
・Steps: 19
・Sampler: Euler a
・CFG scale: 7
・Seed: 882503908
・Size: 512x768
・Denoising strength: 0.5
・Clip skip: 2
・Hires upscale: 2
・Hires upscaler: R-ESRGAN 4x+ Anime6B
### license
・Please do not distribute without permission.
・Feel free to use it for merging.
・Images output using these models can be used for commercial purposes.
・Do not use this model for commercial purposes.
・Do not use for crimes.
Public : 2023/01/15 teftef
|
narvind2003/llama-2-7b-miniguanaco
|
narvind2003
| 2023-07-31T06:06:46Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:mlabonne/guanaco-llama2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T05:36:47Z |
---
license: mit
datasets:
- mlabonne/guanaco-llama2
---
# Fine-tuning Llama 2 7B on 1000 samples from the Guanaco dataset, using QLoRA (quantization + low-rank adaptation)
The Llama 2 model from Meta has several variants; the smallest one, with 7 billion parameters, has been fine-tuned here.
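A minimal usage sketch, assuming the repository holds merged weights loadable with `transformers`; the Llama-2 `[INST]` prompt format below is also an assumption and depends on how the dataset was templated:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "narvind2003/llama-2-7b-miniguanaco"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] What is QLoRA? [/INST]"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```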
|
RohanMusk/Llama-2-7b-hf-abl
|
RohanMusk
| 2023-07-31T05:44:26Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T05:14:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
kcsteam1/0731_switch_ver1.1
|
kcsteam1
| 2023-07-31T05:39:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T05:39:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
pratsy/rl_course_vizdoom_health_gathering_supreme
|
pratsy
| 2023-07-31T05:25:17Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T04:45:37Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.85 +/- 5.17
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r pratsy/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
himanimaheshwari3/distilbert-base-uncased-finetuned-himani5
|
himanimaheshwari3
| 2023-07-31T05:22:56Z | 66 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-31T05:16:19Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: himanimaheshwari3/distilbert-base-uncased-finetuned-himani5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# himanimaheshwari3/distilbert-base-uncased-finetuned-himani5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5780
- Validation Loss: 5.5369
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -993, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.5780 | 5.5369 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
keelezibel/id-hatespeech
|
keelezibel
| 2023-07-31T05:10:15Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-19T09:04:45Z |
---
license: apache-2.0
language:
- en
---
This model detects hate speech in Indonesian. It was fine-tuned from the `indobert-base-p2` model from Gojek.
## Usage
```
import torch
import torch.nn.functional as F
from transformers import AutoConfig, BertTokenizer
# BertForMultiLabelClassification is the multi-label classification head from the IndoNLU
# codebase, not part of transformers; import it from wherever that class lives in your project.

HS_DOMAIN = ['hs', 'abusive', 'hs_individual', 'hs_group', 'hs_religion', 'hs_race', 'hs_physical', 'hs_gender', 'hs_other', 'hs_weak', 'hs_moderate', 'hs_strong']
LABEL2INDEX = {'false': 0, 'true': 1}
INDEX2LABEL = {0: 'false', 1: 'true'}

# Load tokenizer and config of the base model
tokenizer_model_id = "indobenchmark/indobert-base-p2"
tokenizer = BertTokenizer.from_pretrained(tokenizer_model_id)
config = AutoConfig.from_pretrained(tokenizer_model_id)
config.num_labels_list = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]

# Load this model
model_id = "keelezibel/id-hatespeech"
model = BertForMultiLabelClassification.from_pretrained(model_id, config=config)

# Encode text
text = "REPLACE WITH INPUT TEXT"
subwords = tokenizer.encode(text)
subwords = torch.LongTensor(subwords).view(1, -1).to(model.device)
logits = model(subwords)[0]
labels = [torch.topk(logit, k=1, dim=-1)[-1].squeeze().item() for logit in logits]

# Collect per-domain predictions and probabilities
res = dict()
for idx, label in enumerate(labels):
    pred = INDEX2LABEL[label]
    proba = float(F.softmax(logits[idx], dim=-1).squeeze()[label] * 100)
    res[HS_DOMAIN[idx]] = (pred, round(proba, 2))
```
## Output
```
{
"hs": [
"true",
99.94
],
"abusive": [
"true",
86.8
],
"hs_individual": [
"false",
99.97
],
"hs_group": [
"true",
99.96
],
"hs_religion": [
"false",
99.86
],
"hs_race": [
"false",
99.98
],
"hs_physical": [
"false",
99.99
],
"hs_gender": [
"false",
99.95
],
"hs_other": [
"true",
99.7
],
"hs_weak": [
"false",
99.98
],
"hs_moderate": [
"true",
99.8
],
"hs_strong": [
"false",
99.94
]
}
```
|
reichenbach/whisper-small-dv
|
reichenbach
| 2023-07-31T05:02:46Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-30T11:12:46Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
model-index:
- name: Whisper Small Dv - RCV
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - RCV
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
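A minimal transcription sketch, assuming the fine-tuned checkpoint loads through the standard `automatic-speech-recognition` pipeline (`sample.wav` is a placeholder path):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="reichenbach/whisper-small-dv")
print(asr("sample.wav")["text"])
```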
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
himanimaheshwari3/distilbert-base-uncased-finetuned-himani3
|
himanimaheshwari3
| 2023-07-31T04:57:32Z | 70 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-31T04:56:53Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: himanimaheshwari3/distilbert-base-uncased-finetuned-himani3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# himanimaheshwari3/distilbert-base-uncased-finetuned-himani3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -973, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| nan | nan | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
hoang14/chatbot_qlora_20230731_0433
|
hoang14
| 2023-07-31T04:38:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T04:38:23Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
w11wo/lao-roberta-base
|
w11wo
| 2023-07-31T04:36:43Z | 144 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"lao-roberta-base",
"lo",
"dataset:oscar-corpus/OSCAR-2109",
"arxiv:1907.11692",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: lo
tags:
- lao-roberta-base
license: mit
datasets:
- oscar-corpus/OSCAR-2109
---
## Lao RoBERTa Base
Lao RoBERTa Base is a masked language model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. It was trained on the [OSCAR-2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109) dataset, specifically the `deduplicated_lo` subset. The model was trained from scratch and achieved an evaluation loss of 1.4556 and an evaluation perplexity of 4.287.
This model was trained using HuggingFace's PyTorch framework and the training script found [here](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py). All training was done on a TPUv3-8, provided by the [TPU Research Cloud](https://sites.research.google/trc/about/) program. You can view the detailed training results in the [Training metrics](https://huggingface.co/w11wo/lao-roberta-base/tensorboard) tab, logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------ | ------- | ------- | ------------------------------------ |
| `lao-roberta-base` | 124M | RoBERTa | OSCAR-2109 `deduplicated_lo` Dataset |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 1024
- total_eval_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
| :-----------: | :---: | :--: | :-------------: |
| No log | 1.0 | 216 | 5.8586 |
| No log | 2.0 | 432 | 5.5095 |
| 6.688 | 3.0 | 648 | 5.3976 |
| 6.688 | 4.0 | 864 | 5.3562 |
| 5.3629 | 5.0 | 1080 | 5.2912 |
| 5.3629 | 6.0 | 1296 | 5.2385 |
| 5.22 | 7.0 | 1512 | 5.1955 |
| 5.22 | 8.0 | 1728 | 5.1785 |
| 5.22 | 9.0 | 1944 | 5.1327 |
| 5.1248 | 10.0 | 2160 | 5.1243 |
| 5.1248 | 11.0 | 2376 | 5.0889 |
| 5.0591 | 12.0 | 2592 | 5.0732 |
| 5.0591 | 13.0 | 2808 | 5.0417 |
| 5.0094 | 14.0 | 3024 | 5.0388 |
| 5.0094 | 15.0 | 3240 | 4.9299 |
| 5.0094 | 16.0 | 3456 | 4.2991 |
| 4.7527 | 17.0 | 3672 | 3.6541 |
| 4.7527 | 18.0 | 3888 | 2.7826 |
| 3.4431 | 19.0 | 4104 | 2.2796 |
| 3.4431 | 20.0 | 4320 | 2.0213 |
| 2.2803 | 21.0 | 4536 | 1.8809 |
| 2.2803 | 22.0 | 4752 | 1.7615 |
| 2.2803 | 23.0 | 4968 | 1.6925 |
| 1.8601 | 24.0 | 5184 | 1.6205 |
| 1.8601 | 25.0 | 5400 | 1.5751 |
| 1.6697 | 26.0 | 5616 | 1.5391 |
| 1.6697 | 27.0 | 5832 | 1.5200 |
| 1.5655 | 28.0 | 6048 | 1.4866 |
| 1.5655 | 29.0 | 6264 | 1.4656 |
| 1.5655 | 30.0 | 6480 | 1.4627 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/lao-roberta-base"
prompt = "REPLACE WITH MASKED PROMPT"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask(prompt)
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/lao-roberta-base"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "ສະບາຍດີຊາວໂລກ."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
## Author
Lao RoBERTa Base was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google's TPU-RC.
## Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
liuhaotian/llava-llama-2-13b-chat-lightning-gptq
|
liuhaotian
| 2023-07-31T04:35:28Z | 14 | 8 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-31T03:46:31Z |
---
inference: false
---
<br>
<br>
# LLaVA GPTQ Model Card
**NOTE: To use the GPTQ quantized LLaVA checkpoints, you need to use `text-generation-webui`, and the support for LLaMA-2 is WIP. We are working on the PR.**
You can try it out [here](https://github.com/haotian-liu/text-generation-webui).
Instructions and detailed stories here: https://github.com/haotian-liu/LLaVA/issues/310#issuecomment-1657552223
PR: https://github.com/oobabooga/text-generation-webui/pull/3377
These files are GPTQ model files for [LLaVA-LLaMA-2-13B-Chat-Preview](https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview).
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-LLaMA-2-13B-Chat-Preview was trained in July 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 80K GPT-generated multimodal instruction-following data.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014 and each is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs.
We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset.
See https://llava-vl.github.io/ for more details.
|
liuhaotian/llava-llama-2-13b-chat-lightning-preview
|
liuhaotian
| 2023-07-31T04:11:25Z | 2,069 | 44 |
transformers
|
[
"transformers",
"pytorch",
"llava",
"text-generation",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-19T07:38:47Z |
---
inference: false
---
<br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-LLaMA-2-13B-Chat-Preview was trained in July 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 80K GPT-generated multimodal instruction-following data.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014 and each is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs.
We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset.
See https://llava-vl.github.io/ for more details.
|
hoang14/chatbot_qlora_20230731_0348
|
hoang14
| 2023-07-31T03:52:20Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T03:52:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Tornado24/dreambooth_sd15_jay_lora
|
Tornado24
| 2023-07-31T03:40:58Z | 28 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-31T03:23:56Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a DSLR photo of a sks jay standing on a large basket of rainbow macarons
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Tornado24/dreambooth_sd15_jay_lora
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a DSLR photo of a sks jay standing on a large basket of rainbow macarons using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
GraydientPlatformAPI/occdent
|
GraydientPlatformAPI
| 2023-07-31T03:40:04Z | 43 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-27T03:58:55Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
Ningxin/optb1_lambda_centralized_4
|
Ningxin
| 2023-07-31T03:30:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T03:29:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Ningxin/opt1b_wikitext_collaboration_4
|
Ningxin
| 2023-07-31T03:26:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T02:21:14Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Ravi07bec/llama-7b-finetuned-wikitext2
|
Ravi07bec
| 2023-07-31T03:25:15Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:finetune:huggyllama/llama-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T03:06:13Z |
---
license: other
base_model: huggyllama/llama-7b
tags:
- generated_from_trainer
model-index:
- name: llama-7b-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-finetuned-wikitext2
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | nan |
| No log | 2.0 | 250 | nan |
| No log | 3.0 | 375 | nan |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.1
- Tokenizers 0.13.3
|
azhang1212/angela_shuffle_diacritics_entities_test
|
azhang1212
| 2023-07-31T03:19:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-31T01:58:21Z |
---
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: angela_shuffle_diacritics_entities_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# angela_shuffle_diacritics_entities_test
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1642
- Precision: 0.4241
- Recall: 0.3051
- F1: 0.3549
- Accuracy: 0.9552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1481 | 1.0 | 1283 | 0.1381 | 0.4423 | 0.2037 | 0.2789 | 0.9570 |
| 0.1258 | 2.0 | 2566 | 0.1329 | 0.4983 | 0.2396 | 0.3236 | 0.9592 |
| 0.1074 | 3.0 | 3849 | 0.1416 | 0.4748 | 0.2590 | 0.3352 | 0.9584 |
| 0.0853 | 4.0 | 5132 | 0.1523 | 0.4258 | 0.3156 | 0.3625 | 0.9552 |
| 0.0692 | 5.0 | 6415 | 0.1642 | 0.4241 | 0.3051 | 0.3549 | 0.9552 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
wolfrage89/annual_report_translation_id_en
|
wolfrage89
| 2023-07-31T03:04:25Z | 108 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
### Fine-tuned on annual report sentence pairs
This MarianMT model has been further fine-tuned on annual report sentence pairs
## Test out at huggingface spaces!
https://huggingface.co/spaces/wolfrage89/finance_domain_translation_marianMT
## Sample colab notebook
https://colab.research.google.com/drive/1H57vwiah7n1JXvXYMqJ8dklrIuU6Cljb?usp=sharing
## How to use
```python
!pip install transformers
!pip install sentencepiece
from transformers import MarianMTModel, MarianTokenizer
tokenizer = MarianTokenizer.from_pretrained("wolfrage89/annual_report_translation_id_en")
model = MarianMTModel.from_pretrained("wolfrage89/annual_report_translation_id_en")
#tokenizing bahasa sentence
bahasa_sentence = "Interpretasi ini merupakan interpretasi atas PSAK 46: Pajak Penghasilan yang bertujuan untuk mengklarifikasi dan memberikan panduan dalam merefleksikan ketidakpastian perlakuan pajak penghasilan dalam laporan keuangan."
tokenized_bahasa_sentence = tokenizer([bahasa_sentence], return_tensors='pt', max_length=104, truncation=True)
# feeding the tokenized sentence into the model; max_length has been set to 104 as the model was trained mostly on sentences of this length
translated_tokens = model.generate(**tokenized_bahasa_sentence, max_length=104)[0]
## decoding the tokens to get english sentence
english_sentence = tokenizer.decode(translated_tokens, skip_special_tokens=True)
print(english_sentence)
# This interpretation is an interpretation of PSAK 46: Income Tax that aims to clarify and provide guidance in reflecting the uncertainty of income tax treatments in the financial statements.
```
### opus-mt-id-en (original model)
* source languages: id
* target languages: en
* OPUS readme: [id-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-en/README.md)
|
qiyuw/pcl-roberta-base
|
qiyuw
| 2023-07-31T03:02:42Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"sentence-similarity",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-12T11:07:28Z |
---
license: mit
language:
- en
pipeline_tag: sentence-similarity
---
# Model Description
Refer to [https://github.com/qiyuw/PeerCL](https://github.com/qiyuw/PeerCL)
# Citation
Cite our paper if PCL helps your work:
```bibtex
@inproceedings{wu-etal-2022-pcl,
title = "{PCL}: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings",
author = "Wu, Qiyu and Tao, Chongyang and Shen, Tao and Xu, Can and Geng, Xiubo and Jiang, Daxin",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.826",
pages = "12052--12066",
}
```
|
qiyuw/pcl-roberta-large
|
qiyuw
| 2023-07-31T03:02:34Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"sentence-similarity",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-12T11:09:08Z |
---
license: mit
language:
- en
pipeline_tag: sentence-similarity
---
# Model Description
Refer to [https://github.com/qiyuw/PeerCL](https://github.com/qiyuw/PeerCL)
# Citation
Cite our paper if PCL helps your work:
```bibtex
@inproceedings{wu-etal-2022-pcl,
title = "{PCL}: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings",
author = "Wu, Qiyu and Tao, Chongyang and Shen, Tao and Xu, Can and Geng, Xiubo and Jiang, Daxin",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.826",
pages = "12052--12066",
}
```
|
qiyuw/pcl-bert-base-uncased
|
qiyuw
| 2023-07-31T03:02:27Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"sentence-similarity",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-12T08:38:12Z |
---
license: mit
language:
- en
pipeline_tag: sentence-similarity
---
# Model Description
Refer to [https://github.com/qiyuw/PeerCL](https://github.com/qiyuw/PeerCL)
# Citation
Cite our paper if PCL helps your work:
```bibtex
@inproceedings{wu-etal-2022-pcl,
title = "{PCL}: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings",
author = "Wu, Qiyu and Tao, Chongyang and Shen, Tao and Xu, Can and Geng, Xiubo and Jiang, Daxin",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.826",
pages = "12052--12066",
}
```
|
rzambrano/ppo-ClearRL-LunarLander-v2
|
rzambrano
| 2023-07-31T02:31:25Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T02:26:34Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -91.78 +/- 30.20
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
edures/a2c-AntBulletEnv-v0
|
edures
| 2023-07-31T02:25:11Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T02:24:14Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 966.18 +/- 100.33
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The checkpoint filename is assumed; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub("edures/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
onahte/ppo-LunarLander-v2
|
onahte
| 2023-07-31T02:16:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T02:16:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.53 +/- 17.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption, not confirmed by this card):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub("onahte/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
frncscp/focalnet-small-patacon
|
frncscp
| 2023-07-31T02:14:19Z | 141 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"focalnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/focalnet-small",
"base_model:finetune:microsoft/focalnet-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-31T01:57:04Z |
---
license: apache-2.0
base_model: microsoft/focalnet-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: focal-patacotron
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# focal-patacotron
This model is a fine-tuned version of [microsoft/focalnet-small](https://huggingface.co/microsoft/focalnet-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0897
- Accuracy: 0.9659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0575 | 0.97 | 24 | 0.1208 | 0.9545 |
| 0.0275 | 1.94 | 48 | 0.0897 | 0.9659 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
rzambrano/ppo-ClearRL-CartPole-v1
|
rzambrano
| 2023-07-31T02:10:55Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T02:05:28Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -154.30 +/- 105.48
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
seongj/gpt2lm
|
seongj
| 2023-07-31T01:54:47Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T04:20:02Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2lm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2lm
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5666 | 0.16 | 5000 | 1.8018 |
| 1.6685 | 0.31 | 10000 | 1.5932 |
| 1.4956 | 0.47 | 15000 | 1.4797 |
| 1.3802 | 0.62 | 20000 | 1.3924 |
| 1.2885 | 0.78 | 25000 | 1.3243 |
| 1.2355 | 0.93 | 30000 | 1.2929 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
kelvinlimwan/t5_recommendation_sports_equipment_english
|
kelvinlimwan
| 2023-07-31T01:27:42Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-31T01:13:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_recommendation_sports_equipment_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_recommendation_sports_equipment_english
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4554
- Rouge1: 55.5556
- Rouge2: 47.6190
- Rougel: 55.5556
- Rougelsum: 54.7619
- Gen Len: 3.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.96 | 6 | 6.7375 | 8.7066 | 0.9524 | 8.6656 | 8.6168 | 19.0 |
| No log | 1.96 | 12 | 2.8089 | 23.3333 | 9.5238 | 23.3333 | 23.3333 | 3.1429 |
| No log | 2.96 | 18 | 0.9394 | 9.5238 | 4.7619 | 9.5238 | 9.5238 | 3.1905 |
| No log | 3.96 | 24 | 0.6679 | 32.8571 | 14.2857 | 32.5397 | 32.5397 | 3.5714 |
| No log | 4.96 | 30 | 0.6736 | 25.8730 | 9.5238 | 25.0794 | 25.2381 | 4.2381 |
| No log | 5.96 | 36 | 0.6658 | 37.9365 | 23.8095 | 37.5397 | 37.6190 | 4.0476 |
| No log | 6.96 | 42 | 0.6460 | 45.2381 | 33.3333 | 45.5556 | 45.2381 | 3.8571 |
| No log | 7.96 | 48 | 0.5596 | 50.7937 | 42.8571 | 50.7937 | 50.7937 | 4.0 |
| No log | 8.96 | 54 | 0.5082 | 55.5556 | 47.6190 | 55.5556 | 54.7619 | 3.9524 |
| No log | 9.96 | 60 | 0.4554 | 55.5556 | 47.6190 | 55.5556 | 54.7619 | 3.9048 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.0+cu117
- Datasets 2.8.0
- Tokenizers 0.13.3
|
frncscp/patacoswin
|
frncscp
| 2023-07-31T01:15:58Z | 230 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-31T00:08:31Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: patacoswin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# patacoswin
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0203
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 0.4003 | 0.875 |
| 0.5091 | 1.92 | 12 | 0.1308 | 0.9886 |
| 0.5091 | 2.88 | 18 | 0.0522 | 0.9886 |
| 0.1585 | 4.0 | 25 | 0.0203 | 1.0 |
| 0.0925 | 4.96 | 31 | 0.0156 | 1.0 |
| 0.0925 | 5.92 | 37 | 0.0196 | 1.0 |
| 0.0539 | 6.88 | 43 | 0.0095 | 1.0 |
| 0.0397 | 8.0 | 50 | 0.0089 | 1.0 |
| 0.0397 | 8.96 | 56 | 0.0089 | 1.0 |
| 0.0378 | 9.6 | 60 | 0.0090 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
neverloses87/fine-tune-1e-4
|
neverloses87
| 2023-07-31T01:04:03Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset",
"base_model:finetune:sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-07-30T21:28:56Z |
---
license: cc-by-nc-4.0
base_model: sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset
tags:
- generated_from_trainer
model-index:
- name: fine-tune-1e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-1e-4
This model is a fine-tuned version of [sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3060
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
Liea/ppo-LunarLander-v2
|
Liea
| 2023-07-31T00:46:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T07:04:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.88 +/- 19.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption, not confirmed by this card):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub("Liea/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tianpf/llama2-qlora-finetunined-french
|
tianpf
| 2023-07-31T00:30:11Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T00:30:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
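For reference, a minimal sketch of how the same quantization settings could be expressed with `transformers` when reloading a base model; this is an illustration, not taken from the original training script:
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```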
### Framework versions
- PEFT 0.5.0.dev0
|
yukangcao/dog_dreambooth_lora_21_new
|
yukangcao
| 2023-07-31T00:18:21Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-31T00:01:23Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - RaikkonenCao/dog_dreambooth_lora_21_new
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1. The weights were trained on a photo of dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
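A minimal sketch of how these LoRA weights might be loaded with `diffusers`; the exact API depends on your `diffusers` version, so treat this as an assumption rather than an official usage example:
```python
import torch
from diffusers import DiffusionPipeline

# Load the base model and attach the LoRA weights from this repository.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yukangcao/dog_dreambooth_lora_21_new")

image = pipe("a photo of dog", num_inference_steps=25).images[0]
image.save("dog.png")
```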
|
yukangcao/cartoon_dreambooth_lora_21_new
|
yukangcao
| 2023-07-31T00:14:13Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-30T23:57:37Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a model of a red cartoon
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - RaikkonenCao/cartoon_dreambooth_lora_21_new
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1. The weights were trained on a model of a red cartoon using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
ZachBeesley/bert-fine-tuned-cola
|
ZachBeesley
| 2023-07-31T00:12:37Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T15:13:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-cola
results: []
language:
- en
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2620
- Validation Loss: 0.4343
- Epoch: 1
## Model description
Text-classification model that identifies whether or not text is grammatically correct.
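A short, hypothetical usage sketch with the `transformers` pipeline (this repository ships TensorFlow weights, so TensorFlow must be installed; label names come from the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ZachBeesley/bert-fine-tuned-cola", framework="tf")
print(classifier("The books is on the table."))  # returns a label and score, e.g. acceptable vs. unacceptable
```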
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4621 | 0.3909 | 0 |
| 0.2620 | 0.4343 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nealchandra/llama-2-7b-hf-lora-alpaca-json
|
nealchandra
| 2023-07-31T00:09:07Z | 2 | 2 |
peft
|
[
"peft",
"dataset:tatsu-lab/alpaca",
"region:us"
] | null | 2023-07-30T22:29:10Z |
---
library_name: peft
datasets:
- tatsu-lab/alpaca
---
## Overview
This model is a simple proof of concept for JSON-based text completion on instruction-following tasks. It was trained on 20,000 records from the Alpaca dataset with a simple prompt template so that it expects and returns JSON. The prompt template used is roughly like this:
````
### INPUT:
```json
{"instructions": "<INSTRUCTIONS>", "input": "<INPUT>"}
```
### OUTPUT:
```json
{"response": "<OUTPUT">}
```
````
New-lines are escaped, which means you would want to prompt the model like this:
````
### INPUT:\n```json\n{"instructions": "Explain what an alpaca is"}\n```\n### OUTPUT:\n
````
As you can see from this example, the `input` arg in the input JSON can be omitted if it is not needed. The training dataset includes examples with and without additional inputs, and the model was trained to handle both cases.
Ultimately, you can expect the model to behave like an Alpaca finetune on top of llama-2-7b; the only difference is that it should reliably expect and respond in JSON format.
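As a rough sketch, loading the adapter and issuing a JSON prompt might look like the following; the base model id (`meta-llama/Llama-2-7b-hf`) and the generation settings are assumptions, not taken from the original training setup:
````python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "nealchandra/llama-2-7b-hf-lora-alpaca-json")

# Build the escaped-newline JSON prompt described above.
payload = json.dumps({"instructions": "Explain what an alpaca is"})
prompt = f'### INPUT:\n```json\n{payload}\n```\n### OUTPUT:\n'

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
````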
### Training procedure
The adapter was trained for 5 epochs using QLoRA with an average training loss of 0.7535.
The following hyperparameters were used:
- Learning Rate: 2e-4
- Lora R: 16
- Lora Alpha: 16
- Lora Dropout: 0.05
- Target Modules: "q_proj", "k_proj", "v_proj", "o_proj"
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Za88yes/Dean
|
Za88yes
| 2023-07-31T00:07:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T23:24:22Z |
---
license: creativeml-openrail-m
---
|
ailabturkiye/emircan
|
ailabturkiye
| 2023-07-31T00:04:04Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-31T00:00:31Z |
---
license: openrail
---
This is a voice model of the singer Emircan İğrek, trained as RVC V2 for 550 epochs; the dataset and training were done by me.
Sharing the model outside the AI Lab Discord server without permission is strictly forbidden; the model is under the OpenRAIL license.
Credits
When sharing a cover made with this model on any platform, you are kindly asked to give credits.
|
ZachBeesley/distilbert-base-uncased-finetuned-imdb
|
ZachBeesley
| 2023-07-31T00:03:18Z | 71 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"en",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-10T16:55:13Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ZachBeesley/distilbert-base-uncased-finetuned-imdb
results: []
datasets:
- imdb
language:
- en
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ZachBeesley/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8559
- Validation Loss: 2.5946
- Epoch: 0
## Model description
Fine-tuned version of distilbert-base-uncased trained on the IMDB dataset for masked language modeling.
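A minimal, hypothetical usage sketch with the `transformers` fill-mask pipeline (TensorFlow must be installed since this repository ships TF weights):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="ZachBeesley/distilbert-base-uncased-finetuned-imdb", framework="tf")
print(unmasker("This movie was an absolute [MASK]."))
```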
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8559 | 2.5946 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ZachBeesley/Spam-Detector
|
ZachBeesley
| 2023-07-31T00:02:14Z | 128 | 11 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"spam",
"en",
"dataset:sms_spam",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-29T20:21:24Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
- spam
model-index:
- name: ZachBeesley/Spam-Detector
results: []
datasets:
- sms_spam
widget:
- text: >-
WINNER!! As a valued network customer you have been selected to receivea
£900 prize reward! To claim call 09061701461. Claim code KL341. Valid 12
hours only.
example_title: Example 1
language:
- en
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ZachBeesley/Spam-Detector
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0093
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.0644 | 0 |
| 0.0209 | 1 |
| 0.0093 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
impro/redpj3B-lora-int8-alpaca
|
impro
| 2023-07-30T23:20:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-18T08:32:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
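For reference, a minimal sketch of the equivalent 8-bit config with `transformers`; this is an illustration, not taken from the original training script:
```python
from transformers import BitsAndBytesConfig

# 8-bit quantization config matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)
```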
### Framework versions
- PEFT 0.5.0.dev0
|
yukangcao/dogs1_dreambooth_lora_21
|
yukangcao
| 2023-07-30T23:19:14Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-30T22:51:39Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - RaikkonenCao/dogs1_dreambooth_lora_21
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1. The weights were trained on a photo of dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
Inzamam567/Useless_Breakdomain
|
Inzamam567
| 2023-07-30T22:56:48Z | 0 | 1 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-30T22:56:48Z |
---
license: openrail
duplicated_from: KSD2023/breakdomain
---
|
pedroroblesduten/Taxi-v3
|
pedroroblesduten
| 2023-07-30T22:37:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T22:37:44Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="pedroroblesduten/Taxi-v3", filename="q-learning.pkl")
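# Note: `load_from_hub` is assumed to be a helper (e.g. built on huggingface_hub)
# that downloads and unpickles q-learning.pkl; `gym` must also be imported/installed.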
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gnsepili/code-alpaca-llama2-7b
|
gnsepili
| 2023-07-30T22:31:19Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T22:31:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
hruslen/rl_course_vizdoom_health_gathering_supreme
|
hruslen
| 2023-07-30T22:26:16Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T21:41:49Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.43 +/- 4.75
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r hruslen/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
yukangcao/dog_dreambooth_15
|
yukangcao
| 2023-07-30T22:17:42Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-30T22:12:43Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - RaikkonenCao/dog_dreambooth_15
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
PowerSage/AnnaBaby
|
PowerSage
| 2023-07-30T22:03:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T21:59:32Z |
---
license: creativeml-openrail-m
---
|
lsoni/bert-finetuned-semantic-augmentation-ner
|
lsoni
| 2023-07-30T21:55:09Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:tweetner7",
"base_model:tner/roberta-base-tweetner7-2021",
"base_model:finetune:tner/roberta-base-tweetner7-2021",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-30T13:37:29Z |
---
base_model: tner/roberta-base-tweetner7-2021
tags:
- generated_from_trainer
datasets:
- tweetner7
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-semantic-augmentation-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tweetner7
type: tweetner7
config: tweetner7
split: validation_2021
args: tweetner7
metrics:
- name: Precision
type: precision
value: 0.7156323644933229
- name: Recall
type: recall
value: 0.7214889123548046
- name: F1
type: f1
value: 0.7185487051400026
- name: Accuracy
type: accuracy
value: 0.8840362386754139
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-semantic-augmentation-ner
This model is a fine-tuned version of [tner/roberta-base-tweetner7-2021](https://huggingface.co/tner/roberta-base-tweetner7-2021) on the tweetner7 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7890
- Precision: 0.7156
- Recall: 0.7215
- F1: 0.7185
- Accuracy: 0.8840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 312 | 0.7153 | 0.7074 | 0.7072 | 0.7073 | 0.8823 |
| 0.0508 | 2.0 | 624 | 0.7532 | 0.7196 | 0.7215 | 0.7205 | 0.8861 |
| 0.0508 | 3.0 | 936 | 0.7890 | 0.7156 | 0.7215 | 0.7185 | 0.8840 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
alexeynoskov/ppo-Huggy
|
alexeynoskov
| 2023-07-30T21:31:52Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-30T21:31:48Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: alexeynoskov/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Rubb/segformer-b0-scene-parse-150
|
Rubb
| 2023-07-30T21:29:00Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-07-30T20:34:13Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6066
- Mean Iou: 0.0688
- Mean Accuracy: 0.1262
- Overall Accuracy: 0.5008
- Per Category Iou: [0.43853467784626365, 0.24573297715749878, 0.6380510638879026, 0.4055319712687706, 0.39704872582170575, 0.3731125869306434, 0.2434579810174668, 0.0, 0.0, 0.6120472479537323, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.41646126174138964, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.01480455169405637, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
- Per Category Accuracy: [0.8826735992812765, 0.3043340877296272, 0.9976989832282037, 0.6216524065587666, 0.9707018877486381, 0.4822455259535185, 0.9546882874336703, nan, 0.0, 0.8994367536265288, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5587433525494928, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.01480455169405637, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.9836 | 1.0 | 20 | 4.8270 | 0.0124 | 0.0559 | 0.2326 | [0.12345389372087832, 0.11147143096565497, 0.4150538794240512, 0.0, 0.293640108509997, 0.0004316952407802231, 0.1743424495064011, 0.0, 0.1276976560079854, 0.0007056565428474653, 0.0, 0.061583158696894645, 4.939552229590388e-05, 0.0, 0.0, 0.0, 0.0, 0.000493114015004755, 0.0, 0.0, 0.0, 0.0, 0.045756685731952386, 0.0, 0.0, 0.0, 0.0505284969399829, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04502755997205186, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.012764610732118997, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0005076142131979696, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0] | [0.18841535794035377, 0.14909229381485062, 0.9449891489174277, 0.0, 0.4628468263017864, 0.0004959915782654466, 0.59263925532861, nan, 0.18719882937659257, 0.0007202639046946801, 0.0, 0.11125770830345619, 0.006802721088435374, nan, 0.0, 0.0, 0.0, 0.0006256424006792689, nan, 0.0, 0.0, nan, 0.09346974864216244, 0.0, 0.0, 0.0, 0.11449794096139784, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.09154040404040405, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.018720990201134605, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0006021678040947411, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.688 | 2.0 | 40 | 4.3114 | 0.0170 | 0.0751 | 0.3110 | [0.1784793666623604, 0.29020648660310755, 0.3967341143059993, 3.0891847641407435e-05, 0.2034361629960977, 0.008162343072599663, 0.2815558456255147, 0.0, 0.155603858659529, 0.05218973982730402, 0.0, 0.005494705432844511, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0069754894404193005, 0.0, 0.0, 0.0, 0.0, 0.052440097468525786, 0.0, 0.0, 0.0, 0.037968658793947285, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.00025331419403867264, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0] | [0.320788946588342, 0.42139874795421767, 0.9844934960239391, 3.090425860683602e-05, 0.9593627264664893, 0.011701352336221556, 0.7449381910888447, nan, 0.2185331886873376, 0.053285123669312436, 0.0, 0.008661981930302596, 0.0, nan, 0.0, 0.0, 0.0, 0.008088662465924833, nan, 0.0, 0.0, nan, 0.19571807502841987, 0.0, 0.0, 0.0, 0.05424059320837984, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0002704530087897228, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.2628 | 3.0 | 60 | 3.8641 | 0.0296 | 0.0768 | 0.3737 | [0.2672464483195483, 0.30063989353452375, 0.46343013043315767, 0.0006310846712622577, 0.2152098765432099, 0.017871182607620207, 0.3743446908107241, 0.0, 0.0, 0.11977134437501705, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015490085477502495, nan, 0.0, 0.0, nan, 0.05556126644736842, 0.0, 0.0, 0.0, 0.006406069436768327, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.6253046220083026, 0.48686378195606494, 0.9930141985014027, 0.000631329854396793, 0.9662359052324845, 0.020477366588387724, 0.7609907212521863, nan, 0.0, 0.12646393638629194, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.01595388121732136, nan, 0.0, 0.0, nan, 0.06827080965012, 0.0, 0.0, 0.0, 0.006542739483508448, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.9402 | 4.0 | 80 | 3.8193 | 0.0295 | 0.0767 | 0.3659 | [0.2834437731005024, 0.2261241333235565, 0.4999395656786774, 0.03137167210354625, 0.15235063300510845, 0.034387711222180706, 0.32747194719471945, 0.0, 0.0, 0.05056918263795778, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.023358697575650327, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.02178028516261313, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.694789905754345, 0.2998239141824703, 0.9937082406831511, 0.03158856719027311, 0.9776700874192322, 0.04937646773018058, 0.9192037470725996, nan, 0.0, 0.05541710482720869, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.02359565625418957, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.022213241991558583, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.5912 | 5.0 | 100 | 3.6448 | 0.0364 | 0.0844 | 0.3882 | [0.3058604423164221, 0.21762328839300885, 0.5110845587230063, 0.06504290063659009, 0.17468867816748543, 0.056915544675642596, 0.3486286890567001, 0.0, 0.0, 0.29609841403847315, 0.0004989190088142358, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02452937820878494, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.002005190651750417, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7690818698094906, 0.2732895099304839, 0.9979312204197888, 0.0674375071742029, 0.9686114278474598, 0.08566483116041784, 0.931328372810008, nan, 0.0, 0.3536639824831818, 0.0005012810515762504, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.024981007284265094, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.002026966349792813, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.7503 | 6.0 | 120 | 3.5438 | 0.0379 | 0.0878 | 0.3915 | [0.3003434283183761, 0.24077015199577623, 0.4792921923103039, 0.09790288306795357, 0.2083111892400147, 0.03344137652223869, 0.30019005561921797, 0.0, 0.0, 0.4232145412909689, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.002411898700254589, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7211691408954096, 0.3029729335573116, 0.9986519565316043, 0.10393985148296292, 0.9683580387685291, 0.04772653656166491, 0.955192244982658, nan, 0.0, 0.5537532952073639, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0024131921169057513, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.0005 | 7.0 | 140 | 3.2844 | 0.0457 | 0.0939 | 0.4316 | [0.340086962197138, 0.2329738372267101, 0.6071201608407407, 0.09058077433820517, 0.23975259295537668, 0.21063787549869042, 0.3683457750454802, 0.0, 0.0, 0.4163949307820021, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.007040070843480186, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8932938671967398, 0.31521251829726094, 0.9971277331247648, 0.09666852092218307, 0.9613264918282022, 0.3329520608956191, 0.8703346871016512, nan, 0.0, 0.5021823996312249, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.007105510122000268, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.8459 | 8.0 | 160 | 3.3441 | 0.0378 | 0.0873 | 0.3928 | [0.3181429690424099, 0.24517417128180433, 0.4681665209078311, 0.09063296542722396, 0.19925386668048395, 0.04982817869415808, 0.259815432537341, 0.0, 0.0, 0.41043604151841584, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7156875722900848, 0.32998536219124214, 0.9974293745345246, 0.09748527632822089, 0.9744077030279995, 0.06957041055955948, 0.9593869504639374, nan, 0.0, 0.48189976807502266, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.542 | 9.0 | 180 | 3.2691 | 0.0476 | 0.0996 | 0.4401 | [0.3851627875624717, 0.22081091187972593, 0.5790867180693906, 0.18838475774976532, 0.21901223496252642, 0.2521684440420488, 0.23761887209730925, 0.0, 0.0, 0.4907447961088956, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8679801309413498, 0.2822612443757866, 0.9972051455219597, 0.21885071477135262, 0.9746294184720639, 0.369321402542716, 0.9555331574422672, nan, 0.0, 0.6133479306818018, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.3346 | 10.0 | 200 | 3.1146 | 0.0470 | 0.0974 | 0.4389 | [0.36951993744582906, 0.158561580851804, 0.620955974717199, 0.20030040971721333, 0.18516240010512106, 0.24869762512501306, 0.3345117330250833, 0.0, 0.0, 0.41860492462547916, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0019024865056189717, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9455271832419377, 0.19325958614431474, 0.9962708579696062, 0.2378480040263834, 0.9819143544913215, 0.3372945177747186, 0.9565262502593899, nan, 0.0, 0.5128134948645183, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0019216159449434687, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.5007 | 11.0 | 220 | 3.0861 | 0.0496 | 0.1047 | 0.4407 | [0.39932979468827307, 0.15410283566943336, 0.6026386202450033, 0.2857631376927908, 0.22131758002588595, 0.2552826502464619, 0.19573801450706038, nan, 0.0, 0.5656075363244452, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0011145290000445813, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8298413469989171, 0.18783387437411214, 0.9980246491750241, 0.36155333633546133, 0.9694666159888509, 0.48805571301319944, 0.9463581655945217, nan, 0.0, 0.7654388567971304, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0011172185726415517, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.5514 | 12.0 | 240 | 3.0739 | 0.0510 | 0.1048 | 0.4512 | [0.39812504466456033, 0.17963027942241153, 0.6074822228973307, 0.30676413423459875, 0.2307548747349876, 0.2599304422396646, 0.2363971516035945, 0.0, 0.0, 0.5307679456479367, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.004081455429638314, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8583537499701617, 0.21844633973759972, 0.9981634576113737, 0.4083335540780377, 0.9790637273533511, 0.4418070289092234, 0.9580381229063528, nan, 0.0, 0.6864835275644996, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.004200741833132234, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.5364 | 13.0 | 260 | 3.0196 | 0.0534 | 0.1066 | 0.4613 | [0.39037707296318724, 0.19401447027302607, 0.6003591627460209, 0.30691481323753655, 0.24809209157960418, 0.29044208446530007, 0.2855536390313837, 0.0, 0.0, 0.5646911067322059, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0010191873089023795, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8939991363165856, 0.24152384450434003, 0.9977096608002306, 0.3945370100571287, 0.9822944381097175, 0.4404405214997166, 0.9447277147007381, nan, 0.0, 0.7513216842651147, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0010278410868302275, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.9923 | 14.0 | 280 | 2.9121 | 0.0587 | 0.1111 | 0.4770 | [0.4003601087057121, 0.2165569312557928, 0.617564295746405, 0.28882350745804464, 0.36863047724696746, 0.30499745028046915, 0.31694786154160176, nan, 0.0, 0.6173305367154115, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03859508547008547, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9293407144267681, 0.26755051664443086, 0.998857499793122, 0.4099935542546334, 0.9683263651336628, 0.423789375657948, 0.9436605104793525, nan, 0.0, 0.9073164407438885, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.038745140099209006, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.4601 | 15.0 | 300 | 3.0272 | 0.0545 | 0.1099 | 0.4712 | [0.42763557560401255, 0.1884258427114067, 0.6216074714162483, 0.38372865417786295, 0.22663831404185666, 0.31046379191608836, 0.21840208338284875, nan, 0.0, 0.5612259779889384, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005261749754748952, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8401708270311208, 0.23215510675877862, 0.9983316293707973, 0.6197451723133162, 0.9823577853794502, 0.47458296218317275, 0.9534283935612012, nan, 0.0, 0.7206384419251214, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.005273271662868124, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.4757 | 16.0 | 320 | 2.9359 | 0.0605 | 0.1130 | 0.4783 | [0.40838892654733505, 0.20456332189822757, 0.597947342081406, 0.3248657869500402, 0.38020923717678556, 0.31806270757792743, 0.2883925187810696, nan, 0.0, 0.5836412209401467, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.1007275246838732, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.916491796092592, 0.24736276379113845, 0.9982889190826898, 0.4800844127749375, 0.9715254022551628, 0.42264555834480527, 0.9587792369489817, nan, 0.0, 0.887480372808597, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.10394601599856997, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.5002 | 17.0 | 340 | 2.9437 | 0.0587 | 0.1132 | 0.4730 | [0.4334935006918794, 0.19620878748899873, 0.6054893446985568, 0.35955102619210694, 0.31993476692766865, 0.3114472841176052, 0.19905019975467098, nan, 0.0, 0.5939577211650928, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.09271008675385213, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8633709259858035, 0.2348152989407844, 0.9990977451637272, 0.5376899508180799, 0.9693399214493855, 0.44283950117418414, 0.9548809770847538, nan, 0.0, 0.8997536697445944, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0959914197613621, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.8686 | 18.0 | 360 | 2.9289 | 0.0625 | 0.1178 | 0.4917 | [0.4296237400024062, 0.2062017989213499, 0.6197935922410578, 0.41335799155510455, 0.307953867568105, 0.31741824122306816, 0.25779427359490986, 0.0, 0.0, 0.616197289994839, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.20619775394747952, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8911520191312386, 0.2535986863781956, 0.9979365592058022, 0.6690418796853064, 0.9810591663499303, 0.41695683861041377, 0.9368570835680194, nan, 0.0, 0.8771661936933692, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.21825982035125352, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.0618 | 19.0 | 380 | 2.9298 | 0.0624 | 0.1176 | 0.4809 | [0.38836646526789886, 0.23491913687009022, 0.6185607302669277, 0.4036649097730927, 0.378118011353862, 0.2611442407632143, 0.26658279522959805, 0.0, 0.0, 0.6273165461647137, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.2519770248897028, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8027503325615157, 0.301166163435726, 0.9970396431555428, 0.6383760253591516, 0.9746610921069302, 0.4552392906308203, 0.923398452553879, nan, 0.0, 0.8699059335340469, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.2705456495508781, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.8544 | 20.0 | 400 | 2.9010 | 0.0713 | 0.1241 | 0.5012 | [0.425754285291852, 0.2493873567480257, 0.6212088999913631, 0.4144953043935115, 0.4112527283015817, 0.2941432644479072, 0.3247339357429719, nan, 0.0, 0.6878630140162537, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4236939211497378, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8623488282767345, 0.31606864106040394, 0.9983796784449184, 0.6966173081507775, 0.9668060306600785, 0.437555672524091, 0.9588088815106869, nan, 0.0, 0.8547083651449892, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.4874648076149618, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.3835 | 21.0 | 420 | 2.8881 | 0.0643 | 0.1227 | 0.4837 | [0.4010587192313724, 0.2036044791336755, 0.610993951056319, 0.4149278288834044, 0.2957705955691954, 0.26976131842038753, 0.321550977841004, 0.0, 0.0, 0.6767961360434696, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.40500882612533096, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0011930877881691875, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8158032364257395, 0.25348255616111315, 0.9984437438770798, 0.685827307002905, 0.9768148992778412, 0.4645922746781116, 0.941437168351466, nan, 0.0, 0.8720091041357554, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.4921571256200563, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0011930877881691875, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.6614 | 22.0 | 440 | 2.8529 | 0.0705 | 0.1250 | 0.5104 | [0.4275250128683567, 0.22069077840150272, 0.6474809300488378, 0.4096026912128202, 0.41153851325006385, 0.362831926918015, 0.3804799302433823, 0.0, 0.0, 0.6463536255973406, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.37283839581374434, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9228261978182228, 0.274464315614923, 0.9976509341540827, 0.7235525769736784, 0.9694982896237172, 0.4744007611952385, 0.9572228974594611, nan, 0.0, 0.8962675924458722, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.407561335299638, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.0297 | 23.0 | 460 | 2.7994 | 0.0703 | 0.1240 | 0.4994 | [0.4165913235440176, 0.21871795421878018, 0.6109694252896113, 0.4113335606037128, 0.35222971206420295, 0.3193788432403529, 0.40558772310342445, 0.0, 0.0, 0.713528618230873, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.4160953686776356, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0002565780189611156, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8829123057526089, 0.2753393432971259, 0.9976028850799616, 0.7031601811872533, 0.9814392499683263, 0.43530852700623535, 0.9562446269231909, nan, 0.0, 0.853368674282257, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.48822451624435803, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0002565780189611156, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.1478 | 24.0 | 480 | 2.9134 | 0.0637 | 0.1230 | 0.4970 | [0.4469981477390521, 0.2035232512200525, 0.6304917584596665, 0.40325375059240326, 0.38913054538054537, 0.3211502549737844, 0.24884024577572963, 0.0, 0.0, 0.5372866127583109, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.38531776346902397, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9133170000238706, 0.24068932736297985, 0.9972745497401346, 0.6874519880268072, 0.9690548587355885, 0.36208397441088347, 0.960454154685323, nan, 0.0, 0.9131217678157277, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.4733431648567726, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.2557 | 25.0 | 500 | 2.8144 | 0.0614 | 0.1197 | 0.4797 | [0.4221356220844282, 0.18421616601778762, 0.623690862524923, 0.392985064346149, 0.2858839450934902, 0.31056780691092206, 0.24850758866227768, 0.0, 0.0, 0.5798371530628093, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39296364669128087, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8768014200864985, 0.222861988689457, 0.9978351222715467, 0.6045182026083195, 0.983561383504371, 0.4526074985828812, 0.9600984199448611, nan, 0.0, 0.7529638859678186, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.49465969522277337, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.3409 | 26.0 | 520 | 2.8329 | 0.0642 | 0.1225 | 0.4813 | [0.42149870924148447, 0.16065456282847587, 0.6203691411047018, 0.3889120346399578, 0.36916196819061586, 0.32594182164997615, 0.2553157965225749, 0.0, 0.0, 0.553429127188304, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.43538298123450964, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.00012187455900652991, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8446932296334554, 0.18888714843602306, 0.9978911795246879, 0.7129744907419671, 0.9726339794754846, 0.42895173698275163, 0.9607209557406694, nan, 0.0, 0.889381869516991, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.4945703177369621, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.00012187455900652991, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.3109 | 27.0 | 540 | 2.8133 | 0.0650 | 0.1226 | 0.4814 | [0.4092627638427401, 0.18495216925336402, 0.6305232327762818, 0.3937206341594305, 0.35816105853080016, 0.3273878103577395, 0.25208843284312765, 0.0, 0.0, 0.617103114486395, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4012490789796849, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.856928021318658, 0.2265106380680253, 0.9981928209344477, 0.6089419264831837, 0.975674648422653, 0.46020932869058223, 0.954065751637862, nan, 0.0, 0.9036575001080396, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5110604638691514, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.0517 | 28.0 | 560 | 2.7287 | 0.0687 | 0.1240 | 0.4971 | [0.41872312853941007, 0.243188145951677, 0.6154017129478707, 0.410863670998888, 0.3791664615233125, 0.33392716672065154, 0.2615378358042015, 0.0, 0.0, 0.6999535153980244, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.41312089283747566, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8582626074992893, 0.30477430227345154, 0.9977630486603651, 0.6736951780526785, 0.9757063220575193, 0.438284476475828, 0.9531171256632971, nan, 0.0, 0.8676443048733056, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5028824239174152, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.9763 | 29.0 | 580 | 2.7479 | 0.0666 | 0.1215 | 0.4964 | [0.41805875576036866, 0.21560327494271442, 0.6183435184756053, 0.40744417974524166, 0.353610518266586, 0.3463267413231673, 0.32909110299681577, 0.0, 0.0, 0.6258885562577768, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.41151154522660083, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0031751529846438054, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9134537137301793, 0.26961385352468714, 0.9972078149149665, 0.6486141647461877, 0.9813442290637273, 0.44139201554781765, 0.9604986215278807, nan, 0.0, 0.7318457482821706, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.49139741699066003, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0031751529846438054, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.687 | 30.0 | 600 | 2.7641 | 0.0645 | 0.1215 | 0.4785 | [0.40539745577335573, 0.1923509931594783, 0.6339750538159723, 0.3849997862151531, 0.3802708982154333, 0.30438908351152677, 0.24414113284620922, 0.0, 0.0, 0.6321250014235767, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4263946468834097, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.008172009903911532, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8284828901711525, 0.23671119225222403, 0.9976429259750625, 0.6360537910695523, 0.9745977448371975, 0.4740566037735849, 0.9602911095959447, nan, 0.0, 0.7995649606015645, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5239755105688877, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.008172009903911532, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.741 | 31.0 | 620 | 2.6534 | 0.0678 | 0.1247 | 0.4920 | [0.3994730961862104, 0.24263777664888295, 0.6453627358417291, 0.39236818054311334, 0.3991649485872849, 0.32099363714747847, 0.25790756214664273, 0.0, 0.0, 0.6424219818475693, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4204489479980297, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.008409236690186016, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8581541045577745, 0.30055580462036224, 0.9980807064281653, 0.5743026674790072, 0.9750411757253262, 0.5162664993116851, 0.9506714493226218, nan, 0.0, 0.893184862933779, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5340304777226617, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.008409344571450563, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.4749 | 32.0 | 640 | 2.7574 | 0.0632 | 0.1218 | 0.4894 | [0.43470742617779384, 0.2281054835565503, 0.6100867510845925, 0.40377349797272394, 0.3626811935240964, 0.33848015026583045, 0.22885345803489246, 0.0, 0.0, 0.552662321988897, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.381770280904643, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 7.69418191610777e-05, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8755861871415334, 0.27304104527998185, 0.9981501106463402, 0.6190387892594457, 0.9763397947548461, 0.4304802008259778, 0.9618622713663179, nan, 0.0, 0.8245869286506576, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.4986369933413773, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 7.697340568833467e-05, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.9114 | 33.0 | 660 | 2.6547 | 0.0691 | 0.1253 | 0.5081 | [0.4338933598179044, 0.26752190237797246, 0.6312128042825134, 0.427079109150189, 0.35218494374096954, 0.34551851442815895, 0.29373937196628364, 0.0, 0.0, 0.6384169015938483, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3991776840463616, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.013707680662997601, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8835806838723398, 0.3325105192371055, 0.9971597658408454, 0.697438478450902, 0.9804890409223362, 0.4433557373066645, 0.9499006907182878, nan, 0.0, 0.8383871850646076, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5032846226035662, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.013707680662997601, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.6192 | 34.0 | 680 | 2.7609 | 0.0633 | 0.1220 | 0.4815 | [0.42937108345558256, 0.18527541332094785, 0.616699748053478, 0.3883906891527968, 0.3251654933277293, 0.33849542399270877, 0.2239150010256339, 0.0, 0.0, 0.6374389785955689, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39881145326850354, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0034124876521828373, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8580890027928657, 0.22408540702290736, 0.9977390241233046, 0.6214360767485188, 0.9801723045736729, 0.4511296461251923, 0.9546141760294075, nan, 0.0, 0.8803065443178381, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.49483845019439604, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0034124876521828373, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.393 | 35.0 | 700 | 2.6922 | 0.0680 | 0.1236 | 0.4957 | [0.421972673874879, 0.22492129318018977, 0.6293340384110154, 0.4010899511331872, 0.39305555555555555, 0.3491350346142426, 0.24377055998790034, 0.0, 0.0, 0.6695312929281829, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.405382643720249, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.004514527930435229, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8929813787251772, 0.2797711964653203, 0.9975281420757733, 0.6163854379133444, 0.9770366147219055, 0.45229370799255003, 0.9555924465656775, nan, 0.0, 0.8776271625923738, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.4947490727085847, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.004515773133715635, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 0.8548 | 36.0 | 720 | 2.8306 | 0.0605 | 0.1190 | 0.4712 | [0.4283980381297898, 0.18132266375240189, 0.5995935969874214, 0.38920288892866006, 0.26227912802112247, 0.3218979320872324, 0.21322861676329996, 0.0, 0.0, 0.5681595556677607, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.36106462409886714, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0016485137718251677, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.841004129621954, 0.21687723145562476, 0.9979739307078963, 0.5952601697085287, 0.9816609654123907, 0.4098307555267633, 0.9501230249310764, nan, 0.0, 0.8104697561186419, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5013630066586227, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0016485137718251677, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.0338 | 37.0 | 740 | 2.6875 | 0.0665 | 0.1202 | 0.4951 | [0.43124776575596574, 0.24373739415389775, 0.6091774781733035, 0.38990956687291767, 0.36250778834510894, 0.3537801271818491, 0.27804252278313724, 0.0, 0.0, 0.5913839226271894, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39816965857092573, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.001167429986273076, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.913635998671924, 0.3061327557430443, 0.9979552449568493, 0.5967524039098302, 0.9766882047383758, 0.4105291926471779, 0.9573562979871343, nan, 0.0, 0.7037842665552658, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5055190597488493, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.001167429986273076, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.8221 | 38.0 | 760 | 2.7024 | 0.0648 | 0.1217 | 0.4847 | [0.42947861925851916, 0.2022166251366904, 0.6122340042310438, 0.39238065575627157, 0.3746899172705388, 0.33450465640314525, 0.23480576326828378, 0.0, 0.0, 0.5911730545876888, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3895871969228656, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0017639738803576699, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8730927895455246, 0.243716815115293, 0.9981020615722191, 0.6118204374277061, 0.9711769922716331, 0.42975139687424085, 0.954139863042125, nan, 0.0, 0.8578775263256457, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5069490995218304, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0017639738803576699, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 0.9134 | 39.0 | 780 | 2.7209 | 0.0658 | 0.1228 | 0.4930 | [0.43697104186512636, 0.24683751863819814, 0.6139479804483221, 0.3962824506181786, 0.38747105607570725, 0.3443358588053122, 0.22562202039182935, 0.0, 0.0, 0.6154462190940263, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4184565043339019, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0016225853620994845, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8610467929785577, 0.30491473881503967, 0.9978217753065131, 0.6397578872082859, 0.9752312175345242, 0.441179447728561, 0.9610322236385735, nan, 0.0, 0.7859951886371166, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5372033784689637, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0016228559699290562, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.4629 | 40.0 | 800 | 2.7029 | 0.0675 | 0.1240 | 0.4922 | [0.43950151410791677, 0.22220900991363554, 0.6312859065025483, 0.3997307004308233, 0.40703417773481465, 0.3539492822361348, 0.22753915532227234, 0.0, 0.0, 0.5887955649111106, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.43656320466905424, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.006855728284849418, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8765128022620693, 0.27252791176264063, 0.9981768045764073, 0.630415971320848, 0.9735841885214748, 0.4532350797635436, 0.9580381229063528, nan, 0.0, 0.8139270228611764, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.588327300353041, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.006857047556735814, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.4657 | 41.0 | 820 | 2.6219 | 0.0721 | 0.1267 | 0.5059 | [0.42797824686882074, 0.2482129112642935, 0.6526077944845963, 0.40345143790868343, 0.4352150994728788, 0.41790140193623504, 0.2663257990886393, 0.0, 0.0, 0.6519937148452623, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4549110108745061, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.007158526729015125, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9027314530496922, 0.3152773351626093, 0.9976589423331029, 0.6235199067574368, 0.9728240212846826, 0.530447809539234, 0.9581270565914682, nan, 0.0, 0.8248750342125355, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5814005452026635, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.007158526729015125, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.5582 | 42.0 | 840 | 2.6751 | 0.0685 | 0.1250 | 0.4976 | [0.4343875200078906, 0.22662708331059472, 0.6271548976318774, 0.4157405544824013, 0.39279180153785137, 0.3529379244984994, 0.26381029261716366, 0.0, 0.0, 0.6254327280450249, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42345740281224153, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.004483700881345495, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8792600967412226, 0.2803842559834069, 0.9979178734547551, 0.6660794857531368, 0.9724122640314202, 0.45235444165519473, 0.9542584412889455, nan, 0.0, 0.8484276638960515, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5719712204495687, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.004483700881345495, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.429 | 43.0 | 860 | 2.5953 | 0.0703 | 0.1259 | 0.5094 | [0.4446826467888162, 0.26470199750987927, 0.6395916768314067, 0.41168766601798323, 0.39235322634143854, 0.3862909898532422, 0.2582474888482398, 0.0, 0.0, 0.647119308157721, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4015934395626375, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.01790914572348587, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9124034052563165, 0.33015010505733594, 0.9979926164589434, 0.6397711318905459, 0.9741226403142025, 0.4489432342699814, 0.9542287967272404, nan, 0.0, 0.8612627666777107, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5383652857845109, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.01790914572348587, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.584 | 44.0 | 880 | 2.5978 | 0.0709 | 0.1266 | 0.5091 | [0.44308074922669594, 0.2609984765787449, 0.6463303275881749, 0.41199320223355185, 0.383229366006412, 0.4133584288863023, 0.2601257363896617, 0.0, 0.0, 0.6541824104956397, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39943504724753676, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.02496504124491655, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8977207872105413, 0.32712531800774564, 0.997618901438002, 0.6443184727998376, 0.9768148992778412, 0.5068527816017492, 0.9548957993656064, nan, 0.0, 0.8504588081072905, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.530812888233454, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.02496504124491655, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.1475 | 45.0 | 900 | 2.5513 | 0.0712 | 0.1267 | 0.5107 | [0.4460489773705448, 0.26110799405825236, 0.6476142770728291, 0.41342919238558584, 0.4012683996241779, 0.4068029834263828, 0.2613738601823708, 0.0, 0.0, 0.6417619652689538, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4173316963372212, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.017319016279875303, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9142088942031218, 0.3280354548253456, 0.9974000112114506, 0.6407909724245715, 0.9739642721398708, 0.46982549194266743, 0.9559481813061393, nan, 0.0, 0.8730750947147035, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5468561469365867, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.017319016279875303, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.1308 | 46.0 | 920 | 2.5953 | 0.0702 | 0.1255 | 0.5053 | [0.4408555494449196, 0.2450651027864728, 0.6420333998168697, 0.40837601270437823, 0.4194139194139194, 0.3933323534751822, 0.26032639597388507, 0.0, 0.0, 0.6190101307257246, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.41738981306808437, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.014618532630309562, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9150074758526704, 0.3059518086606135, 0.9976242402240154, 0.625559587825488, 0.9719371595084252, 0.46050287472669854, 0.9568671627189992, nan, 0.0, 0.8608306083348939, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5438173124190017, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.014618532630309562, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.8663 | 47.0 | 940 | 2.6424 | 0.0672 | 0.1242 | 0.4940 | [0.43704087833557725, 0.22529774235339764, 0.6321397223345009, 0.40153092894146036, 0.36451685473080975, 0.3641938785300796, 0.23377773280376432, 0.0, 0.0, 0.6218380062305296, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.40584502338009354, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.011699957664626871, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8833419774010074, 0.27757012374619877, 0.997755040481345, 0.6111582033147025, 0.9751045229950589, 0.4610595999676087, 0.9588088815106869, nan, 0.0, 0.8626312680966306, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5430129150466997, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.011699957664626871, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.591 | 48.0 | 960 | 2.6426 | 0.0685 | 0.1259 | 0.4979 | [0.44151907962878684, 0.23794545582343432, 0.6315182267956954, 0.40611183502995885, 0.41001206272617613, 0.3608927959994585, 0.23455964892523312, 0.0, 0.0, 0.5947358216190014, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.43547749291504645, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.013459010612035523, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8778690890310036, 0.2925617245607307, 0.9979979552449568, 0.6301863968283401, 0.9689281641961232, 0.45876184306421575, 0.9570302078083776, nan, 0.0, 0.8837205952260908, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5905617374983242, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.013463931544984542, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.518 | 49.0 | 980 | 2.6536 | 0.0681 | 0.1246 | 0.4959 | [0.43788100249764617, 0.2327002408765022, 0.6344844675799025, 0.39711397990519065, 0.399981744924437, 0.3702829804891543, 0.2332744666789456, 0.0, 0.0, 0.6118559477947978, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.41649069552593376, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.012463197326474192, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8951970087909084, 0.2877760793358432, 0.997899187703708, 0.5880462327708759, 0.9715887495248955, 0.46066483116041784, 0.9574155871105445, nan, 0.0, 0.8671257148619254, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5641060016981723, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.01246327727103619, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.3176 | 50.0 | 1000 | 2.6066 | 0.0688 | 0.1262 | 0.5008 | [0.43853467784626365, 0.24573297715749878, 0.6380510638879026, 0.4055319712687706, 0.39704872582170575, 0.3731125869306434, 0.2434579810174668, 0.0, 0.0, 0.6120472479537323, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.41646126174138964, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.01480455169405637, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8826735992812765, 0.3043340877296272, 0.9976989832282037, 0.6216524065587666, 0.9707018877486381, 0.4822455259535185, 0.9546882874336703, nan, 0.0, 0.8994367536265288, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5587433525494928, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.01480455169405637, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e9_s6789_v3_l54_r4
|
KingKazma
| 2023-07-30T21:25:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T21:25:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
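The repository name suggests a LoRA adapter trained with PEFT on top of GPT-2 for CNN/DailyMail. Below is a minimal, hedged loading sketch: the `gpt2` base model is inferred from the repository name rather than stated in the card, and the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the adapter was trained on top of the plain `gpt2` checkpoint.
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e9_s6789_v3_l54_r4"
)

prompt = "Summarize: The town council approved the new park budget on Monday.\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```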
|
MindNetML/a2c-PandaReachDense-v2
|
MindNetML
| 2023-07-30T21:24:02Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T21:21:39Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.96 +/- 1.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal, hedged sketch of loading the checkpoint from the Hub; the archive filename follows the usual `<algo>-<env>.zip` convention and is an assumption rather than a confirmed artifact name.

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename; check the repository's file list for the actual archive name.
checkpoint = load_from_hub("MindNetML/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
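Continuing from the `model` loaded above, a hedged follow-up sketch for evaluating the policy locally; it assumes `panda-gym` 2.x (which registers the `-v2` Panda environments on import) is installed alongside `gym`.

```python
import gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments with gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```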
|
NasimB/all-base-miss-cbt-seed
|
NasimB
| 2023-07-30T21:21:48Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T17:22:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-miss-cbt-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-miss-cbt-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1336
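Since the checkpoint is a standard GPT-2 causal language model, it should load through the usual `transformers` text-generation API. A minimal, hedged sketch follows; the repository id is taken from the model name above and the prompt is arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NasimB/all-base-miss-cbt-seed"  # repo id from the card; assumed to host the full weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```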
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
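The hyperparameters above map onto the `transformers` Trainer configuration roughly as follows. This is a hedged sketch: model and dataset wiring are omitted, and the output directory name is chosen only for illustration.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="all-base-miss-cbt-seed",   # illustrative name
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```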
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.3272 | 0.31 | 500 | 5.3559 |
| 5.0184 | 0.61 | 1000 | 4.9377 |
| 4.6905 | 0.92 | 1500 | 4.7047 |
| 4.4183 | 1.22 | 2000 | 4.5602 |
| 4.2842 | 1.53 | 2500 | 4.4477 |
| 4.1746 | 1.83 | 3000 | 4.3341 |
| 4.0002 | 2.14 | 3500 | 4.2774 |
| 3.8776 | 2.44 | 4000 | 4.2189 |
| 3.8524 | 2.75 | 4500 | 4.1618 |
| 3.7691 | 3.05 | 5000 | 4.1277 |
| 3.575 | 3.36 | 5500 | 4.1122 |
| 3.5747 | 3.66 | 6000 | 4.0774 |
| 3.548 | 3.97 | 6500 | 4.0453 |
| 3.3299 | 4.27 | 7000 | 4.0648 |
| 3.2994 | 4.58 | 7500 | 4.0483 |
| 3.2955 | 4.88 | 8000 | 4.0332 |
| 3.1853 | 5.19 | 8500 | 4.0422 |
| 3.1201 | 5.49 | 9000 | 4.0422 |
| 3.1215 | 5.8 | 9500 | 4.0409 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e9_s6789_v3_l4_r4
|
KingKazma
| 2023-07-30T21:20:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T21:20:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
BlunderPanini/Reinforce-1
|
BlunderPanini
| 2023-07-30T21:18:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T20:56:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
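As a rough illustration only: a minimal, hedged sketch of a REINFORCE-style policy being evaluated on CartPole-v1 with `gymnasium` and PyTorch. The network layout and the `model.pt` filename are guesses in the spirit of the course's Unit 4 template, not the architecture or artifacts actually stored in this repository.

```python
import gymnasium as gym
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Hypothetical two-layer policy head; the real checkpoint's architecture may differ."""
    def __init__(self, state_dim=4, hidden=16, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions), nn.Softmax(dim=-1))
    def forward(self, x):
        return self.net(x)

env = gym.make("CartPole-v1")
policy = Policy()  # load trained weights here, e.g. policy.load_state_dict(torch.load("model.pt"))
obs, _ = env.reset(seed=0)
done, total = False, 0.0
while not done:
    probs = policy(torch.as_tensor(obs, dtype=torch.float32))
    action = torch.argmax(probs).item()  # greedy action at evaluation time
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    total += reward
print(f"episode return: {total}")
```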
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e8_s6789_v3_l54_r4
|
KingKazma
| 2023-07-30T21:18:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T21:18:39Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
brunoboat/ppo-SnowballTarget
|
brunoboat
| 2023-07-30T21:15:02Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-30T21:14:58Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: brunoboat/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|