modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-03 12:31:03) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 537 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-03 12:30:52) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Joserzapata/speecht5_finetuned_voxpopuli_es
|
Joserzapata
| 2023-07-14T00:12:01Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"es",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-13T21:42:55Z |
---
language:
- es
license: mit
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 spanish Speaker
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 spanish Speaker
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Spanish (es) split of the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4448
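A minimal inference sketch, assuming the standard SpeechT5 text-to-speech API; the vocoder and the CMU ARCTIC speaker x-vector below follow the usual SpeechT5 example and are assumptions, not part of this training run:
```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "Joserzapata/speecht5_finetuned_voxpopuli_es"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker x-vector: the CMU ARCTIC embeddings from the SpeechT5 docs
# (an assumption here; any 512-dim x-vector works)
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hola, ¿qué tal?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```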
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5134 | 4.32 | 1000 | 0.4636 |
| 0.4907 | 8.64 | 2000 | 0.4527 |
| 0.4814 | 12.97 | 3000 | 0.4459 |
| 0.4777 | 17.29 | 4000 | 0.4448 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
soBeauty/3_20230714_01-xlm-roberta-base-confusion
|
soBeauty
| 2023-07-13T23:40:45Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-13T16:06:37Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 3_20230714_01-xlm-roberta-base-confusion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3_20230714_01-xlm-roberta-base-confusion
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4517
- Loss: 2.9346
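A minimal usage sketch with the `fill-mask` pipeline (the example sentence is illustrative; XLM-RoBERTa uses `<mask>` as its mask token):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline
unmasker = pipeline("fill-mask", model="soBeauty/3_20230714_01-xlm-roberta-base-confusion")
print(unmasker("The capital of France is <mask>."))
```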
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 3.9937 | 3.85 | 500 | 0.3272 | 3.7611 |
| 3.3422 | 7.69 | 1000 | 0.4517 | 2.9346 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
conorjudge/xlm-roberta-base-finetuned-panx-de
|
conorjudge
| 2023-07-13T23:30:34Z | 134 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T23:25:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8609120891618334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1400
- F1: 0.8609
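A minimal NER sketch with the `token-classification` pipeline (the German example sentence is illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="conorjudge/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```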
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2581 | 1.0 | 525 | 0.1584 | 0.8233 |
| 0.1252 | 2.0 | 1050 | 0.1384 | 0.8491 |
| 0.0811 | 3.0 | 1575 | 0.1400 | 0.8609 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
Jonathaniu/vicuna-breast-cancer-7b-ep-1
|
Jonathaniu
| 2023-07-13T23:04:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T23:04:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
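A minimal loading sketch for the adapter, mirroring the 8-bit config above; the base checkpoint is an assumption, as this card does not name it:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_id = "your-vicuna-7b-base"  # assumption: the base model is not named in this card
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base, "Jonathaniu/vicuna-breast-cancer-7b-ep-1")
```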
|
Karras10/sks-dog-model
|
Karras10
| 2023-07-13T22:10:33Z | 33 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T22:06:28Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Karras10/sks-dog-model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
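A minimal inference sketch with `diffusers`, using the instance prompt from this card's metadata (the extra prompt context and step count are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Karras10/sks-dog-model", torch_dtype=torch.float16
).to("cuda")

# "a photo of sks dog" is the instance prompt the weights were trained on
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks-dog.png")
```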
|
NasimB/gpt2-concat-guten-rarity-no-cut-corrected
|
NasimB
| 2023-07-13T21:58:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T20:05:03Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-rarity-no-cut-corrected
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-rarity-no-cut-corrected
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3120
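A minimal generation sketch (standard GPT-2 usage; the prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-guten-rarity-no-cut-corrected")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```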
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7039 | 0.29 | 500 | 5.6444 |
| 5.3477 | 0.58 | 1000 | 5.1977 |
| 4.9877 | 0.87 | 1500 | 4.9542 |
| 4.7147 | 1.16 | 2000 | 4.8034 |
| 4.5565 | 1.46 | 2500 | 4.6723 |
| 4.4503 | 1.75 | 3000 | 4.5667 |
| 4.3289 | 2.04 | 3500 | 4.4930 |
| 4.1305 | 2.33 | 4000 | 4.4433 |
| 4.0991 | 2.62 | 4500 | 4.3879 |
| 4.0629 | 2.91 | 5000 | 4.3392 |
| 3.8648 | 3.2 | 5500 | 4.3323 |
| 3.8005 | 3.49 | 6000 | 4.2991 |
| 3.7818 | 3.79 | 6500 | 4.2701 |
| 3.6998 | 4.08 | 7000 | 4.2639 |
| 3.5113 | 4.37 | 7500 | 4.2592 |
| 3.5113 | 4.66 | 8000 | 4.2454 |
| 3.5008 | 4.95 | 8500 | 4.2317 |
| 3.3469 | 5.24 | 9000 | 4.2439 |
| 3.3188 | 5.53 | 9500 | 4.2429 |
| 3.3168 | 5.82 | 10000 | 4.2418 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Leon68/falcon7b-openassistant
|
Leon68
| 2023-07-13T21:57:22Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2023-07-13T21:10:15Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon7b-openassistant
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7b-openassistant
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
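A minimal loading sketch; the `custom_code` tag implies `trust_remote_code=True`, and `device_map="auto"` is an assumption for fitting the 7B weights:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Leon68/falcon7b-openassistant"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True, device_map="auto")
```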
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
VK246/IC_ver6c_coco_swin_gpt2_50Apc_1e
|
VK246
| 2023-07-13T21:57:18Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-13T18:49:51Z |
---
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
- bleu
model-index:
- name: IC_ver6c_coco_swin_gpt2_50Apc_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver6c_coco_swin_gpt2_50Apc_1e
This model is a fine-tuned version of [VK246/IC_ver6b_coco_swin_gpt2_50Bpc_1e](https://huggingface.co/VK246/IC_ver6b_coco_swin_gpt2_50Bpc_1e) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7946
- Rouge1: 41.9094
- Rouge2: 16.3068
- Rougel: 38.073
- Rougelsum: 38.0746
- Bleu: 10.1966
- Gen Len: 11.2806
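A minimal captioning sketch with the `image-to-text` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="VK246/IC_ver6c_coco_swin_gpt2_50Apc_1e")
print(captioner("path/to/image.jpg"))  # placeholder path
```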
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|
| 0.8232 | 0.17 | 500 | 0.8331 | 40.454 | 15.1311 | 36.7639 | 36.7714 | 9.2957 | 11.2806 |
| 0.8016 | 0.34 | 1000 | 0.8200 | 40.6374 | 15.5346 | 36.902 | 36.9055 | 9.6894 | 11.2806 |
| 0.8048 | 0.51 | 1500 | 0.8136 | 41.3382 | 15.9333 | 37.6502 | 37.6442 | 9.7743 | 11.2806 |
| 0.8018 | 0.68 | 2000 | 0.8028 | 41.5968 | 16.106 | 37.8326 | 37.836 | 9.9815 | 11.2806 |
| 0.8075 | 0.85 | 2500 | 0.7978 | 41.7017 | 16.1589 | 37.8899 | 37.8954 | 10.1244 | 11.2806 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lovelyxs/a2c-PandaReachDense-v2
|
lovelyxs
| 2023-07-13T21:46:19Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T21:45:52Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.94 +/- 0.38
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
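Until the snippet above is filled in, a minimal loading sketch; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention, and rolling the policy out also requires `panda_gym` to register the environment:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it with SB3
checkpoint = load_from_hub(
    repo_id="lovelyxs/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```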
|
nolanaatama/jhpfbtsrvcv1mscnd
|
nolanaatama
| 2023-07-13T21:44:52Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T21:41:18Z |
---
license: creativeml-openrail-m
---
|
SerchOnodera117/Lora-chan
|
SerchOnodera117
| 2023-07-13T21:07:43Z | 0 | 0 |
allennlp
|
[
"allennlp",
"code",
"es",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
] | null | 2023-07-13T21:05:50Z |
---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- es
metrics:
- character
- accuracy
- code_eval
library_name: allennlp
tags:
- code
---
|
lovelyxs/a2c-AntBulletEnv-v0
|
lovelyxs
| 2023-07-13T20:49:06Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T20:38:39Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1134.23 +/- 127.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
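As above, a minimal loading sketch (filename assumed; rolling the policy out also requires `pybullet_envs` to register AntBulletEnv-v0):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="lovelyxs/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```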
|
Jonathaniu/vicuna-breast-cancer-7b-epoch-1
|
Jonathaniu
| 2023-07-13T20:35:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T20:35:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
jliu596/a2c-AntBulletEnv-v0
|
jliu596
| 2023-07-13T20:34:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T19:50:18Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 520.21 +/- 33.39
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
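A minimal loading sketch for this upload as well (filename assumed):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

model = A2C.load(load_from_hub("jliu596/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip"))
```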
|
magooie/dqn-SpaceInvadersNoFrameskip-v4
|
magooie
| 2023-07-13T20:20:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T20:33:09Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 508.00 +/- 223.02
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga magooie -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga magooie -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga magooie
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
RushTurtle/crnn_vgg16_bn_20230713-182606
|
RushTurtle
| 2023-07-13T20:19:04Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T20:18:57Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
"arch": "crnn_vgg16_bn",
"train_path": "/tmp/dataset/train3_1100/",
"val_path": "/tmp/dataset/val3_1100/",
"train_samples": 1000,
"val_samples": 20,
"font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
"min_chars": 1,
"max_chars": 12,
"name": null,
"epochs": 1200,
"batch_size": 64,
"device": 0,
"input_size": 32,
"lr": 0.001,
"weight_decay": 0,
"workers": 16,
"resume": null,
"vocab": "french",
"test_only": false,
"show_samples": false,
"wb": false,
"push_to_hub": true,
"pretrained": false,
"sched": "cosine",
"amp": false,
"find_lr": false
}
```
|
LarryAIDraw/fubuki-v2
|
LarryAIDraw
| 2023-07-13T20:01:20Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T17:28:08Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/8855/fubuki-one-punch-man-or-goofy-ai
|
Tasaloris13/finetuned-college-1
|
Tasaloris13
| 2023-07-13T19:59:48Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T19:59:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
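A minimal loading sketch that mirrors the 4-bit NF4 config above; the base checkpoint is an assumption, as this adapter card does not name it:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Reproduce the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
base = AutoModelForCausalLM.from_pretrained(
    "your-base-model",  # assumption: the base model is not named in this card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Tasaloris13/finetuned-college-1")
```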
|
grace-pro/afro-xlmr-base-hausa-5e-5
|
grace-pro
| 2023-07-13T19:51:42Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T19:22:13Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afro-xlmr-base-hausa-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-hausa-5e-5
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1512
- Precision: 0.7391
- Recall: 0.5807
- F1: 0.6504
- Accuracy: 0.9616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1604 | 1.0 | 1312 | 0.1395 | 0.6845 | 0.4906 | 0.5716 | 0.9535 |
| 0.1221 | 2.0 | 2624 | 0.1261 | 0.7140 | 0.5440 | 0.6175 | 0.9582 |
| 0.0939 | 3.0 | 3936 | 0.1311 | 0.7433 | 0.5693 | 0.6448 | 0.9610 |
| 0.0723 | 4.0 | 5248 | 0.1419 | 0.7508 | 0.5583 | 0.6404 | 0.9613 |
| 0.0557 | 5.0 | 6560 | 0.1512 | 0.7391 | 0.5807 | 0.6504 | 0.9616 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
csalaam/bias-classification-setfit-model-womenbias
|
csalaam
| 2023-07-13T19:41:40Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-13T19:00:14Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# csalaam/bias-classification-setfit-model-womenbias
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("csalaam/bias-classification-setfit-model-womenbias")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
KingAsiedu/tweet_sentiments_analysis
|
KingAsiedu
| 2023-07-13T19:40:12Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T19:37:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tweet_sentiments_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_sentiments_analysis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7995
- eval_accuracy: 0.7575
- eval_runtime: 64.2032
- eval_samples_per_second: 31.151
- eval_steps_per_second: 3.894
- step: 0
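A minimal inference sketch with the `text-classification` pipeline (the example tweet is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="KingAsiedu/tweet_sentiments_analysis")
print(classifier("I love this new phone!"))
```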
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Stancld/longt5-tglobal-large-16384-pubmed-3k_steps
|
Stancld
| 2023-07-13T19:39:23Z | 1,066 | 21 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"longt5",
"text2text-generation",
"en",
"dataset:ccdv/pubmed-summarization",
"arxiv:2112.07916",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-10T12:24:12Z |
---
language: en
datasets:
- ccdv/pubmed-summarization
license: apache-2.0
---
## Introduction
[Google's LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) was introduced as an extension of the successful [T5 model](https://arxiv.org/pdf/1910.10683.pdf).
This is an unofficial *longt5-large-16384-pubmed-3k_steps* checkpoint, i.e., the large configuration of the LongT5 model with `transient-global` attention, fine-tuned on the [pubmed summarization dataset](https://huggingface.co/datasets/ccdv/pubmed-summarization) for 3,000 training steps. It may be worth continuing the fine-tuning, as the model was not trained to convergence.
## Results and Fine-tuning Details
The fine-tuned model's results on the evaluation set, obtained using beam search with 3 beams and without any specific calibration of generation parameters, are presented below together with the results from the original paper (the original scores are higher, very likely due to a higher number of training steps).
| Metric | Score | Score (original paper)
| --- | --- | --- |
| Rouge-1 | 47.44 | 49.98 |
| Rouge-2 | 22.68 | 24.69 |
| Rouge-L | 29.83 | x |
| Rouge-Lsum | 43.13 | 46.46 |
The training parameters follow those specified in the paper. We accumulated the batch size to 128 examples and used the `Adafactor` optimizer with a constant learning rate of `0.001`. The full training hyperparameters and logs can be found via the following [W&B run](https://wandb.ai/stancld/LongT5/runs/1lwncl8a?workspace=user-stancld). The model was trained using the [HuggingFace's trainer](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py).
The only specific adjustment I made for training was dropping very short input articles (fewer than 16 words; a slight mistake, it should have been fewer than 16 tokens), as these sequences do not contribute to gradient creation in the *transient-global* attention, which resulted in training crashes when DDP was used.
## Usage
```python
LONG_ARTICLE = """"anxiety affects quality of life in those living
with parkinson 's disease ( pd ) more so than
overall cognitive status , motor deficits , apathy
, and depression [ 13 ] . although anxiety and
depression are often related and coexist in pd
patients , recent research suggests that anxiety
rather than depression is the most prominent and
prevalent mood disorder in pd [ 5 , 6 ] . yet ,
our current understanding of anxiety and its
impact on cognition in pd , as well as its neural
basis and best treatment practices , remains
meager and lags far behind that of depression .
overall , neuropsychiatric symptoms in pd have
been shown to be negatively associated with
cognitive performance . for example , higher
depression scores have been correlated with lower
scores on the mini - mental state exam ( mmse ) [
8 , 9 ] as well as tests of memory and executive
functions ( e.g. , attention ) [ 1014 ] . likewise
, apathy and anhedonia in pd patients have been
associated with executive dysfunction [ 10 , 1523
] . however , few studies have specifically
investigated the relationship between anxiety and
cognition in pd . one study showed a strong
negative relationship between anxiety ( both state
and trait ) and overall cognitive performance (
measured by the total of the repeatable battery
for the assessment of neuropsychological status
index ) within a sample of 27 pd patients .
furthermore , trait anxiety was negatively
associated with each of the cognitive domains
assessed by the rbans ( i.e. , immediate memory ,
visuospatial construction , language , attention ,
and delayed memory ) . two further studies have
examined whether anxiety differentially affects
cognition in patients with left - sided dominant
pd ( lpd ) versus right - sided dominant pd ( rpd
) ; however , their findings were inconsistent .
the first study found that working memory
performance was worse in lpd patients with anxiety
compared to rpd patients with anxiety , whereas
the second study reported that , in lpd , apathy
but not anxiety was associated with performance on
nonverbally mediated executive functions and
visuospatial tasks ( e.g. , tmt - b , wms - iii
spatial span ) , while in rpd , anxiety but not
apathy significantly correlated with performance
on verbally mediated tasks ( e.g. , clock reading
test and boston naming test ) . furthermore ,
anxiety was significantly correlated with
neuropsychological measures of attention and
executive and visuospatial functions . taken
together , it is evident that there are limited
and inconsistent findings describing the
relationship between anxiety and cognition in pd
and more specifically how anxiety might influence
particular domains of cognition such as attention
and memory and executive functioning . it is also
striking that , to date , no study has examined
the influence of anxiety on cognition in pd by
directly comparing groups of pd patients with and
without anxiety while excluding depression . given
that research on healthy young adults suggests
that anxiety reduces processing capacity and
impairs processing efficiency , especially in the
central executive and attentional systems of
working memory [ 26 , 27 ] , we hypothesized that
pd patients with anxiety would show impairments in
attentional set - shifting and working memory
compared to pd patients without anxiety .
furthermore , since previous work , albeit limited
, has focused on the influence of symptom
laterality on anxiety and cognition , we also
explored this relationship . seventeen pd patients
with anxiety and thirty - three pd patients
without anxiety were included in this study ( see
table 1 ) . the cross - sectional data from these
participants was taken from a patient database
that has been compiled over the past 8 years (
since 2008 ) at the parkinson 's disease research
clinic at the brain and mind centre , university
of sydney . inclusion criteria involved a
diagnosis of idiopathic pd according to the united
kingdom parkinson 's disease society brain bank
criteria and were confirmed by a neurologist (
sjgl ) . patients also had to have an adequate
proficiency in english and have completed a full
neuropsychological assessment . ten patients in
this study ( 5 pd with anxiety ; 5 pd without
anxiety ) were taking psychotropic drugs ( i.e. ,
benzodiazepine or selective serotonin reuptake
inhibitor ) . patients were also excluded if they
had other neurological disorders , psychiatric
disorders other than affective disorders ( such as
anxiety ) , or if they reported a score greater
than six on the depression subscale of the
hospital anxiety and depression scale ( hads ) .
thus , all participants who scored within a
depressed ( hads - d > 6 ) range were excluded
from this study , in attempt to examine a refined
sample of pd patients with and without anxiety in
order to determine the independent effect of
anxiety on cognition . this research was approved
by the human research ethics committee of the
university of sydney , and written informed
consent was obtained from all participants . self
- reported hads was used to assess anxiety in pd
and has been previously shown to be a useful
measure of clinical anxiety in pd . a cut - off
score of > 8 on the anxiety subscale of the hads (
hads - a ) was used to identify pd cases with
anxiety ( pda+ ) , while a cut - off score of < 6
on the hads - a was used to identify pd cases
without anxiety ( pda ) . this criterion was more
stringent than usual ( > 7 cut - off score ) , in
effort to create distinct patient groups . the
neurological evaluation rated participants
according to hoehn and yahr ( h&y ) stages and
assessed their motor symptoms using part iii of
the revised mds task force unified parkinson 's
disease rating scale ( updrs ) . in a similar way
this was determined by calculating a total left
and right score from rigidity items 3035 ,
voluntary movement items 3643 , and tremor items
5057 from the mds - updrs part iii ( see table 1 )
. processing speed was assessed using the trail
making test , part a ( tmt - a , z - score ) .
attentional set - shifting was measured using the
trail making test , part b ( tmt - b , z - score )
. working memory was assessed using the digit span
forward and backward subtest of the wechsler
memory scale - iii ( raw scores ) . language was
assessed with semantic and phonemic verbal fluency
via the controlled oral word associated test (
cowat animals and letters , z - score ) . the
ability to retain learned verbal memory was
assessed using the logical memory subtest from the
wechsler memory scale - iii ( lm - i z - score ,
lm - ii z - score , % lm retention z - score ) .
the mini - mental state examination ( mmse )
demographic , clinical , and neuropsychological
variables were compared between the two groups
with the independent t - test or mann whitney u
test , depending on whether the variable met
parametric assumptions . chi - square tests were
used to examine gender and symptom laterality
differences between groups . all analyses employed
an alpha level of p < 0.05 and were two - tailed .
spearman correlations were performed separately in
each group to examine associations between anxiety
and/or depression ratings and cognitive functions
. as expected , the pda+ group reported
significant greater levels of anxiety on the hads
- a ( u = 0 , p < 0.001 ) and higher total score
on the hads ( u = 1 , p < 0.001 ) compared to the
pda group ( table 1 ) . groups were matched in age
( t(48 ) = 1.31 , p = 0.20 ) , disease duration (
u = 259 , p = 0.66 ) , updrs - iii score ( u =
250.5 , p = 0.65 ) , h&y ( u = 245 , p = 0.43 ) ,
ledd ( u = 159.5 , p = 0.80 ) , and depression (
hads - d ) ( u = 190.5 , p = 0.06 ) . additionally
, all groups were matched in the distribution of
gender ( = 0.098 , p = 0.75 ) and side - affected
( = 0.765 , p = 0.38 ) . there were no group
differences for tmt - a performance ( u = 256 , p
= 0.62 ) ( table 2 ) ; however , the pda+ group
had worse performance on the trail making test
part b ( t(46 ) = 2.03 , p = 0.048 ) compared to
the pda group ( figure 1 ) . the pda+ group also
demonstrated significantly worse performance on
the digit span forward subtest ( t(48 ) = 2.22 , p
= 0.031 ) and backward subtest ( u = 190.5 , p =
0.016 ) compared to the pda group ( figures 2(a )
and 2(b ) ) . neither semantic verbal fluency (
t(47 ) = 0.70 , p = 0.49 ) nor phonemic verbal
fluency ( t(47 ) = 0.39 , p = 0.70 ) differed
between groups . logical memory i immediate recall
test ( u = 176 , p = 0.059 ) showed a trend that
the pda+ group had worse new verbal learning and
immediate recall abilities than the pda group .
however , logical memory ii test performance ( u =
219 , p = 0.204 ) and logical memory % retention (
u = 242.5 , p = 0.434 ) did not differ between
groups . there were also no differences between
groups in global cognition ( mmse ) ( u = 222.5 ,
p = 0.23 ) . participants were split into lpd and
rpd , and then further group differences were
examined between pda+ and pda. importantly , the
groups remained matched in age , disease duration
, updrs - iii , dde , h&y stage , and depression
but remained significantly different on self -
reported anxiety . lpda+ demonstrated worse
performance on the digit span forward test ( t(19
) = 2.29 , p = 0.033 ) compared to lpda , whereas
rpda+ demonstrated worse performance on the digit
span backward test ( u = 36.5 , p = 0.006 ) , lm -
i immediate recall ( u = 37.5 , p = 0.008 ) , and
lm - ii ( u = 45.0 , p = 0.021 ) but not lm %
retention ( u = 75.5 , p = 0.39 ) compared to
rpda. this study is the first to directly compare
cognition between pd patients with and without
anxiety . the findings confirmed our hypothesis
that anxiety negatively influences attentional set
- shifting and working memory in pd . more
specifically , we found that pd patients with
anxiety were more impaired on the trail making
test part b which assessed attentional set -
shifting , on both digit span tests which assessed
working memory and attention , and to a lesser
extent on the logical memory test which assessed
memory and new verbal learning compared to pd
patients without anxiety . taken together , these
findings suggest that anxiety in pd may reduce
processing capacity and impair processing
efficiency , especially in the central executive
and attentional systems of working memory in a
similar way as seen in young healthy adults [ 26 ,
27 ] . although the neurobiology of anxiety in pd
remains unknown , many researchers have postulated
that anxiety disorders are related to
neurochemical changes that occur during the early
, premotor stages of pd - related degeneration [
37 , 38 ] such as nigrostriatal dopamine depletion
, as well as cell loss within serotonergic and
noradrenergic brainstem nuclei ( i.e. , raphe
nuclei and locus coeruleus , resp . , which
provide massive inputs to corticolimbic regions )
. over time , chronic dysregulation of
adrenocortical and catecholamine functions can
lead to hippocampal damage as well as
dysfunctional prefrontal neural circuitries [ 39 ,
40 ] , which play a key role in memory and
attention . recent functional neuroimaging work
has suggested that enhanced hippocampal activation
during executive functioning and working memory
tasks may represent compensatory processes for
impaired frontostriatal functions in pd patients
compared to controls . therefore , chronic stress
from anxiety , for example , may disrupt
compensatory processes in pd patients and explain
the cognitive impairments specifically in working
memory and attention seen in pd patients with
anxiety . it has also been suggested that
hyperactivation within the putamen may reflect a
compensatory striatal mechanism to maintain normal
working memory performance in pd patients ;
however , losing this compensatory activation has
been shown to contribute to poor working memory
performance . anxiety in mild pd has been linked
to reduced putamen dopamine uptake which becomes
more extensive as the disease progresses . this
further supports the notion that anxiety may
disrupt compensatory striatal mechanisms as well ,
providing another possible explanation for the
cognitive impairments observed in pd patients with
anxiety in this study . noradrenergic and
serotonergic systems should also be considered
when trying to explain the mechanisms by which
anxiety may influence cognition in pd . although
these neurotransmitter systems are relatively
understudied in pd cognition , treating the
noradrenergic and serotonergic systems has shown
beneficial effects on cognition in pd . selective
serotonin reuptake inhibitor , citalopram , was
shown to improve response inhibition deficits in
pd , while noradrenaline reuptake blocker ,
atomoxetine , has been recently reported to have
promising effects on cognition in pd [ 45 , 46 ] .
overall , very few neuroimaging studies have been
conducted in pd in order to understand the neural
correlates of pd anxiety and its underlying neural
pathology . future research should focus on
relating anatomical changes and neurochemical
changes to neural activation in order to gain a
clearer understanding on how these pathologies
affect anxiety in pd . to further understand how
anxiety and cognitive dysfunction are related ,
future research should focus on using advanced
structural and function imaging techniques to
explain both cognitive and neural breakdowns that
are associated with anxiety in pd patients .
research has indicated that those with amnestic
mild cognitive impairment who have more
neuropsychiatric symptoms have a greater risk of
developing dementia compared to those with fewer
neuropsychiatric symptoms . future studies should
also examine whether treating neuropsychiatric
symptoms might impact the progression of cognitive
decline and improve cognitive impairments in pd
patients . previous studies have used pd symptom
laterality as a window to infer asymmetrical
dysfunction of neural circuits . for example , lpd
patients have greater inferred right hemisphere
pathology , whereas rpd patients have greater
inferred left hemisphere pathology . thus ,
cognitive domains predominantly subserved by the
left hemisphere ( e.g. , verbally mediated tasks
of executive function and verbal memory ) might be
hypothesized to be more affected in rpd than lpd ;
however , this remains controversial . it has also
been suggested that since anxiety is a common
feature of left hemisphere involvement [ 48 , 49 ]
, cognitive domains subserved by the left
hemisphere may also be more strongly related to
anxiety . results from this study showed selective
verbal memory deficits in rpd patients with
anxiety compared to rpd without anxiety , whereas
lpd patients with anxiety had greater attentional
/ working memory deficits compared to lpd without
anxiety . although these results align with
previous research , interpretations of these
findings should be made with caution due to the
small sample size in the lpd comparison
specifically . recent work has suggested that the
hads questionnaire may underestimate the burden of
anxiety related symptomology and therefore be a
less sensitive measure of anxiety in pd [ 30 , 50
] . in addition , our small sample size also
limited the statistical power for detecting
significant findings . based on these limitations
, our findings are likely conservative and
underrepresent the true impact anxiety has on
cognition in pd . additionally , the current study
employed a very brief neuropsychological
assessment including one or two tests for each
cognitive domain . future studies are encouraged
to collect a more complex and comprehensive
battery from a larger sample of pd participants in
order to better understand the role anxiety plays
on cognition in pd . another limitation of this
study was the absence of diagnostic interviews to
characterize participants ' psychiatric symptoms
and specify the type of anxiety disorders included
in this study . future studies should perform
diagnostic interviews with participants ( e.g. ,
using dsm - v criteria ) rather than relying on
self - reported measures to group participants ,
in order to better understand whether the type of
anxiety disorder ( e.g. , social anxiety , phobias
, panic disorders , and generalized anxiety )
influences cognitive performance differently in pd
. one advantage the hads questionnaire provided
over other anxiety scales was that it assessed
both anxiety and depression simultaneously and
allowed us to control for coexisting depression .
although there was a trend that the pda+ group
self - reported higher levels of depression than
the pda group , all participants included in the
study scored < 6 on the depression subscale of the
hads . controlling for depression while assessing
anxiety has been identified as a key shortcoming
in the majority of recent work . considering many
previous studies have investigated the influence
of depression on cognition in pd without
accounting for the presence of anxiety and the
inconsistent findings reported to date , we
recommend that future research should try to
disentangle the influence of anxiety versus
depression on cognitive impairments in pd .
considering the growing number of clinical trials
for treating depression , there are few if any for
the treatment of anxiety in pd . anxiety is a key
contributor to decreased quality of life in pd and
greatly requires better treatment options .
moreover , anxiety has been suggested to play a
key role in freezing of gait ( fog ) , which is
also related to attentional set - shifting [ 52 ,
53 ] . future research should examine the link
between anxiety , set - shifting , and fog , in
order to determine whether treating anxiety might
be a potential therapy for improving fog ."""
import torch
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
# Tokenize the full article (LongT5 handles inputs up to 16384 tokens)
input_ids = tokenizer(LONG_ARTICLE, return_tensors="pt").input_ids.to("cuda")
model = LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps", return_dict_in_generate=True).to("cuda")
# Generate and decode the abstract-style summary
sequences = model.generate(input_ids).sequences
summary = tokenizer.batch_decode(sequences)
```
|
ruggedmug/q-Taxi-v3
|
ruggedmug
| 2023-07-13T19:38:06Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T19:38:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL
# Course utilities (an assumption; this card does not define it), and `gym`
# must be imported for `gym.make` below.
model = load_from_hub(repo_id="ruggedmug/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
grace-pro/xlmr-base-hausa-5e-5
|
grace-pro
| 2023-07-13T19:15:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T18:46:41Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlmr-base-hausa-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-base-hausa-5e-5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1493
- Precision: 0.7153
- Recall: 0.5631
- F1: 0.6301
- Accuracy: 0.9588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.177 | 1.0 | 1312 | 0.1549 | 0.6557 | 0.4168 | 0.5097 | 0.9479 |
| 0.1412 | 2.0 | 2624 | 0.1386 | 0.6723 | 0.5262 | 0.5903 | 0.9539 |
| 0.1154 | 3.0 | 3936 | 0.1400 | 0.7078 | 0.5353 | 0.6096 | 0.9567 |
| 0.0921 | 4.0 | 5248 | 0.1418 | 0.7200 | 0.5496 | 0.6234 | 0.9585 |
| 0.0731 | 5.0 | 6560 | 0.1493 | 0.7153 | 0.5631 | 0.6301 | 0.9588 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kastan/sf_medium_bf16
|
kastan
| 2023-07-13T19:11:15Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T19:05:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
grace-pro/afriberta-small-hausa-5e-5
|
grace-pro
| 2023-07-13T18:41:38Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T18:31:08Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-small-hausa-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-small-hausa-5e-5
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1600
- Precision: 0.6808
- Recall: 0.4937
- F1: 0.5724
- Accuracy: 0.9623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1523 | 1.0 | 1312 | 0.1338 | 0.6526 | 0.4261 | 0.5156 | 0.9583 |
| 0.1162 | 2.0 | 2624 | 0.1300 | 0.6862 | 0.4603 | 0.5510 | 0.9614 |
| 0.089 | 3.0 | 3936 | 0.1375 | 0.6953 | 0.4630 | 0.5559 | 0.9619 |
| 0.0698 | 4.0 | 5248 | 0.1507 | 0.6860 | 0.4888 | 0.5708 | 0.9623 |
| 0.0559 | 5.0 | 6560 | 0.1600 | 0.6808 | 0.4937 | 0.5724 | 0.9623 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ericserafa/ppo-Huggy
|
ericserafa
| 2023-07-13T18:38:06Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-13T17:36:51Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ericserafa/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NTU-NLP-sg/xCodeEval-nl-code-starencoder-ckpt-37
|
NTU-NLP-sg
| 2023-07-13T18:35:21Z | 0 | 0 | null |
[
"arxiv:2303.03004",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-07-13T06:59:15Z |
---
license: cc-by-nc-4.0
---
## Model Description
**StarEncoder** trained on the training split of the `retrieval_nl_code` subset of [xCodeEval](https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval), for 37 epochs.
Code repo used for training: https://github.com/facebookresearch/DPR
For detailed results, please see our [paper](https://arxiv.org/abs/2303.03004).
|
mayapapaya/Keyword-Extractor
|
mayapapaya
| 2023-07-13T18:33:59Z | 204 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T14:23:08Z |
# Model Card for mayapapaya/Keyword-Extractor
This model is meant to extract keywords from text.
- **Model type:** text-classification
- **Language(s) (NLP):** English
- **License:** cc
- **Finetuned from model [optional]:** [More Information Needed]
## Training Details
This model is a fine-tuned version of the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model.
## Training Data
Trained on [51la5/keyword-extraction](https://huggingface.co/datasets/51la5/keyword-extraction) from HuggingFace Hub.
## How to Get Started with the Model
Note: model inputs were tokenized using distilbert-base-uncased tokenizer
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("mayapapaya/Keyword-Extractor")
```
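Continuing the loading snippet above, a hedged inference sketch; the label semantics are an assumption, so inspect `model.config.id2label` for the actual class names.

```python
# Continues the loading snippet above.
import torch

inputs = tokenizer("machine learning", return_tensors="pt")  # candidate phrase
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```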
|
Schwab/vit-base-patch16-224-finetuned-flower
|
Schwab
| 2023-07-13T18:31:03Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T18:19:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
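Since the card does not yet include a usage section, here is a minimal inference sketch (not part of the auto-generated card); the image path is a placeholder, and the class names come from the fine-tuning label set.

```python
# A minimal sketch, not part of the original card; "flower.jpg" is a placeholder.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "Schwab/vit-base-patch16-224-finetuned-flower"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("flower.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```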
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
chunwoolee0/my_doccls_korean_model
|
chunwoolee0
| 2023-07-13T18:27:18Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:nsmc",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T02:48:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- nsmc
metrics:
- accuracy
model-index:
- name: my_doccls_korean_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: nsmc
type: nsmc
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.90372
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_doccls_korean_model
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the nsmc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2942
- Accuracy: 0.9037
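A minimal inference sketch for the checkpoint (not part of the auto-generated card); the review text is illustrative, and since NSMC is binary sentiment you should check `model.config.id2label` for the exact label names.

```python
# A minimal sketch, not part of the original card.
from transformers import pipeline

classifier = pipeline("text-classification", model="chunwoolee0/my_doccls_korean_model")
print(classifier("이 영화 정말 재미있어요!"))  # "This movie is really fun!"
```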
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.267 | 1.0 | 2344 | 0.2482 | 0.8987 |
| 0.1751 | 2.0 | 4688 | 0.2523 | 0.9024 |
| 0.1108 | 3.0 | 7032 | 0.2942 | 0.9037 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grace-pro/afriberta-base-hausa-5e-5
|
grace-pro
| 2023-07-13T18:24:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T18:07:00Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-base-hausa-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-base-hausa-5e-5
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1657
- Precision: 0.6982
- Recall: 0.5348
- F1: 0.6056
- Accuracy: 0.9648
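A minimal inference sketch using the `pipeline` API (not part of the auto-generated card); the example sentence is illustrative.

```python
# A minimal sketch, not part of the original card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/afriberta-base-hausa-5e-5",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Muhammadu Buhari ya ziyarci Kano."))
```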
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1444 | 1.0 | 1312 | 0.1255 | 0.6832 | 0.4535 | 0.5452 | 0.9612 |
| 0.1065 | 2.0 | 2624 | 0.1204 | 0.6788 | 0.5101 | 0.5824 | 0.9631 |
| 0.0752 | 3.0 | 3936 | 0.1303 | 0.6818 | 0.5248 | 0.5931 | 0.9635 |
| 0.0529 | 4.0 | 5248 | 0.1461 | 0.6963 | 0.5307 | 0.6023 | 0.9648 |
| 0.0386 | 5.0 | 6560 | 0.1657 | 0.6982 | 0.5348 | 0.6056 | 0.9648 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Joserzapata/speecht5_finetuned_voxpopuli_nl
|
Joserzapata
| 2023-07-13T18:21:28Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-13T04:28:20Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4624
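A minimal synthesis sketch (not part of the auto-generated card). It assumes the processor files were pushed together with the model, and it borrows a speaker x-vector from the CMU Arctic x-vectors dataset, a conventional choice in SpeechT5 demos rather than a speaker from this fine-tune.

```python
# A minimal sketch, not part of the original card; see the assumptions above.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo_id = "Joserzapata/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```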
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.521 | 4.3 | 1000 | 0.4799 |
| 0.5021 | 8.61 | 2000 | 0.4676 |
| 0.4958 | 12.91 | 3000 | 0.4637 |
| 0.4874 | 17.21 | 4000 | 0.4624 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Sandrro/text_to_topic
|
Sandrro
| 2023-07-13T18:15:06Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T17:18:08Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: text_to_subfunction_v10_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_to_subfunction_v10_2
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5115
- F1: 0.5638
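A minimal inference sketch (not part of the auto-generated card); the example text is illustrative, and the topic labels depend on the fine-tuning label set, so inspect `model.config.id2label`.

```python
# A minimal sketch, not part of the original card.
from transformers import pipeline

classifier = pipeline("text-classification", model="Sandrro/text_to_topic")
print(classifier("Когда отремонтируют дорогу на нашей улице?"))  # "When will our street be repaired?"
```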
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.8616 | 1.0 | 5400 | 1.7457 | 0.4607 |
| 1.4576 | 2.0 | 10800 | 1.5115 | 0.5638 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0.dev20230414+cu117
- Datasets 2.9.0
- Tokenizers 0.13.3
|
ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t5.0_a0.9
|
ayanban011
| 2023-07-13T18:14:29Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T15:47:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6_e_200-tiny_tobacco3482_kd_CEKD_t5.0_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6_e_200-tiny_tobacco3482_kd_CEKD_t5.0_a0.9
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5583
- Accuracy: 0.82
- Brier Loss: 0.2563
- Nll: 1.8898
- F1 Micro: 0.82
- F1 Macro: 0.8009
- Ece: 0.1578
- Aurc: 0.0530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 1.9764 | 0.23 | 0.8621 | 4.6756 | 0.23 | 0.1902 | 0.2733 | 0.7604 |
| No log | 2.0 | 50 | 1.2764 | 0.535 | 0.5973 | 2.7212 | 0.535 | 0.4337 | 0.2769 | 0.2592 |
| No log | 3.0 | 75 | 0.9774 | 0.68 | 0.4478 | 2.1874 | 0.68 | 0.5915 | 0.2144 | 0.1334 |
| No log | 4.0 | 100 | 0.8047 | 0.755 | 0.3617 | 1.4629 | 0.755 | 0.7257 | 0.1850 | 0.0888 |
| No log | 5.0 | 125 | 0.7616 | 0.765 | 0.3363 | 1.4885 | 0.765 | 0.7391 | 0.2017 | 0.0843 |
| No log | 6.0 | 150 | 1.0029 | 0.72 | 0.4200 | 1.6550 | 0.72 | 0.7047 | 0.2303 | 0.1169 |
| No log | 7.0 | 175 | 0.6286 | 0.825 | 0.2766 | 1.2493 | 0.825 | 0.7930 | 0.1954 | 0.0646 |
| No log | 8.0 | 200 | 0.6859 | 0.82 | 0.2857 | 1.4847 | 0.82 | 0.7971 | 0.1837 | 0.0699 |
| No log | 9.0 | 225 | 0.6365 | 0.81 | 0.2765 | 1.1457 | 0.81 | 0.7913 | 0.1604 | 0.0669 |
| No log | 10.0 | 250 | 0.6085 | 0.81 | 0.2614 | 1.5809 | 0.81 | 0.7928 | 0.1874 | 0.0536 |
| No log | 11.0 | 275 | 0.5900 | 0.84 | 0.2620 | 1.1457 | 0.8400 | 0.8308 | 0.1695 | 0.0674 |
| No log | 12.0 | 300 | 0.8544 | 0.75 | 0.3667 | 1.9577 | 0.75 | 0.7330 | 0.1988 | 0.1329 |
| No log | 13.0 | 325 | 0.5265 | 0.845 | 0.2278 | 1.2521 | 0.845 | 0.8209 | 0.1518 | 0.0399 |
| No log | 14.0 | 350 | 0.5702 | 0.815 | 0.2567 | 1.5233 | 0.815 | 0.8032 | 0.1551 | 0.0519 |
| No log | 15.0 | 375 | 0.5933 | 0.845 | 0.2581 | 1.4776 | 0.845 | 0.8341 | 0.1659 | 0.0738 |
| No log | 16.0 | 400 | 0.5697 | 0.84 | 0.2496 | 1.6732 | 0.8400 | 0.8235 | 0.1470 | 0.0557 |
| No log | 17.0 | 425 | 0.5471 | 0.825 | 0.2428 | 1.7010 | 0.825 | 0.8093 | 0.1406 | 0.0461 |
| No log | 18.0 | 450 | 0.5696 | 0.825 | 0.2546 | 1.4095 | 0.825 | 0.7977 | 0.1461 | 0.0612 |
| No log | 19.0 | 475 | 0.6544 | 0.805 | 0.2959 | 1.8251 | 0.805 | 0.7970 | 0.1681 | 0.0605 |
| 0.4416 | 20.0 | 500 | 0.5113 | 0.83 | 0.2327 | 1.4103 | 0.83 | 0.8093 | 0.1380 | 0.0541 |
| 0.4416 | 21.0 | 525 | 0.5255 | 0.84 | 0.2375 | 1.6750 | 0.8400 | 0.8220 | 0.1320 | 0.0462 |
| 0.4416 | 22.0 | 550 | 0.5889 | 0.835 | 0.2681 | 1.7850 | 0.835 | 0.8242 | 0.1507 | 0.0683 |
| 0.4416 | 23.0 | 575 | 0.5456 | 0.835 | 0.2492 | 1.8481 | 0.835 | 0.8137 | 0.1716 | 0.0550 |
| 0.4416 | 24.0 | 600 | 0.5661 | 0.83 | 0.2611 | 1.8434 | 0.83 | 0.8156 | 0.1618 | 0.0591 |
| 0.4416 | 25.0 | 625 | 0.5444 | 0.83 | 0.2484 | 1.7579 | 0.83 | 0.8091 | 0.1478 | 0.0530 |
| 0.4416 | 26.0 | 650 | 0.5418 | 0.83 | 0.2503 | 1.7188 | 0.83 | 0.8125 | 0.1564 | 0.0484 |
| 0.4416 | 27.0 | 675 | 0.5532 | 0.835 | 0.2540 | 1.8931 | 0.835 | 0.8146 | 0.1694 | 0.0514 |
| 0.4416 | 28.0 | 700 | 0.5492 | 0.835 | 0.2518 | 1.8959 | 0.835 | 0.8155 | 0.1505 | 0.0495 |
| 0.4416 | 29.0 | 725 | 0.5478 | 0.825 | 0.2505 | 1.8907 | 0.825 | 0.8069 | 0.1548 | 0.0503 |
| 0.4416 | 30.0 | 750 | 0.5478 | 0.835 | 0.2510 | 1.8881 | 0.835 | 0.8178 | 0.1467 | 0.0521 |
| 0.4416 | 31.0 | 775 | 0.5472 | 0.825 | 0.2505 | 1.8888 | 0.825 | 0.8064 | 0.1527 | 0.0510 |
| 0.4416 | 32.0 | 800 | 0.5522 | 0.83 | 0.2527 | 1.8927 | 0.83 | 0.8126 | 0.1449 | 0.0520 |
| 0.4416 | 33.0 | 825 | 0.5513 | 0.825 | 0.2524 | 1.8989 | 0.825 | 0.8064 | 0.1625 | 0.0509 |
| 0.4416 | 34.0 | 850 | 0.5465 | 0.835 | 0.2504 | 1.8880 | 0.835 | 0.8148 | 0.1519 | 0.0520 |
| 0.4416 | 35.0 | 875 | 0.5489 | 0.825 | 0.2515 | 1.8866 | 0.825 | 0.8064 | 0.1538 | 0.0510 |
| 0.4416 | 36.0 | 900 | 0.5508 | 0.825 | 0.2521 | 1.8922 | 0.825 | 0.8053 | 0.1356 | 0.0526 |
| 0.4416 | 37.0 | 925 | 0.5495 | 0.825 | 0.2522 | 1.8881 | 0.825 | 0.8064 | 0.1517 | 0.0514 |
| 0.4416 | 38.0 | 950 | 0.5483 | 0.825 | 0.2514 | 1.8859 | 0.825 | 0.8064 | 0.1749 | 0.0511 |
| 0.4416 | 39.0 | 975 | 0.5508 | 0.825 | 0.2524 | 1.8868 | 0.825 | 0.8064 | 0.1459 | 0.0514 |
| 0.0519 | 40.0 | 1000 | 0.5519 | 0.825 | 0.2529 | 1.8862 | 0.825 | 0.8064 | 0.1532 | 0.0513 |
| 0.0519 | 41.0 | 1025 | 0.5522 | 0.825 | 0.2530 | 1.8882 | 0.825 | 0.8064 | 0.1665 | 0.0519 |
| 0.0519 | 42.0 | 1050 | 0.5507 | 0.825 | 0.2525 | 1.8870 | 0.825 | 0.8064 | 0.1613 | 0.0508 |
| 0.0519 | 43.0 | 1075 | 0.5528 | 0.825 | 0.2536 | 1.8884 | 0.825 | 0.8064 | 0.1634 | 0.0517 |
| 0.0519 | 44.0 | 1100 | 0.5520 | 0.825 | 0.2531 | 1.8879 | 0.825 | 0.8064 | 0.1519 | 0.0525 |
| 0.0519 | 45.0 | 1125 | 0.5524 | 0.825 | 0.2535 | 1.8876 | 0.825 | 0.8053 | 0.1582 | 0.0515 |
| 0.0519 | 46.0 | 1150 | 0.5525 | 0.825 | 0.2534 | 1.8867 | 0.825 | 0.8064 | 0.1592 | 0.0519 |
| 0.0519 | 47.0 | 1175 | 0.5532 | 0.825 | 0.2539 | 1.8875 | 0.825 | 0.8064 | 0.1621 | 0.0521 |
| 0.0519 | 48.0 | 1200 | 0.5540 | 0.825 | 0.2540 | 1.8865 | 0.825 | 0.8064 | 0.1502 | 0.0522 |
| 0.0519 | 49.0 | 1225 | 0.5523 | 0.825 | 0.2538 | 1.8268 | 0.825 | 0.8072 | 0.1625 | 0.0514 |
| 0.0519 | 50.0 | 1250 | 0.5535 | 0.825 | 0.2539 | 1.8871 | 0.825 | 0.8064 | 0.1684 | 0.0517 |
| 0.0519 | 51.0 | 1275 | 0.5526 | 0.825 | 0.2534 | 1.8850 | 0.825 | 0.8064 | 0.1621 | 0.0519 |
| 0.0519 | 52.0 | 1300 | 0.5543 | 0.825 | 0.2543 | 1.8865 | 0.825 | 0.8064 | 0.1429 | 0.0521 |
| 0.0519 | 53.0 | 1325 | 0.5526 | 0.825 | 0.2538 | 1.8866 | 0.825 | 0.8064 | 0.1613 | 0.0515 |
| 0.0519 | 54.0 | 1350 | 0.5530 | 0.82 | 0.2538 | 1.8877 | 0.82 | 0.8009 | 0.1620 | 0.0518 |
| 0.0519 | 55.0 | 1375 | 0.5550 | 0.825 | 0.2547 | 1.8872 | 0.825 | 0.8064 | 0.1567 | 0.0522 |
| 0.0519 | 56.0 | 1400 | 0.5565 | 0.825 | 0.2552 | 1.8859 | 0.825 | 0.8064 | 0.1400 | 0.0523 |
| 0.0519 | 57.0 | 1425 | 0.5552 | 0.825 | 0.2548 | 1.8874 | 0.825 | 0.8064 | 0.1543 | 0.0520 |
| 0.0519 | 58.0 | 1450 | 0.5537 | 0.825 | 0.2542 | 1.8860 | 0.825 | 0.8064 | 0.1531 | 0.0516 |
| 0.0519 | 59.0 | 1475 | 0.5559 | 0.825 | 0.2551 | 1.8879 | 0.825 | 0.8064 | 0.1564 | 0.0525 |
| 0.0508 | 60.0 | 1500 | 0.5548 | 0.825 | 0.2545 | 1.8866 | 0.825 | 0.8064 | 0.1526 | 0.0522 |
| 0.0508 | 61.0 | 1525 | 0.5557 | 0.825 | 0.2550 | 1.8884 | 0.825 | 0.8064 | 0.1443 | 0.0524 |
| 0.0508 | 62.0 | 1550 | 0.5548 | 0.82 | 0.2546 | 1.8874 | 0.82 | 0.8009 | 0.1709 | 0.0527 |
| 0.0508 | 63.0 | 1575 | 0.5556 | 0.825 | 0.2551 | 1.8899 | 0.825 | 0.8064 | 0.1606 | 0.0524 |
| 0.0508 | 64.0 | 1600 | 0.5562 | 0.825 | 0.2553 | 1.8872 | 0.825 | 0.8064 | 0.1467 | 0.0527 |
| 0.0508 | 65.0 | 1625 | 0.5569 | 0.825 | 0.2554 | 1.8879 | 0.825 | 0.8064 | 0.1537 | 0.0524 |
| 0.0508 | 66.0 | 1650 | 0.5567 | 0.825 | 0.2555 | 1.8873 | 0.825 | 0.8064 | 0.1601 | 0.0525 |
| 0.0508 | 67.0 | 1675 | 0.5556 | 0.825 | 0.2550 | 1.8878 | 0.825 | 0.8064 | 0.1601 | 0.0527 |
| 0.0508 | 68.0 | 1700 | 0.5570 | 0.825 | 0.2555 | 1.8879 | 0.825 | 0.8064 | 0.1679 | 0.0528 |
| 0.0508 | 69.0 | 1725 | 0.5560 | 0.825 | 0.2553 | 1.8886 | 0.825 | 0.8064 | 0.1525 | 0.0521 |
| 0.0508 | 70.0 | 1750 | 0.5562 | 0.825 | 0.2553 | 1.8878 | 0.825 | 0.8064 | 0.1531 | 0.0528 |
| 0.0508 | 71.0 | 1775 | 0.5572 | 0.82 | 0.2557 | 1.8883 | 0.82 | 0.8009 | 0.1718 | 0.0530 |
| 0.0508 | 72.0 | 1800 | 0.5567 | 0.82 | 0.2555 | 1.8888 | 0.82 | 0.8009 | 0.1630 | 0.0525 |
| 0.0508 | 73.0 | 1825 | 0.5571 | 0.825 | 0.2556 | 1.8882 | 0.825 | 0.8064 | 0.1598 | 0.0528 |
| 0.0508 | 74.0 | 1850 | 0.5580 | 0.825 | 0.2561 | 1.8901 | 0.825 | 0.8064 | 0.1543 | 0.0530 |
| 0.0508 | 75.0 | 1875 | 0.5579 | 0.82 | 0.2561 | 1.8892 | 0.82 | 0.8009 | 0.1721 | 0.0530 |
| 0.0508 | 76.0 | 1900 | 0.5574 | 0.82 | 0.2559 | 1.8892 | 0.82 | 0.8009 | 0.1636 | 0.0528 |
| 0.0508 | 77.0 | 1925 | 0.5569 | 0.82 | 0.2557 | 1.8393 | 0.82 | 0.8009 | 0.1634 | 0.0526 |
| 0.0508 | 78.0 | 1950 | 0.5572 | 0.82 | 0.2558 | 1.8887 | 0.82 | 0.8009 | 0.1637 | 0.0530 |
| 0.0508 | 79.0 | 1975 | 0.5578 | 0.82 | 0.2560 | 1.8888 | 0.82 | 0.8009 | 0.1579 | 0.0530 |
| 0.0507 | 80.0 | 2000 | 0.5577 | 0.82 | 0.2559 | 1.8889 | 0.82 | 0.8009 | 0.1578 | 0.0532 |
| 0.0507 | 81.0 | 2025 | 0.5578 | 0.82 | 0.2560 | 1.8889 | 0.82 | 0.8009 | 0.1578 | 0.0533 |
| 0.0507 | 82.0 | 2050 | 0.5579 | 0.825 | 0.2561 | 1.8891 | 0.825 | 0.8064 | 0.1602 | 0.0528 |
| 0.0507 | 83.0 | 2075 | 0.5581 | 0.825 | 0.2562 | 1.8894 | 0.825 | 0.8064 | 0.1544 | 0.0528 |
| 0.0507 | 84.0 | 2100 | 0.5579 | 0.82 | 0.2561 | 1.8894 | 0.82 | 0.8009 | 0.1581 | 0.0531 |
| 0.0507 | 85.0 | 2125 | 0.5580 | 0.82 | 0.2561 | 1.8896 | 0.82 | 0.8009 | 0.1578 | 0.0528 |
| 0.0507 | 86.0 | 2150 | 0.5581 | 0.82 | 0.2562 | 1.8891 | 0.82 | 0.8009 | 0.1580 | 0.0532 |
| 0.0507 | 87.0 | 2175 | 0.5582 | 0.82 | 0.2562 | 1.8467 | 0.82 | 0.8009 | 0.1581 | 0.0528 |
| 0.0507 | 88.0 | 2200 | 0.5583 | 0.82 | 0.2562 | 1.8891 | 0.82 | 0.8009 | 0.1580 | 0.0531 |
| 0.0507 | 89.0 | 2225 | 0.5584 | 0.815 | 0.2563 | 1.8894 | 0.815 | 0.7976 | 0.1608 | 0.0534 |
| 0.0507 | 90.0 | 2250 | 0.5578 | 0.82 | 0.2561 | 1.8894 | 0.82 | 0.8009 | 0.1578 | 0.0530 |
| 0.0507 | 91.0 | 2275 | 0.5584 | 0.815 | 0.2563 | 1.8896 | 0.815 | 0.7976 | 0.1607 | 0.0532 |
| 0.0507 | 92.0 | 2300 | 0.5583 | 0.82 | 0.2562 | 1.8893 | 0.82 | 0.8009 | 0.1581 | 0.0531 |
| 0.0507 | 93.0 | 2325 | 0.5582 | 0.82 | 0.2562 | 1.8898 | 0.82 | 0.8009 | 0.1579 | 0.0530 |
| 0.0507 | 94.0 | 2350 | 0.5582 | 0.82 | 0.2562 | 1.8392 | 0.82 | 0.8009 | 0.1578 | 0.0530 |
| 0.0507 | 95.0 | 2375 | 0.5584 | 0.82 | 0.2563 | 1.8897 | 0.82 | 0.8009 | 0.1582 | 0.0531 |
| 0.0507 | 96.0 | 2400 | 0.5582 | 0.82 | 0.2562 | 1.8898 | 0.82 | 0.8009 | 0.1578 | 0.0530 |
| 0.0507 | 97.0 | 2425 | 0.5583 | 0.82 | 0.2563 | 1.8896 | 0.82 | 0.8009 | 0.1580 | 0.0530 |
| 0.0507 | 98.0 | 2450 | 0.5582 | 0.82 | 0.2562 | 1.8898 | 0.82 | 0.8009 | 0.1578 | 0.0530 |
| 0.0507 | 99.0 | 2475 | 0.5583 | 0.82 | 0.2563 | 1.8898 | 0.82 | 0.8009 | 0.1578 | 0.0530 |
| 0.0507 | 100.0 | 2500 | 0.5583 | 0.82 | 0.2563 | 1.8898 | 0.82 | 0.8009 | 0.1578 | 0.0530 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Tanor/BERTovoSENTNEG6
|
Tanor
| 2023-07-13T18:11:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-09T01:32:38Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: BERTovoSENTNEG6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTovoSENTNEG6
This model is a fine-tuned version of [Tanor/BERTicovoSENTNEG6](https://huggingface.co/Tanor/BERTicovoSENTNEG6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0837
- F1: 0.4878
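A minimal inference sketch (not part of the auto-generated card); the Serbian sentence is illustrative, and `model.config.id2label` defines what the positive class denotes.

```python
# A minimal sketch, not part of the original card.
from transformers import pipeline

classifier = pipeline("text-classification", model="Tanor/BERTovoSENTNEG6")
print(classifier("Ova knjiga je zaista loša."))  # "This book is really bad."
```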
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 53 | 0.0536 | 0.0769 |
| No log | 2.0 | 106 | 0.0482 | 0.5909 |
| No log | 3.0 | 159 | 0.0610 | 0.5532 |
| No log | 4.0 | 212 | 0.0718 | 0.5 |
| No log | 5.0 | 265 | 0.0837 | 0.4878 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Dlychan/Toketenk
|
Dlychan
| 2023-07-13T17:54:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T17:46:45Z |
---
license: creativeml-openrail-m
---
|
MBMMurad/BanglaBERT_Person_Name_Extractor
|
MBMMurad
| 2023-07-13T17:52:09Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"token-classification",
"bn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-12T21:19:24Z |
---
language:
- bn
metrics:
- f1
pipeline_tag: token-classification
---
# Bangla-Person-Name-Extractor
This repository contains the implementation of a Bangla Person Name Extractor model which is able to extract person name entities from a given sentence. We approached it as a token classification task, i.e., tagging each token as either part of a person's name or not. We leveraged the [BanglaBERT](https://github.com/csebuetnlp/banglabert) model for our task, finetuning it for a binary classification task using a custom-prepared dataset. We have deployed the model to Hugging Face for easier access and use.
# How to use it?
[This Notebook](https://github.com/MBMMurad/Bangla-Person-Name-Extractor/blob/main/Inference_template.ipynb) contains the required Inference Template on a sentence.
<br></br>
You can also directly infer using the following code snippet. Just change the sentence.
```python
from transformers import AutoModelForPreTraining, AutoTokenizer,AutoModelForTokenClassification #!pip install transformers==4.30.2
from normalizer import normalize #pip install git+https://github.com/csebuetnlp/normalizer
import torch #pip install torch
import numpy as np #!pip install numpy==1.23.5
model = AutoModelForTokenClassification.from_pretrained("MBMMurad/BanglaBERT_Person_Name_Extractor")
tokenizer = AutoTokenizer.from_pretrained("MBMMurad/BanglaBERT_Person_Name_Extractor")
def inference_fn(sentence):
sentence = normalize(sentence)
tokens = tokenizer.tokenize(sentence)
inputs = tokenizer.encode(sentence,return_tensors="pt")
outputs = model(inputs).logits
predictions = torch.argmax(outputs[0],axis=1)[1:-1].numpy()
idxs = np.where(predictions==1)
return np.array(tokens)[idxs]
sentence = "আব্দুর রহিম নামের কাস্টমারকে একশ টাকা বাকি দিলাম।"
pred = inference_fn(sentence)
print(f"Input Sentence : {sentence}")
print(f"Person Name Entities : {pred}")
sentence = "ইঞ্জিনিয়ার্স ইনস্টিটিউশন চট্টগ্রামের সাবেক সভাপতি প্রকৌশলী দেলোয়ার হোসেন মজুমদার প্রথম আলোকে বলেন, 'সংকট নিরসনে বর্তমান খালগুলোকে পূর্ণ প্রবাহে ফিরিয়ে আনার পাশাপাশি নতুন তিনটি খাল খনন জরুরি।'"
pred = inference_fn(sentence)
print(f"Input Sentence : {sentence}")
print(f"Person Name Entities : {pred}")
sentence = "দলীয় নেতারা তাঁর বাসভবনে যেতে চাইলে আটক হন।"
pred = inference_fn(sentence)
print(f"Input Sentence : {sentence}")
print(f"Person Name Entities : {pred}")
```
**Output:**
```
Input Sentence : আব্দুর রহিম নামের কাস্টমারকে একশ টাকা বাকি দিলাম।
Person Name Entities : ['আব্দুর' 'রহিম']
Input Sentence : ইঞ্জিনিয়ার্স ইনস্টিটিউশন চট্টগ্রামের সাবেক সভাপতি প্রকৌশলী দেলোয়ার হোসেন মজুমদার প্রথম আলোকে বলেন, 'সংকট নিরসনে বর্তমান খালগুলোকে পূর্ণ প্রবাহে ফিরিয়ে আনার পাশাপাশি নতুন তিনটি খাল খনন জরুরি।'
Person Name Entities : ['দেলোয়ার' 'হোসেন' 'মজুমদার']
Input Sentence : দলীয় নেতারা তাঁর বাসভবনে যেতে চাইলে আটক হন।
Person Name Entities : []
```
# Datasets
We used two datasets to train and evaluate our pipeline.
1. [Bengali-NER/annotated data at master · Rifat1493/Bengali-NER](https://github.com/Rifat1493/Bengali-NER/tree/master/annotated%20data)
2. [banglakit/bengali-ner-data](https://raw.githubusercontent.com/banglakit/bengali-ner-data/master/main.jsonl)
The annotation formats for both datasets were quite different, so we had to preprocess both of them before merging them. Please refer to [this notebook](https://github.com/MBMMurad/Bangla-Person-Name-Extractor/blob/main/prepare-dataset.ipynb) for preparing the dataset as required.
# Training and Evaluation
We treated this problem as a token classification task, so it seemed natural to finetune the BanglaBERT model for our purpose. [BanglaBERT](https://huggingface.co/csebuetnlp/banglabert) is an [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) discriminator model pretrained with the Replaced Token Detection (RTD) objective. Finetuned models using this checkpoint achieve state-of-the-art results on many NLP tasks in Bengali.
We mainly finetuned two checkpoints of BanglaBERT.
1. [BanglaBERT](https://huggingface.co/csebuetnlp/banglabert)
2. [BanglaBERT small](https://huggingface.co/csebuetnlp/banglabert_small)
BanglaBERT performed better than BanglaBERT small (83% vs. 79% F1 score on the test set).
Please refer to [this notebook](https://github.com/MBMMurad/Bangla-Person-Name-Extractor/blob/main/Training%20Notebook%20%3A%20Person%20Name%20Extractor%20using%20BanglaBERT.ipynb) to see the training process.
**Quantitative results**
Please refer to [this notebook](https://github.com/MBMMurad/Bangla-Person-Name-Extractor/blob/main/Inference%20and%20Evaluation%20Notebook.ipynb) to see the evaluation process.
<br></br>

|
Tanor/BERTovoSENTPOS6
|
Tanor
| 2023-07-13T17:48:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-09T00:21:54Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: BERTovoSENTPOS6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTovoSENTPOS6
This model is a fine-tuned version of [Tanor/BERTicovoSENTPOS6](https://huggingface.co/Tanor/BERTicovoSENTPOS6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0541
- F1: 0.5143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 53 | 0.0452 | 0.0 |
| No log | 2.0 | 106 | 0.0436 | 0.0870 |
| No log | 3.0 | 159 | 0.0449 | 0.4138 |
| No log | 4.0 | 212 | 0.0506 | 0.5 |
| No log | 5.0 | 265 | 0.0541 | 0.5143 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Geotrend/distilbert-base-ar-cased
|
Geotrend
| 2023-07-13T17:37:33Z | 130 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"ar",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: ar
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-ar-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, thereby preserving the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ar-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-ar-cased")
```
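The checkpoint can also be exercised through the fill-mask pipeline; the example sentence below is illustrative, not part of the original card.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Geotrend/distilbert-base-ar-cased")
print(fill_mask("عاصمة فرنسا هي [MASK]."))  # "The capital of France is [MASK]."
```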
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7
|
ayanban011
| 2023-07-13T17:33:13Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T15:25:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4925
- Accuracy: 0.845
- Brier Loss: 0.2526
- Nll: 1.5547
- F1 Micro: 0.845
- F1 Macro: 0.8258
- Ece: 0.1785
- Aurc: 0.0736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 1.8463 | 0.245 | 0.8631 | 4.7256 | 0.245 | 0.2002 | 0.2955 | 0.7640 |
| No log | 2.0 | 50 | 1.1593 | 0.535 | 0.5972 | 2.7208 | 0.535 | 0.4319 | 0.2539 | 0.2591 |
| No log | 3.0 | 75 | 0.9039 | 0.67 | 0.4555 | 2.3747 | 0.67 | 0.5677 | 0.2448 | 0.1349 |
| No log | 4.0 | 100 | 0.7631 | 0.73 | 0.3757 | 1.5518 | 0.7300 | 0.7026 | 0.1947 | 0.0987 |
| No log | 5.0 | 125 | 0.7412 | 0.775 | 0.3497 | 1.4677 | 0.775 | 0.7456 | 0.2239 | 0.0892 |
| No log | 6.0 | 150 | 0.9198 | 0.72 | 0.3977 | 1.7618 | 0.72 | 0.6958 | 0.2190 | 0.1118 |
| No log | 7.0 | 175 | 0.6117 | 0.81 | 0.2969 | 1.2112 | 0.81 | 0.7726 | 0.2244 | 0.0661 |
| No log | 8.0 | 200 | 0.6296 | 0.78 | 0.3090 | 1.3439 | 0.78 | 0.7443 | 0.1959 | 0.0771 |
| No log | 9.0 | 225 | 0.6850 | 0.785 | 0.3187 | 1.6325 | 0.785 | 0.7651 | 0.2194 | 0.0986 |
| No log | 10.0 | 250 | 0.6304 | 0.79 | 0.3111 | 1.3598 | 0.79 | 0.7821 | 0.2106 | 0.0838 |
| No log | 11.0 | 275 | 0.6668 | 0.775 | 0.3242 | 1.9754 | 0.775 | 0.6942 | 0.2005 | 0.0947 |
| No log | 12.0 | 300 | 0.6795 | 0.775 | 0.3263 | 1.6182 | 0.775 | 0.7692 | 0.2155 | 0.0875 |
| No log | 13.0 | 325 | 0.5156 | 0.85 | 0.2454 | 0.9647 | 0.85 | 0.8378 | 0.2033 | 0.0515 |
| No log | 14.0 | 350 | 0.5341 | 0.845 | 0.2644 | 1.0410 | 0.845 | 0.8402 | 0.2050 | 0.0503 |
| No log | 15.0 | 375 | 0.4678 | 0.865 | 0.2245 | 0.9232 | 0.865 | 0.8564 | 0.1836 | 0.0363 |
| No log | 16.0 | 400 | 0.5620 | 0.82 | 0.2819 | 1.1475 | 0.82 | 0.7980 | 0.2050 | 0.0710 |
| No log | 17.0 | 425 | 0.5253 | 0.83 | 0.2642 | 0.8809 | 0.83 | 0.8145 | 0.1811 | 0.0723 |
| No log | 18.0 | 450 | 0.6295 | 0.815 | 0.2997 | 1.8144 | 0.815 | 0.8062 | 0.2120 | 0.0636 |
| No log | 19.0 | 475 | 0.5748 | 0.83 | 0.2774 | 1.7900 | 0.83 | 0.8200 | 0.1920 | 0.0506 |
| 0.466 | 20.0 | 500 | 0.4704 | 0.84 | 0.2275 | 0.8869 | 0.8400 | 0.8135 | 0.1882 | 0.0472 |
| 0.466 | 21.0 | 525 | 0.5693 | 0.82 | 0.2820 | 1.3315 | 0.82 | 0.8013 | 0.2011 | 0.0821 |
| 0.466 | 22.0 | 550 | 0.5251 | 0.81 | 0.2677 | 1.2663 | 0.81 | 0.7890 | 0.2037 | 0.0745 |
| 0.466 | 23.0 | 575 | 0.5158 | 0.83 | 0.2638 | 1.2621 | 0.83 | 0.8070 | 0.1927 | 0.0614 |
| 0.466 | 24.0 | 600 | 0.5056 | 0.835 | 0.2590 | 1.5337 | 0.835 | 0.8080 | 0.1887 | 0.0617 |
| 0.466 | 25.0 | 625 | 0.4897 | 0.85 | 0.2476 | 1.4341 | 0.85 | 0.8361 | 0.1870 | 0.0627 |
| 0.466 | 26.0 | 650 | 0.4994 | 0.85 | 0.2556 | 1.5846 | 0.85 | 0.8302 | 0.1965 | 0.0718 |
| 0.466 | 27.0 | 675 | 0.4720 | 0.845 | 0.2406 | 1.3093 | 0.845 | 0.8234 | 0.1873 | 0.0704 |
| 0.466 | 28.0 | 700 | 0.4858 | 0.84 | 0.2486 | 1.4459 | 0.8400 | 0.8192 | 0.1676 | 0.0730 |
| 0.466 | 29.0 | 725 | 0.4908 | 0.84 | 0.2510 | 1.4941 | 0.8400 | 0.8159 | 0.1754 | 0.0717 |
| 0.466 | 30.0 | 750 | 0.4805 | 0.855 | 0.2442 | 1.3279 | 0.855 | 0.8334 | 0.1827 | 0.0667 |
| 0.466 | 31.0 | 775 | 0.4783 | 0.845 | 0.2428 | 1.4150 | 0.845 | 0.8264 | 0.1759 | 0.0660 |
| 0.466 | 32.0 | 800 | 0.4822 | 0.855 | 0.2449 | 1.4848 | 0.855 | 0.8322 | 0.1928 | 0.0702 |
| 0.466 | 33.0 | 825 | 0.4845 | 0.84 | 0.2462 | 1.4925 | 0.8400 | 0.8227 | 0.1837 | 0.0692 |
| 0.466 | 34.0 | 850 | 0.4843 | 0.85 | 0.2466 | 1.4881 | 0.85 | 0.8295 | 0.1752 | 0.0683 |
| 0.466 | 35.0 | 875 | 0.4837 | 0.85 | 0.2464 | 1.4939 | 0.85 | 0.8295 | 0.1842 | 0.0718 |
| 0.466 | 36.0 | 900 | 0.4843 | 0.85 | 0.2467 | 1.4910 | 0.85 | 0.8295 | 0.1950 | 0.0705 |
| 0.466 | 37.0 | 925 | 0.4862 | 0.85 | 0.2479 | 1.4938 | 0.85 | 0.8295 | 0.1871 | 0.0713 |
| 0.466 | 38.0 | 950 | 0.4854 | 0.85 | 0.2478 | 1.4945 | 0.85 | 0.8295 | 0.1859 | 0.0719 |
| 0.466 | 39.0 | 975 | 0.4850 | 0.85 | 0.2471 | 1.4891 | 0.85 | 0.8295 | 0.1855 | 0.0724 |
| 0.0749 | 40.0 | 1000 | 0.4869 | 0.85 | 0.2484 | 1.4967 | 0.85 | 0.8295 | 0.1969 | 0.0718 |
| 0.0749 | 41.0 | 1025 | 0.4857 | 0.85 | 0.2482 | 1.5544 | 0.85 | 0.8295 | 0.1904 | 0.0726 |
| 0.0749 | 42.0 | 1050 | 0.4872 | 0.85 | 0.2487 | 1.5559 | 0.85 | 0.8295 | 0.1877 | 0.0732 |
| 0.0749 | 43.0 | 1075 | 0.4873 | 0.85 | 0.2488 | 1.5534 | 0.85 | 0.8295 | 0.1871 | 0.0723 |
| 0.0749 | 44.0 | 1100 | 0.4870 | 0.85 | 0.2489 | 1.5542 | 0.85 | 0.8295 | 0.1787 | 0.0730 |
| 0.0749 | 45.0 | 1125 | 0.4874 | 0.85 | 0.2490 | 1.5544 | 0.85 | 0.8295 | 0.1867 | 0.0724 |
| 0.0749 | 46.0 | 1150 | 0.4868 | 0.85 | 0.2486 | 1.5531 | 0.85 | 0.8295 | 0.1954 | 0.0723 |
| 0.0749 | 47.0 | 1175 | 0.4879 | 0.85 | 0.2493 | 1.5546 | 0.85 | 0.8295 | 0.1842 | 0.0727 |
| 0.0749 | 48.0 | 1200 | 0.4882 | 0.85 | 0.2495 | 1.5537 | 0.85 | 0.8295 | 0.1864 | 0.0730 |
| 0.0749 | 49.0 | 1225 | 0.4875 | 0.85 | 0.2492 | 1.5537 | 0.85 | 0.8295 | 0.1884 | 0.0727 |
| 0.0749 | 50.0 | 1250 | 0.4880 | 0.85 | 0.2494 | 1.5528 | 0.85 | 0.8295 | 0.1877 | 0.0726 |
| 0.0749 | 51.0 | 1275 | 0.4888 | 0.85 | 0.2499 | 1.5539 | 0.85 | 0.8295 | 0.1754 | 0.0725 |
| 0.0749 | 52.0 | 1300 | 0.4894 | 0.85 | 0.2501 | 1.5540 | 0.85 | 0.8295 | 0.1883 | 0.0736 |
| 0.0749 | 53.0 | 1325 | 0.4889 | 0.85 | 0.2501 | 1.5533 | 0.85 | 0.8295 | 0.1708 | 0.0727 |
| 0.0749 | 54.0 | 1350 | 0.4891 | 0.85 | 0.2500 | 1.5531 | 0.85 | 0.8295 | 0.1785 | 0.0729 |
| 0.0749 | 55.0 | 1375 | 0.4904 | 0.85 | 0.2509 | 1.5541 | 0.85 | 0.8295 | 0.1744 | 0.0730 |
| 0.0749 | 56.0 | 1400 | 0.4903 | 0.85 | 0.2507 | 1.5541 | 0.85 | 0.8295 | 0.1897 | 0.0730 |
| 0.0749 | 57.0 | 1425 | 0.4894 | 0.85 | 0.2503 | 1.5536 | 0.85 | 0.8295 | 0.1792 | 0.0730 |
| 0.0749 | 58.0 | 1450 | 0.4889 | 0.85 | 0.2501 | 1.5531 | 0.85 | 0.8295 | 0.1892 | 0.0730 |
| 0.0749 | 59.0 | 1475 | 0.4907 | 0.85 | 0.2511 | 1.5542 | 0.85 | 0.8295 | 0.1767 | 0.0733 |
| 0.0712 | 60.0 | 1500 | 0.4897 | 0.85 | 0.2506 | 1.5540 | 0.85 | 0.8295 | 0.1813 | 0.0732 |
| 0.0712 | 61.0 | 1525 | 0.4906 | 0.85 | 0.2512 | 1.5545 | 0.85 | 0.8295 | 0.1853 | 0.0733 |
| 0.0712 | 62.0 | 1550 | 0.4905 | 0.85 | 0.2512 | 1.5541 | 0.85 | 0.8295 | 0.1723 | 0.0733 |
| 0.0712 | 63.0 | 1575 | 0.4904 | 0.85 | 0.2512 | 1.5543 | 0.85 | 0.8295 | 0.1817 | 0.0732 |
| 0.0712 | 64.0 | 1600 | 0.4915 | 0.85 | 0.2515 | 1.5544 | 0.85 | 0.8295 | 0.1942 | 0.0736 |
| 0.0712 | 65.0 | 1625 | 0.4898 | 0.85 | 0.2506 | 1.5534 | 0.85 | 0.8295 | 0.1712 | 0.0735 |
| 0.0712 | 66.0 | 1650 | 0.4911 | 0.85 | 0.2516 | 1.5548 | 0.85 | 0.8295 | 0.1824 | 0.0733 |
| 0.0712 | 67.0 | 1675 | 0.4908 | 0.85 | 0.2513 | 1.5546 | 0.85 | 0.8295 | 0.1896 | 0.0734 |
| 0.0712 | 68.0 | 1700 | 0.4911 | 0.85 | 0.2516 | 1.5548 | 0.85 | 0.8295 | 0.1744 | 0.0734 |
| 0.0712 | 69.0 | 1725 | 0.4912 | 0.85 | 0.2516 | 1.5541 | 0.85 | 0.8295 | 0.1726 | 0.0733 |
| 0.0712 | 70.0 | 1750 | 0.4910 | 0.85 | 0.2514 | 1.5543 | 0.85 | 0.8295 | 0.1827 | 0.0736 |
| 0.0712 | 71.0 | 1775 | 0.4918 | 0.85 | 0.2520 | 1.5546 | 0.85 | 0.8295 | 0.1909 | 0.0736 |
| 0.0712 | 72.0 | 1800 | 0.4916 | 0.85 | 0.2519 | 1.5545 | 0.85 | 0.8295 | 0.1830 | 0.0734 |
| 0.0712 | 73.0 | 1825 | 0.4913 | 0.85 | 0.2517 | 1.5540 | 0.85 | 0.8295 | 0.1835 | 0.0733 |
| 0.0712 | 74.0 | 1850 | 0.4918 | 0.85 | 0.2521 | 1.5544 | 0.85 | 0.8295 | 0.1831 | 0.0736 |
| 0.0712 | 75.0 | 1875 | 0.4919 | 0.85 | 0.2521 | 1.5548 | 0.85 | 0.8295 | 0.1829 | 0.0734 |
| 0.0712 | 76.0 | 1900 | 0.4916 | 0.85 | 0.2520 | 1.5547 | 0.85 | 0.8295 | 0.1831 | 0.0733 |
| 0.0712 | 77.0 | 1925 | 0.4919 | 0.85 | 0.2521 | 1.5542 | 0.85 | 0.8295 | 0.1732 | 0.0735 |
| 0.0712 | 78.0 | 1950 | 0.4920 | 0.85 | 0.2521 | 1.5541 | 0.85 | 0.8295 | 0.1831 | 0.0734 |
| 0.0712 | 79.0 | 1975 | 0.4920 | 0.85 | 0.2522 | 1.5544 | 0.85 | 0.8295 | 0.1833 | 0.0734 |
| 0.0712 | 80.0 | 2000 | 0.4922 | 0.845 | 0.2523 | 1.5549 | 0.845 | 0.8258 | 0.1859 | 0.0735 |
| 0.0712 | 81.0 | 2025 | 0.4920 | 0.85 | 0.2522 | 1.5542 | 0.85 | 0.8295 | 0.1830 | 0.0732 |
| 0.0712 | 82.0 | 2050 | 0.4920 | 0.845 | 0.2522 | 1.5549 | 0.845 | 0.8258 | 0.1783 | 0.0734 |
| 0.0712 | 83.0 | 2075 | 0.4922 | 0.85 | 0.2524 | 1.5546 | 0.85 | 0.8295 | 0.1832 | 0.0734 |
| 0.0712 | 84.0 | 2100 | 0.4920 | 0.845 | 0.2522 | 1.5543 | 0.845 | 0.8258 | 0.1784 | 0.0735 |
| 0.0712 | 85.0 | 2125 | 0.4921 | 0.845 | 0.2523 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0735 |
| 0.0712 | 86.0 | 2150 | 0.4921 | 0.85 | 0.2523 | 1.5545 | 0.85 | 0.8295 | 0.1836 | 0.0733 |
| 0.0712 | 87.0 | 2175 | 0.4924 | 0.85 | 0.2524 | 1.5547 | 0.85 | 0.8295 | 0.1836 | 0.0734 |
| 0.0712 | 88.0 | 2200 | 0.4925 | 0.845 | 0.2524 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0735 |
| 0.0712 | 89.0 | 2225 | 0.4924 | 0.85 | 0.2525 | 1.5548 | 0.85 | 0.8295 | 0.1835 | 0.0734 |
| 0.0712 | 90.0 | 2250 | 0.4921 | 0.845 | 0.2523 | 1.5545 | 0.845 | 0.8258 | 0.1688 | 0.0735 |
| 0.0712 | 91.0 | 2275 | 0.4925 | 0.845 | 0.2525 | 1.5546 | 0.845 | 0.8258 | 0.1785 | 0.0735 |
| 0.0712 | 92.0 | 2300 | 0.4924 | 0.845 | 0.2524 | 1.5546 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 93.0 | 2325 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 94.0 | 2350 | 0.4924 | 0.845 | 0.2525 | 1.5547 | 0.845 | 0.8258 | 0.1786 | 0.0736 |
| 0.0712 | 95.0 | 2375 | 0.4926 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 96.0 | 2400 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 97.0 | 2425 | 0.4925 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0735 |
| 0.0712 | 98.0 | 2450 | 0.4926 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 99.0 | 2475 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0711 | 100.0 | 2500 | 0.4925 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jucamohedano/example-california-housing
|
jucamohedano
| 2023-07-13T17:20:04Z | 0 | 0 |
sklearn
|
[
"sklearn",
"skops",
"tabular-regression",
"region:us"
] |
tabular-regression
| 2023-07-12T19:23:39Z |
---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-regression
model_format: skops
model_file: model.skops
widget:
structuredData:
AveBedrms:
- 0.9290780141843972
- 0.9458483754512635
- 1.087360594795539
AveOccup:
- 3.1134751773049647
- 3.0613718411552346
- 3.2657992565055762
AveRooms:
- 6.304964539007092
- 6.945848375451264
- 3.8884758364312266
HouseAge:
- 17.0
- 15.0
- 24.0
Latitude:
- 34.23
- 36.84
- 34.04
Longitude:
- -117.41
- -119.77
- -118.3
MedInc:
- 6.1426
- 5.3886
- 1.7109
Population:
- 439.0
- 848.0
- 1757.0
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
[More Information Needed]
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-----------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| cv | |
| estimators | [('knn@5', Pipeline(steps=[('select_cols',<br /> ColumnTransformer(transformers=[('long_and_lat', 'passthrough',<br /> ['Longitude', 'Latitude'])])),<br /> ('knn', KNeighborsRegressor())]))] |
| final_estimator__alpha | 0.9 |
| final_estimator__ccp_alpha | 0.0 |
| final_estimator__criterion | friedman_mse |
| final_estimator__init | |
| final_estimator__learning_rate | 0.1 |
| final_estimator__loss | squared_error |
| final_estimator__max_depth | 3 |
| final_estimator__max_features | |
| final_estimator__max_leaf_nodes | |
| final_estimator__min_impurity_decrease | 0.0 |
| final_estimator__min_samples_leaf | 1 |
| final_estimator__min_samples_split | 2 |
| final_estimator__min_weight_fraction_leaf | 0.0 |
| final_estimator__n_estimators | 500 |
| final_estimator__n_iter_no_change | |
| final_estimator__random_state | 0 |
| final_estimator__subsample | 1.0 |
| final_estimator__tol | 0.0001 |
| final_estimator__validation_fraction | 0.1 |
| final_estimator__verbose | 0 |
| final_estimator__warm_start | False |
| final_estimator | GradientBoostingRegressor(n_estimators=500, random_state=0) |
| n_jobs | |
| passthrough | True |
| verbose | 0 |
| knn@5 | Pipeline(steps=[('select_cols',<br /> ColumnTransformer(transformers=[('long_and_lat', 'passthrough',<br /> ['Longitude', 'Latitude'])])),<br /> ('knn', KNeighborsRegressor())]) |
| knn@5__memory | |
| knn@5__steps | [('select_cols', ColumnTransformer(transformers=[('long_and_lat', 'passthrough',<br /> ['Longitude', 'Latitude'])])), ('knn', KNeighborsRegressor())] |
| knn@5__verbose | False |
| knn@5__select_cols | ColumnTransformer(transformers=[('long_and_lat', 'passthrough',<br /> ['Longitude', 'Latitude'])]) |
| knn@5__knn | KNeighborsRegressor() |
| knn@5__select_cols__n_jobs | |
| knn@5__select_cols__remainder | drop |
| knn@5__select_cols__sparse_threshold | 0.3 |
| knn@5__select_cols__transformer_weights | |
| knn@5__select_cols__transformers | [('long_and_lat', 'passthrough', ['Longitude', 'Latitude'])] |
| knn@5__select_cols__verbose | False |
| knn@5__select_cols__verbose_feature_names_out | True |
| knn@5__select_cols__long_and_lat | passthrough |
| knn@5__knn__algorithm | auto |
| knn@5__knn__leaf_size | 30 |
| knn@5__knn__metric | minkowski |
| knn@5__knn__metric_params | |
| knn@5__knn__n_jobs | |
| knn@5__knn__n_neighbors | 5 |
| knn@5__knn__p | 2 |
| knn@5__knn__weights | uniform |
</details>
### Model Plot
The Hub renders an interactive HTML diagram here; in plain text, the fitted pipeline is:

```
StackingRegressor(
    estimators=[('knn@5',
                 Pipeline(steps=[('select_cols',
                                  ColumnTransformer(transformers=[('long_and_lat',
                                                                   'passthrough',
                                                                   ['Longitude', 'Latitude'])])),
                                 ('knn', KNeighborsRegressor())]))],
    final_estimator=GradientBoostingRegressor(n_estimators=500, random_state=0),
    passthrough=True)
```
## Evaluation Results
[More Information Needed]
# How to Get Started with the Model
[More Information Needed]
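A minimal loading sketch with `skops` (not part of the original card). The feature values reuse the first example row from the widget metadata above; `trusted=True` reflects older skops versions, while newer ones expect a list of trusted types.

```python
# A minimal sketch, not part of the original card; see the assumptions above.
import pandas as pd
from skops.hub_utils import download
from skops.io import load

download(repo_id="jucamohedano/example-california-housing", dst="model_dir")
model = load("model_dir/model.skops", trusted=True)  # see note on `trusted`

# Column names must match the training frame because the pipeline
# selects "Longitude" and "Latitude" by name.
X = pd.DataFrame([{
    "MedInc": 6.1426, "HouseAge": 17.0, "AveRooms": 6.304964539007092,
    "AveBedrms": 0.9290780141843972, "Population": 439.0,
    "AveOccup": 3.1134751773049647, "Latitude": 34.23, "Longitude": -117.41,
}])
print(model.predict(X))
```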
# Model Card Authors
This model card is written by following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
|
sheileshr/roaModel
|
sheileshr
| 2023-07-13T17:14:57Z | 0 | 0 |
keras
|
[
"keras",
"zero-shot-classification",
"en",
"dataset:openchat/openchat_sharegpt4_dataset",
"arxiv:1910.09700",
"license:lgpl-3.0",
"region:us"
] |
zero-shot-classification
| 2023-07-13T17:11:47Z |
---
license: lgpl-3.0
datasets:
- openchat/openchat_sharegpt4_dataset
language:
- en
metrics:
- accuracy
library_name: keras
pipeline_tag: zero-shot-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-09_weighted
|
jordyvl
| 2023-07-13T17:10:33Z | 142 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-09T08:06:07Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-09_weighted
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-09_weighted
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Accuracy: 0.9400
- Exit 0 Accuracy: 0.2580
- Exit 1 Accuracy: 0.5214
- Exit 2 Accuracy: 0.7781
- Exit 3 Accuracy: 0.8564
- Exit 4 Accuracy: 0.9330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
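The total train batch size is the per-device batch size multiplied by the gradient accumulation steps; a quick sanity check of the values above:

```python
# effective (total) train batch size = per-device batch size * gradient accumulation steps
per_device_batch = 6
grad_accum_steps = 24
assert per_device_batch * grad_accum_steps == 144  # matches total_train_batch_size above
```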
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| 2.4115 | 1.0 | 2222 | 0.2907 | 0.9172 | 0.209 | 0.3731 | 0.645 | 0.7651 | 0.9109 |
| 2.0272 | 2.0 | 4444 | 0.2444 | 0.9310 | 0.2243 | 0.4579 | 0.7297 | 0.8172 | 0.9234 |
| 1.8196 | 3.0 | 6666 | 0.2268 | 0.9350 | 0.2383 | 0.4979 | 0.7589 | 0.8439 | 0.9285 |
| 1.7287 | 4.0 | 8888 | 0.2216 | 0.9387 | 0.2438 | 0.5163 | 0.7728 | 0.8533 | 0.9315 |
| 1.6664 | 5.0 | 11110 | 0.2223 | 0.9400 | 0.2580 | 0.5214 | 0.7781 | 0.8564 | 0.9330 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mindchain/ybelkada-falcon-7b-sharded-bf16-yizhongw_self_instruct
|
mindchain
| 2023-07-13T17:09:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T16:17:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
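This corresponds roughly to the following `BitsAndBytesConfig` — a minimal sketch; the base-model id is inferred from the repo name and is an assumption, not stated in this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# base model id assumed from the repo name, not confirmed by the card
base_model = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```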
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut
|
NasimB
| 2023-07-13T16:46:31Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T15:02:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
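These settings map onto Hugging Face `TrainingArguments` roughly as follows — a minimal sketch using the standard Transformers argument names; the `output_dir` is illustrative:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed precision
)
```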
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7018 | 0.29 | 500 | 5.6444 |
| 5.3406 | 0.58 | 1000 | 5.2034 |
| 4.9891 | 0.88 | 1500 | 4.9570 |
| 4.7257 | 1.17 | 2000 | 4.8069 |
| 4.5644 | 1.46 | 2500 | 4.6833 |
| 4.4557 | 1.75 | 3000 | 4.5769 |
| 4.3292 | 2.04 | 3500 | 4.4986 |
| 4.137 | 2.34 | 4000 | 4.4485 |
| 4.1027 | 2.63 | 4500 | 4.3900 |
| 4.064 | 2.92 | 5000 | 4.3414 |
| 3.8721 | 3.21 | 5500 | 4.3322 |
| 3.8018 | 3.5 | 6000 | 4.3007 |
| 3.7893 | 3.79 | 6500 | 4.2661 |
| 3.6925 | 4.09 | 7000 | 4.2635 |
| 3.5253 | 4.38 | 7500 | 4.2599 |
| 3.5119 | 4.67 | 8000 | 4.2446 |
| 3.506 | 4.96 | 8500 | 4.2295 |
| 3.3528 | 5.25 | 9000 | 4.2434 |
| 3.3251 | 5.55 | 9500 | 4.2431 |
| 3.325 | 5.84 | 10000 | 4.2415 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
anyachan/ernalora
|
anyachan
| 2023-07-13T16:46:05Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T16:41:22Z |
---
license: creativeml-openrail-m
---
|
grace-pro/afriberta-base-finetuned-hausa-2e-3
|
grace-pro
| 2023-07-13T16:45:14Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T16:28:08Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-base-finetuned-hausa-2e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-base-finetuned-hausa-2e-3
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2360
- Precision: 0.1719
- Recall: 0.0276
- F1: 0.0476
- Accuracy: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2428 | 1.0 | 1312 | 0.2368 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2435 | 2.0 | 2624 | 0.2385 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2428 | 3.0 | 3936 | 0.2371 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2434 | 4.0 | 5248 | 0.2359 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2411 | 5.0 | 6560 | 0.2360 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
eliaschaves/ppo-Huggy
|
eliaschaves
| 2023-07-13T16:43:46Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-13T16:43:41Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: eliaschaves/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
brunogs/distilbert-base-uncased-finetuned-cola
|
brunogs
| 2023-07-13T16:42:33Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T15:53:06Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: brunogs/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# brunogs/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1860
- Validation Loss: 0.5510
- Train Matthews Correlation: 0.5076
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5165 | 0.4641 | 0.4474 | 0 |
| 0.3176 | 0.4989 | 0.5060 | 1 |
| 0.1860 | 0.5510 | 0.5076 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
1aurent/poca-SoccerTwos
|
1aurent
| 2023-07-13T16:33:04Z | 25 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-13T15:40:45Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: 1aurent/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
miasik/Yohan-Anything.V5
|
miasik
| 2023-07-13T16:27:23Z | 0 | 0 | null |
[
"en",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-07T07:03:52Z |
---
license: creativeml-openrail-m
language:
- en
---
1. The original Yohan checkpoint was CLIP-fixed and pruned.
2. Anything.V5 was merged as a "train difference" with (Yohan-Anything.V3)*1 using Supermerger.
3. ClearVAE.V2.3 was baked in during the merge.


|
grace-pro/afriberta-large-finetuned-hausa-2e-3
|
grace-pro
| 2023-07-13T16:24:53Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T16:02:55Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-large-finetuned-hausa-2e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-large-finetuned-hausa-2e-3
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- Precision: 0.1719
- Recall: 0.0276
- F1: 0.0476
- Accuracy: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2428 | 1.0 | 1312 | 0.2358 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2436 | 2.0 | 2624 | 0.2366 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2429 | 3.0 | 3936 | 0.2365 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2434 | 4.0 | 5248 | 0.2358 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2411 | 5.0 | 6560 | 0.2359 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
alesthehuman/ppo-LunarLander-v2-unit8
|
alesthehuman
| 2023-07-13T16:15:30Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T15:26:57Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -13.55 +/- 101.24
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'alesthehuman/ppo-LunarLander-v2-unit8',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
flaviagiammarino/medsam-vit-base
|
flaviagiammarino
| 2023-07-13T15:43:56Z | 9,672 | 11 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"sam",
"mask-generation",
"medical",
"vision",
"arxiv:2304.12306",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
mask-generation
| 2023-07-11T07:37:57Z |
---
license: apache-2.0
tags:
- medical
- vision
---
# Model Card for MedSAM
MedSAM is a fine-tuned version of [SAM](https://huggingface.co/docs/transformers/main/model_doc/sam) for the medical domain.
This repository is based on the paper, code and pre-trained model released by the authors in July 2023.
## Model Description
MedSAM was trained on a large-scale medical image segmentation dataset of 1,090,486 image-mask pairs collected from different publicly available sources.
The image-mask pairs cover 15 imaging modalities and over 30 cancer types.
MedSAM was initialized using the pre-trained SAM model with the ViT-Base backbone. The prompt encoder weights were frozen, while the image encoder and mask decoder weights were updated during training.
The training was performed for 100 epochs with a batch size of 160 using the AdamW optimizer with a learning rate of 1e-4 and a weight decay of 0.01.
- **Repository:** [MedSAM Official GitHub Repository](https://github.com/bowang-lab/medsam)
- **Paper:** [Segment Anything in Medical Images](https://arxiv.org/abs/2304.12306v1)
## Usage
```python
import requests
import numpy as np
import matplotlib.pyplot as plt
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

# Load the fine-tuned model and its processor.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("flaviagiammarino/medsam-vit-base").to(device)
processor = SamProcessor.from_pretrained("flaviagiammarino/medsam-vit-base")

# Fetch an example image and define a bounding-box prompt (x_min, y_min, x_max, y_max).
img_url = "https://huggingface.co/flaviagiammarino/medsam-vit-base/resolve/main/scripts/input.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_boxes = [95., 255., 190., 350.]

# Run the model and map the predicted mask probabilities back to the original image size.
inputs = processor(raw_image, input_boxes=[[input_boxes]], return_tensors="pt").to(device)
outputs = model(**inputs, multimask_output=False)
probs = processor.image_processor.post_process_masks(
    outputs.pred_masks.sigmoid().cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
    binarize=False,
)

def show_mask(mask, ax, random_color):
    """Overlay a semi-transparent segmentation mask on a matplotlib axis."""
    if random_color:
        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
    else:
        color = np.array([251/255, 252/255, 30/255, 0.6])
    h, w = mask.shape[-2:]
    mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
    ax.imshow(mask_image)

def show_box(box, ax):
    """Draw a bounding-box prompt on a matplotlib axis."""
    x0, y0 = box[0], box[1]
    w, h = box[2] - box[0], box[3] - box[1]
    ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor="blue", facecolor=(0, 0, 0, 0), lw=2))

# Plot the input with its box prompt next to the predicted segmentation.
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(np.array(raw_image))
show_box(input_boxes, ax[0])
ax[0].set_title("Input Image and Bounding Box")
ax[0].axis("off")
ax[1].imshow(np.array(raw_image))
show_mask(mask=probs[0] > 0.5, ax=ax[1], random_color=False)
show_box(input_boxes, ax[1])
ax[1].set_title("MedSAM Segmentation")
ax[1].axis("off")
plt.show()
```

## Additional Information
### Licensing Information
The authors have released the model code and pre-trained checkpoint under the [Apache License 2.0](https://github.com/bowang-lab/MedSAM/blob/main/LICENSE).
### Citation Information
```
@article{ma2023segment,
title={Segment anything in medical images},
author={Ma, Jun and Wang, Bo},
journal={arXiv preprint arXiv:2304.12306},
year={2023}
}
```
|
Faith-nchifor/distilbert-base-uncased-finetuned-cola-2
|
Faith-nchifor
| 2023-07-13T15:32:02Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T15:27:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.1229361555243494
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0843
- Matthews Correlation: 0.1229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 381 | 3.9140 | 0.1059 |
| 0.0791 | 2.0 | 762 | 4.4408 | 0.0927 |
| 0.0561 | 3.0 | 1143 | 3.5105 | 0.1140 |
| 0.041 | 4.0 | 1524 | 4.0843 | 0.1229 |
| 0.041 | 5.0 | 1905 | 4.4197 | 0.1194 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
1aurent/rl_course_vizdoom_defend_the_line
|
1aurent
| 2023-07-13T15:24:15Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T15:24:07Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_defend_the_line
type: doom_defend_the_line
metrics:
- type: mean_reward
value: 20.10 +/- 3.39
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_defend_the_line** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r 1aurent/rl_course_vizdoom_defend_the_line
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_defend_the_line --train_dir=./train_dir --experiment=rl_course_vizdoom_defend_the_line
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_defend_the_line --train_dir=./train_dir --experiment=rl_course_vizdoom_defend_the_line --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
grace-pro/xlmr-base-finetuned-igbo-2e-4
|
grace-pro
| 2023-07-13T15:21:21Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T14:46:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlmr-base-finetuned-igbo-2e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-base-finetuned-igbo-2e-4
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3850
- Precision: 0.0223
- Recall: 0.0016
- F1: 0.0029
- Accuracy: 0.8715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3879 | 1.0 | 1257 | 0.3848 | 0.0223 | 0.0016 | 0.0029 | 0.8715 |
| 0.3885 | 2.0 | 2514 | 0.3861 | 0.0223 | 0.0016 | 0.0029 | 0.8715 |
| 0.3823 | 3.0 | 3771 | 0.3847 | 0.0223 | 0.0016 | 0.0029 | 0.8715 |
| 0.3855 | 4.0 | 5028 | 0.3848 | 0.0223 | 0.0016 | 0.0029 | 0.8715 |
| 0.3846 | 5.0 | 6285 | 0.3850 | 0.0223 | 0.0016 | 0.0029 | 0.8715 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
orya16215/ppo-Huggy
|
orya16215
| 2023-07-13T15:17:58Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-13T15:17:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: orya16215/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Arthuerwang/output_models_girls
|
Arthuerwang
| 2023-07-13T15:13:35Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T11:35:54Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of gril in anime
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Arthuerwang/output_models_girls
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained with the instance prompt "a photo of gril in anime" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
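A minimal inference sketch with Diffusers — the pipeline call and generation settings are illustrative assumptions, not taken from the original card:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Arthuerwang/output_models_girls", torch_dtype=torch.float16
).to("cuda")
# The instance prompt from training; steps count is an assumption.
image = pipe("a photo of gril in anime", num_inference_steps=50).images[0]
image.save("sample.png")
```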
|
chrisjsc96/data
|
chrisjsc96
| 2023-07-13T14:52:25Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-13T14:31:16Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dead-owwl/falcon7b-ft-haystack
|
dead-owwl
| 2023-07-13T14:50:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T14:45:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
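Expressed as a `BitsAndBytesConfig`, the settings above would look roughly like this (a sketch; only the listed values come from the card):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,   # double quantization enabled for this run
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```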
### Framework versions
- PEFT 0.4.0.dev0
|
ayanban011/vit-base_tobacco_lr1e-5_wr_0.05_wd_0.1
|
ayanban011
| 2023-07-13T14:14:51Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T10:54:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_lr1e-5_wr_0.05_wd_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_lr1e-5_wr_0.05_wd_0.1
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9592
- Accuracy: 0.775
- Brier Loss: 0.3981
- Nll: 1.5416
- F1 Micro: 0.775
- F1 Macro: 0.7418
- Ece: 0.2227
- Aurc: 0.1082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 0.7440 | 0.815 | 0.3076 | 1.1842 | 0.815 | 0.7942 | 0.2216 | 0.0733 |
| No log | 2.0 | 25 | 0.7436 | 0.82 | 0.3075 | 1.1869 | 0.82 | 0.8049 | 0.2132 | 0.0741 |
| No log | 2.96 | 37 | 0.7454 | 0.81 | 0.3085 | 1.1880 | 0.81 | 0.7914 | 0.2312 | 0.0755 |
| No log | 4.0 | 50 | 0.7439 | 0.815 | 0.3077 | 1.1846 | 0.815 | 0.7926 | 0.2369 | 0.0760 |
| No log | 4.96 | 62 | 0.7370 | 0.82 | 0.3040 | 1.1982 | 0.82 | 0.8028 | 0.2374 | 0.0745 |
| No log | 6.0 | 75 | 0.7507 | 0.82 | 0.3112 | 1.1980 | 0.82 | 0.8005 | 0.2513 | 0.0809 |
| No log | 6.96 | 87 | 0.7370 | 0.805 | 0.3060 | 1.1778 | 0.805 | 0.7841 | 0.2522 | 0.0746 |
| No log | 8.0 | 100 | 0.7437 | 0.81 | 0.3076 | 1.1846 | 0.81 | 0.7877 | 0.2301 | 0.0804 |
| No log | 8.96 | 112 | 0.7311 | 0.81 | 0.3031 | 1.1975 | 0.81 | 0.7920 | 0.2084 | 0.0753 |
| No log | 10.0 | 125 | 0.7305 | 0.8 | 0.3020 | 1.1785 | 0.8000 | 0.7792 | 0.2131 | 0.0777 |
| No log | 10.96 | 137 | 0.7478 | 0.805 | 0.3119 | 1.3979 | 0.805 | 0.7860 | 0.2133 | 0.0827 |
| No log | 12.0 | 150 | 0.7469 | 0.805 | 0.3082 | 1.3337 | 0.805 | 0.7844 | 0.2213 | 0.0843 |
| No log | 12.96 | 162 | 0.7545 | 0.805 | 0.3114 | 1.4280 | 0.805 | 0.7893 | 0.2092 | 0.0935 |
| No log | 14.0 | 175 | 0.7283 | 0.795 | 0.3012 | 1.1856 | 0.795 | 0.7739 | 0.2182 | 0.0806 |
| No log | 14.96 | 187 | 0.7219 | 0.815 | 0.2972 | 1.2792 | 0.815 | 0.8043 | 0.2024 | 0.0734 |
| No log | 16.0 | 200 | 0.7284 | 0.805 | 0.3001 | 1.2528 | 0.805 | 0.7899 | 0.2068 | 0.0858 |
| No log | 16.96 | 212 | 0.7191 | 0.805 | 0.2981 | 1.3067 | 0.805 | 0.7919 | 0.2062 | 0.0809 |
| No log | 18.0 | 225 | 0.7221 | 0.8 | 0.3011 | 1.1747 | 0.8000 | 0.7792 | 0.2091 | 0.0803 |
| No log | 18.96 | 237 | 0.7253 | 0.81 | 0.2995 | 1.3143 | 0.81 | 0.7955 | 0.2136 | 0.0889 |
| No log | 20.0 | 250 | 0.7186 | 0.8 | 0.2981 | 1.1839 | 0.8000 | 0.7819 | 0.1899 | 0.0812 |
| No log | 20.96 | 262 | 0.7247 | 0.805 | 0.3012 | 1.2501 | 0.805 | 0.7925 | 0.2214 | 0.0891 |
| No log | 22.0 | 275 | 0.7317 | 0.805 | 0.3058 | 1.3767 | 0.805 | 0.7853 | 0.2141 | 0.0893 |
| No log | 22.96 | 287 | 0.7250 | 0.81 | 0.3031 | 1.3683 | 0.81 | 0.7907 | 0.1886 | 0.0838 |
| No log | 24.0 | 300 | 0.7137 | 0.805 | 0.2983 | 1.3088 | 0.805 | 0.7851 | 0.1799 | 0.0782 |
| No log | 24.96 | 312 | 0.7334 | 0.81 | 0.3070 | 1.4296 | 0.81 | 0.7909 | 0.1903 | 0.0898 |
| No log | 26.0 | 325 | 0.7284 | 0.81 | 0.3035 | 1.2467 | 0.81 | 0.7984 | 0.2152 | 0.0916 |
| No log | 26.96 | 337 | 0.7242 | 0.805 | 0.3020 | 1.3077 | 0.805 | 0.7862 | 0.2071 | 0.0891 |
| No log | 28.0 | 350 | 0.7285 | 0.81 | 0.3028 | 1.3756 | 0.81 | 0.7910 | 0.2158 | 0.0915 |
| No log | 28.96 | 362 | 0.7253 | 0.8 | 0.3016 | 1.3714 | 0.8000 | 0.7716 | 0.2057 | 0.0894 |
| No log | 30.0 | 375 | 0.7321 | 0.8 | 0.3068 | 1.3688 | 0.8000 | 0.7736 | 0.1943 | 0.0885 |
| No log | 30.96 | 387 | 0.7294 | 0.8 | 0.3047 | 1.3713 | 0.8000 | 0.7746 | 0.2138 | 0.0900 |
| No log | 32.0 | 400 | 0.7296 | 0.81 | 0.3054 | 1.3749 | 0.81 | 0.7921 | 0.2074 | 0.0910 |
| No log | 32.96 | 412 | 0.7311 | 0.805 | 0.3061 | 1.3704 | 0.805 | 0.7811 | 0.1984 | 0.0920 |
| No log | 34.0 | 425 | 0.7291 | 0.805 | 0.3049 | 1.3686 | 0.805 | 0.7811 | 0.2126 | 0.0916 |
| No log | 34.96 | 437 | 0.7301 | 0.795 | 0.3048 | 1.3712 | 0.795 | 0.7654 | 0.1917 | 0.0904 |
| No log | 36.0 | 450 | 0.7318 | 0.81 | 0.3072 | 1.3695 | 0.81 | 0.7844 | 0.1976 | 0.0900 |
| No log | 36.96 | 462 | 0.7403 | 0.795 | 0.3102 | 1.3712 | 0.795 | 0.7656 | 0.2039 | 0.0934 |
| No log | 38.0 | 475 | 0.7376 | 0.795 | 0.3095 | 1.3653 | 0.795 | 0.7654 | 0.1982 | 0.0920 |
| No log | 38.96 | 487 | 0.7326 | 0.805 | 0.3049 | 1.3815 | 0.805 | 0.7744 | 0.1820 | 0.0948 |
| 0.1331 | 40.0 | 500 | 0.7268 | 0.8 | 0.3038 | 1.3702 | 0.8000 | 0.7704 | 0.2051 | 0.0899 |
| 0.1331 | 40.96 | 512 | 0.7371 | 0.8 | 0.3074 | 1.3824 | 0.8000 | 0.7684 | 0.1946 | 0.0939 |
| 0.1331 | 42.0 | 525 | 0.7374 | 0.81 | 0.3107 | 1.3600 | 0.81 | 0.7844 | 0.2109 | 0.0910 |
| 0.1331 | 42.96 | 537 | 0.7366 | 0.8 | 0.3071 | 1.4434 | 0.8000 | 0.7776 | 0.2042 | 0.0935 |
| 0.1331 | 44.0 | 550 | 0.7362 | 0.805 | 0.3083 | 1.3721 | 0.805 | 0.7829 | 0.1782 | 0.0929 |
| 0.1331 | 44.96 | 562 | 0.7389 | 0.8 | 0.3110 | 1.3695 | 0.8000 | 0.7704 | 0.1966 | 0.0917 |
| 0.1331 | 46.0 | 575 | 0.7426 | 0.79 | 0.3108 | 1.5068 | 0.79 | 0.7644 | 0.1938 | 0.0968 |
| 0.1331 | 46.96 | 587 | 0.7395 | 0.8 | 0.3096 | 1.3760 | 0.8000 | 0.7704 | 0.1951 | 0.0927 |
| 0.1331 | 48.0 | 600 | 0.7540 | 0.805 | 0.3185 | 1.4936 | 0.805 | 0.7821 | 0.1958 | 0.0979 |
| 0.1331 | 48.96 | 612 | 0.7413 | 0.805 | 0.3116 | 1.4368 | 0.805 | 0.7829 | 0.1835 | 0.0955 |
| 0.1331 | 50.0 | 625 | 0.7543 | 0.805 | 0.3167 | 1.4402 | 0.805 | 0.7831 | 0.2143 | 0.0974 |
| 0.1331 | 50.96 | 637 | 0.7378 | 0.805 | 0.3087 | 1.3850 | 0.805 | 0.7829 | 0.1886 | 0.0935 |
| 0.1331 | 52.0 | 650 | 0.7545 | 0.795 | 0.3175 | 1.3873 | 0.795 | 0.7656 | 0.2007 | 0.0957 |
| 0.1331 | 52.96 | 662 | 0.7464 | 0.8 | 0.3140 | 1.3734 | 0.8000 | 0.7707 | 0.1872 | 0.0938 |
| 0.1331 | 54.0 | 675 | 0.7439 | 0.8 | 0.3120 | 1.3765 | 0.8000 | 0.7704 | 0.2036 | 0.0942 |
| 0.1331 | 54.96 | 687 | 0.7506 | 0.8 | 0.3150 | 1.3788 | 0.8000 | 0.7707 | 0.1788 | 0.0959 |
| 0.1331 | 56.0 | 700 | 0.7511 | 0.805 | 0.3158 | 1.4378 | 0.805 | 0.7829 | 0.2054 | 0.0955 |
| 0.1331 | 56.96 | 712 | 0.7587 | 0.805 | 0.3196 | 1.4494 | 0.805 | 0.7831 | 0.1844 | 0.0972 |
| 0.1331 | 58.0 | 725 | 0.7505 | 0.8 | 0.3154 | 1.3759 | 0.8000 | 0.7704 | 0.1913 | 0.0953 |
| 0.1331 | 58.96 | 737 | 0.7553 | 0.79 | 0.3167 | 1.4457 | 0.79 | 0.7549 | 0.1977 | 0.0959 |
| 0.1331 | 60.0 | 750 | 0.7543 | 0.8 | 0.3175 | 1.3807 | 0.8000 | 0.7707 | 0.1963 | 0.0953 |
| 0.1331 | 60.96 | 762 | 0.7592 | 0.795 | 0.3200 | 1.3759 | 0.795 | 0.7681 | 0.1986 | 0.0961 |
| 0.1331 | 62.0 | 775 | 0.7557 | 0.795 | 0.3185 | 1.3785 | 0.795 | 0.7634 | 0.1971 | 0.0948 |
| 0.1331 | 62.96 | 787 | 0.7591 | 0.79 | 0.3200 | 1.4466 | 0.79 | 0.7613 | 0.2033 | 0.0963 |
| 0.1331 | 64.0 | 800 | 0.7624 | 0.795 | 0.3210 | 1.4423 | 0.795 | 0.7621 | 0.2030 | 0.0962 |
| 0.1331 | 64.96 | 812 | 0.7674 | 0.79 | 0.3240 | 1.4454 | 0.79 | 0.7596 | 0.1973 | 0.0969 |
| 0.1331 | 66.0 | 825 | 0.7645 | 0.79 | 0.3224 | 1.4497 | 0.79 | 0.7611 | 0.1999 | 0.0964 |
| 0.1331 | 66.96 | 837 | 0.7652 | 0.795 | 0.3234 | 1.4418 | 0.795 | 0.7668 | 0.1819 | 0.0968 |
| 0.1331 | 68.0 | 850 | 0.7695 | 0.795 | 0.3250 | 1.4969 | 0.795 | 0.7606 | 0.1914 | 0.0979 |
| 0.1331 | 68.96 | 862 | 0.7708 | 0.785 | 0.3258 | 1.4482 | 0.785 | 0.7516 | 0.1954 | 0.0976 |
| 0.1331 | 70.0 | 875 | 0.7691 | 0.795 | 0.3249 | 1.4960 | 0.795 | 0.7673 | 0.1895 | 0.0976 |
| 0.1331 | 70.96 | 887 | 0.7741 | 0.785 | 0.3272 | 1.5043 | 0.785 | 0.7519 | 0.1898 | 0.0982 |
| 0.1331 | 72.0 | 900 | 0.7788 | 0.79 | 0.3293 | 1.5094 | 0.79 | 0.7611 | 0.1738 | 0.0995 |
| 0.1331 | 72.96 | 912 | 0.7837 | 0.785 | 0.3329 | 1.5306 | 0.785 | 0.7577 | 0.2002 | 0.1004 |
| 0.1331 | 74.0 | 925 | 0.7755 | 0.785 | 0.3280 | 1.4985 | 0.785 | 0.7514 | 0.1906 | 0.0981 |
| 0.1331 | 74.96 | 937 | 0.7797 | 0.785 | 0.3308 | 1.4611 | 0.785 | 0.7580 | 0.1925 | 0.0994 |
| 0.1331 | 76.0 | 950 | 0.7744 | 0.785 | 0.3273 | 1.4441 | 0.785 | 0.7519 | 0.1929 | 0.0976 |
| 0.1331 | 76.96 | 962 | 0.7766 | 0.785 | 0.3295 | 1.4420 | 0.785 | 0.7516 | 0.1899 | 0.0972 |
| 0.1331 | 78.0 | 975 | 0.7888 | 0.785 | 0.3339 | 1.4991 | 0.785 | 0.7573 | 0.1879 | 0.0998 |
| 0.1331 | 78.96 | 987 | 0.7765 | 0.795 | 0.3292 | 1.4915 | 0.795 | 0.7663 | 0.1750 | 0.0948 |
| 0.071 | 80.0 | 1000 | 0.7821 | 0.785 | 0.3303 | 1.4990 | 0.785 | 0.7519 | 0.1940 | 0.0986 |
| 0.071 | 80.96 | 1012 | 0.7860 | 0.79 | 0.3330 | 1.4977 | 0.79 | 0.7644 | 0.1698 | 0.0976 |
| 0.071 | 82.0 | 1025 | 0.7882 | 0.78 | 0.3342 | 1.5243 | 0.78 | 0.7482 | 0.1930 | 0.1006 |
| 0.071 | 82.96 | 1037 | 0.7879 | 0.78 | 0.3333 | 1.5037 | 0.78 | 0.7491 | 0.2055 | 0.0995 |
| 0.071 | 84.0 | 1050 | 0.7842 | 0.78 | 0.3326 | 1.4959 | 0.78 | 0.7488 | 0.1945 | 0.0985 |
| 0.071 | 84.96 | 1062 | 0.7866 | 0.78 | 0.3338 | 1.4961 | 0.78 | 0.7488 | 0.1877 | 0.0982 |
| 0.071 | 86.0 | 1075 | 0.7931 | 0.785 | 0.3369 | 1.5006 | 0.785 | 0.7573 | 0.1898 | 0.1003 |
| 0.071 | 86.96 | 1087 | 0.7937 | 0.78 | 0.3360 | 1.5043 | 0.78 | 0.7488 | 0.1828 | 0.0999 |
| 0.071 | 88.0 | 1100 | 0.7948 | 0.78 | 0.3374 | 1.5034 | 0.78 | 0.7488 | 0.1893 | 0.0999 |
| 0.071 | 88.96 | 1112 | 0.7962 | 0.78 | 0.3372 | 1.5078 | 0.78 | 0.7494 | 0.1943 | 0.1011 |
| 0.071 | 90.0 | 1125 | 0.7956 | 0.785 | 0.3377 | 1.5039 | 0.785 | 0.7516 | 0.1918 | 0.0999 |
| 0.071 | 90.96 | 1137 | 0.7996 | 0.78 | 0.3382 | 1.5060 | 0.78 | 0.7491 | 0.1982 | 0.1013 |
| 0.071 | 92.0 | 1150 | 0.7980 | 0.78 | 0.3381 | 1.5023 | 0.78 | 0.7488 | 0.1902 | 0.1004 |
| 0.071 | 92.96 | 1162 | 0.8015 | 0.78 | 0.3396 | 1.5029 | 0.78 | 0.7488 | 0.1978 | 0.1007 |
| 0.071 | 94.0 | 1175 | 0.8044 | 0.78 | 0.3411 | 1.5047 | 0.78 | 0.7488 | 0.1929 | 0.1012 |
| 0.071 | 94.96 | 1187 | 0.7977 | 0.78 | 0.3392 | 1.4989 | 0.78 | 0.7488 | 0.1866 | 0.0989 |
| 0.071 | 96.0 | 1200 | 0.8071 | 0.78 | 0.3425 | 1.5021 | 0.78 | 0.7488 | 0.1941 | 0.1018 |
| 0.071 | 96.96 | 1212 | 0.8033 | 0.78 | 0.3406 | 1.4967 | 0.78 | 0.7488 | 0.1913 | 0.1000 |
| 0.071 | 98.0 | 1225 | 0.8148 | 0.775 | 0.3466 | 1.4555 | 0.775 | 0.7462 | 0.1828 | 0.1036 |
| 0.071 | 98.96 | 1237 | 0.8062 | 0.78 | 0.3417 | 1.5007 | 0.78 | 0.7488 | 0.1949 | 0.1004 |
| 0.071 | 100.0 | 1250 | 0.8123 | 0.77 | 0.3456 | 1.5069 | 0.7700 | 0.7424 | 0.1964 | 0.1020 |
| 0.071 | 100.96 | 1262 | 0.8117 | 0.78 | 0.3452 | 1.5048 | 0.78 | 0.7488 | 0.2081 | 0.1020 |
| 0.071 | 102.0 | 1275 | 0.8125 | 0.77 | 0.3454 | 1.5066 | 0.7700 | 0.7424 | 0.2040 | 0.1022 |
| 0.071 | 102.96 | 1287 | 0.8134 | 0.775 | 0.3458 | 1.5048 | 0.775 | 0.7450 | 0.1977 | 0.1013 |
| 0.071 | 104.0 | 1300 | 0.8152 | 0.78 | 0.3461 | 1.5027 | 0.78 | 0.7488 | 0.2044 | 0.1014 |
| 0.071 | 104.96 | 1312 | 0.8185 | 0.78 | 0.3478 | 1.5057 | 0.78 | 0.7488 | 0.1900 | 0.1022 |
| 0.071 | 106.0 | 1325 | 0.8191 | 0.78 | 0.3480 | 1.5053 | 0.78 | 0.7488 | 0.2084 | 0.1026 |
| 0.071 | 106.96 | 1337 | 0.8207 | 0.77 | 0.3497 | 1.5095 | 0.7700 | 0.7424 | 0.1984 | 0.1025 |
| 0.071 | 108.0 | 1350 | 0.8221 | 0.77 | 0.3487 | 1.5095 | 0.7700 | 0.7424 | 0.1871 | 0.1031 |
| 0.071 | 108.96 | 1362 | 0.8229 | 0.765 | 0.3501 | 1.4607 | 0.765 | 0.7331 | 0.1920 | 0.1028 |
| 0.071 | 110.0 | 1375 | 0.8232 | 0.78 | 0.3498 | 1.5044 | 0.78 | 0.7488 | 0.1995 | 0.1023 |
| 0.071 | 110.96 | 1387 | 0.8279 | 0.785 | 0.3513 | 1.5060 | 0.785 | 0.7526 | 0.2073 | 0.1033 |
| 0.071 | 112.0 | 1400 | 0.8246 | 0.775 | 0.3505 | 1.5038 | 0.775 | 0.7450 | 0.1927 | 0.1018 |
| 0.071 | 112.96 | 1412 | 0.8308 | 0.765 | 0.3537 | 1.5095 | 0.765 | 0.7331 | 0.1931 | 0.1035 |
| 0.071 | 114.0 | 1425 | 0.8277 | 0.775 | 0.3513 | 1.5058 | 0.775 | 0.7395 | 0.1977 | 0.1022 |
| 0.071 | 114.96 | 1437 | 0.8302 | 0.76 | 0.3531 | 1.4583 | 0.76 | 0.7296 | 0.2112 | 0.1028 |
| 0.071 | 116.0 | 1450 | 0.8328 | 0.765 | 0.3535 | 1.5125 | 0.765 | 0.7331 | 0.2008 | 0.1037 |
| 0.071 | 116.96 | 1462 | 0.8309 | 0.76 | 0.3533 | 1.4542 | 0.76 | 0.7296 | 0.2037 | 0.1029 |
| 0.071 | 118.0 | 1475 | 0.8378 | 0.765 | 0.3558 | 1.5162 | 0.765 | 0.7323 | 0.2040 | 0.1055 |
| 0.071 | 118.96 | 1487 | 0.8341 | 0.76 | 0.3547 | 1.5076 | 0.76 | 0.7296 | 0.1942 | 0.1032 |
| 0.0462 | 120.0 | 1500 | 0.8367 | 0.76 | 0.3557 | 1.5134 | 0.76 | 0.7296 | 0.1987 | 0.1034 |
| 0.0462 | 120.96 | 1512 | 0.8369 | 0.76 | 0.3553 | 1.5081 | 0.76 | 0.7296 | 0.2121 | 0.1036 |
| 0.0462 | 122.0 | 1525 | 0.8385 | 0.77 | 0.3560 | 1.5076 | 0.7700 | 0.7357 | 0.1944 | 0.1034 |
| 0.0462 | 122.96 | 1537 | 0.8415 | 0.76 | 0.3577 | 1.5127 | 0.76 | 0.7296 | 0.2080 | 0.1040 |
| 0.0462 | 124.0 | 1550 | 0.8418 | 0.765 | 0.3571 | 1.5123 | 0.765 | 0.7333 | 0.1905 | 0.1043 |
| 0.0462 | 124.96 | 1562 | 0.8431 | 0.76 | 0.3581 | 1.5124 | 0.76 | 0.7296 | 0.2029 | 0.1043 |
| 0.0462 | 126.0 | 1575 | 0.8461 | 0.765 | 0.3595 | 1.5115 | 0.765 | 0.7331 | 0.1861 | 0.1044 |
| 0.0462 | 126.96 | 1587 | 0.8446 | 0.76 | 0.3586 | 1.5117 | 0.76 | 0.7296 | 0.1962 | 0.1043 |
| 0.0462 | 128.0 | 1600 | 0.8448 | 0.765 | 0.3585 | 1.5106 | 0.765 | 0.7333 | 0.1899 | 0.1048 |
| 0.0462 | 128.96 | 1612 | 0.8503 | 0.765 | 0.3611 | 1.5156 | 0.765 | 0.7323 | 0.1865 | 0.1050 |
| 0.0462 | 130.0 | 1625 | 0.8473 | 0.765 | 0.3597 | 1.5082 | 0.765 | 0.7333 | 0.1992 | 0.1040 |
| 0.0462 | 130.96 | 1637 | 0.8530 | 0.76 | 0.3617 | 1.5178 | 0.76 | 0.7296 | 0.2008 | 0.1053 |
| 0.0462 | 132.0 | 1650 | 0.8499 | 0.765 | 0.3608 | 1.5105 | 0.765 | 0.7321 | 0.1910 | 0.1035 |
| 0.0462 | 132.96 | 1662 | 0.8529 | 0.765 | 0.3612 | 1.5095 | 0.765 | 0.7333 | 0.1943 | 0.1043 |
| 0.0462 | 134.0 | 1675 | 0.8547 | 0.765 | 0.3635 | 1.5095 | 0.765 | 0.7321 | 0.2002 | 0.1032 |
| 0.0462 | 134.96 | 1687 | 0.8572 | 0.765 | 0.3638 | 1.5159 | 0.765 | 0.7333 | 0.1979 | 0.1056 |
| 0.0462 | 136.0 | 1700 | 0.8582 | 0.765 | 0.3642 | 1.5165 | 0.765 | 0.7333 | 0.2026 | 0.1057 |
| 0.0462 | 136.96 | 1712 | 0.8581 | 0.76 | 0.3639 | 1.5118 | 0.76 | 0.7296 | 0.1965 | 0.1052 |
| 0.0462 | 138.0 | 1725 | 0.8570 | 0.77 | 0.3629 | 1.5094 | 0.7700 | 0.7358 | 0.1870 | 0.1029 |
| 0.0462 | 138.96 | 1737 | 0.8611 | 0.76 | 0.3650 | 1.5129 | 0.76 | 0.7296 | 0.1919 | 0.1040 |
| 0.0462 | 140.0 | 1750 | 0.8618 | 0.76 | 0.3659 | 1.5131 | 0.76 | 0.7296 | 0.1981 | 0.1042 |
| 0.0462 | 140.96 | 1762 | 0.8605 | 0.765 | 0.3652 | 1.5115 | 0.765 | 0.7333 | 0.1875 | 0.1048 |
| 0.0462 | 142.0 | 1775 | 0.8647 | 0.76 | 0.3666 | 1.5157 | 0.76 | 0.7296 | 0.2002 | 0.1052 |
| 0.0462 | 142.96 | 1787 | 0.8618 | 0.76 | 0.3654 | 1.5116 | 0.76 | 0.7296 | 0.2006 | 0.1045 |
| 0.0462 | 144.0 | 1800 | 0.8672 | 0.765 | 0.3672 | 1.5160 | 0.765 | 0.7333 | 0.1979 | 0.1053 |
| 0.0462 | 144.96 | 1812 | 0.8625 | 0.77 | 0.3648 | 1.5080 | 0.7700 | 0.7358 | 0.1975 | 0.1026 |
| 0.0462 | 146.0 | 1825 | 0.8695 | 0.765 | 0.3679 | 1.5169 | 0.765 | 0.7323 | 0.1973 | 0.1051 |
| 0.0462 | 146.96 | 1837 | 0.8696 | 0.76 | 0.3685 | 1.5132 | 0.76 | 0.7296 | 0.1936 | 0.1037 |
| 0.0462 | 148.0 | 1850 | 0.8678 | 0.765 | 0.3671 | 1.5110 | 0.765 | 0.7333 | 0.2008 | 0.1040 |
| 0.0462 | 148.96 | 1862 | 0.8713 | 0.765 | 0.3690 | 1.5152 | 0.765 | 0.7333 | 0.1983 | 0.1050 |
| 0.0462 | 150.0 | 1875 | 0.8716 | 0.765 | 0.3687 | 1.5163 | 0.765 | 0.7323 | 0.2029 | 0.1051 |
| 0.0462 | 150.96 | 1887 | 0.8724 | 0.77 | 0.3691 | 1.5113 | 0.7700 | 0.7358 | 0.1997 | 0.1037 |
| 0.0462 | 152.0 | 1900 | 0.8729 | 0.765 | 0.3695 | 1.5134 | 0.765 | 0.7333 | 0.1966 | 0.1050 |
| 0.0462 | 152.96 | 1912 | 0.8760 | 0.765 | 0.3706 | 1.5131 | 0.765 | 0.7333 | 0.2046 | 0.1040 |
| 0.0462 | 154.0 | 1925 | 0.8761 | 0.765 | 0.3707 | 1.5138 | 0.765 | 0.7333 | 0.1896 | 0.1037 |
| 0.0462 | 154.96 | 1937 | 0.8778 | 0.765 | 0.3711 | 1.5138 | 0.765 | 0.7333 | 0.2012 | 0.1046 |
| 0.0462 | 156.0 | 1950 | 0.8768 | 0.765 | 0.3712 | 1.5125 | 0.765 | 0.7333 | 0.1891 | 0.1041 |
| 0.0462 | 156.96 | 1962 | 0.8816 | 0.77 | 0.3732 | 1.5205 | 0.7700 | 0.7360 | 0.1993 | 0.1067 |
| 0.0462 | 158.0 | 1975 | 0.8793 | 0.765 | 0.3718 | 1.5157 | 0.765 | 0.7333 | 0.2025 | 0.1049 |
| 0.0462 | 158.96 | 1987 | 0.8788 | 0.77 | 0.3713 | 1.5126 | 0.7700 | 0.7358 | 0.2044 | 0.1039 |
| 0.0335 | 160.0 | 2000 | 0.8851 | 0.77 | 0.3739 | 1.5193 | 0.7700 | 0.7360 | 0.2042 | 0.1069 |
| 0.0335 | 160.96 | 2012 | 0.8872 | 0.77 | 0.3748 | 1.5200 | 0.7700 | 0.7360 | 0.2009 | 0.1057 |
| 0.0335 | 162.0 | 2025 | 0.8827 | 0.765 | 0.3731 | 1.5144 | 0.765 | 0.7333 | 0.1897 | 0.1050 |
| 0.0335 | 162.96 | 2037 | 0.8821 | 0.765 | 0.3724 | 1.5129 | 0.765 | 0.7333 | 0.1971 | 0.1042 |
| 0.0335 | 164.0 | 2050 | 0.8919 | 0.77 | 0.3770 | 1.5229 | 0.7700 | 0.7360 | 0.2119 | 0.1061 |
| 0.0335 | 164.96 | 2062 | 0.8907 | 0.765 | 0.3764 | 1.5240 | 0.765 | 0.7323 | 0.2125 | 0.1069 |
| 0.0335 | 166.0 | 2075 | 0.8857 | 0.765 | 0.3743 | 1.5127 | 0.765 | 0.7333 | 0.1906 | 0.1044 |
| 0.0335 | 166.96 | 2087 | 0.8928 | 0.77 | 0.3771 | 1.5253 | 0.7700 | 0.7360 | 0.2062 | 0.1062 |
| 0.0335 | 168.0 | 2100 | 0.8895 | 0.77 | 0.3750 | 1.5179 | 0.7700 | 0.7360 | 0.2062 | 0.1054 |
| 0.0335 | 168.96 | 2112 | 0.8904 | 0.77 | 0.3754 | 1.5178 | 0.7700 | 0.7360 | 0.2048 | 0.1055 |
| 0.0335 | 170.0 | 2125 | 0.8919 | 0.765 | 0.3766 | 1.5137 | 0.765 | 0.7333 | 0.2170 | 0.1044 |
| 0.0335 | 170.96 | 2137 | 0.8949 | 0.77 | 0.3779 | 1.5203 | 0.7700 | 0.7360 | 0.2042 | 0.1069 |
| 0.0335 | 172.0 | 2150 | 0.8949 | 0.77 | 0.3779 | 1.5204 | 0.7700 | 0.7360 | 0.2078 | 0.1069 |
| 0.0335 | 172.96 | 2162 | 0.8986 | 0.765 | 0.3794 | 1.5241 | 0.765 | 0.7310 | 0.2079 | 0.1072 |
| 0.0335 | 174.0 | 2175 | 0.8978 | 0.76 | 0.3787 | 1.5201 | 0.76 | 0.7272 | 0.2108 | 0.1056 |
| 0.0335 | 174.96 | 2187 | 0.8990 | 0.77 | 0.3786 | 1.5198 | 0.7700 | 0.7360 | 0.2032 | 0.1053 |
| 0.0335 | 176.0 | 2200 | 0.9003 | 0.77 | 0.3794 | 1.5206 | 0.7700 | 0.7360 | 0.1996 | 0.1060 |
| 0.0335 | 176.96 | 2212 | 0.9000 | 0.77 | 0.3797 | 1.5196 | 0.7700 | 0.7360 | 0.2116 | 0.1063 |
| 0.0335 | 178.0 | 2225 | 0.9000 | 0.765 | 0.3794 | 1.5178 | 0.765 | 0.7333 | 0.1875 | 0.1055 |
| 0.0335 | 178.96 | 2237 | 0.9034 | 0.77 | 0.3804 | 1.5218 | 0.7700 | 0.7360 | 0.1964 | 0.1068 |
| 0.0335 | 180.0 | 2250 | 0.9020 | 0.77 | 0.3802 | 1.5198 | 0.7700 | 0.7360 | 0.2058 | 0.1063 |
| 0.0335 | 180.96 | 2262 | 0.9037 | 0.77 | 0.3808 | 1.5192 | 0.7700 | 0.7360 | 0.1976 | 0.1063 |
| 0.0335 | 182.0 | 2275 | 0.9059 | 0.77 | 0.3812 | 1.5227 | 0.7700 | 0.7360 | 0.1962 | 0.1067 |
| 0.0335 | 182.96 | 2287 | 0.9063 | 0.77 | 0.3818 | 1.5206 | 0.7700 | 0.7360 | 0.2000 | 0.1065 |
| 0.0335 | 184.0 | 2300 | 0.9058 | 0.77 | 0.3814 | 1.5196 | 0.7700 | 0.7360 | 0.1926 | 0.1061 |
| 0.0335 | 184.96 | 2312 | 0.9082 | 0.77 | 0.3821 | 1.5211 | 0.7700 | 0.7360 | 0.2001 | 0.1067 |
| 0.0335 | 186.0 | 2325 | 0.9083 | 0.77 | 0.3824 | 1.5204 | 0.7700 | 0.7360 | 0.2062 | 0.1057 |
| 0.0335 | 186.96 | 2337 | 0.9090 | 0.77 | 0.3824 | 1.5220 | 0.7700 | 0.7360 | 0.2027 | 0.1063 |
| 0.0335 | 188.0 | 2350 | 0.9106 | 0.77 | 0.3828 | 1.5213 | 0.7700 | 0.7360 | 0.1968 | 0.1068 |
| 0.0335 | 188.96 | 2362 | 0.9116 | 0.77 | 0.3829 | 1.5238 | 0.7700 | 0.7360 | 0.2029 | 0.1071 |
| 0.0335 | 190.0 | 2375 | 0.9120 | 0.77 | 0.3835 | 1.5225 | 0.7700 | 0.7360 | 0.1953 | 0.1064 |
| 0.0335 | 190.96 | 2387 | 0.9123 | 0.77 | 0.3835 | 1.5227 | 0.7700 | 0.7360 | 0.2080 | 0.1069 |
| 0.0335 | 192.0 | 2400 | 0.9131 | 0.775 | 0.3838 | 1.5222 | 0.775 | 0.7418 | 0.2039 | 0.1061 |
| 0.0335 | 192.96 | 2412 | 0.9144 | 0.765 | 0.3841 | 1.5200 | 0.765 | 0.7333 | 0.2163 | 0.1060 |
| 0.0335 | 194.0 | 2425 | 0.9138 | 0.77 | 0.3839 | 1.5200 | 0.7700 | 0.7360 | 0.2092 | 0.1057 |
| 0.0335 | 194.96 | 2437 | 0.9164 | 0.775 | 0.3850 | 1.5249 | 0.775 | 0.7418 | 0.2188 | 0.1065 |
| 0.0335 | 196.0 | 2450 | 0.9185 | 0.77 | 0.3861 | 1.5257 | 0.7700 | 0.7360 | 0.2087 | 0.1067 |
| 0.0335 | 196.96 | 2462 | 0.9207 | 0.77 | 0.3868 | 1.5286 | 0.7700 | 0.7360 | 0.2063 | 0.1074 |
| 0.0335 | 198.0 | 2475 | 0.9191 | 0.77 | 0.3858 | 1.5254 | 0.7700 | 0.7360 | 0.2129 | 0.1068 |
| 0.0335 | 198.96 | 2487 | 0.9195 | 0.77 | 0.3861 | 1.5240 | 0.7700 | 0.7360 | 0.2059 | 0.1066 |
| 0.0264 | 200.0 | 2500 | 0.9205 | 0.77 | 0.3864 | 1.5246 | 0.7700 | 0.7360 | 0.2081 | 0.1069 |
| 0.0264 | 200.96 | 2512 | 0.9214 | 0.77 | 0.3865 | 1.5235 | 0.7700 | 0.7360 | 0.2018 | 0.1066 |
| 0.0264 | 202.0 | 2525 | 0.9216 | 0.775 | 0.3867 | 1.5253 | 0.775 | 0.7418 | 0.2156 | 0.1068 |
| 0.0264 | 202.96 | 2537 | 0.9218 | 0.775 | 0.3870 | 1.5225 | 0.775 | 0.7418 | 0.2108 | 0.1064 |
| 0.0264 | 204.0 | 2550 | 0.9241 | 0.775 | 0.3871 | 1.4893 | 0.775 | 0.7418 | 0.2087 | 0.1071 |
| 0.0264 | 204.96 | 2562 | 0.9270 | 0.77 | 0.3889 | 1.5244 | 0.7700 | 0.7360 | 0.2024 | 0.1071 |
| 0.0264 | 206.0 | 2575 | 0.9260 | 0.775 | 0.3885 | 1.5262 | 0.775 | 0.7418 | 0.2116 | 0.1069 |
| 0.0264 | 206.96 | 2587 | 0.9259 | 0.775 | 0.3883 | 1.5269 | 0.775 | 0.7418 | 0.2089 | 0.1065 |
| 0.0264 | 208.0 | 2600 | 0.9254 | 0.77 | 0.3875 | 1.5247 | 0.7700 | 0.7360 | 0.2060 | 0.1069 |
| 0.0264 | 208.96 | 2612 | 0.9285 | 0.775 | 0.3889 | 1.5281 | 0.775 | 0.7418 | 0.2115 | 0.1074 |
| 0.0264 | 210.0 | 2625 | 0.9277 | 0.775 | 0.3886 | 1.5254 | 0.775 | 0.7418 | 0.2114 | 0.1069 |
| 0.0264 | 210.96 | 2637 | 0.9304 | 0.775 | 0.3897 | 1.5278 | 0.775 | 0.7418 | 0.2095 | 0.1071 |
| 0.0264 | 212.0 | 2650 | 0.9288 | 0.77 | 0.3886 | 1.5270 | 0.7700 | 0.7360 | 0.2068 | 0.1070 |
| 0.0264 | 212.96 | 2662 | 0.9310 | 0.775 | 0.3896 | 1.5316 | 0.775 | 0.7418 | 0.2135 | 0.1071 |
| 0.0264 | 214.0 | 2675 | 0.9311 | 0.775 | 0.3899 | 1.5263 | 0.775 | 0.7418 | 0.2187 | 0.1070 |
| 0.0264 | 214.96 | 2687 | 0.9315 | 0.775 | 0.3899 | 1.5256 | 0.775 | 0.7418 | 0.2123 | 0.1068 |
| 0.0264 | 216.0 | 2700 | 0.9315 | 0.77 | 0.3896 | 1.5258 | 0.7700 | 0.7360 | 0.2070 | 0.1071 |
| 0.0264 | 216.96 | 2712 | 0.9334 | 0.775 | 0.3905 | 1.5291 | 0.775 | 0.7418 | 0.2088 | 0.1071 |
| 0.0264 | 218.0 | 2725 | 0.9342 | 0.775 | 0.3908 | 1.5283 | 0.775 | 0.7418 | 0.2146 | 0.1072 |
| 0.0264 | 218.96 | 2737 | 0.9337 | 0.775 | 0.3903 | 1.5282 | 0.775 | 0.7418 | 0.2110 | 0.1070 |
| 0.0264 | 220.0 | 2750 | 0.9357 | 0.775 | 0.3913 | 1.5284 | 0.775 | 0.7418 | 0.2149 | 0.1073 |
| 0.0264 | 220.96 | 2762 | 0.9367 | 0.775 | 0.3918 | 1.5299 | 0.775 | 0.7418 | 0.2088 | 0.1072 |
| 0.0264 | 222.0 | 2775 | 0.9371 | 0.775 | 0.3916 | 1.5294 | 0.775 | 0.7418 | 0.2141 | 0.1075 |
| 0.0264 | 222.96 | 2787 | 0.9359 | 0.775 | 0.3910 | 1.5271 | 0.775 | 0.7418 | 0.2126 | 0.1067 |
| 0.0264 | 224.0 | 2800 | 0.9374 | 0.775 | 0.3918 | 1.5298 | 0.775 | 0.7418 | 0.2084 | 0.1072 |
| 0.0264 | 224.96 | 2812 | 0.9378 | 0.775 | 0.3914 | 1.5296 | 0.775 | 0.7418 | 0.2073 | 0.1072 |
| 0.0264 | 226.0 | 2825 | 0.9377 | 0.775 | 0.3916 | 1.5274 | 0.775 | 0.7418 | 0.2075 | 0.1066 |
| 0.0264 | 226.96 | 2837 | 0.9412 | 0.775 | 0.3932 | 1.5310 | 0.775 | 0.7418 | 0.2096 | 0.1077 |
| 0.0264 | 228.0 | 2850 | 0.9402 | 0.775 | 0.3923 | 1.5329 | 0.775 | 0.7418 | 0.2161 | 0.1076 |
| 0.0264 | 228.96 | 2862 | 0.9420 | 0.775 | 0.3932 | 1.5301 | 0.775 | 0.7418 | 0.2078 | 0.1074 |
| 0.0264 | 230.0 | 2875 | 0.9412 | 0.775 | 0.3925 | 1.5315 | 0.775 | 0.7418 | 0.2078 | 0.1076 |
| 0.0264 | 230.96 | 2887 | 0.9422 | 0.775 | 0.3930 | 1.5340 | 0.775 | 0.7418 | 0.2179 | 0.1077 |
| 0.0264 | 232.0 | 2900 | 0.9431 | 0.775 | 0.3933 | 1.5336 | 0.775 | 0.7418 | 0.2158 | 0.1081 |
| 0.0264 | 232.96 | 2912 | 0.9428 | 0.775 | 0.3931 | 1.5304 | 0.775 | 0.7418 | 0.2086 | 0.1075 |
| 0.0264 | 234.0 | 2925 | 0.9434 | 0.775 | 0.3935 | 1.5325 | 0.775 | 0.7418 | 0.2152 | 0.1074 |
| 0.0264 | 234.96 | 2937 | 0.9431 | 0.775 | 0.3933 | 1.5286 | 0.775 | 0.7418 | 0.2081 | 0.1070 |
| 0.0264 | 236.0 | 2950 | 0.9438 | 0.775 | 0.3935 | 1.5307 | 0.775 | 0.7418 | 0.2077 | 0.1073 |
| 0.0264 | 236.96 | 2962 | 0.9452 | 0.775 | 0.3940 | 1.5329 | 0.775 | 0.7418 | 0.2217 | 0.1074 |
| 0.0264 | 238.0 | 2975 | 0.9453 | 0.775 | 0.3939 | 1.5328 | 0.775 | 0.7418 | 0.2129 | 0.1076 |
| 0.0264 | 238.96 | 2987 | 0.9451 | 0.775 | 0.3937 | 1.5308 | 0.775 | 0.7418 | 0.2133 | 0.1073 |
| 0.0223 | 240.0 | 3000 | 0.9470 | 0.775 | 0.3947 | 1.5333 | 0.775 | 0.7418 | 0.2220 | 0.1077 |
| 0.0223 | 240.96 | 3012 | 0.9461 | 0.775 | 0.3942 | 1.5329 | 0.775 | 0.7418 | 0.2127 | 0.1072 |
| 0.0223 | 242.0 | 3025 | 0.9477 | 0.775 | 0.3949 | 1.5310 | 0.775 | 0.7418 | 0.2133 | 0.1074 |
| 0.0223 | 242.96 | 3037 | 0.9480 | 0.775 | 0.3949 | 1.5331 | 0.775 | 0.7418 | 0.2165 | 0.1073 |
| 0.0223 | 244.0 | 3050 | 0.9499 | 0.775 | 0.3955 | 1.5384 | 0.775 | 0.7418 | 0.2226 | 0.1080 |
| 0.0223 | 244.96 | 3062 | 0.9476 | 0.775 | 0.3946 | 1.5322 | 0.775 | 0.7418 | 0.2128 | 0.1069 |
| 0.0223 | 246.0 | 3075 | 0.9490 | 0.775 | 0.3953 | 1.5298 | 0.775 | 0.7418 | 0.2137 | 0.1071 |
| 0.0223 | 246.96 | 3087 | 0.9496 | 0.775 | 0.3953 | 1.5315 | 0.775 | 0.7418 | 0.2133 | 0.1071 |
| 0.0223 | 248.0 | 3100 | 0.9500 | 0.775 | 0.3955 | 1.5335 | 0.775 | 0.7418 | 0.2131 | 0.1072 |
| 0.0223 | 248.96 | 3112 | 0.9503 | 0.775 | 0.3956 | 1.5323 | 0.775 | 0.7418 | 0.2164 | 0.1072 |
| 0.0223 | 250.0 | 3125 | 0.9505 | 0.775 | 0.3955 | 1.5338 | 0.775 | 0.7418 | 0.2128 | 0.1071 |
| 0.0223 | 250.96 | 3137 | 0.9510 | 0.775 | 0.3957 | 1.5372 | 0.775 | 0.7418 | 0.2266 | 0.1072 |
| 0.0223 | 252.0 | 3150 | 0.9517 | 0.775 | 0.3960 | 1.5363 | 0.775 | 0.7418 | 0.2222 | 0.1073 |
| 0.0223 | 252.96 | 3162 | 0.9526 | 0.775 | 0.3961 | 1.5372 | 0.775 | 0.7418 | 0.2227 | 0.1080 |
| 0.0223 | 254.0 | 3175 | 0.9527 | 0.77 | 0.3963 | 1.5340 | 0.7700 | 0.7368 | 0.2174 | 0.1081 |
| 0.0223 | 254.96 | 3187 | 0.9527 | 0.775 | 0.3962 | 1.5389 | 0.775 | 0.7418 | 0.2222 | 0.1074 |
| 0.0223 | 256.0 | 3200 | 0.9528 | 0.775 | 0.3962 | 1.5347 | 0.775 | 0.7418 | 0.2258 | 0.1073 |
| 0.0223 | 256.96 | 3212 | 0.9545 | 0.775 | 0.3969 | 1.5401 | 0.775 | 0.7418 | 0.2226 | 0.1083 |
| 0.0223 | 258.0 | 3225 | 0.9540 | 0.775 | 0.3966 | 1.5369 | 0.775 | 0.7418 | 0.2224 | 0.1074 |
| 0.0223 | 258.96 | 3237 | 0.9547 | 0.775 | 0.3969 | 1.5370 | 0.775 | 0.7418 | 0.2228 | 0.1082 |
| 0.0223 | 260.0 | 3250 | 0.9549 | 0.775 | 0.3969 | 1.5381 | 0.775 | 0.7418 | 0.2226 | 0.1075 |
| 0.0223 | 260.96 | 3262 | 0.9545 | 0.775 | 0.3968 | 1.5345 | 0.775 | 0.7418 | 0.2134 | 0.1072 |
| 0.0223 | 262.0 | 3275 | 0.9550 | 0.775 | 0.3970 | 1.5362 | 0.775 | 0.7418 | 0.2145 | 0.1079 |
| 0.0223 | 262.96 | 3287 | 0.9558 | 0.775 | 0.3971 | 1.5392 | 0.775 | 0.7418 | 0.2227 | 0.1076 |
| 0.0223 | 264.0 | 3300 | 0.9557 | 0.775 | 0.3970 | 1.5383 | 0.775 | 0.7418 | 0.2226 | 0.1074 |
| 0.0223 | 264.96 | 3312 | 0.9561 | 0.775 | 0.3973 | 1.5393 | 0.775 | 0.7418 | 0.2224 | 0.1080 |
| 0.0223 | 266.0 | 3325 | 0.9563 | 0.775 | 0.3972 | 1.5387 | 0.775 | 0.7418 | 0.2224 | 0.1073 |
| 0.0223 | 266.96 | 3337 | 0.9568 | 0.775 | 0.3974 | 1.5407 | 0.775 | 0.7418 | 0.2225 | 0.1082 |
| 0.0223 | 268.0 | 3350 | 0.9567 | 0.775 | 0.3973 | 1.5373 | 0.775 | 0.7418 | 0.2259 | 0.1080 |
| 0.0223 | 268.96 | 3362 | 0.9566 | 0.775 | 0.3973 | 1.5371 | 0.775 | 0.7418 | 0.2225 | 0.1080 |
| 0.0223 | 270.0 | 3375 | 0.9574 | 0.775 | 0.3976 | 1.5403 | 0.775 | 0.7418 | 0.2227 | 0.1075 |
| 0.0223 | 270.96 | 3387 | 0.9568 | 0.775 | 0.3974 | 1.5363 | 0.775 | 0.7418 | 0.2225 | 0.1072 |
| 0.0223 | 272.0 | 3400 | 0.9580 | 0.775 | 0.3978 | 1.5465 | 0.775 | 0.7418 | 0.2241 | 0.1081 |
| 0.0223 | 272.96 | 3412 | 0.9577 | 0.775 | 0.3977 | 1.5383 | 0.775 | 0.7418 | 0.2228 | 0.1074 |
| 0.0223 | 274.0 | 3425 | 0.9577 | 0.775 | 0.3976 | 1.5409 | 0.775 | 0.7418 | 0.2225 | 0.1080 |
| 0.0223 | 274.96 | 3437 | 0.9582 | 0.775 | 0.3978 | 1.5409 | 0.775 | 0.7418 | 0.2226 | 0.1075 |
| 0.0223 | 276.0 | 3450 | 0.9581 | 0.775 | 0.3978 | 1.5412 | 0.775 | 0.7418 | 0.2225 | 0.1082 |
| 0.0223 | 276.96 | 3462 | 0.9582 | 0.775 | 0.3978 | 1.5367 | 0.775 | 0.7418 | 0.2220 | 0.1073 |
| 0.0223 | 278.0 | 3475 | 0.9587 | 0.775 | 0.3980 | 1.5422 | 0.775 | 0.7418 | 0.2244 | 0.1082 |
| 0.0223 | 278.96 | 3487 | 0.9588 | 0.775 | 0.3980 | 1.5478 | 0.775 | 0.7418 | 0.2242 | 0.1082 |
| 0.0202 | 280.0 | 3500 | 0.9586 | 0.775 | 0.3980 | 1.5381 | 0.775 | 0.7418 | 0.2219 | 0.1081 |
| 0.0202 | 280.96 | 3512 | 0.9592 | 0.775 | 0.3981 | 1.5474 | 0.775 | 0.7418 | 0.2243 | 0.1082 |
| 0.0202 | 282.0 | 3525 | 0.9588 | 0.775 | 0.3980 | 1.5396 | 0.775 | 0.7418 | 0.2227 | 0.1080 |
| 0.0202 | 282.96 | 3537 | 0.9589 | 0.775 | 0.3980 | 1.5401 | 0.775 | 0.7418 | 0.2218 | 0.1074 |
| 0.0202 | 284.0 | 3550 | 0.9593 | 0.775 | 0.3982 | 1.5441 | 0.775 | 0.7418 | 0.2243 | 0.1083 |
| 0.0202 | 284.96 | 3562 | 0.9591 | 0.775 | 0.3981 | 1.5412 | 0.775 | 0.7418 | 0.2227 | 0.1082 |
| 0.0202 | 286.0 | 3575 | 0.9592 | 0.775 | 0.3981 | 1.5417 | 0.775 | 0.7418 | 0.2227 | 0.1082 |
| 0.0202 | 286.96 | 3587 | 0.9592 | 0.775 | 0.3981 | 1.5416 | 0.775 | 0.7418 | 0.2227 | 0.1082 |
| 0.0202 | 288.0 | 3600 | 0.9592 | 0.775 | 0.3981 | 1.5416 | 0.775 | 0.7418 | 0.2227 | 0.1082 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-cbt-rarity-all-no-cut
|
NasimB
| 2023-07-13T14:14:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T12:29:28Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3042
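As a quick, illustrative way to try the checkpoint (the prompt below is arbitrary):
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned GPT-2 checkpoint for text generation.
generator = pipeline("text-generation", model="NasimB/gpt2-concat-cbt-rarity-all-no-cut")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```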
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7012 | 0.29 | 500 | 5.6350 |
| 5.3399 | 0.58 | 1000 | 5.1969 |
| 4.9839 | 0.87 | 1500 | 4.9406 |
| 4.7034 | 1.16 | 2000 | 4.7940 |
| 4.5463 | 1.46 | 2500 | 4.6801 |
| 4.4423 | 1.75 | 3000 | 4.5644 |
| 4.3263 | 2.04 | 3500 | 4.4872 |
| 4.1157 | 2.33 | 4000 | 4.4394 |
| 4.0929 | 2.62 | 4500 | 4.3840 |
| 4.0599 | 2.91 | 5000 | 4.3306 |
| 3.8592 | 3.2 | 5500 | 4.3227 |
| 3.793 | 3.49 | 6000 | 4.2915 |
| 3.7801 | 3.79 | 6500 | 4.2609 |
| 3.6929 | 4.08 | 7000 | 4.2583 |
| 3.5075 | 4.37 | 7500 | 4.2539 |
| 3.5083 | 4.66 | 8000 | 4.2380 |
| 3.4906 | 4.95 | 8500 | 4.2264 |
| 3.3427 | 5.24 | 9000 | 4.2369 |
| 3.3099 | 5.53 | 9500 | 4.2359 |
| 3.3141 | 5.82 | 10000 | 4.2348 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mort1k/dqn-SpaceInvadersNoFrameskip-v4
|
mort1k
| 2023-07-13T14:09:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T14:09:10Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 762.50 +/- 250.23
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mort1k -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mort1k -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mort1k
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
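Outside the RL Zoo scripts, the checkpoint can presumably also be loaded directly with SB3 via `huggingface_sb3`; a minimal sketch (the `.zip` filename follows the usual Zoo naming convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub; the filename is assumed, not confirmed.
checkpoint = load_from_hub(
    repo_id="mort1k/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
# custom_objects may be needed if SB3 versions differ between save and load.
model = DQN.load(checkpoint)
```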
|
FarziBuilder/myAdapter
|
FarziBuilder
| 2023-07-13T14:06:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T14:06:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
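For reference, the settings above correspond to a `transformers` `BitsAndBytesConfig` roughly like this (a sketch, not the original training code):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the 4-bit NF4 quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```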
### Framework versions
- PEFT 0.4.0.dev0
|
jordiclive/scaled-llama-7b-lora-16k-rp2
|
jordiclive
| 2023-07-13T14:05:35Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"dataset:togethercomputer/RedPajama-Data-1T-Sample",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T12:39:35Z |
---
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
---
# Linear Scaled RoPE LLaMA LoRA 16k
```python
import torch
from transformers import LlamaTokenizerFast, AutoModelForCausalLM

model_name = "jordiclive/scaled-llama-7b-lora-16k-rp2"

# trust_remote_code is needed for the custom linear-scaled RoPE implementation.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

tokenizer = LlamaTokenizerFast.from_pretrained(model_name)
tokenizer.model_max_length = 16384
tokenizer.pad_token = tokenizer.eos_token
model.max_sequence_length = tokenizer.model_max_length
```
- Base model `huggyllama/llama-7b`, trained on packed 16k-token sequences of the RedPajama dataset for 1 epoch.
- This is a merged model; if you need the LoRA parameters/config, they are in the `adapter` folder.
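A quick generation sketch continuing from the loading snippet above (the prompt is illustrative, and a CUDA GPU is assumed for float16 inference):
```python
# Continues from the snippet above: model and tokenizer are already loaded.
model = model.to("cuda")
inputs = tokenizer("Long documents are easier to summarize when", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```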
|
onlywone/layoutlm-funsd
|
onlywone
| 2023-07-13T13:58:09Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T13:48:57Z |
---
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6940
- Answer: precision 0.7220, recall 0.8121, F1 0.7644 (809 entities)
- Header: precision 0.2662, recall 0.3445, F1 0.3004 (119 entities)
- Question: precision 0.7816, recall 0.8300, F1 0.8051 (1065 entities)
- Overall Precision: 0.7207
- Overall Recall: 0.7938
- Overall F1: 0.7555
- Overall Accuracy: 0.8073
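As a rough loading sketch (real FUNSD-style inference also needs token bounding boxes, which are omitted here):
```python
from transformers import AutoTokenizer, LayoutLMForTokenClassification

# Minimal load of the fine-tuned checkpoint; bbox inputs are required at inference time.
tokenizer = AutoTokenizer.from_pretrained("onlywone/layoutlm-funsd")
model = LayoutLMForTokenClassification.from_pretrained("onlywone/layoutlm-funsd")
```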
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.755 | 1.0 | 10 | 1.5815 | {'precision': 0.026919242273180457, 'recall': 0.03337453646477132, 'f1': 0.02980132450331126, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.20780487804878048, 'recall': 0.2, 'f1': 0.20382775119617225, 'number': 1065} | 0.1183 | 0.1204 | 0.1194 | 0.3885 |
| 1.4375 | 2.0 | 20 | 1.2088 | {'precision': 0.28227848101265823, 'recall': 0.27564894932014833, 'f1': 0.2789243277048155, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4782964782964783, 'recall': 0.5483568075117371, 'f1': 0.5109361329833771, 'number': 1065} | 0.4013 | 0.4049 | 0.4031 | 0.6223 |
| 1.0595 | 3.0 | 30 | 0.9379 | {'precision': 0.503954802259887, 'recall': 0.5512978986402967, 'f1': 0.526564344746163, 'number': 809} | {'precision': 0.0425531914893617, 'recall': 0.01680672268907563, 'f1': 0.024096385542168672, 'number': 119} | {'precision': 0.6126205083260298, 'recall': 0.6563380281690141, 'f1': 0.6337262012692656, 'number': 1065} | 0.5533 | 0.5755 | 0.5642 | 0.7194 |
| 0.8139 | 4.0 | 40 | 0.7735 | {'precision': 0.6280041797283177, 'recall': 0.7428924598269468, 'f1': 0.680634201585504, 'number': 809} | {'precision': 0.13432835820895522, 'recall': 0.07563025210084033, 'f1': 0.09677419354838708, 'number': 119} | {'precision': 0.6600688468158348, 'recall': 0.72018779342723, 'f1': 0.688819039066008, 'number': 1065} | 0.6299 | 0.6909 | 0.6590 | 0.7636 |
| 0.664 | 5.0 | 50 | 0.7245 | {'precision': 0.6519453207150369, 'recall': 0.7663782447466008, 'f1': 0.7045454545454546, 'number': 809} | {'precision': 0.24719101123595505, 'recall': 0.18487394957983194, 'f1': 0.21153846153846156, 'number': 119} | {'precision': 0.7090909090909091, 'recall': 0.7690140845070422, 'f1': 0.7378378378378379, 'number': 1065} | 0.6656 | 0.7331 | 0.6977 | 0.7757 |
| 0.5505 | 6.0 | 60 | 0.6956 | {'precision': 0.6834061135371179, 'recall': 0.7737948084054388, 'f1': 0.7257971014492753, 'number': 809} | {'precision': 0.28205128205128205, 'recall': 0.18487394957983194, 'f1': 0.2233502538071066, 'number': 119} | {'precision': 0.723421926910299, 'recall': 0.8178403755868544, 'f1': 0.7677390921110622, 'number': 1065} | 0.6911 | 0.7622 | 0.7249 | 0.7888 |
| 0.4759 | 7.0 | 70 | 0.6712 | {'precision': 0.6844396082698585, 'recall': 0.7775030902348579, 'f1': 0.7280092592592592, 'number': 809} | {'precision': 0.2727272727272727, 'recall': 0.2773109243697479, 'f1': 0.27499999999999997, 'number': 119} | {'precision': 0.7472527472527473, 'recall': 0.8300469483568075, 'f1': 0.786476868327402, 'number': 1065} | 0.6955 | 0.7757 | 0.7334 | 0.7975 |
| 0.4276 | 8.0 | 80 | 0.6765 | {'precision': 0.6889375684556407, 'recall': 0.7775030902348579, 'f1': 0.7305458768873403, 'number': 809} | {'precision': 0.28205128205128205, 'recall': 0.2773109243697479, 'f1': 0.2796610169491525, 'number': 119} | {'precision': 0.7527333894028595, 'recall': 0.8403755868544601, 'f1': 0.7941437444543035, 'number': 1065} | 0.7017 | 0.7812 | 0.7393 | 0.8021 |
| 0.3788 | 9.0 | 90 | 0.6653 | {'precision': 0.7081930415263749, 'recall': 0.7799752781211372, 'f1': 0.7423529411764707, 'number': 809} | {'precision': 0.2647058823529412, 'recall': 0.3025210084033613, 'f1': 0.2823529411764706, 'number': 119} | {'precision': 0.7667238421955404, 'recall': 0.8394366197183099, 'f1': 0.8014343343792021, 'number': 1065} | 0.7118 | 0.7832 | 0.7458 | 0.8049 |
| 0.3466 | 10.0 | 100 | 0.6838 | {'precision': 0.7005464480874317, 'recall': 0.792336217552534, 'f1': 0.7436194895591649, 'number': 809} | {'precision': 0.2706766917293233, 'recall': 0.3025210084033613, 'f1': 0.28571428571428564, 'number': 119} | {'precision': 0.7728055077452668, 'recall': 0.8431924882629108, 'f1': 0.8064660978895375, 'number': 1065} | 0.7127 | 0.7903 | 0.7495 | 0.8047 |
| 0.3142 | 11.0 | 110 | 0.6795 | {'precision': 0.6997816593886463, 'recall': 0.792336217552534, 'f1': 0.7431884057971013, 'number': 809} | {'precision': 0.2857142857142857, 'recall': 0.3025210084033613, 'f1': 0.2938775510204082, 'number': 119} | {'precision': 0.7994628469113697, 'recall': 0.8384976525821596, 'f1': 0.8185151237396883, 'number': 1065} | 0.7272 | 0.7878 | 0.7563 | 0.8067 |
| 0.2978 | 12.0 | 120 | 0.6922 | {'precision': 0.6927194860813705, 'recall': 0.799752781211372, 'f1': 0.7423981640849111, 'number': 809} | {'precision': 0.2585034013605442, 'recall': 0.31932773109243695, 'f1': 0.2857142857142857, 'number': 119} | {'precision': 0.7768090671316478, 'recall': 0.8366197183098592, 'f1': 0.8056057866184448, 'number': 1065} | 0.7074 | 0.7908 | 0.7467 | 0.8026 |
| 0.2824 | 13.0 | 130 | 0.6960 | {'precision': 0.7184357541899441, 'recall': 0.7948084054388134, 'f1': 0.754694835680751, 'number': 809} | {'precision': 0.2611464968152866, 'recall': 0.3445378151260504, 'f1': 0.2971014492753623, 'number': 119} | {'precision': 0.7757255936675461, 'recall': 0.828169014084507, 'f1': 0.8010899182561309, 'number': 1065} | 0.7154 | 0.7858 | 0.7489 | 0.8045 |
| 0.2696 | 14.0 | 140 | 0.6917 | {'precision': 0.7164667393675027, 'recall': 0.8121137206427689, 'f1': 0.7612977983777521, 'number': 809} | {'precision': 0.2708333333333333, 'recall': 0.3277310924369748, 'f1': 0.2965779467680608, 'number': 119} | {'precision': 0.7833775419982316, 'recall': 0.831924882629108, 'f1': 0.8069216757741348, 'number': 1065} | 0.7217 | 0.7938 | 0.7560 | 0.8067 |
| 0.2674 | 15.0 | 150 | 0.6940 | {'precision': 0.721978021978022, 'recall': 0.8121137206427689, 'f1': 0.7643979057591623, 'number': 809} | {'precision': 0.2662337662337662, 'recall': 0.3445378151260504, 'f1': 0.30036630036630035, 'number': 119} | {'precision': 0.7816091954022989, 'recall': 0.8300469483568075, 'f1': 0.8051001821493625, 'number': 1065} | 0.7207 | 0.7938 | 0.7555 | 0.8073 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
laura63/ast-finetuned-audioset-10-10-0.4593-finetuned-AST
|
laura63
| 2023-07-13T13:54:58Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-01T10:00:08Z |
---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-AST
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-AST
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3787
- Accuracy: 0.9463
- F1: 0.9426
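A minimal inference sketch with the audio-classification pipeline (the WAV path is a placeholder, and decoding audio files requires ffmpeg):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="laura63/ast-finetuned-audioset-10-10-0.4593-finetuned-AST",
)
print(classifier("example.wav"))  # placeholder path
```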
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7914 | 1.0 | 1467 | 0.5058 | 0.8788 | 0.8679 |
| 0.5962 | 2.0 | 2934 | 0.4318 | 0.9018 | 0.8941 |
| 0.0143 | 3.0 | 4401 | 0.4418 | 0.9233 | 0.9183 |
| 0.0002 | 4.0 | 5868 | 0.3996 | 0.9387 | 0.9342 |
| 0.0001 | 5.0 | 7335 | 0.3787 | 0.9463 | 0.9426 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
peft-internal-testing/tiny_OPTForSequenceClassification-lora
|
peft-internal-testing
| 2023-07-13T13:48:21Z | 25,195 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T13:48:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Yntec/DucHaiten-Retro-Diffusers
|
Yntec
| 2023-07-13T13:39:06Z | 1,798 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Retro",
"DucHaiten",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T13:02:56Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Retro
- DucHaiten
---
# DucHaiten Retro
I don't know about you, but in my opinion this is the best retro model DucHaiten has ever created. It's sad to see it sitting at 0 downloads on Hugging Face, so here's a Diffusers version you can use with Hugging Face's pipeline (see the sketch below)!
If you like their content, support them at:
https://linktr.ee/Duc_Haiten
Original page:
https://civitai.com/models/103966?modelVersionId=111392
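A minimal `diffusers` sketch (prompt and output path are illustrative; a CUDA GPU is assumed):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/DucHaiten-Retro-Diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU
image = pipe("retro portrait of a detective, 1970s poster style").images[0]
image.save("retro.png")
```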
|
Yntec/rainbowpatch
|
Yntec
| 2023-07-13T13:38:28Z | 119 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lexica",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T11:50:41Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- lexica
---
# Rainbowpatch
Use "Rainbowpatch" in the prompt to enhance the style.
Model by Patchmonk, original page:
https://civitai.com/models/5528/rainbowpatch
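A minimal `diffusers` sketch using the trigger word (prompt and output path are illustrative):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/rainbowpatch")
# "Rainbowpatch" in the prompt enhances the style, per the note above.
image = pipe("Rainbowpatch, a cozy cottage in a flower meadow").images[0]
image.save("rainbowpatch.png")
```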
|
sephinroth/marian-finetuned-kde4-en-to-fr
|
sephinroth
| 2023-07-13T13:29:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-13T12:05:39Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
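A minimal usage sketch with the translation pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="sephinroth/marian-finetuned-kde4-en-to-fr")
print(translator("Unable to open the file."))
```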
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
peft-internal-testing/tiny_GPT2ForTokenClassification-lora
|
peft-internal-testing
| 2023-07-13T13:29:16Z | 25,209 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T13:11:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ayanban011/vit-base_tobacco_bs_16_lr_5e-6_e_300_wr_0.1_wd_0.2
|
ayanban011
| 2023-07-13T13:24:29Z | 168 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T10:51:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_bs_16_lr_5e-6_e_300_wr_0.1_wd_0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_bs_16_lr_5e-6_e_300_wr_0.1_wd_0.2
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8461
- Accuracy: 0.775
- Brier Loss: 0.3632
- Nll: 1.4570
- F1 Micro: 0.775
- F1 Macro: 0.7418
- Ece: 0.2043
- Aurc: 0.1066
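A minimal inference sketch with the image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ayanban011/vit-base_tobacco_bs_16_lr_5e-6_e_300_wr_0.1_wd_0.2",
)
print(classifier("document.png"))  # placeholder path
```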
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 0.7447 | 0.815 | 0.3078 | 1.1882 | 0.815 | 0.7942 | 0.2385 | 0.0731 |
| No log | 2.0 | 25 | 0.7442 | 0.815 | 0.3075 | 1.1872 | 0.815 | 0.7922 | 0.2401 | 0.0736 |
| No log | 2.96 | 37 | 0.7439 | 0.815 | 0.3075 | 1.1883 | 0.815 | 0.7942 | 0.2292 | 0.0722 |
| No log | 4.0 | 50 | 0.7463 | 0.815 | 0.3083 | 1.1904 | 0.815 | 0.7942 | 0.2454 | 0.0762 |
| No log | 4.96 | 62 | 0.7441 | 0.805 | 0.3077 | 1.1886 | 0.805 | 0.7819 | 0.2322 | 0.0731 |
| No log | 6.0 | 75 | 0.7408 | 0.81 | 0.3064 | 1.1842 | 0.81 | 0.7914 | 0.2217 | 0.0704 |
| No log | 6.96 | 87 | 0.7448 | 0.81 | 0.3082 | 1.1852 | 0.81 | 0.7847 | 0.2341 | 0.0748 |
| No log | 8.0 | 100 | 0.7454 | 0.815 | 0.3084 | 1.1882 | 0.815 | 0.7942 | 0.2129 | 0.0767 |
| No log | 8.96 | 112 | 0.7462 | 0.815 | 0.3080 | 1.1954 | 0.815 | 0.7922 | 0.2535 | 0.0775 |
| No log | 10.0 | 125 | 0.7427 | 0.81 | 0.3067 | 1.1924 | 0.81 | 0.7876 | 0.2280 | 0.0767 |
| No log | 10.96 | 137 | 0.7420 | 0.815 | 0.3067 | 1.2033 | 0.815 | 0.7942 | 0.2611 | 0.0755 |
| No log | 12.0 | 150 | 0.7417 | 0.805 | 0.3063 | 1.1881 | 0.805 | 0.7820 | 0.2456 | 0.0774 |
| No log | 12.96 | 162 | 0.7442 | 0.815 | 0.3089 | 1.1895 | 0.815 | 0.8059 | 0.2230 | 0.0768 |
| No log | 14.0 | 175 | 0.7398 | 0.805 | 0.3061 | 1.2547 | 0.805 | 0.7843 | 0.2310 | 0.0766 |
| No log | 14.96 | 187 | 0.7355 | 0.81 | 0.3046 | 1.1887 | 0.81 | 0.7914 | 0.2328 | 0.0746 |
| No log | 16.0 | 200 | 0.7368 | 0.81 | 0.3053 | 1.1894 | 0.81 | 0.7922 | 0.2256 | 0.0774 |
| No log | 16.96 | 212 | 0.7355 | 0.81 | 0.3037 | 1.2537 | 0.81 | 0.7947 | 0.2077 | 0.0788 |
| No log | 18.0 | 225 | 0.7407 | 0.81 | 0.3065 | 1.1882 | 0.81 | 0.7871 | 0.2421 | 0.0767 |
| No log | 18.96 | 237 | 0.7279 | 0.8 | 0.2999 | 1.2540 | 0.8000 | 0.7796 | 0.2159 | 0.0742 |
| No log | 20.0 | 250 | 0.7324 | 0.805 | 0.3042 | 1.1811 | 0.805 | 0.7841 | 0.2269 | 0.0763 |
| No log | 20.96 | 262 | 0.7421 | 0.805 | 0.3079 | 1.1827 | 0.805 | 0.7850 | 0.2339 | 0.0797 |
| No log | 22.0 | 275 | 0.7343 | 0.81 | 0.3050 | 1.1689 | 0.81 | 0.7877 | 0.2223 | 0.0784 |
| No log | 22.96 | 287 | 0.7308 | 0.81 | 0.3032 | 1.1901 | 0.81 | 0.7922 | 0.2190 | 0.0774 |
| No log | 24.0 | 300 | 0.7381 | 0.805 | 0.3057 | 1.3200 | 0.805 | 0.7853 | 0.2500 | 0.0819 |
| No log | 24.96 | 312 | 0.7336 | 0.81 | 0.3042 | 1.3123 | 0.81 | 0.7903 | 0.2082 | 0.0795 |
| No log | 26.0 | 325 | 0.7282 | 0.805 | 0.3020 | 1.2465 | 0.805 | 0.7847 | 0.2248 | 0.0792 |
| No log | 26.96 | 337 | 0.7346 | 0.81 | 0.3050 | 1.2538 | 0.81 | 0.7956 | 0.2095 | 0.0818 |
| No log | 28.0 | 350 | 0.7305 | 0.805 | 0.3031 | 1.2443 | 0.805 | 0.7850 | 0.2488 | 0.0823 |
| No log | 28.96 | 362 | 0.7395 | 0.8 | 0.3071 | 1.3235 | 0.8000 | 0.7818 | 0.2223 | 0.0843 |
| No log | 30.0 | 375 | 0.7349 | 0.8 | 0.3058 | 1.2511 | 0.8000 | 0.7733 | 0.2004 | 0.0817 |
| No log | 30.96 | 387 | 0.7344 | 0.8 | 0.3048 | 1.2516 | 0.8000 | 0.7818 | 0.2183 | 0.0837 |
| No log | 32.0 | 400 | 0.7332 | 0.795 | 0.3037 | 1.3836 | 0.795 | 0.7686 | 0.2185 | 0.0844 |
| No log | 32.96 | 412 | 0.7306 | 0.81 | 0.3042 | 1.1767 | 0.81 | 0.7905 | 0.2117 | 0.0837 |
| No log | 34.0 | 425 | 0.7326 | 0.8 | 0.3040 | 1.2058 | 0.8000 | 0.7783 | 0.2106 | 0.0857 |
| No log | 34.96 | 437 | 0.7317 | 0.8 | 0.3045 | 1.3068 | 0.8000 | 0.7733 | 0.2337 | 0.0843 |
| No log | 36.0 | 450 | 0.7345 | 0.805 | 0.3073 | 1.3065 | 0.805 | 0.7782 | 0.1928 | 0.0823 |
| No log | 36.96 | 462 | 0.7367 | 0.8 | 0.3074 | 1.3259 | 0.8000 | 0.7733 | 0.1941 | 0.0860 |
| No log | 38.0 | 475 | 0.7349 | 0.8 | 0.3073 | 1.3074 | 0.8000 | 0.7731 | 0.2138 | 0.0853 |
| No log | 38.96 | 487 | 0.7331 | 0.81 | 0.3057 | 1.3149 | 0.81 | 0.7909 | 0.1981 | 0.0865 |
| 0.1577 | 40.0 | 500 | 0.7269 | 0.8 | 0.3018 | 1.3700 | 0.8000 | 0.7746 | 0.2033 | 0.0865 |
| 0.1577 | 40.96 | 512 | 0.7270 | 0.8 | 0.3020 | 1.3687 | 0.8000 | 0.7737 | 0.2108 | 0.0860 |
| 0.1577 | 42.0 | 525 | 0.7356 | 0.805 | 0.3078 | 1.3105 | 0.805 | 0.7784 | 0.2053 | 0.0892 |
| 0.1577 | 42.96 | 537 | 0.7291 | 0.8 | 0.3031 | 1.3687 | 0.8000 | 0.7746 | 0.2066 | 0.0876 |
| 0.1577 | 44.0 | 550 | 0.7276 | 0.81 | 0.3034 | 1.3655 | 0.81 | 0.7844 | 0.2189 | 0.0872 |
| 0.1577 | 44.96 | 562 | 0.7318 | 0.805 | 0.3050 | 1.3684 | 0.805 | 0.7793 | 0.2209 | 0.0893 |
| 0.1577 | 46.0 | 575 | 0.7300 | 0.805 | 0.3041 | 1.3679 | 0.805 | 0.7793 | 0.2040 | 0.0885 |
| 0.1577 | 46.96 | 587 | 0.7342 | 0.805 | 0.3060 | 1.3679 | 0.805 | 0.7797 | 0.2059 | 0.0893 |
| 0.1577 | 48.0 | 600 | 0.7303 | 0.805 | 0.3045 | 1.3672 | 0.805 | 0.7797 | 0.1862 | 0.0889 |
| 0.1577 | 48.96 | 612 | 0.7401 | 0.8 | 0.3090 | 1.3710 | 0.8000 | 0.7746 | 0.1930 | 0.0915 |
| 0.1577 | 50.0 | 625 | 0.7329 | 0.795 | 0.3054 | 1.3696 | 0.795 | 0.7654 | 0.1984 | 0.0891 |
| 0.1577 | 50.96 | 637 | 0.7363 | 0.795 | 0.3072 | 1.3689 | 0.795 | 0.7654 | 0.2196 | 0.0907 |
| 0.1577 | 52.0 | 650 | 0.7402 | 0.805 | 0.3101 | 1.3646 | 0.805 | 0.7784 | 0.2028 | 0.0911 |
| 0.1577 | 52.96 | 662 | 0.7347 | 0.8 | 0.3065 | 1.3687 | 0.8000 | 0.7746 | 0.2062 | 0.0894 |
| 0.1577 | 54.0 | 675 | 0.7388 | 0.805 | 0.3097 | 1.3649 | 0.805 | 0.7784 | 0.2027 | 0.0907 |
| 0.1577 | 54.96 | 687 | 0.7381 | 0.8 | 0.3087 | 1.3681 | 0.8000 | 0.7704 | 0.2120 | 0.0908 |
| 0.1577 | 56.0 | 700 | 0.7372 | 0.805 | 0.3088 | 1.3646 | 0.805 | 0.7749 | 0.1866 | 0.0903 |
| 0.1577 | 56.96 | 712 | 0.7403 | 0.805 | 0.3102 | 1.3682 | 0.805 | 0.7749 | 0.2287 | 0.0922 |
| 0.1577 | 58.0 | 725 | 0.7352 | 0.8 | 0.3069 | 1.3680 | 0.8000 | 0.7704 | 0.2117 | 0.0900 |
| 0.1577 | 58.96 | 737 | 0.7373 | 0.8 | 0.3079 | 1.3699 | 0.8000 | 0.7704 | 0.1990 | 0.0923 |
| 0.1577 | 60.0 | 750 | 0.7353 | 0.795 | 0.3065 | 1.3690 | 0.795 | 0.7656 | 0.2078 | 0.0900 |
| 0.1577 | 60.96 | 762 | 0.7357 | 0.805 | 0.3071 | 1.3657 | 0.805 | 0.7732 | 0.2076 | 0.0899 |
| 0.1577 | 62.0 | 775 | 0.7409 | 0.79 | 0.3103 | 1.3737 | 0.79 | 0.7623 | 0.2066 | 0.0920 |
| 0.1577 | 62.96 | 787 | 0.7393 | 0.795 | 0.3082 | 1.4518 | 0.795 | 0.7670 | 0.2047 | 0.0912 |
| 0.1577 | 64.0 | 800 | 0.7417 | 0.8 | 0.3093 | 1.3304 | 0.8000 | 0.7684 | 0.1955 | 0.0917 |
| 0.1577 | 64.96 | 812 | 0.7438 | 0.8 | 0.3121 | 1.3714 | 0.8000 | 0.7707 | 0.1782 | 0.0920 |
| 0.1577 | 66.0 | 825 | 0.7408 | 0.8 | 0.3100 | 1.3758 | 0.8000 | 0.7709 | 0.1965 | 0.0931 |
| 0.1577 | 66.96 | 837 | 0.7434 | 0.8 | 0.3112 | 1.3767 | 0.8000 | 0.7707 | 0.2124 | 0.0935 |
| 0.1577 | 68.0 | 850 | 0.7393 | 0.8 | 0.3107 | 1.3038 | 0.8000 | 0.7704 | 0.1786 | 0.0901 |
| 0.1577 | 68.96 | 862 | 0.7383 | 0.8 | 0.3090 | 1.3689 | 0.8000 | 0.7704 | 0.2041 | 0.0913 |
| 0.1577 | 70.0 | 875 | 0.7436 | 0.8 | 0.3119 | 1.3658 | 0.8000 | 0.7704 | 0.1983 | 0.0932 |
| 0.1577 | 70.96 | 887 | 0.7463 | 0.8 | 0.3130 | 1.3700 | 0.8000 | 0.7707 | 0.1932 | 0.0947 |
| 0.1577 | 72.0 | 900 | 0.7464 | 0.795 | 0.3135 | 1.3720 | 0.795 | 0.7656 | 0.2089 | 0.0932 |
| 0.1577 | 72.96 | 912 | 0.7469 | 0.8 | 0.3137 | 1.3703 | 0.8000 | 0.7707 | 0.2004 | 0.0943 |
| 0.1577 | 74.0 | 925 | 0.7435 | 0.8 | 0.3124 | 1.3674 | 0.8000 | 0.7704 | 0.1958 | 0.0930 |
| 0.1577 | 74.96 | 937 | 0.7427 | 0.8 | 0.3117 | 1.3708 | 0.8000 | 0.7707 | 0.2224 | 0.0921 |
| 0.1577 | 76.0 | 950 | 0.7420 | 0.8 | 0.3111 | 1.3664 | 0.8000 | 0.7704 | 0.2145 | 0.0928 |
| 0.1577 | 76.96 | 962 | 0.7457 | 0.8 | 0.3135 | 1.3690 | 0.8000 | 0.7707 | 0.2178 | 0.0934 |
| 0.1577 | 78.0 | 975 | 0.7513 | 0.8 | 0.3163 | 1.3707 | 0.8000 | 0.7707 | 0.1964 | 0.0947 |
| 0.1577 | 78.96 | 987 | 0.7466 | 0.8 | 0.3139 | 1.3722 | 0.8000 | 0.7704 | 0.2001 | 0.0936 |
| 0.1081 | 80.0 | 1000 | 0.7491 | 0.8 | 0.3154 | 1.3712 | 0.8000 | 0.7707 | 0.2100 | 0.0943 |
| 0.1081 | 80.96 | 1012 | 0.7483 | 0.8 | 0.3150 | 1.3675 | 0.8000 | 0.7704 | 0.2083 | 0.0939 |
| 0.1081 | 82.0 | 1025 | 0.7523 | 0.8 | 0.3163 | 1.3742 | 0.8000 | 0.7707 | 0.2095 | 0.0958 |
| 0.1081 | 82.96 | 1037 | 0.7511 | 0.8 | 0.3166 | 1.3703 | 0.8000 | 0.7707 | 0.2034 | 0.0944 |
| 0.1081 | 84.0 | 1050 | 0.7481 | 0.8 | 0.3150 | 1.3687 | 0.8000 | 0.7704 | 0.2113 | 0.0941 |
| 0.1081 | 84.96 | 1062 | 0.7501 | 0.8 | 0.3164 | 1.3668 | 0.8000 | 0.7693 | 0.2053 | 0.0932 |
| 0.1081 | 86.0 | 1075 | 0.7539 | 0.8 | 0.3177 | 1.3725 | 0.8000 | 0.7707 | 0.2025 | 0.0951 |
| 0.1081 | 86.96 | 1087 | 0.7550 | 0.8 | 0.3182 | 1.3731 | 0.8000 | 0.7707 | 0.1969 | 0.0953 |
| 0.1081 | 88.0 | 1100 | 0.7553 | 0.8 | 0.3183 | 1.3697 | 0.8000 | 0.7707 | 0.1972 | 0.0952 |
| 0.1081 | 88.96 | 1112 | 0.7535 | 0.8 | 0.3176 | 1.3719 | 0.8000 | 0.7707 | 0.2073 | 0.0945 |
| 0.1081 | 90.0 | 1125 | 0.7558 | 0.795 | 0.3186 | 1.3742 | 0.795 | 0.7681 | 0.2018 | 0.0959 |
| 0.1081 | 90.96 | 1137 | 0.7573 | 0.8 | 0.3193 | 1.3739 | 0.8000 | 0.7704 | 0.1919 | 0.0965 |
| 0.1081 | 92.0 | 1150 | 0.7565 | 0.8 | 0.3193 | 1.3743 | 0.8000 | 0.7698 | 0.1967 | 0.0959 |
| 0.1081 | 92.96 | 1162 | 0.7619 | 0.795 | 0.3218 | 1.3758 | 0.795 | 0.7681 | 0.1989 | 0.0974 |
| 0.1081 | 94.0 | 1175 | 0.7577 | 0.8 | 0.3198 | 1.3793 | 0.8000 | 0.7696 | 0.1996 | 0.0957 |
| 0.1081 | 94.96 | 1187 | 0.7575 | 0.795 | 0.3201 | 1.3781 | 0.795 | 0.7666 | 0.1954 | 0.0964 |
| 0.1081 | 96.0 | 1200 | 0.7573 | 0.8 | 0.3199 | 1.3752 | 0.8000 | 0.7693 | 0.1863 | 0.0955 |
| 0.1081 | 96.96 | 1212 | 0.7615 | 0.795 | 0.3216 | 1.3753 | 0.795 | 0.7681 | 0.1997 | 0.0975 |
| 0.1081 | 98.0 | 1225 | 0.7603 | 0.795 | 0.3215 | 1.3731 | 0.795 | 0.7681 | 0.2051 | 0.0963 |
| 0.1081 | 98.96 | 1237 | 0.7596 | 0.795 | 0.3209 | 1.3744 | 0.795 | 0.7673 | 0.2081 | 0.0959 |
| 0.1081 | 100.0 | 1250 | 0.7582 | 0.795 | 0.3203 | 1.3743 | 0.795 | 0.7673 | 0.2024 | 0.0955 |
| 0.1081 | 100.96 | 1262 | 0.7609 | 0.795 | 0.3223 | 1.3761 | 0.795 | 0.7681 | 0.1823 | 0.0968 |
| 0.1081 | 102.0 | 1275 | 0.7632 | 0.785 | 0.3233 | 1.3758 | 0.785 | 0.7528 | 0.1833 | 0.0970 |
| 0.1081 | 102.96 | 1287 | 0.7618 | 0.785 | 0.3219 | 1.3785 | 0.785 | 0.7516 | 0.2141 | 0.0970 |
| 0.1081 | 104.0 | 1300 | 0.7633 | 0.795 | 0.3230 | 1.4970 | 0.795 | 0.7664 | 0.1956 | 0.0952 |
| 0.1081 | 104.96 | 1312 | 0.7657 | 0.79 | 0.3243 | 1.4406 | 0.79 | 0.7639 | 0.1960 | 0.0961 |
| 0.1081 | 106.0 | 1325 | 0.7673 | 0.785 | 0.3251 | 1.4424 | 0.785 | 0.7516 | 0.2083 | 0.0978 |
| 0.1081 | 106.96 | 1337 | 0.7667 | 0.79 | 0.3250 | 1.4392 | 0.79 | 0.7639 | 0.1875 | 0.0976 |
| 0.1081 | 108.0 | 1350 | 0.7690 | 0.785 | 0.3250 | 1.3876 | 0.785 | 0.7526 | 0.2078 | 0.0990 |
| 0.1081 | 108.96 | 1362 | 0.7676 | 0.785 | 0.3252 | 1.3872 | 0.785 | 0.7554 | 0.2073 | 0.0985 |
| 0.1081 | 110.0 | 1375 | 0.7662 | 0.79 | 0.3249 | 1.4335 | 0.79 | 0.7639 | 0.1939 | 0.0980 |
| 0.1081 | 110.96 | 1387 | 0.7723 | 0.785 | 0.3273 | 1.4567 | 0.785 | 0.7554 | 0.2066 | 0.0995 |
| 0.1081 | 112.0 | 1400 | 0.7665 | 0.78 | 0.3250 | 1.3960 | 0.78 | 0.7488 | 0.2066 | 0.0976 |
| 0.1081 | 112.96 | 1412 | 0.7722 | 0.785 | 0.3275 | 1.4410 | 0.785 | 0.7573 | 0.2063 | 0.0991 |
| 0.1081 | 114.0 | 1425 | 0.7722 | 0.79 | 0.3271 | 1.4039 | 0.79 | 0.7639 | 0.1902 | 0.0990 |
| 0.1081 | 114.96 | 1437 | 0.7699 | 0.79 | 0.3264 | 1.3849 | 0.79 | 0.7644 | 0.1914 | 0.0982 |
| 0.1081 | 116.0 | 1450 | 0.7749 | 0.785 | 0.3285 | 1.3854 | 0.785 | 0.7573 | 0.1942 | 0.0999 |
| 0.1081 | 116.96 | 1462 | 0.7722 | 0.78 | 0.3279 | 1.4365 | 0.78 | 0.7488 | 0.1973 | 0.0991 |
| 0.1081 | 118.0 | 1475 | 0.7763 | 0.78 | 0.3293 | 1.3823 | 0.78 | 0.7488 | 0.2050 | 0.1006 |
| 0.1081 | 118.96 | 1487 | 0.7740 | 0.78 | 0.3287 | 1.3822 | 0.78 | 0.7488 | 0.2105 | 0.0991 |
| 0.0821 | 120.0 | 1500 | 0.7761 | 0.785 | 0.3294 | 1.4414 | 0.785 | 0.7573 | 0.1996 | 0.0995 |
| 0.0821 | 120.96 | 1512 | 0.7749 | 0.78 | 0.3289 | 1.4387 | 0.78 | 0.7488 | 0.1981 | 0.0991 |
| 0.0821 | 122.0 | 1525 | 0.7763 | 0.78 | 0.3297 | 1.4395 | 0.78 | 0.7488 | 0.2175 | 0.0993 |
| 0.0821 | 122.96 | 1537 | 0.7775 | 0.78 | 0.3305 | 1.4407 | 0.78 | 0.7488 | 0.2073 | 0.0993 |
| 0.0821 | 124.0 | 1550 | 0.7770 | 0.78 | 0.3299 | 1.4411 | 0.78 | 0.7488 | 0.2096 | 0.0996 |
| 0.0821 | 124.96 | 1562 | 0.7785 | 0.78 | 0.3309 | 1.4415 | 0.78 | 0.7488 | 0.2174 | 0.1004 |
| 0.0821 | 126.0 | 1575 | 0.7808 | 0.78 | 0.3321 | 1.4431 | 0.78 | 0.7488 | 0.2082 | 0.1005 |
| 0.0821 | 126.96 | 1587 | 0.7791 | 0.78 | 0.3312 | 1.4405 | 0.78 | 0.7488 | 0.2087 | 0.0998 |
| 0.0821 | 128.0 | 1600 | 0.7789 | 0.78 | 0.3312 | 1.4386 | 0.78 | 0.7488 | 0.2047 | 0.0995 |
| 0.0821 | 128.96 | 1612 | 0.7829 | 0.78 | 0.3330 | 1.4423 | 0.78 | 0.7488 | 0.1920 | 0.1005 |
| 0.0821 | 130.0 | 1625 | 0.7797 | 0.78 | 0.3317 | 1.4400 | 0.78 | 0.7488 | 0.2013 | 0.1006 |
| 0.0821 | 130.96 | 1637 | 0.7849 | 0.78 | 0.3336 | 1.4446 | 0.78 | 0.7491 | 0.2064 | 0.1006 |
| 0.0821 | 132.0 | 1650 | 0.7817 | 0.78 | 0.3322 | 1.4396 | 0.78 | 0.7488 | 0.2060 | 0.1003 |
| 0.0821 | 132.96 | 1662 | 0.7823 | 0.78 | 0.3329 | 1.4407 | 0.78 | 0.7488 | 0.1990 | 0.0999 |
| 0.0821 | 134.0 | 1675 | 0.7869 | 0.78 | 0.3354 | 1.4482 | 0.78 | 0.7488 | 0.1999 | 0.1009 |
| 0.0821 | 134.96 | 1687 | 0.7859 | 0.78 | 0.3349 | 1.4429 | 0.78 | 0.7488 | 0.1934 | 0.1013 |
| 0.0821 | 136.0 | 1700 | 0.7867 | 0.78 | 0.3352 | 1.4437 | 0.78 | 0.7488 | 0.2114 | 0.1006 |
| 0.0821 | 136.96 | 1712 | 0.7867 | 0.78 | 0.3350 | 1.4403 | 0.78 | 0.7488 | 0.2070 | 0.1011 |
| 0.0821 | 138.0 | 1725 | 0.7851 | 0.78 | 0.3341 | 1.4439 | 0.78 | 0.7488 | 0.1906 | 0.1009 |
| 0.0821 | 138.96 | 1737 | 0.7892 | 0.78 | 0.3360 | 1.4495 | 0.78 | 0.7488 | 0.2009 | 0.1020 |
| 0.0821 | 140.0 | 1750 | 0.7893 | 0.78 | 0.3366 | 1.4434 | 0.78 | 0.7488 | 0.1976 | 0.1013 |
| 0.0821 | 140.96 | 1762 | 0.7848 | 0.78 | 0.3344 | 1.4383 | 0.78 | 0.7488 | 0.1995 | 0.1001 |
| 0.0821 | 142.0 | 1775 | 0.7911 | 0.78 | 0.3372 | 1.4487 | 0.78 | 0.7488 | 0.1995 | 0.1020 |
| 0.0821 | 142.96 | 1787 | 0.7890 | 0.78 | 0.3362 | 1.4416 | 0.78 | 0.7488 | 0.2075 | 0.1010 |
| 0.0821 | 144.0 | 1800 | 0.7915 | 0.78 | 0.3372 | 1.4476 | 0.78 | 0.7488 | 0.1842 | 0.1019 |
| 0.0821 | 144.96 | 1812 | 0.7876 | 0.78 | 0.3351 | 1.4999 | 0.78 | 0.7488 | 0.1904 | 0.0995 |
| 0.0821 | 146.0 | 1825 | 0.7933 | 0.78 | 0.3378 | 1.4469 | 0.78 | 0.7488 | 0.1973 | 0.1023 |
| 0.0821 | 146.96 | 1837 | 0.7932 | 0.78 | 0.3383 | 1.4441 | 0.78 | 0.7488 | 0.2070 | 0.1016 |
| 0.0821 | 148.0 | 1850 | 0.7907 | 0.78 | 0.3369 | 1.4439 | 0.78 | 0.7488 | 0.1932 | 0.1014 |
| 0.0821 | 148.96 | 1862 | 0.7939 | 0.78 | 0.3386 | 1.4462 | 0.78 | 0.7488 | 0.1906 | 0.1015 |
| 0.0821 | 150.0 | 1875 | 0.7943 | 0.78 | 0.3386 | 1.4449 | 0.78 | 0.7488 | 0.1965 | 0.1016 |
| 0.0821 | 150.96 | 1887 | 0.7955 | 0.78 | 0.3393 | 1.5025 | 0.78 | 0.7488 | 0.2112 | 0.1015 |
| 0.0821 | 152.0 | 1900 | 0.7936 | 0.78 | 0.3386 | 1.4407 | 0.78 | 0.7488 | 0.2112 | 0.1012 |
| 0.0821 | 152.96 | 1912 | 0.7966 | 0.78 | 0.3400 | 1.5033 | 0.78 | 0.7488 | 0.1963 | 0.1012 |
| 0.0821 | 154.0 | 1925 | 0.7981 | 0.78 | 0.3405 | 1.4495 | 0.78 | 0.7488 | 0.1895 | 0.1020 |
| 0.0821 | 154.96 | 1937 | 0.7972 | 0.78 | 0.3401 | 1.4417 | 0.78 | 0.7488 | 0.1953 | 0.1018 |
| 0.0821 | 156.0 | 1950 | 0.7922 | 0.78 | 0.3381 | 1.4395 | 0.78 | 0.7488 | 0.2056 | 0.0999 |
| 0.0821 | 156.96 | 1962 | 0.8013 | 0.775 | 0.3425 | 1.4473 | 0.775 | 0.7451 | 0.1869 | 0.1028 |
| 0.0821 | 158.0 | 1975 | 0.7977 | 0.78 | 0.3403 | 1.4446 | 0.78 | 0.7488 | 0.1872 | 0.1014 |
| 0.0821 | 158.96 | 1987 | 0.7990 | 0.78 | 0.3412 | 1.4413 | 0.78 | 0.7488 | 0.1939 | 0.1017 |
| 0.0668 | 160.0 | 2000 | 0.8048 | 0.775 | 0.3435 | 1.4532 | 0.775 | 0.7451 | 0.1966 | 0.1049 |
| 0.0668 | 160.96 | 2012 | 0.8064 | 0.77 | 0.3448 | 1.4529 | 0.7700 | 0.7358 | 0.1953 | 0.1044 |
| 0.0668 | 162.0 | 2025 | 0.7989 | 0.78 | 0.3412 | 1.4423 | 0.78 | 0.7488 | 0.2038 | 0.1022 |
| 0.0668 | 162.96 | 2037 | 0.8001 | 0.78 | 0.3414 | 1.4440 | 0.78 | 0.7488 | 0.1972 | 0.1015 |
| 0.0668 | 164.0 | 2050 | 0.8068 | 0.775 | 0.3448 | 1.4523 | 0.775 | 0.7396 | 0.2031 | 0.1036 |
| 0.0668 | 164.96 | 2062 | 0.8046 | 0.785 | 0.3438 | 1.4475 | 0.785 | 0.7536 | 0.2070 | 0.1037 |
| 0.0668 | 166.0 | 2075 | 0.8016 | 0.78 | 0.3426 | 1.4451 | 0.78 | 0.7488 | 0.1975 | 0.1012 |
| 0.0668 | 166.96 | 2087 | 0.8053 | 0.78 | 0.3442 | 1.4485 | 0.78 | 0.7477 | 0.2112 | 0.1022 |
| 0.0668 | 168.0 | 2100 | 0.8040 | 0.78 | 0.3433 | 1.4459 | 0.78 | 0.7422 | 0.2014 | 0.1031 |
| 0.0668 | 168.96 | 2112 | 0.8048 | 0.785 | 0.3437 | 1.4479 | 0.785 | 0.7515 | 0.2046 | 0.1033 |
| 0.0668 | 170.0 | 2125 | 0.8054 | 0.775 | 0.3447 | 1.5060 | 0.775 | 0.7450 | 0.1896 | 0.1017 |
| 0.0668 | 170.96 | 2137 | 0.8067 | 0.775 | 0.3451 | 1.5079 | 0.775 | 0.7450 | 0.1898 | 0.1018 |
| 0.0668 | 172.0 | 2150 | 0.8060 | 0.78 | 0.3447 | 1.4508 | 0.78 | 0.7488 | 0.1842 | 0.1022 |
| 0.0668 | 172.96 | 2162 | 0.8127 | 0.77 | 0.3484 | 1.4513 | 0.7700 | 0.7358 | 0.2006 | 0.1042 |
| 0.0668 | 174.0 | 2175 | 0.8080 | 0.77 | 0.3457 | 1.4453 | 0.7700 | 0.7349 | 0.2198 | 0.1034 |
| 0.0668 | 174.96 | 2187 | 0.8095 | 0.775 | 0.3460 | 1.4471 | 0.775 | 0.7384 | 0.2029 | 0.1027 |
| 0.0668 | 176.0 | 2200 | 0.8112 | 0.775 | 0.3467 | 1.4559 | 0.775 | 0.7395 | 0.1995 | 0.1036 |
| 0.0668 | 176.96 | 2212 | 0.8089 | 0.77 | 0.3460 | 1.4485 | 0.7700 | 0.7357 | 0.2050 | 0.1019 |
| 0.0668 | 178.0 | 2225 | 0.8093 | 0.77 | 0.3461 | 1.4459 | 0.7700 | 0.7357 | 0.1989 | 0.1021 |
| 0.0668 | 178.96 | 2237 | 0.8118 | 0.775 | 0.3473 | 1.4499 | 0.775 | 0.7384 | 0.2085 | 0.1029 |
| 0.0668 | 180.0 | 2250 | 0.8112 | 0.775 | 0.3472 | 1.4471 | 0.775 | 0.7384 | 0.2070 | 0.1027 |
| 0.0668 | 180.96 | 2262 | 0.8124 | 0.77 | 0.3478 | 1.4484 | 0.7700 | 0.7357 | 0.1983 | 0.1029 |
| 0.0668 | 182.0 | 2275 | 0.8140 | 0.77 | 0.3484 | 1.4489 | 0.7700 | 0.7357 | 0.1987 | 0.1038 |
| 0.0668 | 182.96 | 2287 | 0.8137 | 0.77 | 0.3483 | 1.4491 | 0.7700 | 0.7357 | 0.2036 | 0.1030 |
| 0.0668 | 184.0 | 2300 | 0.8133 | 0.77 | 0.3481 | 1.4468 | 0.7700 | 0.7357 | 0.2012 | 0.1024 |
| 0.0668 | 184.96 | 2312 | 0.8152 | 0.77 | 0.3489 | 1.4525 | 0.7700 | 0.7357 | 0.1996 | 0.1029 |
| 0.0668 | 186.0 | 2325 | 0.8149 | 0.77 | 0.3490 | 1.4511 | 0.7700 | 0.7357 | 0.1917 | 0.1027 |
| 0.0668 | 186.96 | 2337 | 0.8151 | 0.77 | 0.3490 | 1.4489 | 0.7700 | 0.7357 | 0.1956 | 0.1028 |
| 0.0668 | 188.0 | 2350 | 0.8175 | 0.77 | 0.3500 | 1.5084 | 0.7700 | 0.7357 | 0.2011 | 0.1038 |
| 0.0668 | 188.96 | 2362 | 0.8181 | 0.765 | 0.3499 | 1.4506 | 0.765 | 0.7323 | 0.1975 | 0.1056 |
| 0.0668 | 190.0 | 2375 | 0.8180 | 0.765 | 0.3504 | 1.4499 | 0.765 | 0.7323 | 0.2162 | 0.1050 |
| 0.0668 | 190.96 | 2387 | 0.8168 | 0.77 | 0.3498 | 1.4510 | 0.7700 | 0.7357 | 0.2014 | 0.1039 |
| 0.0668 | 192.0 | 2400 | 0.8183 | 0.77 | 0.3505 | 1.4483 | 0.7700 | 0.7379 | 0.2114 | 0.1032 |
| 0.0668 | 192.96 | 2412 | 0.8193 | 0.775 | 0.3507 | 1.4508 | 0.775 | 0.7384 | 0.2025 | 0.1042 |
| 0.0668 | 194.0 | 2425 | 0.8181 | 0.77 | 0.3503 | 1.4565 | 0.7700 | 0.7357 | 0.2090 | 0.1027 |
| 0.0668 | 194.96 | 2437 | 0.8192 | 0.77 | 0.3507 | 1.4513 | 0.7700 | 0.7357 | 0.1953 | 0.1032 |
| 0.0668 | 196.0 | 2450 | 0.8214 | 0.77 | 0.3520 | 1.4519 | 0.7700 | 0.7349 | 0.2112 | 0.1045 |
| 0.0668 | 196.96 | 2462 | 0.8231 | 0.765 | 0.3531 | 1.4517 | 0.765 | 0.7323 | 0.2042 | 0.1049 |
| 0.0668 | 198.0 | 2475 | 0.8219 | 0.77 | 0.3521 | 1.4512 | 0.7700 | 0.7349 | 0.2152 | 0.1044 |
| 0.0668 | 198.96 | 2487 | 0.8223 | 0.77 | 0.3523 | 1.4507 | 0.7700 | 0.7349 | 0.1888 | 0.1050 |
| 0.0571 | 200.0 | 2500 | 0.8235 | 0.77 | 0.3529 | 1.4533 | 0.7700 | 0.7349 | 0.2029 | 0.1050 |
| 0.0571 | 200.96 | 2512 | 0.8227 | 0.77 | 0.3525 | 1.4718 | 0.7700 | 0.7357 | 0.2170 | 0.1033 |
| 0.0571 | 202.0 | 2525 | 0.8226 | 0.77 | 0.3525 | 1.4505 | 0.7700 | 0.7349 | 0.1954 | 0.1041 |
| 0.0571 | 202.96 | 2537 | 0.8231 | 0.765 | 0.3530 | 1.4506 | 0.765 | 0.7321 | 0.1962 | 0.1046 |
| 0.0571 | 204.0 | 2550 | 0.8255 | 0.77 | 0.3535 | 1.4520 | 0.7700 | 0.7380 | 0.2078 | 0.1060 |
| 0.0571 | 204.96 | 2562 | 0.8276 | 0.77 | 0.3550 | 1.4594 | 0.7700 | 0.7349 | 0.2013 | 0.1046 |
| 0.0571 | 206.0 | 2575 | 0.8257 | 0.77 | 0.3542 | 1.4532 | 0.7700 | 0.7349 | 0.1987 | 0.1040 |
| 0.0571 | 206.96 | 2587 | 0.8248 | 0.775 | 0.3536 | 1.4499 | 0.775 | 0.7406 | 0.1903 | 0.1043 |
| 0.0571 | 208.0 | 2600 | 0.8250 | 0.77 | 0.3534 | 1.4537 | 0.7700 | 0.7349 | 0.2070 | 0.1040 |
| 0.0571 | 208.96 | 2612 | 0.8277 | 0.77 | 0.3548 | 1.4521 | 0.7700 | 0.7380 | 0.1867 | 0.1058 |
| 0.0571 | 210.0 | 2625 | 0.8271 | 0.77 | 0.3545 | 1.4543 | 0.7700 | 0.7349 | 0.2213 | 0.1036 |
| 0.0571 | 210.96 | 2637 | 0.8284 | 0.775 | 0.3552 | 1.4516 | 0.775 | 0.7406 | 0.1992 | 0.1053 |
| 0.0571 | 212.0 | 2650 | 0.8278 | 0.77 | 0.3545 | 1.4533 | 0.7700 | 0.7360 | 0.1938 | 0.1056 |
| 0.0571 | 212.96 | 2662 | 0.8289 | 0.77 | 0.3552 | 1.4533 | 0.7700 | 0.7380 | 0.2017 | 0.1057 |
| 0.0571 | 214.0 | 2675 | 0.8290 | 0.775 | 0.3556 | 1.4530 | 0.775 | 0.7406 | 0.2005 | 0.1052 |
| 0.0571 | 214.96 | 2687 | 0.8282 | 0.77 | 0.3551 | 1.4517 | 0.7700 | 0.7379 | 0.1985 | 0.1037 |
| 0.0571 | 216.0 | 2700 | 0.8294 | 0.77 | 0.3555 | 1.4588 | 0.7700 | 0.7349 | 0.1941 | 0.1045 |
| 0.0571 | 216.96 | 2712 | 0.8305 | 0.775 | 0.3562 | 1.4516 | 0.775 | 0.7406 | 0.1977 | 0.1057 |
| 0.0571 | 218.0 | 2725 | 0.8310 | 0.77 | 0.3565 | 1.4539 | 0.7700 | 0.7380 | 0.1926 | 0.1054 |
| 0.0571 | 218.96 | 2737 | 0.8304 | 0.775 | 0.3560 | 1.4516 | 0.775 | 0.7406 | 0.1986 | 0.1054 |
| 0.0571 | 220.0 | 2750 | 0.8320 | 0.775 | 0.3568 | 1.4545 | 0.775 | 0.7406 | 0.1953 | 0.1054 |
| 0.0571 | 220.96 | 2762 | 0.8316 | 0.775 | 0.3569 | 1.4523 | 0.775 | 0.7406 | 0.1945 | 0.1045 |
| 0.0571 | 222.0 | 2775 | 0.8330 | 0.77 | 0.3573 | 1.4547 | 0.7700 | 0.7380 | 0.1892 | 0.1067 |
| 0.0571 | 222.96 | 2787 | 0.8309 | 0.77 | 0.3563 | 1.4548 | 0.7700 | 0.7379 | 0.2060 | 0.1033 |
| 0.0571 | 224.0 | 2800 | 0.8323 | 0.775 | 0.3572 | 1.4515 | 0.775 | 0.7406 | 0.1910 | 0.1050 |
| 0.0571 | 224.96 | 2812 | 0.8329 | 0.775 | 0.3569 | 1.4530 | 0.775 | 0.7406 | 0.1931 | 0.1055 |
| 0.0571 | 226.0 | 2825 | 0.8319 | 0.78 | 0.3567 | 1.4513 | 0.78 | 0.7444 | 0.2038 | 0.1043 |
| 0.0571 | 226.96 | 2837 | 0.8354 | 0.77 | 0.3586 | 1.4556 | 0.7700 | 0.7380 | 0.1969 | 0.1068 |
| 0.0571 | 228.0 | 2850 | 0.8340 | 0.78 | 0.3575 | 1.4550 | 0.78 | 0.7444 | 0.2043 | 0.1062 |
| 0.0571 | 228.96 | 2862 | 0.8355 | 0.775 | 0.3584 | 1.4546 | 0.775 | 0.7406 | 0.2048 | 0.1055 |
| 0.0571 | 230.0 | 2875 | 0.8350 | 0.78 | 0.3579 | 1.4538 | 0.78 | 0.7444 | 0.2069 | 0.1064 |
| 0.0571 | 230.96 | 2887 | 0.8358 | 0.77 | 0.3584 | 1.4550 | 0.7700 | 0.7380 | 0.1899 | 0.1061 |
| 0.0571 | 232.0 | 2900 | 0.8366 | 0.77 | 0.3587 | 1.4564 | 0.7700 | 0.7380 | 0.1921 | 0.1070 |
| 0.0571 | 232.96 | 2912 | 0.8364 | 0.775 | 0.3587 | 1.4557 | 0.775 | 0.7418 | 0.1970 | 0.1065 |
| 0.0571 | 234.0 | 2925 | 0.8359 | 0.775 | 0.3585 | 1.4543 | 0.775 | 0.7406 | 0.1912 | 0.1061 |
| 0.0571 | 234.96 | 2937 | 0.8360 | 0.775 | 0.3587 | 1.4540 | 0.775 | 0.7406 | 0.2017 | 0.1049 |
| 0.0571 | 236.0 | 2950 | 0.8362 | 0.78 | 0.3587 | 1.4527 | 0.78 | 0.7444 | 0.1985 | 0.1060 |
| 0.0571 | 236.96 | 2962 | 0.8375 | 0.78 | 0.3593 | 1.4554 | 0.78 | 0.7444 | 0.2035 | 0.1061 |
| 0.0571 | 238.0 | 2975 | 0.8378 | 0.775 | 0.3593 | 1.4544 | 0.775 | 0.7418 | 0.1971 | 0.1068 |
| 0.0571 | 238.96 | 2987 | 0.8369 | 0.78 | 0.3588 | 1.4557 | 0.78 | 0.7444 | 0.2178 | 0.1057 |
| 0.0512 | 240.0 | 3000 | 0.8388 | 0.77 | 0.3600 | 1.4558 | 0.7700 | 0.7380 | 0.1939 | 0.1067 |
| 0.0512 | 240.96 | 3012 | 0.8375 | 0.78 | 0.3593 | 1.4540 | 0.78 | 0.7444 | 0.2071 | 0.1058 |
| 0.0512 | 242.0 | 3025 | 0.8393 | 0.775 | 0.3602 | 1.4546 | 0.775 | 0.7406 | 0.1990 | 0.1066 |
| 0.0512 | 242.96 | 3037 | 0.8391 | 0.775 | 0.3601 | 1.4551 | 0.775 | 0.7406 | 0.2025 | 0.1063 |
| 0.0512 | 244.0 | 3050 | 0.8414 | 0.77 | 0.3610 | 1.4575 | 0.7700 | 0.7380 | 0.1924 | 0.1072 |
| 0.0512 | 244.96 | 3062 | 0.8385 | 0.78 | 0.3597 | 1.4531 | 0.78 | 0.7444 | 0.2062 | 0.1059 |
| 0.0512 | 246.0 | 3075 | 0.8394 | 0.78 | 0.3603 | 1.4583 | 0.78 | 0.7444 | 0.1962 | 0.1057 |
| 0.0512 | 246.96 | 3087 | 0.8401 | 0.775 | 0.3604 | 1.4535 | 0.775 | 0.7406 | 0.1880 | 0.1060 |
| 0.0512 | 248.0 | 3100 | 0.8400 | 0.78 | 0.3605 | 1.4550 | 0.78 | 0.7444 | 0.2156 | 0.1058 |
| 0.0512 | 248.96 | 3112 | 0.8404 | 0.78 | 0.3606 | 1.4554 | 0.78 | 0.7444 | 0.1977 | 0.1061 |
| 0.0512 | 250.0 | 3125 | 0.8406 | 0.78 | 0.3607 | 1.4542 | 0.78 | 0.7444 | 0.2055 | 0.1062 |
| 0.0512 | 250.96 | 3137 | 0.8408 | 0.78 | 0.3608 | 1.4545 | 0.78 | 0.7444 | 0.2036 | 0.1062 |
| 0.0512 | 252.0 | 3150 | 0.8414 | 0.78 | 0.3611 | 1.4560 | 0.78 | 0.7444 | 0.2054 | 0.1063 |
| 0.0512 | 252.96 | 3162 | 0.8424 | 0.775 | 0.3614 | 1.4580 | 0.775 | 0.7418 | 0.2037 | 0.1072 |
| 0.0512 | 254.0 | 3175 | 0.8423 | 0.775 | 0.3616 | 1.4558 | 0.775 | 0.7406 | 0.2057 | 0.1064 |
| 0.0512 | 254.96 | 3187 | 0.8422 | 0.775 | 0.3613 | 1.4562 | 0.775 | 0.7418 | 0.2070 | 0.1066 |
| 0.0512 | 256.0 | 3200 | 0.8419 | 0.78 | 0.3612 | 1.4562 | 0.78 | 0.7444 | 0.2196 | 0.1063 |
| 0.0512 | 256.96 | 3212 | 0.8434 | 0.775 | 0.3620 | 1.4565 | 0.775 | 0.7406 | 0.2033 | 0.1065 |
| 0.0512 | 258.0 | 3225 | 0.8431 | 0.775 | 0.3619 | 1.4557 | 0.775 | 0.7418 | 0.2072 | 0.1064 |
| 0.0512 | 258.96 | 3237 | 0.8435 | 0.77 | 0.3620 | 1.4567 | 0.7700 | 0.7380 | 0.1985 | 0.1066 |
| 0.0512 | 260.0 | 3250 | 0.8433 | 0.78 | 0.3619 | 1.4567 | 0.78 | 0.7444 | 0.2179 | 0.1065 |
| 0.0512 | 260.96 | 3262 | 0.8430 | 0.78 | 0.3619 | 1.4558 | 0.78 | 0.7444 | 0.2120 | 0.1060 |
| 0.0512 | 262.0 | 3275 | 0.8432 | 0.78 | 0.3619 | 1.4552 | 0.78 | 0.7444 | 0.2058 | 0.1060 |
| 0.0512 | 262.96 | 3287 | 0.8444 | 0.775 | 0.3623 | 1.4572 | 0.775 | 0.7418 | 0.2035 | 0.1068 |
| 0.0512 | 264.0 | 3300 | 0.8442 | 0.775 | 0.3622 | 1.4574 | 0.775 | 0.7418 | 0.2054 | 0.1067 |
| 0.0512 | 264.96 | 3312 | 0.8441 | 0.78 | 0.3623 | 1.4554 | 0.78 | 0.7444 | 0.2051 | 0.1062 |
| 0.0512 | 266.0 | 3325 | 0.8446 | 0.775 | 0.3624 | 1.4561 | 0.775 | 0.7418 | 0.1975 | 0.1066 |
| 0.0512 | 266.96 | 3337 | 0.8447 | 0.775 | 0.3624 | 1.4570 | 0.775 | 0.7418 | 0.2053 | 0.1065 |
| 0.0512 | 268.0 | 3350 | 0.8448 | 0.78 | 0.3624 | 1.4573 | 0.78 | 0.7444 | 0.2085 | 0.1065 |
| 0.0512 | 268.96 | 3362 | 0.8443 | 0.78 | 0.3624 | 1.4558 | 0.78 | 0.7444 | 0.2119 | 0.1065 |
| 0.0512 | 270.0 | 3375 | 0.8453 | 0.775 | 0.3628 | 1.4571 | 0.775 | 0.7418 | 0.2035 | 0.1067 |
| 0.0512 | 270.96 | 3387 | 0.8444 | 0.78 | 0.3623 | 1.4561 | 0.78 | 0.7444 | 0.2076 | 0.1063 |
| 0.0512 | 272.0 | 3400 | 0.8455 | 0.775 | 0.3629 | 1.4569 | 0.775 | 0.7418 | 0.2034 | 0.1066 |
| 0.0512 | 272.96 | 3412 | 0.8453 | 0.78 | 0.3628 | 1.4574 | 0.78 | 0.7444 | 0.2021 | 0.1065 |
| 0.0512 | 274.0 | 3425 | 0.8450 | 0.78 | 0.3626 | 1.4560 | 0.78 | 0.7444 | 0.2058 | 0.1064 |
| 0.0512 | 274.96 | 3437 | 0.8456 | 0.775 | 0.3629 | 1.4569 | 0.775 | 0.7418 | 0.2035 | 0.1066 |
| 0.0512 | 276.0 | 3450 | 0.8454 | 0.775 | 0.3628 | 1.4565 | 0.775 | 0.7418 | 0.2033 | 0.1065 |
| 0.0512 | 276.96 | 3462 | 0.8454 | 0.78 | 0.3628 | 1.4575 | 0.78 | 0.7444 | 0.2137 | 0.1063 |
| 0.0512 | 278.0 | 3475 | 0.8457 | 0.78 | 0.3630 | 1.4567 | 0.78 | 0.7444 | 0.2092 | 0.1065 |
| 0.0512 | 278.96 | 3487 | 0.8462 | 0.775 | 0.3632 | 1.4567 | 0.775 | 0.7418 | 0.1994 | 0.1067 |
| 0.0481 | 280.0 | 3500 | 0.8456 | 0.78 | 0.3630 | 1.4572 | 0.78 | 0.7444 | 0.2192 | 0.1064 |
| 0.0481 | 280.96 | 3512 | 0.8462 | 0.775 | 0.3632 | 1.4571 | 0.775 | 0.7418 | 0.2034 | 0.1066 |
| 0.0481 | 282.0 | 3525 | 0.8457 | 0.775 | 0.3630 | 1.4563 | 0.775 | 0.7418 | 0.2042 | 0.1065 |
| 0.0481 | 282.96 | 3537 | 0.8460 | 0.775 | 0.3631 | 1.4570 | 0.775 | 0.7418 | 0.2106 | 0.1066 |
| 0.0481 | 284.0 | 3550 | 0.8462 | 0.775 | 0.3632 | 1.4570 | 0.775 | 0.7418 | 0.2106 | 0.1067 |
| 0.0481 | 284.96 | 3562 | 0.8460 | 0.775 | 0.3631 | 1.4567 | 0.775 | 0.7418 | 0.2042 | 0.1065 |
| 0.0481 | 286.0 | 3575 | 0.8461 | 0.775 | 0.3632 | 1.4568 | 0.775 | 0.7418 | 0.2043 | 0.1066 |
| 0.0481 | 286.96 | 3587 | 0.8461 | 0.775 | 0.3632 | 1.4570 | 0.775 | 0.7418 | 0.2043 | 0.1066 |
| 0.0481 | 288.0 | 3600 | 0.8461 | 0.775 | 0.3632 | 1.4570 | 0.775 | 0.7418 | 0.2043 | 0.1066 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
plncmm/roberta-clinical-wl-es
|
plncmm
| 2023-07-13T13:16:20Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-07T22:52:54Z |
---
license: apache-2.0
language:
- es
widget:
- text: "Periodontitis <mask> generalizada severa."
- text: "Caries dentinaria <mask>."
- text: "Movilidad aumentada en pza <mask>."
- text: "Pcte con dm en tto con <mask>."
- text: "Pcte con erc en tto con <mask>."
tags:
- generated_from_trainer
model-index:
- name: roberta-clinical-wl-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plncmm/roberta-clinical-wl-es
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the Chilean waiting list dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
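As a minimal usage sketch (assuming the standard `transformers` fill-mask pipeline; the prompt is one of the widget examples above):
```python
from transformers import pipeline

# Minimal sketch: load the checkpoint as a fill-mask pipeline
fill_mask = pipeline("fill-mask", model="plncmm/roberta-clinical-wl-es")

# One of the card's widget prompts: predict the masked clinical term
for pred in fill_mask("Periodontitis <mask> generalizada severa."):
    print(pred["token_str"], round(pred["score"], 3))
```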
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
avnishkr/falcon-QAMaster
|
avnishkr
| 2023-07-13T13:04:48Z | 10 | 4 |
adapter-transformers
|
[
"adapter-transformers",
"falcon",
"QLoRA",
"Adapters",
"llms",
"Transformers",
"Fine-Tuning",
"PEFT",
"SFTTrainer",
"Open-Source",
"LoRA",
"Attention",
"code",
"Falcon-7b",
"question-answering",
"custom_code",
"en",
"dataset:squad",
"dataset:tiiuae/falcon-refinedweb",
"dataset:adversarial_qa",
"dataset:avnishkr/trimpixel",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2106.09685",
"arxiv:2305.14314",
"license:mit",
"region:us"
] |
question-answering
| 2023-07-10T11:51:57Z |
---
library_name: adapter-transformers
license: mit
datasets:
- squad
- tiiuae/falcon-refinedweb
- adversarial_qa
- avnishkr/trimpixel
language:
- en
pipeline_tag: question-answering
tags:
- QLoRA
- Adapters
- llms
- Transformers
- Fine-Tuning
- PEFT
- SFTTrainer
- Open-Source
- LoRA
- Attention
- code
- Falcon-7b
---
# 🚀 Falcon-QAMaster
Falcon-7b-QueAns is a chatbot-like model for question answering. It was built by fine-tuning [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) on the [SQuAD](https://huggingface.co/datasets/squad), [Adversarial_qa](https://huggingface.co/datasets/adversarial_qa), and Trimpixel (self-made) datasets. This repo only includes the QLoRA adapters from fine-tuning with 🤗's [peft](https://github.com/huggingface/peft) package.
## Model Summary
- **Model Type:** Causal decoder-only
- **Language(s):** English
- **Base Model:** Falcon-7B (License: Apache 2.0)
- **Dataset:** [SQuAD](https://huggingface.co/datasets/squad) (License: cc-by-4.0), [Adversarial_qa](https://huggingface.co/datasets/adversarial_qa) (License: cc-by-sa-4.0), [Falcon-RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) (odc-by), Trimpixel (Self-Made)
- **License(s):** Apache 2.0 inherited from "Base Model" and "Dataset"
## Why use Falcon-7B?
* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
⚠️ **This is a finetuned version for specifically question and answering.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
## Model Details
The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 12 hours and was executed on a workstation with a single T4 NVIDIA GPU with 25 GB of available memory. See the attached Colab notebook used to train the model.
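As a hedged sketch of how the adapters in this repo might be loaded on top of the base model (the 4-bit settings mirror the quantization config listed below; `trust_remote_code=True` is needed for the Falcon architecture on older `transformers` versions):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "tiiuae/falcon-7b"
adapter_id = "avnishkr/falcon-QAMaster"

# Load the base model in 4-bit NF4, matching the training config below
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, trust_remote_code=True, device_map="auto"
)

# Attach the QLoRA adapters from this repo
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```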
### Model Date
July 13, 2023
An open-source Falcon-7B large language model fine-tuned on the SQuAD, Adversarial_qa, and Trimpixel datasets for question answering.
The QLoRA technique was used to fine-tune the model on a consumer-grade GPU, together with the `SFTTrainer`.
## Datasets
| # | Dataset | Size | Training Steps |
|:-:|----------------|------:|---------------:|
| 1 | SQuAD | 87599 | 350 |
| 2 | Adversarial_qa | 30000 | 400 |
| 3 | Trimpixel | 1757 | 400 |
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
zohaib99k/QnA_model_training
|
zohaib99k
| 2023-07-13T13:04:41Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T04:12:35Z |
---
license: other
---
LLaMA-13B converted to work with Transformers/HuggingFace. This is under a special license, please see the LICENSE file for details.
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>4096</th><th>32</th><th>32</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93</th>
</tr>
<tr>
<th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94</th>
</tr>
<tr>
<th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92</th>
</tr>
<tr>
<th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th>
</tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning Tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of the bias in our model's output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
EquinoxElahin/q-FrozenLake-v1-4x4-noSlippery
|
EquinoxElahin
| 2023-07-13T12:42:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-27T14:50:32Z |
# ANAIS
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://gitlab-interne.dev.klee.lan.net/datateam/anais.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://gitlab-interne.dev.klee.lan.net/datateam/anais/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing(SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
|
atiiisham988/whisper-small-dv
|
atiiisham988
| 2023-07-13T12:41:14Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-13T11:13:03Z |
---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - atiiisham
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.509754146816427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - atiiisham
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Wer Ortho: 62.8665
- Wer: 13.5098
## Model description
More information needed
## Intended uses & limitations
More information needed
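A minimal inference sketch, assuming the standard `transformers` ASR pipeline (`sample.wav` is a placeholder for a local Dhivehi audio file):
```python
from transformers import pipeline

# Sketch: transcribe Dhivehi speech with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="atiiisham988/whisper-small-dv")

print(asr("sample.wav")["text"])
```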
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1243 | 1.63 | 500 | 0.1709 | 62.8665 | 13.5098 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RushTurtle/crnn_vgg16_bn_20230713-111621
|
RushTurtle
| 2023-07-13T12:33:27Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T12:33:19Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
  "arch": "crnn_vgg16_bn",
  "train_path": "/tmp/dataset/train3_1100/",
  "val_path": "/tmp/dataset/val3_1100/",
  "train_samples": 1000,
  "val_samples": 20,
  "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
  "min_chars": 1,
  "max_chars": 12,
  "name": null,
  "epochs": 600,
  "batch_size": 64,
  "device": 0,
  "input_size": 32,
  "lr": 0.001,
  "weight_decay": 0,
  "workers": 16,
  "resume": null,
  "vocab": "french",
  "test_only": false,
  "show_samples": false,
  "wb": false,
  "push_to_hub": true,
  "pretrained": false,
  "sched": "cosine",
  "amp": false,
  "find_lr": false
}
```
|
phatjk/bloomz-lora-vi-QA-NLLB-viquad_ver2
|
phatjk
| 2023-07-13T12:24:58Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T12:24:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Sasagi/Remusuzumori
|
Sasagi
| 2023-07-13T12:20:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T12:12:51Z |
---
license: creativeml-openrail-m
---
|
jordyvl/vit-tiny_tobacco3482_dualsimkd_
|
jordyvl
| 2023-07-13T12:19:30Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T10:55:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_tobacco3482_dualsimkd_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_tobacco3482_dualsimkd_
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1401
- Accuracy: 0.385
- Brier Loss: 0.8709
- Nll: 8.8462
- F1 Micro: 0.3850
- F1 Macro: 0.1979
- Ece: 0.3606
- Aurc: 0.3874
## Model description
More information needed
## Intended uses & limitations
More information needed
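As a hedged inference sketch (assuming the standard `transformers` image-classification pipeline; `document.png` is a placeholder for a Tobacco3482-style document scan):
```python
from transformers import pipeline

# Sketch: classify a document image with the distilled ViT-Tiny checkpoint
clf = pipeline("image-classification", model="jordyvl/vit-tiny_tobacco3482_dualsimkd_")

for pred in clf("document.png"):
    print(pred["label"], round(pred["score"], 3))
```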
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 100 | 0.5117 | 0.04 | 0.9009 | 19.1664 | 0.04 | 0.0077 | 0.1344 | 0.9445 |
| No log | 2.0 | 200 | 0.3168 | 0.05 | 0.8997 | 15.0313 | 0.0500 | 0.0095 | 0.1344 | 0.8364 |
| No log | 3.0 | 300 | 0.2703 | 0.18 | 0.8978 | 9.6860 | 0.18 | 0.0305 | 0.2180 | 0.7731 |
| No log | 4.0 | 400 | 0.2266 | 0.18 | 0.8952 | 12.0957 | 0.18 | 0.0305 | 0.2223 | 0.7993 |
| 1.1219 | 5.0 | 500 | 0.1687 | 0.18 | 0.8951 | 12.7136 | 0.18 | 0.0305 | 0.2215 | 0.7713 |
| 1.1219 | 6.0 | 600 | 0.1331 | 0.165 | 0.8956 | 12.6737 | 0.165 | 0.0284 | 0.2044 | 0.7829 |
| 1.1219 | 7.0 | 700 | 0.1139 | 0.18 | 0.8960 | 12.6380 | 0.18 | 0.0305 | 0.2283 | 0.7875 |
| 1.1219 | 8.0 | 800 | 0.1143 | 0.18 | 0.8963 | 12.6385 | 0.18 | 0.0306 | 0.2183 | 0.7703 |
| 1.1219 | 9.0 | 900 | 0.1246 | 0.18 | 0.8966 | 12.5389 | 0.18 | 0.0305 | 0.2223 | 0.7726 |
| 0.0694 | 10.0 | 1000 | 0.1262 | 0.18 | 0.8961 | 12.6316 | 0.18 | 0.0305 | 0.2271 | 0.7894 |
| 0.0694 | 11.0 | 1100 | 0.1186 | 0.155 | 0.8961 | 12.6309 | 0.155 | 0.0268 | 0.2169 | 0.6418 |
| 0.0694 | 12.0 | 1200 | 0.1290 | 0.18 | 0.8960 | 12.6360 | 0.18 | 0.0305 | 0.2272 | 0.8014 |
| 0.0694 | 13.0 | 1300 | 0.1202 | 0.18 | 0.8959 | 12.6644 | 0.18 | 0.0305 | 0.2274 | 0.7910 |
| 0.0694 | 14.0 | 1400 | 0.1341 | 0.18 | 0.8960 | 12.6667 | 0.18 | 0.0305 | 0.2273 | 0.7916 |
| 0.0505 | 15.0 | 1500 | 0.1234 | 0.18 | 0.8961 | 12.6653 | 0.18 | 0.0305 | 0.2261 | 0.7819 |
| 0.0505 | 16.0 | 1600 | 0.1375 | 0.18 | 0.8960 | 12.6951 | 0.18 | 0.0305 | 0.2283 | 0.7929 |
| 0.0505 | 17.0 | 1700 | 0.1249 | 0.18 | 0.8959 | 12.7041 | 0.18 | 0.0305 | 0.2262 | 0.7820 |
| 0.0505 | 18.0 | 1800 | 0.1263 | 0.18 | 0.8964 | 12.6096 | 0.18 | 0.0305 | 0.2228 | 0.7900 |
| 0.0505 | 19.0 | 1900 | 0.1243 | 0.18 | 0.8961 | 12.6667 | 0.18 | 0.0305 | 0.2229 | 0.7896 |
| 0.0483 | 20.0 | 2000 | 0.1246 | 0.18 | 0.8960 | 12.6285 | 0.18 | 0.0305 | 0.2172 | 0.7913 |
| 0.0483 | 21.0 | 2100 | 0.1218 | 0.18 | 0.8961 | 12.6375 | 0.18 | 0.0305 | 0.2250 | 0.8003 |
| 0.0483 | 22.0 | 2200 | 0.1228 | 0.18 | 0.8964 | 12.5765 | 0.18 | 0.0305 | 0.2258 | 0.7938 |
| 0.0483 | 23.0 | 2300 | 0.1270 | 0.18 | 0.8963 | 12.6332 | 0.18 | 0.0305 | 0.2239 | 0.8055 |
| 0.0483 | 24.0 | 2400 | 0.1303 | 0.18 | 0.8963 | 12.5914 | 0.18 | 0.0305 | 0.2270 | 0.8006 |
| 0.0484 | 25.0 | 2500 | 0.1234 | 0.18 | 0.8960 | 12.6429 | 0.18 | 0.0305 | 0.2208 | 0.7990 |
| 0.0484 | 26.0 | 2600 | 0.1313 | 0.18 | 0.8965 | 12.5721 | 0.18 | 0.0305 | 0.2205 | 0.8069 |
| 0.0484 | 27.0 | 2700 | 0.1314 | 0.18 | 0.8963 | 12.5982 | 0.18 | 0.0305 | 0.2247 | 0.8110 |
| 0.0484 | 28.0 | 2800 | 0.1326 | 0.18 | 0.8962 | 12.6539 | 0.18 | 0.0305 | 0.2143 | 0.8083 |
| 0.0484 | 29.0 | 2900 | 0.1337 | 0.18 | 0.8964 | 12.5814 | 0.18 | 0.0305 | 0.2225 | 0.8106 |
| 0.0473 | 30.0 | 3000 | 0.1369 | 0.18 | 0.8962 | 12.6021 | 0.18 | 0.0305 | 0.2258 | 0.8095 |
| 0.0473 | 31.0 | 3100 | 0.1295 | 0.18 | 0.8958 | 12.6587 | 0.18 | 0.0305 | 0.2273 | 0.8104 |
| 0.0473 | 32.0 | 3200 | 0.1343 | 0.18 | 0.8959 | 12.6740 | 0.18 | 0.0305 | 0.2220 | 0.8119 |
| 0.0473 | 33.0 | 3300 | 0.1359 | 0.18 | 0.8960 | 12.6790 | 0.18 | 0.0305 | 0.2273 | 0.8134 |
| 0.0473 | 34.0 | 3400 | 0.1367 | 0.18 | 0.8961 | 12.6336 | 0.18 | 0.0305 | 0.2228 | 0.8159 |
| 0.0476 | 35.0 | 3500 | 0.1378 | 0.18 | 0.8963 | 12.6119 | 0.18 | 0.0305 | 0.2270 | 0.8172 |
| 0.0476 | 36.0 | 3600 | 0.1286 | 0.18 | 0.8961 | 12.6340 | 0.18 | 0.0305 | 0.2218 | 0.8148 |
| 0.0476 | 37.0 | 3700 | 0.1333 | 0.18 | 0.8960 | 12.6328 | 0.18 | 0.0305 | 0.2207 | 0.8164 |
| 0.0476 | 38.0 | 3800 | 0.1328 | 0.18 | 0.8963 | 12.6294 | 0.18 | 0.0305 | 0.2196 | 0.8180 |
| 0.0476 | 39.0 | 3900 | 0.1344 | 0.18 | 0.8961 | 12.6417 | 0.18 | 0.0305 | 0.2207 | 0.8209 |
| 0.0474 | 40.0 | 4000 | 0.1362 | 0.18 | 0.8959 | 12.6775 | 0.18 | 0.0305 | 0.2187 | 0.8198 |
| 0.0474 | 41.0 | 4100 | 0.1340 | 0.18 | 0.8961 | 12.6746 | 0.18 | 0.0305 | 0.2249 | 0.8215 |
| 0.0474 | 42.0 | 4200 | 0.1308 | 0.18 | 0.8958 | 12.6621 | 0.18 | 0.0305 | 0.2208 | 0.8215 |
| 0.0474 | 43.0 | 4300 | 0.1372 | 0.18 | 0.8960 | 12.6133 | 0.18 | 0.0305 | 0.2249 | 0.8204 |
| 0.0474 | 44.0 | 4400 | 0.1436 | 0.18 | 0.8963 | 12.6014 | 0.18 | 0.0305 | 0.2280 | 0.8201 |
| 0.0472 | 45.0 | 4500 | 0.1374 | 0.18 | 0.8960 | 12.6316 | 0.18 | 0.0305 | 0.2228 | 0.8193 |
| 0.0472 | 46.0 | 4600 | 0.1261 | 0.18 | 0.8957 | 12.6840 | 0.18 | 0.0305 | 0.2251 | 0.8220 |
| 0.0472 | 47.0 | 4700 | 0.1340 | 0.18 | 0.8956 | 12.6704 | 0.18 | 0.0305 | 0.2251 | 0.8221 |
| 0.0472 | 48.0 | 4800 | 0.1320 | 0.18 | 0.8959 | 12.6111 | 0.18 | 0.0305 | 0.2227 | 0.8203 |
| 0.0472 | 49.0 | 4900 | 0.1336 | 0.18 | 0.8956 | 12.6838 | 0.18 | 0.0305 | 0.2294 | 0.8209 |
| 0.0474 | 50.0 | 5000 | 0.1342 | 0.18 | 0.8959 | 12.3426 | 0.18 | 0.0305 | 0.2292 | 0.8218 |
| 0.0474 | 51.0 | 5100 | 0.1362 | 0.18 | 0.8957 | 12.3611 | 0.18 | 0.0305 | 0.2261 | 0.8224 |
| 0.0474 | 52.0 | 5200 | 0.1368 | 0.18 | 0.8958 | 11.5617 | 0.18 | 0.0305 | 0.2205 | 0.8222 |
| 0.0474 | 53.0 | 5300 | 0.1391 | 0.18 | 0.8955 | 11.5519 | 0.18 | 0.0305 | 0.2312 | 0.8225 |
| 0.0474 | 54.0 | 5400 | 0.1366 | 0.18 | 0.8947 | 12.2068 | 0.18 | 0.0305 | 0.2231 | 0.8231 |
| 0.047 | 55.0 | 5500 | 0.1355 | 0.19 | 0.8943 | 11.5922 | 0.19 | 0.0641 | 0.2299 | 0.8248 |
| 0.047 | 56.0 | 5600 | 0.1386 | 0.17 | 0.8930 | 11.8204 | 0.17 | 0.0705 | 0.2240 | 0.5968 |
| 0.047 | 57.0 | 5700 | 0.1364 | 0.33 | 0.8936 | 11.0092 | 0.33 | 0.1878 | 0.3195 | 0.4381 |
| 0.047 | 58.0 | 5800 | 0.1368 | 0.27 | 0.8923 | 11.0463 | 0.27 | 0.1541 | 0.2874 | 0.5187 |
| 0.047 | 59.0 | 5900 | 0.1328 | 0.325 | 0.8915 | 10.5269 | 0.325 | 0.1702 | 0.3247 | 0.4469 |
| 0.0469 | 60.0 | 6000 | 0.1402 | 0.235 | 0.8945 | 9.2940 | 0.235 | 0.1141 | 0.2558 | 0.6612 |
| 0.0469 | 61.0 | 6100 | 0.1387 | 0.345 | 0.8913 | 9.2678 | 0.345 | 0.1657 | 0.3422 | 0.4100 |
| 0.0469 | 62.0 | 6200 | 0.1386 | 0.31 | 0.8891 | 10.1100 | 0.31 | 0.1637 | 0.3134 | 0.4609 |
| 0.0469 | 63.0 | 6300 | 0.1379 | 0.34 | 0.8892 | 9.1965 | 0.34 | 0.1582 | 0.3388 | 0.4344 |
| 0.0469 | 64.0 | 6400 | 0.1375 | 0.335 | 0.8876 | 9.2252 | 0.335 | 0.1624 | 0.3356 | 0.4239 |
| 0.0469 | 65.0 | 6500 | 0.1357 | 0.345 | 0.8868 | 9.1887 | 0.345 | 0.1659 | 0.3361 | 0.4061 |
| 0.0469 | 66.0 | 6600 | 0.1394 | 0.345 | 0.8850 | 9.1819 | 0.345 | 0.1641 | 0.3398 | 0.4265 |
| 0.0469 | 67.0 | 6700 | 0.1410 | 0.34 | 0.8850 | 9.1158 | 0.34 | 0.1590 | 0.3328 | 0.4302 |
| 0.0469 | 68.0 | 6800 | 0.1387 | 0.295 | 0.8814 | 9.2693 | 0.295 | 0.1374 | 0.3039 | 0.4572 |
| 0.0469 | 69.0 | 6900 | 0.1385 | 0.335 | 0.8814 | 9.1526 | 0.335 | 0.1668 | 0.3324 | 0.4205 |
| 0.0463 | 70.0 | 7000 | 0.1392 | 0.34 | 0.8814 | 9.1159 | 0.34 | 0.1546 | 0.3405 | 0.4263 |
| 0.0463 | 71.0 | 7100 | 0.1418 | 0.35 | 0.8820 | 9.1363 | 0.35 | 0.1692 | 0.3436 | 0.4019 |
| 0.0463 | 72.0 | 7200 | 0.1379 | 0.35 | 0.8791 | 9.0483 | 0.35 | 0.1726 | 0.3402 | 0.4226 |
| 0.0463 | 73.0 | 7300 | 0.1405 | 0.33 | 0.8760 | 9.3563 | 0.33 | 0.1731 | 0.3207 | 0.4307 |
| 0.0463 | 74.0 | 7400 | 0.1401 | 0.31 | 0.8769 | 9.4413 | 0.31 | 0.1676 | 0.3099 | 0.4383 |
| 0.0458 | 75.0 | 7500 | 0.1393 | 0.38 | 0.8778 | 9.0788 | 0.38 | 0.1985 | 0.3518 | 0.3976 |
| 0.0458 | 76.0 | 7600 | 0.1384 | 0.39 | 0.8779 | 9.0233 | 0.39 | 0.2027 | 0.3673 | 0.4144 |
| 0.0458 | 77.0 | 7700 | 0.1403 | 0.365 | 0.8818 | 9.1567 | 0.3650 | 0.1953 | 0.3518 | 0.4181 |
| 0.0458 | 78.0 | 7800 | 0.1400 | 0.27 | 0.8725 | 11.0592 | 0.27 | 0.1627 | 0.2896 | 0.4809 |
| 0.0458 | 79.0 | 7900 | 0.1402 | 0.375 | 0.8739 | 9.1158 | 0.375 | 0.1961 | 0.3540 | 0.3929 |
| 0.0455 | 80.0 | 8000 | 0.1401 | 0.315 | 0.8722 | 9.9114 | 0.315 | 0.1771 | 0.3220 | 0.4443 |
| 0.0455 | 81.0 | 8100 | 0.1378 | 0.39 | 0.8761 | 9.0128 | 0.39 | 0.2048 | 0.3642 | 0.4020 |
| 0.0455 | 82.0 | 8200 | 0.1401 | 0.38 | 0.8729 | 9.1624 | 0.38 | 0.2006 | 0.3612 | 0.3924 |
| 0.0455 | 83.0 | 8300 | 0.1391 | 0.38 | 0.8742 | 8.8982 | 0.38 | 0.2048 | 0.3561 | 0.3991 |
| 0.0455 | 84.0 | 8400 | 0.1381 | 0.375 | 0.8734 | 9.0598 | 0.375 | 0.1901 | 0.3567 | 0.4010 |
| 0.0453 | 85.0 | 8500 | 0.1398 | 0.39 | 0.8718 | 9.1407 | 0.39 | 0.2057 | 0.3693 | 0.3892 |
| 0.0453 | 86.0 | 8600 | 0.1389 | 0.37 | 0.8721 | 9.3494 | 0.37 | 0.2006 | 0.3505 | 0.3914 |
| 0.0453 | 87.0 | 8700 | 0.1390 | 0.395 | 0.8743 | 8.7444 | 0.395 | 0.2113 | 0.3724 | 0.3854 |
| 0.0453 | 88.0 | 8800 | 0.1404 | 0.395 | 0.8739 | 8.7654 | 0.395 | 0.2134 | 0.3657 | 0.3925 |
| 0.0453 | 89.0 | 8900 | 0.1409 | 0.385 | 0.8726 | 8.7763 | 0.3850 | 0.2032 | 0.3643 | 0.3963 |
| 0.0451 | 90.0 | 9000 | 0.1403 | 0.39 | 0.8717 | 8.8363 | 0.39 | 0.2055 | 0.3668 | 0.3926 |
| 0.0451 | 91.0 | 9100 | 0.1388 | 0.39 | 0.8719 | 9.2985 | 0.39 | 0.2099 | 0.3662 | 0.3847 |
| 0.0451 | 92.0 | 9200 | 0.1397 | 0.385 | 0.8702 | 9.4449 | 0.3850 | 0.2050 | 0.3535 | 0.3877 |
| 0.0451 | 93.0 | 9300 | 0.1403 | 0.385 | 0.8709 | 8.9790 | 0.3850 | 0.1989 | 0.3473 | 0.3887 |
| 0.0451 | 94.0 | 9400 | 0.1400 | 0.39 | 0.8705 | 9.1647 | 0.39 | 0.2053 | 0.3569 | 0.3865 |
| 0.045 | 95.0 | 9500 | 0.1404 | 0.395 | 0.8712 | 9.1707 | 0.395 | 0.2087 | 0.3688 | 0.3815 |
| 0.045 | 96.0 | 9600 | 0.1404 | 0.385 | 0.8711 | 8.6711 | 0.3850 | 0.1980 | 0.3566 | 0.3867 |
| 0.045 | 97.0 | 9700 | 0.1399 | 0.39 | 0.8706 | 9.1288 | 0.39 | 0.2035 | 0.3610 | 0.3845 |
| 0.045 | 98.0 | 9800 | 0.1400 | 0.385 | 0.8708 | 9.1302 | 0.3850 | 0.1982 | 0.3538 | 0.3870 |
| 0.045 | 99.0 | 9900 | 0.1398 | 0.39 | 0.8712 | 8.8257 | 0.39 | 0.2002 | 0.3660 | 0.3825 |
| 0.0449 | 100.0 | 10000 | 0.1401 | 0.385 | 0.8709 | 8.8462 | 0.3850 | 0.1979 | 0.3606 | 0.3874 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
nolanaatama/mstnnm
|
nolanaatama
| 2023-07-13T12:15:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-02T21:28:24Z |
---
license: creativeml-openrail-m
---
|
yassmine/plbart-finetuned-unitTest-1000
|
yassmine
| 2023-07-13T12:04:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"plbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-13T09:49:08Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: plbart-finetuned-unitTest-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-finetuned-unitTest-1000
This model is a fine-tuned version of [uclanlp/plbart-base](https://huggingface.co/uclanlp/plbart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0000
- Bleu: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
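A minimal generation sketch, assuming the model follows the usual PLBART seq2seq interface (the input format and the example snippet are assumptions, since the training data is not documented):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yassmine/plbart-finetuned-unitTest-1000"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: a method for which a unit test should be generated
code = "public int add(int a, int b) { return a + b; }"
inputs = tok(code, return_tensors="pt")
out = model.generate(**inputs, max_length=128, num_beams=4)
print(tok.decode(out[0], skip_special_tokens=True))
```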
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 92 | 0.9023 | 0.0000 |
| No log | 2.0 | 184 | 0.8401 | 0.0000 |
| No log | 3.0 | 276 | 0.8096 | 0.0000 |
| No log | 4.0 | 368 | 0.7942 | 0.0000 |
| No log | 5.0 | 460 | 0.7848 | 0.0000 |
| 0.943 | 6.0 | 552 | 0.7818 | 0.0000 |
| 0.943 | 7.0 | 644 | 0.7911 | 0.0000 |
| 0.943 | 8.0 | 736 | 0.7874 | 0.0000 |
| 0.943 | 9.0 | 828 | 0.7970 | 0.0000 |
| 0.943 | 10.0 | 920 | 0.8062 | 0.0000 |
| 0.5025 | 11.0 | 1012 | 0.8085 | 0.0000 |
| 0.5025 | 12.0 | 1104 | 0.8179 | 0.0000 |
| 0.5025 | 13.0 | 1196 | 0.8360 | 0.0000 |
| 0.5025 | 14.0 | 1288 | 0.8385 | 0.0000 |
| 0.5025 | 15.0 | 1380 | 0.8470 | 0.0000 |
| 0.5025 | 16.0 | 1472 | 0.8556 | 0.0000 |
| 0.3309 | 17.0 | 1564 | 0.8619 | 0.0000 |
| 0.3309 | 18.0 | 1656 | 0.8701 | 0.0000 |
| 0.3309 | 19.0 | 1748 | 0.8827 | 0.0000 |
| 0.3309 | 20.0 | 1840 | 0.8871 | 0.0000 |
| 0.3309 | 21.0 | 1932 | 0.8970 | 0.0000 |
| 0.2266 | 22.0 | 2024 | 0.8984 | 0.0000 |
| 0.2266 | 23.0 | 2116 | 0.9051 | 0.0000 |
| 0.2266 | 24.0 | 2208 | 0.9188 | 0.0000 |
| 0.2266 | 25.0 | 2300 | 0.9205 | 0.0000 |
| 0.2266 | 26.0 | 2392 | 0.9278 | 0.0000 |
| 0.2266 | 27.0 | 2484 | 0.9333 | 0.0000 |
| 0.1639 | 28.0 | 2576 | 0.9456 | 0.0000 |
| 0.1639 | 29.0 | 2668 | 0.9454 | 0.0000 |
| 0.1639 | 30.0 | 2760 | 0.9522 | 0.0000 |
| 0.1639 | 31.0 | 2852 | 0.9513 | 0.0000 |
| 0.1639 | 32.0 | 2944 | 0.9554 | 0.0000 |
| 0.1251 | 33.0 | 3036 | 0.9661 | 0.0000 |
| 0.1251 | 34.0 | 3128 | 0.9698 | 0.0000 |
| 0.1251 | 35.0 | 3220 | 0.9750 | 0.0000 |
| 0.1251 | 36.0 | 3312 | 0.9722 | 0.0000 |
| 0.1251 | 37.0 | 3404 | 0.9780 | 0.0000 |
| 0.1251 | 38.0 | 3496 | 0.9789 | 0.0000 |
| 0.1019 | 39.0 | 3588 | 0.9825 | 0.0000 |
| 0.1019 | 40.0 | 3680 | 0.9913 | 0.0000 |
| 0.1019 | 41.0 | 3772 | 0.9906 | 0.0000 |
| 0.1019 | 42.0 | 3864 | 0.9922 | 0.0000 |
| 0.1019 | 43.0 | 3956 | 0.9937 | 0.0000 |
| 0.0863 | 44.0 | 4048 | 0.9981 | 0.0000 |
| 0.0863 | 45.0 | 4140 | 0.9979 | 0.0000 |
| 0.0863 | 46.0 | 4232 | 0.9984 | 0.0000 |
| 0.0863 | 47.0 | 4324 | 0.9970 | 0.0000 |
| 0.0863 | 48.0 | 4416 | 1.0003 | 0.0000 |
| 0.0783 | 49.0 | 4508 | 0.9993 | 0.0000 |
| 0.0783 | 50.0 | 4600 | 1.0000 | 0.0000 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2
|
vnktrmnb
| 2023-07-13T11:56:45Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T09:50:00Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3530
- Train End Logits Accuracy: 0.6339
- Train Start Logits Accuracy: 0.6471
- Validation Loss: 0.9662
- Validation End Logits Accuracy: 0.7197
- Validation Start Logits Accuracy: 0.7298
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
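A minimal usage sketch with the `transformers` question-answering pipeline (the checkpoint ships TensorFlow weights, hence `framework="tf"`; the question/context pair is an invented example):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2",
    framework="tf",
)

# Invented example pair; the model is multilingual, so non-English contexts work too
result = qa(question="What was the model fine-tuned on?",
            context="The model was fine-tuned on SQuAD2 question-answering data.")
print(result["answer"], result["score"])
```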
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11957, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.3530 | 0.6339 | 0.6471 | 0.9662 | 0.7197 | 0.7298 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AllenQ/model_archive
|
AllenQ
| 2023-07-13T11:53:36Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-13T11:30:15Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-AllenQ/model_archive
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: car

|
rightspeed/spacehope
|
rightspeed
| 2023-07-13T11:52:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T11:51:41Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 5.00 +/- 7.07
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rightspeed -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rightspeed -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rightspeed
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
IbrahemVX2000/kandiskyai2-1
|
IbrahemVX2000
| 2023-07-13T11:29:14Z | 0 | 0 | null |
[
"text-to-image",
"kandinsky",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2023-07-13T11:27:16Z |
---
license: apache-2.0
prior: kandinsky-community/kandinsky-2-1-prior
tags:
- text-to-image
- kandinsky
---
# Kandinsky 2.1
Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas.
It uses the CLIP model as a text and image encoder, and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov)
## Usage
Kandinsky 2.1 is available in diffusers!
```bash
pip install diffusers transformers accelerate
```
### Text to image
```python
from diffusers import DiffusionPipeline
import torch
pipe_prior = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16)
pipe_prior.to("cuda")
t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
t2i_pipe.to("cuda")
prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"
image_embeds, negative_image_embeds = pipe_prior(prompt, negative_prompt, guidance_scale=1.0).to_tuple()
image = t2i_pipe(prompt, negative_prompt=negative_prompt, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
image.save("cheeseburger_monster.png")
```

### Text Guided Image-to-Image Generation
```python
from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline
import torch
from PIL import Image
import requests
from io import BytesIO
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image = original_image.resize((768, 512))
# create prior
pipe_prior = KandinskyPriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
# create img2img pipeline
pipe = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.to("cuda")
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
image_embeds, negative_image_embeds = pipe_prior(prompt, negative_prompt).to_tuple()
out = pipe(
prompt,
image=original_image,
image_embeds=image_embeds,
negative_image_embeds=negative_image_embeds,
height=768,
width=768,
strength=0.3,
)
out.images[0].save("fantasy_land.png")
```

### Interpolate
```python
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
from diffusers.utils import load_image
import PIL
import torch
pipe_prior = KandinskyPriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
img1 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
img2 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg"
)
# add all the conditions we want to interpolate, can be either text or image
images_texts = ["a cat", img1, img2]
# specify the weights for each condition in images_texts
weights = [0.3, 0.3, 0.4]
# We can leave the prompt empty
prompt = ""
prior_out = pipe_prior.interpolate(images_texts, weights)
pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(prompt, **prior_out, height=768, width=768).images[0]
image.save("starry_cat.png")
```

## Model Architecture
### Overview
Kandinsky 2.1 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a unet diffusion model, and a decoder.
The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation.
<p float="left">
<img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/>
</p>
Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [mCLIP model](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14). The trained image prior model is then used to generate mCLIP image embeddings for input text prompts. Both the input text prompts and its mCLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image.
### Details
The image prior model was trained on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images) and then fine-tuned on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution).
The main text-to-image diffusion model was trained on 170M text-image pairs from the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) (an important condition was the presence of images with a resolution of at least 768x768). We used 170M pairs because we kept the UNet diffusion block from Kandinsky 2.0, which allowed us to avoid training it from scratch. At the fine-tuning stage, a separately collected dataset of 2M very high-quality, high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others) was used.
### Evaluation
We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset in zero-shot mode. The table below presents FID values (lower is better).
FID metric values for generative models on COCO_30k:
| | FID (30k)|
|:------|----:|
| eDiff-I (2022) | 6.95 |
| Imagen (2022) | 7.27 |
| Kandinsky 2.1 (2023) | 8.21|
| Stable Diffusion 2.1 (2022) | 8.59 |
| GigaGAN, 512x512 (2023) | 9.09 |
| DALL-E 2 (2022) | 10.39 |
| GLIDE (2022) | 12.24 |
| Kandinsky 1.0 (2022) | 15.40 |
| DALL-E (2021) | 17.89 |
| Kandinsky 2.0 (2022) | 20.00 |
| GLIGEN (2022) | 21.04 |
For more information, please refer to the upcoming technical report.
## BibTex
If you find this repository useful in your research, please cite:
```
@misc{kandinsky2.1,
  title        = {kandinsky 2.1},
  author       = {Arseniy Shakhmatov and Anton Razzhigaev and Aleksandr Nikolich and Vladimir Arkhipkin and Igor Pavlov and Andrey Kuznetsov and Denis Dimitrov},
  year         = {2023},
  howpublished = {},
}
```
|
offlinehq/autotrain-slovenian-swear-words-74310139575
|
offlinehq
| 2023-07-13T11:28:35Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:offlinehq/autotrain-data-slovenian-swear-words",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T11:22:57Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- offlinehq/autotrain-data-slovenian-swear-words
co2_eq_emissions:
emissions: 3.733207533466129
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 74310139575
- CO2 Emissions (in grams): 3.7332
## Validation Metrics
- Loss: 0.575
- Accuracy: 0.702
- Precision: 0.682
- Recall: 0.708
- AUC: 0.764
- F1: 0.695
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/offlinehq/autotrain-slovenian-swear-words-74310139575
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("offlinehq/autotrain-slovenian-swear-words-74310139575", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("offlinehq/autotrain-slovenian-swear-words-74310139575", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
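To turn the raw logits into a predicted label (the label names come from the model config and are an assumption about this binary classifier):
```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id], float(probs.max()))
```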
|
CanSukru/YORUvoicemodel
|
CanSukru
| 2023-07-13T11:23:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T11:12:34Z |
---
license: creativeml-openrail-m
---
|
Beams24/indk21
|
Beams24
| 2023-07-13T11:14:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T11:12:16Z |
---
license: creativeml-openrail-m
---
|
preetham/rmicki
|
preetham
| 2023-07-13T11:13:58Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T10:39:15Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - preetham/rmicki
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks person" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training of the text encoder was not enabled.
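A minimal inference sketch with diffusers; fp16 and the sampling settings are assumptions, and the instance prompt comes from the metadata above:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights and sample with the instance prompt.
pipe = StableDiffusionPipeline.from_pretrained("preetham/rmicki", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("a photo of sks person", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_person.png")
```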
|
Fixedbot/q-FrozenLake-v1-4x4-noSlippery
|
Fixedbot
| 2023-07-13T11:13:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T11:08:04Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
model = load_from_hub(repo_id="Fixedbot/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
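A short rollout sketch using the downloaded Q-table; the `qtable` key follows the Deep RL course conventions and is an assumption about this pickle's contents:
```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```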
|
PraveenJesu/openai-whisper-medium-murf
|
PraveenJesu
| 2023-07-13T11:13:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T11:13:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
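## Usage
A minimal sketch for loading this adapter with PEFT; the base checkpoint is read from the adapter config, and 8-bit loading mirrors the quantization config above (both are assumptions):
```python
from peft import PeftConfig, PeftModel
from transformers import WhisperForConditionalGeneration

adapter_id = "PraveenJesu/openai-whisper-medium-murf"
peft_config = PeftConfig.from_pretrained(adapter_id)
base_model = WhisperForConditionalGeneration.from_pretrained(
    peft_config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```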
|
RushTurtle/crnn_vgg16_bn_20230713-111233
|
RushTurtle
| 2023-07-13T11:13:02Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T11:12:55Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
    "arch": "crnn_vgg16_bn",
    "train_path": "/tmp/dataset/train3_1100/",
    "val_path": "/tmp/dataset/val3_1100/",
    "train_samples": 1000,
    "val_samples": 20,
    "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
    "min_chars": 1,
    "max_chars": 12,
    "name": null,
    "epochs": 3,
    "batch_size": 64,
    "device": 0,
    "input_size": 32,
    "lr": 0.001,
    "weight_decay": 0,
    "workers": 16,
    "resume": null,
    "vocab": "french",
    "test_only": false,
    "show_samples": false,
    "wb": false,
    "push_to_hub": true,
    "pretrained": false,
    "sched": "cosine",
    "amp": false,
    "find_lr": false
}
```
|
FreedomIntelligence/HuatuoGPT-13b-delta
|
FreedomIntelligence
| 2023-07-13T11:07:20Z | 24 | 18 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T06:09:35Z |
---
license: apache-2.0
---
Please see our [HuatuoGPT](https://github.com/FreedomIntelligence/HuatuoGPT) project for details.
|
BlueSunflower/gpt2-medium-chess
|
BlueSunflower
| 2023-07-13T10:51:47Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-30T14:13:46Z |
---
license: agpl-3.0
---
# Model description
GPT-2 medium fine-tuned on 8 million chess games (short algebraic notation).
Data: Chess DB + a sample from Lichess + a sample from CCRL.
Example context: "1-0 2700 1350 1.e4 e5 2.Nf3 Nc6" (game result, white Elo, black Elo, moves so far)
# Model results
- Elo (measured against Stockfish): ~1340
- Legal moves: 98.5%
- Checkmates in one move (from the BigBench benchmark): 46.5%
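# Usage
A minimal generation sketch; the decoding settings are assumptions, and the context format follows the example above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BlueSunflower/gpt2-medium-chess")
model = AutoModelForCausalLM.from_pretrained("BlueSunflower/gpt2-medium-chess")

# Context: "<game result> <white_elo> <black_elo> <moves so far>"
context = "1-0 2700 1350 1.e4 e5 2.Nf3 Nc6"
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```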
|
Virch/q-FrozenLake-v1-4x4-noSlippery
|
Virch
| 2023-07-13T10:51:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T10:43:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Virch/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|