modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 00:42:47) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 553 classes) | tags (list, length 1 – 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 00:42:38) | card (string, length 11 – 1.01M) |
---|---|---|---|---|---|---|---|---|---|
thisiskeithkwan/whisper-medium-1000steps-spaced
|
thisiskeithkwan
| 2023-08-07T04:35:59Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:thisiskeithkwan/canto",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T01:25:37Z |
---
language:
- zh
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- thisiskeithkwan/canto
model-index:
- name: whisper-medium-cantonese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-cantonese
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the thisiskeithkwan/canto dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4767
- Cer: 1.2115
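The card does not include a usage example; a minimal inference sketch, assuming the checkpoint works with the standard `transformers` ASR pipeline (the audio file is a placeholder you supply), could look like this:
```python
# A minimal inference sketch (not part of the generated card).
# "audio.wav" is a placeholder for a Cantonese recording you provide.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="thisiskeithkwan/whisper-medium-1000steps-spaced",
)
print(asr("audio.wav")["text"])
```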
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5362 | 0.76 | 500 | 0.4981 | 1.5560 |
| 0.3313 | 1.52 | 1000 | 0.4767 | 1.2115 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
thisiskeithkwan/whisper-medium-1000steps
|
thisiskeithkwan
| 2023-08-07T03:50:36Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:thisiskeithkwan/canto",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T01:06:39Z |
---
language:
- zh
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- thisiskeithkwan/canto
model-index:
- name: whisper-medium-cantonese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-cantonese
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the thisiskeithkwan/canto dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7006
- Cer: 3.6111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6458 | 0.76 | 500 | 0.7109 | 3.5960 |
| 0.4183 | 1.52 | 1000 | 0.7006 | 3.6111 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
TheRains/yt-special-batch12-small
|
TheRains
| 2023-08-07T03:49:24Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T14:31:41Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 40.08170676350431
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6718
- Wer: 40.0817
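For reference, a WER value like the one above can be computed with the `evaluate` library; the transcripts below are placeholders, not samples from the yt dataset:
```python
# A minimal sketch of WER computation with the `evaluate` library
# (placeholder transcripts, not from the actual evaluation set).
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["halo apa kabar semuanya"]
references = ["halo apa kabar semua"]
print(100 * wer_metric.compute(predictions=predictions, references=references))
```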
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8104 | 0.26 | 1000 | 0.8244 | 49.7374 |
| 0.7059 | 0.52 | 2000 | 0.7380 | 47.9671 |
| 0.7127 | 0.77 | 3000 | 0.6957 | 48.8360 |
| 0.5311 | 1.03 | 4000 | 0.6718 | 40.0817 |
| 0.47 | 1.29 | 5000 | 0.6645 | 40.4254 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mikuhl/wow-icons
|
mikuhl
| 2023-08-07T03:46:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-07T03:12:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a world of warcraft icon
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - mikuhl/wow-icons
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a world of warcraft icon using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
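A minimal sketch (not part of the original card) of applying these LoRA weights with a recent `diffusers` release; the prompt simply reuses the instance prompt from the metadata:
```python
# A minimal sketch assuming a recent diffusers version with load_lora_weights support.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mikuhl/wow-icons")
image = pipe("a world of warcraft icon", num_inference_steps=30).images[0]
image.save("wow_icon.png")
```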
|
Naruke/rl_course_vizdoom_health_gathering_supreme
|
Naruke
| 2023-08-07T03:22:50Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:03:46Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.87 +/- 5.84
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Naruke/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy_module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.enjoy_module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to set `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
hw2942/Erlangshen-Longformer-110M-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
|
hw2942
| 2023-08-07T03:10:38Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:IDEA-CCNL/Erlangshen-Longformer-110M",
"base_model:finetune:IDEA-CCNL/Erlangshen-Longformer-110M",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T02:47:52Z |
---
license: apache-2.0
base_model: IDEA-CCNL/Erlangshen-Longformer-110M
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Erlangshen-Longformer-110M-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2093
- F1: 0.3636
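The generated card has no usage example; a minimal inference sketch, assuming the checkpoint loads with the standard text-classification pipeline (the headline below is a placeholder input), could look like this:
```python
# A minimal inference sketch (not part of the generated card); the input is a placeholder headline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hw2942/Erlangshen-Longformer-110M-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1",
)
print(classifier("今日A股早盘高开,市场情绪回暖。"))
```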
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 38 | 0.6873 | 0.0 |
| No log | 2.0 | 76 | 0.6933 | 0.0 |
| No log | 3.0 | 114 | 0.7401 | 0.5854 |
| No log | 4.0 | 152 | 0.6913 | 0.0 |
| No log | 5.0 | 190 | 1.0142 | 0.4706 |
| No log | 6.0 | 228 | 0.8925 | 0.2353 |
| No log | 7.0 | 266 | 0.9258 | 0.1333 |
| No log | 8.0 | 304 | 1.0290 | 0.3636 |
| No log | 9.0 | 342 | 1.1018 | 0.4 |
| No log | 10.0 | 380 | 1.2093 | 0.3636 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze
|
saefro991
| 2023-08-07T03:01:26Z | 3 | 1 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"multilingual",
"dataset:masmultts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2023-08-07T02:45:09Z |
---
tags:
- espnet
- audio
- text-to-speech
language: multilingual
datasets:
- masmultts
license: cc-by-4.0
---
## ESPnet2 TTS model
### `saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze`
This model was trained by Takaaki-Saeki using the masmultts recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 11a7d61312439111d4996d55935ede718d494262
pip install -e .
cd egs2/masmultts/tts_byte_css10_adap_residual_freeze
./run.sh --skip_data_prep false --skip_train true --download_model saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze
```
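Alongside the shell recipe above, a Python-level sketch with ESPnet's `Text2Speech` helper might look like the following. This is an assumption rather than a documented route, and because this configuration uses x-vector speaker embeddings and language IDs, the `spembs` and `lids` values are placeholders you must replace with real ones:
```python
# A hedged Python inference sketch; spembs and lids are placeholders.
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze"
)
spembs = np.zeros(192, dtype=np.float32)  # placeholder x-vector (spk_embed_dim: 192)
lids = np.array([0])                      # placeholder language id
out = tts("dobar dan", spembs=spembs, lids=lids)
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```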
## TTS config
<details><summary>expand</summary>
```
config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_byte
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 2.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../tts_pretrain_byte_residual/exp/tts_train_byte/2epoch.pth:tts_pretrain.encoder:tts.encoder
- ../tts_pretrain_byte_residual/exp/tts_train_byte/2epoch.pth:tts_pretrain.lid_emb:tts.lid_emb
ignore_init_mismatch: false
freeze_param:
- tts.encoder.adapter
- tts.encoder.embed
- tts.lid_emb
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 400000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_byte/train/text_shape.byte
- exp/tts_stats_raw_byte/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_byte/valid/text_shape.byte
- exp/tts_stats_raw_byte/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /local/11399690.1.gpu/dump/raw/train/text
- text
- text
- - /local/11399690.1.gpu/dump/raw/train/wav.scp
- speech
- sound
- - /local/11399690.1.gpu/dump/xvector/train/xvector.scp
- spembs
- kaldi_ark
- - /local/11399690.1.gpu/dump/raw/train/utt2lid
- lids
- text_int
valid_data_path_and_name_and_type:
- - /local/11399690.1.gpu/dump/raw/dev/text
- text
- text
- - /local/11399690.1.gpu/dump/raw/dev/wav.scp
- speech
- sound
- - /local/11399690.1.gpu/dump/xvector/dev/xvector.scp
- spembs
- kaldi_ark
- - /local/11399690.1.gpu/dump/raw/dev/utt2lid
- lids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 512
warmup_steps: 50000
token_list:
- <blank>
- <unk>
- '32'
- '101'
- '97'
- '105'
- '110'
- '116'
- '111'
- '115'
- '114'
- '108'
- '100'
- '117'
- '109'
- '99'
- '195'
- '112'
- '104'
- '118'
- '107'
- '103'
- '98'
- '122'
- '102'
- '106'
- '121'
- '119'
- '164'
- '169'
- '197'
- '196'
- '161'
- '113'
- '179'
- '173'
- '188'
- '182'
- '190'
- '208'
- '120'
- '141'
- '153'
- '160'
- '155'
- '189'
- '131'
- '186'
- '168'
- '133'
- '209'
- '130'
- '181'
- '159'
- '151'
- '175'
- '177'
- '145'
- '171'
- '174'
- '165'
- '135'
- '200'
- '180'
- '170'
- '178'
- '176'
- '163'
- '184'
- '185'
- '187'
- '129'
- '132'
- '128'
- '136'
- '143'
- '162'
- '191'
- '150'
- '206'
- '183'
- '140'
- '172'
- '167'
- '207'
- '139'
- '142'
- '147'
- '134'
- '137'
- '148'
- '194'
- '149'
- '166'
- '49'
- '50'
- '48'
- '51'
- '138'
- '56'
- '53'
- '55'
- '52'
- '54'
- '57'
- '199'
- '226'
- '210'
- '144'
- '203'
- '225'
- '202'
- '232'
- '201'
- '157'
- '231'
- '156'
- '220'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: byte
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: byte
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 16000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_byte/train/feats_stats.npz
tts: transformer
tts_conf:
embed_dim: 0
eprenet_conv_layers: 0
eprenet_conv_filts: 0
eprenet_conv_chans: 0
dprenet_layers: 2
dprenet_units: 256
adim: 512
aheads: 8
elayers: 6
eunits: 1024
dlayers: 6
dunits: 1024
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 1
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
spk_embed_dim: 192
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
use_masking: true
bce_pos_weight: 5.0
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
eprenet_dropout_rate: 0.0
dprenet_dropout_rate: 0.5
postnet_dropout_rate: 0.5
transformer_enc_dropout_rate: 0.1
transformer_enc_positional_dropout_rate: 0.1
transformer_enc_attn_dropout_rate: 0.1
transformer_dec_dropout_rate: 0.1
transformer_dec_positional_dropout_rate: 0.1
transformer_dec_attn_dropout_rate: 0.1
transformer_enc_dec_attn_dropout_rate: 0.1
use_guided_attn_loss: true
num_heads_applied_guided_attn: 2
num_layers_applied_guided_attn: 2
modules_applied_guided_attn:
- encoder-decoder
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 10.0
langs: 21
lang_family_encoding: false
num_lang_family: 7
use_adapter: true
adapter_type: residual
use_encoder_w_lid: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mrkusypl/Magik
|
mrkusypl
| 2023-08-07T03:00:48Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-08-02T22:39:07Z |
---
language:
- pl
---
<center>
<img src="https://cdn.discordapp.com/attachments/1136428972939419789/1136428973279154228/latest.png"></img>
<h1>Magik (RVC v2) (Mangio Crepe 64) (400 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Piotr "Magik" Łuszcz <br/>
**Dataset:** 00:18:49 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136428972939419789/1137073748781047848/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136428972939419789/1137931072777244673/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/Magik/resolve/main/Magik%20%5B400%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
Chang-Su/llama-2-7b-chat-ko
|
Chang-Su
| 2023-08-07T02:57:50Z | 5 | 3 | null |
[
"LLAMA2",
"arxiv:1910.09700",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-07-24T02:29:14Z |
---
license: cc-by-nc-sa-4.0
tags:
- LLAMA2
---
⛱This repo is under construction
# llama-2-7b-chat-ko🇰🇷
<!-- Provide a quick summary of what the model is/does. -->
Korean pretrained model. It still needs instruction tuning.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [cc-by-nc-sa-4.0]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
You need to install
```$ pip install protobuf```
*This model was trained with QLoRA.*
```
# Case: Load model directly
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, BitsAndBytesConfig, AutoConfig
from peft import PeftModel

generation_config = dict(
    temperature=0.3,
    top_k=40,
    top_p=0.9,
    do_sample=True,
    num_beams=1,
    repetition_penalty=1.1,
    max_new_tokens=400
)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

config = AutoConfig.from_pretrained('meta-llama/Llama-2-7b-chat-hf')
model = LlamaForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-chat-hf',
    low_cpu_mem_usage=True,
    quantization_config=bnb_config,
)
tokenizer = LlamaTokenizer.from_pretrained('Chang-Su/llama-2-7b-chat-ko')
model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(model, 'Chang-Su/llama-2-7b-chat-ko')
model.eval()

input_text = '안녕 네 이름은'

with torch.no_grad():
    print("Start inference.")
    results = []
    inputs = tokenizer(input_text, return_tensors="pt")  # add_special_tokens=False ?
    generation_output = model.generate(
        input_ids=inputs["input_ids"].to('cuda:2'),
        attention_mask=inputs['attention_mask'].to('cuda:2'),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        **generation_config
    )
    s = generation_output[0]
    output = tokenizer.decode(s, skip_special_tokens=True)
    response = output.split("### Response:")[0].strip()
    print(f"====================")
    print(f"Input: '{input_text}'\n")
    print(f"Output: {response}\n")
    results.append({"Input": input_text, "Output": response})
```
```
# Case: Load model directly
not published yet
```
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mrkusypl/MexicanoTV
|
mrkusypl
| 2023-08-07T02:57:15Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-08-01T20:57:37Z |
---
language:
- pl
---
<center>
<img src="https://cdn.discordapp.com/attachments/1136043395123515465/1136043395928825957/comment_7oiVx1SlO3f8Ub44Vb0718v2vZin7XUk.png"></img>
<h1>MexicanoTV (RVC v2) (Mangio Crepe 64) (400 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Jarosław Andrzejewski <br/>
**Dataset:** 00:17:40 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136043395123515465/1137050343440650341/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136043395123515465/1137932262139248741/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/MexicanoTV/resolve/main/MexicanoTV%20%5B400%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
mrkusypl/Kononowicz
|
mrkusypl
| 2023-08-07T02:55:42Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-07-26T16:20:00Z |
---
language:
- pl
---
<center>
<img src="https://wiez.pl/wp-content/uploads/2022/10/krzysztof-kononowicz-1-1-1408x1000.jpg"></img>
<h1>Krzysztof Kononowicz (RVC v2) (Mangio Crepe 64) (300 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Krzysztof Kononowicz <br/>
**Dataset:** 00:19:10 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1133799046327316592/1137482939828027392/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1133799046327316592/1137929852339634266/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/Kononowicz/blob/main/Kononowicz%20%5B300%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
Yacong/lora-trained-xl
|
Yacong
| 2023-08-07T02:50:49Z | 3 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-07T01:33:38Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Yacong/lora-trained-xl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
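A minimal sketch (not part of the original card) of loading the SDXL base model together with the fp16-fix VAE mentioned above and applying these LoRA weights:
```python
# A minimal sketch assuming a recent diffusers version.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Yacong/lora-trained-xl")
image = pipe("a photo of sks dog", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```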
|
kobessah/hajiareal
|
kobessah
| 2023-08-07T02:41:51Z | 11 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-07T02:35:45Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### hajiareal Dreambooth model trained by kobessah with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AmelieSchreiber/esm2_t6_8M_UR50D_LoRA_RNA-binding
|
AmelieSchreiber
| 2023-08-07T02:34:08Z | 4 | 1 |
peft
|
[
"peft",
"transformers",
"biology",
"esm",
"esm2",
"protein",
"protein language model",
"en",
"license:mit",
"region:us"
] | null | 2023-08-07T00:12:16Z |
---
library_name: peft
license: mit
language:
- en
tags:
- transformers
- biology
- esm
- esm2
- protein
- protein language model
---
# ESM-2 RNA Binding Site LoRA
This is a Parameter Efficient Fine Tuning (PEFT) Low Rank Adaptation (LoRA) of
the [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) model for the (binary) token classification task of
predicting RNA binding sites of proteins. The Github with the training script and conda env YAML can be
[found here](https://github.com/Amelie-Schreiber/esm2_LoRA_binding_sites/tree/main). You can also find a version of this model
that was fine-tuned without LoRA [here](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_UR50D_rna_binding_site_predictor).
## Training procedure
This is a Low Rank Adaptation (LoRA) of `esm2_t6_8M_UR50D`,
trained on `166` protein sequences in the [RNA binding sites dataset](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites)
using a `75/25` train/test split. It achieves an evaluation loss of `0.1791934072971344`.
### Framework versions
- PEFT 0.4.0
## Using the Model
To use, try running:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel
import torch
# Path to the saved LoRA model
model_path = "AmelieSchreiber/esm2_t6_8M_UR50D_LoRA_RNA-binding"
# ESM2 base model
base_model_path = "facebook/esm2_t6_8M_UR50D"
# Load the model
base_model = AutoModelForTokenClassification.from_pretrained(base_model_path)
loaded_model = PeftModel.from_pretrained(base_model, model_path)
# Ensure the model is in evaluation mode
loaded_model.eval()
# Load the tokenizer
loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Protein sequence for inference
protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence
# Tokenize the sequence
inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length')
# Run the model
with torch.no_grad():
    logits = loaded_model(**inputs).logits

# Get predictions
tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])  # Convert input ids back to tokens
predictions = torch.argmax(logits, dim=2)

# Define labels
id2label = {
    0: "No binding site",
    1: "Binding site"
}

# Print the predicted labels for each token
for token, prediction in zip(tokens, predictions[0].numpy()):
    if token not in ['<pad>', '<cls>', '<eos>']:
        print((token, id2label[prediction]))
```
|
dai1If/q-FrozenLake-v1-4x4-noSlippery
|
dai1If
| 2023-08-07T02:22:05Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T02:22:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="dai1If/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sunnyZX/huggingface_practice
|
sunnyZX
| 2023-08-07T02:17:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-04T07:40:39Z |
## Hugging Face study notes
Notes on learning and understanding the main features of Hugging Face, how to use its various tools, and how they work.
### 0huggingface.ipynb
An introduction to Hugging Face, installation, and caveats.
### 1pipeline.ipynb
Understand the convenient usage that `pipeline` provides for common natural language processing tasks.
### 2transformers.ipynb
Understand how to use the tokenizers and models provided by the transformers library.
### 3finetune.ipynb
Fine-tune a pretrained model, covering data loading, training with the Trainer API, training with plain PyTorch, and model evaluation.
### 4datasets.ipynb
Understand the datasets library, including data loading, preprocessing, tokenization, format conversion, and loading large-scale datasets.
Hands-on: build a dataset by crawling GitHub issues and use it for similarity search.
### 5tokenizers.ipynb
Understand the tokenizers library, including:
- fine-tuning an existing tokenizer;
- the parallelism and offset-mapping capabilities of Fast Tokenizers (explored in depth through token classification and QA tasks);
- the four tokenizer processing steps: normalization, pre-tokenization, the three tokenization models (BPE, WordPiece, Unigram), and post-processing;
- building a custom tokenizer from the three tokenization models.
### translations.ipynb
Hands-on: the full workflow of a translation task: data loading, preprocessing, fine-tuning, and evaluation.
|
Akemixzz/Jiwon
|
Akemixzz
| 2023-08-07T01:27:02Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-07T01:22:19Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Baronco98/Sudoku-Number-Classifier
|
Baronco98
| 2023-08-07T01:18:56Z | 2 | 0 |
keras
|
[
"keras",
"en",
"dataset:mnist",
"license:apache-2.0",
"region:us"
] | null | 2023-08-07T00:25:58Z |
---
license: apache-2.0
datasets:
- mnist
language:
- en
metrics:
- accuracy
library_name: keras
---
# Description
This model is a convolutional neural network built with transfer learning from the pre-trained 'VGG16' model. The 'block5_conv1' layer is retrained, and a final dense layer with 128 neurons is added.
The model will be used as a preliminary step in solving Sudokus through linear programming; it is responsible for classifying the content of each Sudoku cell:
- class_0: empty cell
- class_1: cell contains the number 1
- class_2: cell contains the number 2
- class_3: cell contains the number 3
- class_4: cell contains the number 4
- class_5: cell contains the number 5
- class_6: cell contains the number 6
- class_7: cell contains the number 7
- class_8: cell contains the number 8
- class_9: cell contains the number 9
The dataset is constructed with balanced classes using images from the famous "MNIST digits classification" dataset, as well as images of numbers written digitally.
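A rough Keras sketch of the architecture described above. The input size (images resized to 48x48 RGB, since VGG16 requires at least 32x32 inputs with three channels), the optimizer, and the loss are assumptions, not details taken from the card:
```python
# A sketch of the described transfer-learning setup (assumed input size, optimizer and loss).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(48, 48, 3))
# Freeze everything except 'block5_conv1', which the card says is retrained.
for layer in base.layers:
    layer.trainable = (layer.name == "block5_conv1")

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(128, activation="relu"),    # final dense layer with 128 neurons
    layers.Dense(10, activation="softmax"),  # class_0 (empty) .. class_9
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```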
# Dataset schema
The image size is 28x28 pixels. After applying data augmentation to the dataset, the total number of images is as follows:
- Training images: 5,600
- Validation images: 2,400
- Test images: 2,000
Test Accuracy: 0.9810
# Other validations:
An initial validation has been performed. A larger validation set is still needed to assess the reliability of the model.
<div style="text-align: center;">
<img src="https://i.imgur.com/kdj9udt.jpg" width="300">
</div>
The results of the inference are as follows:
<div style="text-align: center;">
<img src="https://i.imgur.com/U2MJzH6.jpg" width="500">
</div>
|
nhat117/dica-llama-2-7b-chat
|
nhat117
| 2023-08-07T00:52:50Z | 7 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T02:54:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
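The values above correspond roughly to the following `BitsAndBytesConfig`; this is a reconstruction from the listed settings, not code taken from the repository:
```python
# Reconstructed from the values listed above (not original repository code).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
```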
### Framework versions
- PEFT 0.4.0
|
brunoboat/Pixelcopter-PLE-v4
|
brunoboat
| 2023-08-07T00:48:34Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T00:48:32Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 10.50 +/- 11.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
skhaghighi/roberta-finetuned-subjqa-movies_2
|
skhaghighi
| 2023-08-07T00:39:17Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-07T00:25:40Z |
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
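The generated card has no usage example; a minimal extractive-QA sketch (the question and context are placeholders) could look like this:
```python
# A minimal inference sketch (not part of the generated card); question and context are placeholders.
from transformers import pipeline

qa = pipeline("question-answering", model="skhaghighi/roberta-finetuned-subjqa-movies_2")
result = qa(
    question="Who directed the movie?",
    context="The movie was directed by Christopher Nolan and released in 2010.",
)
print(result["answer"], result["score"])
```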
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Yacong/my_dreambooth_out_dir
|
Yacong
| 2023-08-07T00:23:49Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T15:09:47Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Yacong/my_dreambooth_out_dir
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
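A minimal usage sketch (not part of the original card), assuming the repository holds a full `StableDiffusionPipeline` checkpoint as the tags indicate:
```python
# A minimal sketch; the prompt reuses the instance prompt from the metadata.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yacong/my_dreambooth_out_dir", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```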
|
zwangab91/Taxi-v3
|
zwangab91
| 2023-08-07T00:16:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T17:51:32Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="zwangab91/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
brunoboat/Pixelcopter-PLE-v2
|
brunoboat
| 2023-08-06T23:49:26Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T23:49:23Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.00 +/- 21.79
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
manyet1k/deberta-v3-base-finetuned-mcqa
|
manyet1k
| 2023-08-06T23:43:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T06:09:37Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-base-finetuned-mcqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-mcqa
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3869
- Accuracy: 0.262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3888 | 1.0 | 563 | 1.3869 | 0.262 |
| 1.3881 | 2.0 | 1126 | 1.3875 | 0.262 |
| 1.3877 | 3.0 | 1689 | 1.3871 | 0.236 |
| 1.3877 | 4.0 | 2252 | 1.3871 | 0.262 |
| 1.3873 | 5.0 | 2815 | 1.3867 | 0.236 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Eggsbena/model_007
|
Eggsbena
| 2023-08-06T23:36:35Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T23:23:39Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
vurdenko/ppo-LunarLander-v2
|
vurdenko
| 2023-08-06T23:18:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T22:12:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.01 +/- 16.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
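A minimal loading sketch to fill in the TODO above. The checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention and should be adjusted to the actual file in the repository:
```python
# The filename "ppo-LunarLander-v2.zip" is an assumed convention, not confirmed by the card.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="vurdenko/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```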
|
manyet1k/roberta-base-finetuned-projectile
|
manyet1k
| 2023-08-06T23:13:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T22:23:45Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-projectile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-projectile
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3867
- Accuracy: 0.262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3906 | 1.0 | 563 | 1.3867 | 0.236 |
| 1.3888 | 2.0 | 1126 | 1.3902 | 0.236 |
| 1.3876 | 3.0 | 1689 | 1.3874 | 0.236 |
| 1.388 | 4.0 | 2252 | 1.3867 | 0.262 |
| 1.3871 | 5.0 | 2815 | 1.3870 | 0.236 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
harshV27/my-falcon-7b
|
harshV27
| 2023-08-06T23:04:54Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"falcon",
"custom_code",
"region:us"
] | null | 2023-08-06T14:37:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster031_partitioned_v3_standardized_031
|
HydraLM
| 2023-08-06T23:00:46Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T18:17:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
jmoney54378256438905/cybershart-temp
|
jmoney54378256438905
| 2023-08-06T23:00:25Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-06T22:59:55Z |
---
license: cc-by-nc-sa-4.0
---
|
ailabturkiye/ToronKaracaoglu
|
ailabturkiye
| 2023-08-06T22:58:14Z | 0 | 0 | null |
[
"tr",
"license:openrail",
"region:us"
] | null | 2023-08-06T22:30:23Z |
---
license: openrail
language:
- tr
---
|
joelniklaus/legal-croatian-roberta-base
|
joelniklaus
| 2023-08-06T22:56:10Z | 125 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"legal",
"hr",
"arxiv:2306.02069",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-06T00:06:30Z |
---
tags:
- legal
model-index:
- name: legal-croatian-roberta-base
results: []
license: cc
language:
- hr
---
# Model Card for joelito/legal-croatian-roberta-base
This model is a monolingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used the Croatian portion of [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (RoBERTa)
- **Language(s) (NLP):** Croatian
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
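As a quick illustration, a minimal fill-mask sketch with this checkpoint (the example sentence is a placeholder):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="joelito/legal-croatian-roberta-base")
mask = fill_mask.tokenizer.mask_token  # avoids hard-coding the mask token
# Placeholder Croatian legal sentence: "The contract enters into force on the day of <mask>."
print(fill_mask(f"Ugovor stupa na snagu danom {mask}."))
```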
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
### Training Data
This model was pretrained on the Croatian portion of [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)).
#### Preprocessing
For further details see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps for the first 5% of the total training steps
- Learning rate: (linearly increasing up to) 1e-4
- Word masking: increased 20/30% masking rate for base/large models respectively
### Model Architecture and Objective
It is a RoBERTa-based model. Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-croatian-roberta-base')
print(model)
RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(32000, 768, padding_idx=0)
(position_embeddings): Embedding(514, 768, padding_idx=0)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0-11): 12 x RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): RobertaPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
pytorch, transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
joelniklaus/legal-english-roberta-base
|
joelniklaus
| 2023-08-06T22:55:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"en",
"arxiv:2306.02069",
"arxiv:2301.13126",
"arxiv:2110.00976",
"arxiv:2306.09237",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-13T06:38:38Z |
---
license: cc
language:
- en
---
# Model Card for joelito/legal-english-roberta-base
This model is a multilingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (RoBERTa)
- **Language(s) (NLP):** en
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
(f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models.
### Training Data
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)).
#### Preprocessing
For further details see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps for the first 5% of the total training steps
- Learning rate: (linearly increasing up to) 1e-4
- Word masking: increased 20/30% masking rate for base/large models respectively
## Evaluation
For further insights into the evaluation, we refer to the [trainer state](https://huggingface.co/joelito/legal-swiss-roberta-base/blob/main/last-checkpoint/trainer_state.json). Additional information is available in the [tensorboard](https://huggingface.co/joelito/legal-swiss-roberta-base/tensorboard).
For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
### Model Architecture and Objective
It is a RoBERTa-based model. Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-english-roberta-base')
print(model)
RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(128000, 768, padding_idx=0)
(position_embeddings): Embedding(514, 768, padding_idx=0)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0-11): 12 x RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): RobertaPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
pytorch, transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
joelniklaus/legal-xlm-roberta-large
|
joelniklaus
| 2023-08-06T22:55:31Z | 119 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"multilingual",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"dataset:MultiLegalPile",
"dataset:LEXTREME",
"dataset:LEXGLUE",
"arxiv:2306.02069",
"arxiv:2301.13126",
"arxiv:2110.00976",
"arxiv:2306.09237",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-12-30T18:43:43Z |
---
language:
- multilingual
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
tags:
- multilingual
license: cc
datasets:
- MultiLegalPile
- LEXTREME
- LEXGLUE
---
# Model Card for joelito/legal-xlm-roberta-large
This model is a multilingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (RoBERTa)
- **Language(s) (NLP):** bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
(f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models.
### Training Data
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)).
#### Preprocessing
For further details see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps for the first 5% of the total training steps
- Learning rate: (linearly increasing up to) 1e-4
- Word masking: increased 20/30% masking rate for base/large models respectively
## Evaluation
For further insights into the evaluation, we refer to the [trainer state](https://huggingface.co/joelito/legal-xlm-roberta-large/blob/main/last-checkpoint/trainer_state.json). Additional information is available in the [tensorboard](https://huggingface.co/joelito/legal-xlm-roberta-large/tensorboard).
For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
### Model Architecture and Objective
It is a RoBERTa-based model. Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-xlm-roberta-large')
print(model)
RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(128000, 1024, padding_idx=0)
(position_embeddings): Embedding(514, 1024, padding_idx=0)
(token_type_embeddings): Embedding(1, 1024)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0-23): 24 x RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=1024, out_features=1024, bias=True)
(key): Linear(in_features=1024, out_features=1024, bias=True)
(value): Linear(in_features=1024, out_features=1024, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=1024, out_features=4096, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): RobertaOutput(
(dense): Linear(in_features=4096, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): RobertaPooler(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
pytorch, transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster030_partitioned_v3_standardized_030
|
HydraLM
| 2023-08-06T22:55:06Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:53:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
smd142/model
|
smd142
| 2023-08-06T22:53:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T06:31:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
DRAGOO/whisper_Fr_Ht
|
DRAGOO
| 2023-08-06T22:47:37Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:qanastek/whisper-small-french-uncased",
"base_model:finetune:qanastek/whisper-small-french-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T18:11:00Z |
---
license: apache-2.0
base_model: qanastek/whisper-small-french-uncased
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper_Fr_Ht
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_Fr_Ht
This model is a fine-tuned version of [qanastek/whisper-small-french-uncased](https://huggingface.co/qanastek/whisper-small-french-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8968
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
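That said, a minimal transcription sketch with this checkpoint (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DRAGOO/whisper_Fr_Ht")
# "audio.wav" is a placeholder path; the pipeline handles decoding and resampling.
print(asr("audio.wav")["text"])
```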
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.293 | 3.95 | 1000 | 0.6567 | 1.0 |
| 0.0541 | 7.91 | 2000 | 0.7640 | 1.0 |
| 0.0063 | 11.86 | 3000 | 0.8664 | 1.0 |
| 0.0016 | 15.81 | 4000 | 0.8968 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster029_partitioned_v3_standardized_029
|
HydraLM
| 2023-08-06T22:45:01Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:54:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster028_partitioned_v3_standardized_028
|
HydraLM
| 2023-08-06T22:38:15Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:54:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster027_partitioned_v3_standardized_027
|
HydraLM
| 2023-08-06T22:32:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:48:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster021_partitioned_v3_standardized_021
|
HydraLM
| 2023-08-06T22:00:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T06:04:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster017_partitioned_v3_standardized_017
|
HydraLM
| 2023-08-06T21:42:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
spicecloud/bert-yelp-local
|
spicecloud
| 2023-08-06T21:40:56Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"coreml",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-06T21:40:25Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole-word masking replaced subpiece masking in a follow-up work, with the release of two models.
Another 24 smaller models were released afterward.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
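A small illustrative sketch of that 80/10/10 rule (not the original BERT preprocessing code):
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Illustrative 80/10/10 masking, not the original preprocessing code."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = position ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:       # 15% of tokens are selected for masking
            labels[i] = tok                  # the model must predict the original token here
            r = random.random()
            if r < 0.8:                      # 80% of the time: replace with [MASK]
                inputs[i] = mask_id
            elif r < 0.9:                    # 10% of the time: replace with a random token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return inputs, labels
```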
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
parthsuresh/LunarLander-tutorial
|
parthsuresh
| 2023-08-06T21:37:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T21:37:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.86 +/- 54.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the actual archive name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is an assumption; adjust it to the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="parthsuresh/LunarLander-tutorial", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster016_partitioned_v3_standardized_016
|
HydraLM
| 2023-08-06T21:36:47Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T06:20:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
nrakocz/distilhubert-finetuned-gtzan
|
nrakocz
| 2023-08-06T21:30:23Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-06T19:46:04Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5565
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
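That said, a minimal inference sketch with this checkpoint, assuming the standard audio-classification pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="nrakocz/distilhubert-finetuned-gtzan")
# "song.wav" is a placeholder path; the pipeline handles decoding and resampling.
print(classifier("song.wav"))
```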
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9919 | 1.0 | 113 | 1.8205 | 0.48 |
| 1.3634 | 2.0 | 226 | 1.1723 | 0.68 |
| 0.9779 | 3.0 | 339 | 0.8990 | 0.77 |
| 0.8092 | 4.0 | 452 | 0.8420 | 0.74 |
| 0.7011 | 5.0 | 565 | 0.7290 | 0.79 |
| 0.3831 | 6.0 | 678 | 0.7509 | 0.77 |
| 0.3852 | 7.0 | 791 | 0.6150 | 0.84 |
| 0.1792 | 8.0 | 904 | 0.5968 | 0.82 |
| 0.2193 | 9.0 | 1017 | 0.6058 | 0.82 |
| 0.1887 | 10.0 | 1130 | 0.5565 | 0.84 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster014_partitioned_v3_standardized_014
|
HydraLM
| 2023-08-06T21:28:09Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:52:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
ailabturkiye/sehinsah2
|
ailabturkiye
| 2023-08-06T21:21:49Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-08-06T21:15:04Z |
---
license: openrail
language:
- tr
tags:
- music
---
A voice model made from Şehinşah's raw (a cappella) vocals. The training and dataset are my own.
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster013_partitioned_v3_standardized_013
|
HydraLM
| 2023-08-06T21:16:21Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
AmelieSchreiber/esm2_t6_8M_UR50D_sequence_classifier_v1
|
AmelieSchreiber
| 2023-08-06T21:13:59Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"esm",
"text-classification",
"esm-2",
"sequence classifier",
"proteins",
"protein language model",
"zero-shot-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-07-29T18:56:34Z |
---
license: mit
language:
- en
library_name: transformers
tags:
- esm
- esm-2
- sequence classifier
- proteins
- protein language model
pipeline_tag: zero-shot-classification
---
# ESM-2 Sequence Classifier
This is a small sequence classifier trained on synthetic data generated by GPT-4.
It classifies protein sequences into three categories: `enzymes` (class `0`), `receptor_proteins` (class `1`), and `structural_proteins` (class `2`).
This is trained using [facebook/esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D), one of the [ESM-2 models](https://huggingface.co/docs/transformers/model_doc/esm).
This model is not well tested and is intended for experimental and educational purposes. Use with caution.
## Using the Model
To use the model, try running:
```python
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification

# Load the trained model and tokenizer
model = EsmForSequenceClassification.from_pretrained("AmelieSchreiber/esm2_t6_8M_UR50D_sequence_classifier_v1")
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
# Suppose these are your new sequences that you want to classify
# Additional Family 0: Enzymes
new_sequences_0 = [
"ACGYLKTPKLADPPVLRGDSSVTKAICKPDPVLEK",
"GVALDECKALDYLPGKPLPMDGKVCQCGSKTPLRP",
"VLPGYTCGELDCKPGKPLPKCGADKTQVATPFLRG",
"TCGALVQYPSCADPPVLRGSDSSVKACKKLDPQDK",
"GALCEECKLCPGADYKPMDGDRLPAAATSKTRPVG",
"PAVDCKKALVYLPKPLPMDGKVCRGSKTPKTRPYG",
"VLGYTCGALDCKPGKPLPKCGADKTQVATPFLRGA",
"CGALVQYPSCADPPVLRGSDSSVKACKKLDPQDKT",
"ALCEECKLCPGADYKPMDGDRLPAAATSKTRPVGK",
"AVDCKKALVYLPKPLPMDGKVCRGSKTPKTRPYGR",
]
# Additional Family 1: Receptor Proteins
new_sequences_1 = [
"VGQRFYGGRQKNRHCELSPLPSACRGSVQGALYTD",
"KDQVLTVPTYACRCCPKMDSKGRVPSTLRVKSARS",
"PLAGVACGRGLDYRCPRKMVPGDLQVTPATQRPYG",
"CGVRLGYPGCADVPLRGRSSFAPRACMKKDPRVTR",
"RKGVAYLYECRKLRCRADYKPRGMDGRRLPKASTT",
"RPTGAVNCKQAKVYRGLPLPMMGKVPRVCRSRRPY",
"RLDGGYTCGQALDCKPGRKPPKMGCADLKSTVATP",
"LGTCRKLVRYPQCADPPVMGRSSFRPKACCRQDPV",
"RVGYAMCSPKLCSCRADYKPPMGDGDRLPKAATSK",
"QPKAVNCRKAMVYRPKPLPMDKGVPVCRSKRPRPY",
]
# Additional Family 2: Structural Proteins
new_sequences_2 = [
"VGKGFRYGSSQKRYLHCQKSALPPSCRRGKGQGSAT",
"KDPTVMTVGTYSCQCPKQDSRGSVQPTSRVKTSRSK",
"PLVGKACGRSSDYKCPGQMVSGGSKQTPASQRPSYD",
"CGKKLVGYPSSKADVPLQGRSSFSPKACKKDPQMTS",
"RKGVASLYCSSKLSCKAQYSKGMSDGRSPKASSTTS",
"RPKSAASCEQAKSYRSLSLPSMKGKVPSKCSRSKRP",
"RSDVSYTSCSQSKDCKPSKPPKMSGSKDSSTVATPS",
"LSTCSKKVAYPSSKADPPSSGRSSFSMKACKKQDPPV",
"RVGSASSEPKSSCSVQSYSKPSMSGDSSPKASSTSK",
"QPSASNCEKMSSYRPSLPSMSKGVPSSRSKSSPPYQ",
]
# Tokenize the sequences and convert to tensors
# Merge all sequences
new_sequences = new_sequences_0 + new_sequences_1 + new_sequences_2
inputs = tokenizer(new_sequences, return_tensors="pt", padding=True, truncation=True)
# Use the model to get the logits
with torch.no_grad():
logits = model(**inputs).logits
# Get the predicted class for each sequence
predicted_class_ids = torch.argmax(logits, dim=-1)
# Print the predicted class for each sequence
for sequence, predicted_class in zip(new_sequences, predicted_class_ids):
print(f"Sequence: {sequence}, Predicted class: {predicted_class.item()}")
```
|
madebyollin/taesd-x4-upscaler
|
madebyollin
| 2023-08-06T21:13:41Z | 40 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"license:mit",
"region:us"
] | null | 2023-08-06T19:59:39Z |
---
license: mit
---
# 🍰 Tiny AutoEncoder for Stable Diffusion X4 Upscaler
[`taesd-x4-upscaler`](https://github.com/madebyollin/taesd) is a very tiny autoencoder that uses the same "latent API" as [`stable-diffusion-x4-upscaler`](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)'s VAE.
`taesd-x4-upscaler` is useful for [real-time previewing](https://twitter.com/madebyollin/status/1679356448655163394) of the upsampling process.
This repo contains `.safetensors` versions of the `taesd-x4-upscaler` weights.
## Using in 🧨 diffusers
```python
import requests
from PIL import Image
from io import BytesIO
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
low_res_img = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((128, 128))
import torch
from diffusers import StableDiffusionUpscalePipeline, AutoencoderTiny
pipe = StableDiffusionUpscalePipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd-x4-upscaler", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a white cat", image=low_res_img, num_inference_steps=25).images[0]
image.save("upsampled.png")
```
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster012_partitioned_v3_standardized_012
|
HydraLM
| 2023-08-06T21:11:11Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
muhtasham/bert-tiny-finetuned-glue-rte
|
muhtasham
| 2023-08-06T21:06:42Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-01T23:42:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-tiny-finetuned-glue-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: rte
split: train
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.631768953068592
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-glue-rte
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6673
- Accuracy: 0.6318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.4294744851376705e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6852 | 0.5776 |
| No log | 2.0 | 312 | 0.6800 | 0.5993 |
| No log | 3.0 | 468 | 0.6737 | 0.6173 |
| 0.6845 | 4.0 | 624 | 0.6690 | 0.6101 |
| 0.6845 | 5.0 | 780 | 0.6673 | 0.6318 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
simonycl/roberta-large-sst-2-32-13-smoothed
|
simonycl
| 2023-08-06T21:04:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T20:55:53Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-sst-2-32-13-smoothed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst-2-32-13-smoothed
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5917
- Accuracy: 0.8906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 75
- label_smoothing_factor: 0.45
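For orientation, a rough sketch of `TrainingArguments` mirroring the settings above (dataset loading and the `Trainer` call are omitted; this is not the exact script used):
```python
from transformers import TrainingArguments

# Values mirror the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="roberta-large-sst-2-32-13-smoothed",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=75,
    label_smoothing_factor=0.45,
)
```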
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.7430 | 0.5 |
| No log | 2.0 | 4 | 0.7414 | 0.5 |
| No log | 3.0 | 6 | 0.7386 | 0.5 |
| No log | 4.0 | 8 | 0.7348 | 0.5 |
| 0.7439 | 5.0 | 10 | 0.7302 | 0.5 |
| 0.7439 | 6.0 | 12 | 0.7248 | 0.5 |
| 0.7439 | 7.0 | 14 | 0.7195 | 0.5 |
| 0.7439 | 8.0 | 16 | 0.7143 | 0.5 |
| 0.7439 | 9.0 | 18 | 0.7082 | 0.5 |
| 0.7171 | 10.0 | 20 | 0.7022 | 0.5 |
| 0.7171 | 11.0 | 22 | 0.6977 | 0.5 |
| 0.7171 | 12.0 | 24 | 0.6954 | 0.5312 |
| 0.7171 | 13.0 | 26 | 0.6936 | 0.5156 |
| 0.7171 | 14.0 | 28 | 0.6926 | 0.5156 |
| 0.7024 | 15.0 | 30 | 0.6922 | 0.5312 |
| 0.7024 | 16.0 | 32 | 0.6921 | 0.5469 |
| 0.7024 | 17.0 | 34 | 0.6927 | 0.5312 |
| 0.7024 | 18.0 | 36 | 0.6938 | 0.5312 |
| 0.7024 | 19.0 | 38 | 0.6958 | 0.5156 |
| 0.6826 | 20.0 | 40 | 0.6982 | 0.5156 |
| 0.6826 | 21.0 | 42 | 0.7138 | 0.5 |
| 0.6826 | 22.0 | 44 | 0.7064 | 0.5312 |
| 0.6826 | 23.0 | 46 | 0.6992 | 0.5625 |
| 0.6826 | 24.0 | 48 | 0.6926 | 0.5625 |
| 0.6474 | 25.0 | 50 | 0.6836 | 0.5781 |
| 0.6474 | 26.0 | 52 | 0.6617 | 0.7344 |
| 0.6474 | 27.0 | 54 | 0.6450 | 0.7656 |
| 0.6474 | 28.0 | 56 | 0.6392 | 0.7812 |
| 0.6474 | 29.0 | 58 | 0.6513 | 0.7344 |
| 0.5878 | 30.0 | 60 | 0.6481 | 0.7812 |
| 0.5878 | 31.0 | 62 | 0.6583 | 0.7969 |
| 0.5878 | 32.0 | 64 | 0.6649 | 0.7812 |
| 0.5878 | 33.0 | 66 | 0.6280 | 0.8125 |
| 0.5878 | 34.0 | 68 | 0.6212 | 0.8594 |
| 0.5602 | 35.0 | 70 | 0.6214 | 0.8281 |
| 0.5602 | 36.0 | 72 | 0.6534 | 0.75 |
| 0.5602 | 37.0 | 74 | 0.6334 | 0.8594 |
| 0.5602 | 38.0 | 76 | 0.6060 | 0.875 |
| 0.5602 | 39.0 | 78 | 0.6048 | 0.875 |
| 0.55 | 40.0 | 80 | 0.6064 | 0.8594 |
| 0.55 | 41.0 | 82 | 0.6095 | 0.8438 |
| 0.55 | 42.0 | 84 | 0.6161 | 0.8438 |
| 0.55 | 43.0 | 86 | 0.6068 | 0.8594 |
| 0.55 | 44.0 | 88 | 0.5929 | 0.875 |
| 0.5425 | 45.0 | 90 | 0.5918 | 0.8906 |
| 0.5425 | 46.0 | 92 | 0.5919 | 0.8906 |
| 0.5425 | 47.0 | 94 | 0.5921 | 0.875 |
| 0.5425 | 48.0 | 96 | 0.5925 | 0.875 |
| 0.5425 | 49.0 | 98 | 0.5970 | 0.8906 |
| 0.5415 | 50.0 | 100 | 0.6128 | 0.8438 |
| 0.5415 | 51.0 | 102 | 0.6187 | 0.8438 |
| 0.5415 | 52.0 | 104 | 0.6012 | 0.8906 |
| 0.5415 | 53.0 | 106 | 0.5981 | 0.8906 |
| 0.5415 | 54.0 | 108 | 0.6085 | 0.8125 |
| 0.5434 | 55.0 | 110 | 0.6028 | 0.8438 |
| 0.5434 | 56.0 | 112 | 0.5970 | 0.8594 |
| 0.5434 | 57.0 | 114 | 0.6013 | 0.8906 |
| 0.5434 | 58.0 | 116 | 0.6023 | 0.8906 |
| 0.5434 | 59.0 | 118 | 0.6002 | 0.8906 |
| 0.5397 | 60.0 | 120 | 0.5964 | 0.8906 |
| 0.5397 | 61.0 | 122 | 0.5940 | 0.8906 |
| 0.5397 | 62.0 | 124 | 0.5934 | 0.8906 |
| 0.5397 | 63.0 | 126 | 0.5936 | 0.8906 |
| 0.5397 | 64.0 | 128 | 0.5936 | 0.8906 |
| 0.5403 | 65.0 | 130 | 0.5939 | 0.8906 |
| 0.5403 | 66.0 | 132 | 0.5939 | 0.8906 |
| 0.5403 | 67.0 | 134 | 0.5933 | 0.8906 |
| 0.5403 | 68.0 | 136 | 0.5933 | 0.8906 |
| 0.5403 | 69.0 | 138 | 0.5934 | 0.8906 |
| 0.5394 | 70.0 | 140 | 0.5931 | 0.8906 |
| 0.5394 | 71.0 | 142 | 0.5926 | 0.8906 |
| 0.5394 | 72.0 | 144 | 0.5921 | 0.8906 |
| 0.5394 | 73.0 | 146 | 0.5919 | 0.8906 |
| 0.5394 | 74.0 | 148 | 0.5918 | 0.8906 |
| 0.5394 | 75.0 | 150 | 0.5917 | 0.8906 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster010_partitioned_v3_standardized_010
|
HydraLM
| 2023-08-06T21:01:19Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:53:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
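Expressed with the `transformers` API, the 4-bit settings above correspond roughly to the following `BitsAndBytesConfig`. This is a sketch of the quantization config only; model loading and the PEFT adapter are omitted.
```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```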
### Framework versions
- PEFT 0.4.0
|
LarryAIDraw/Doria_v1
|
LarryAIDraw
| 2023-08-06T20:59:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:52:22Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123204/andrea-doria-azur-lane
|
LarryAIDraw/Patchi_V1
|
LarryAIDraw
| 2023-08-06T20:59:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:52:00Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123345/skv-patchouli-knowledge-touhou-lora
|
LarryAIDraw/ryuu_v1
|
LarryAIDraw
| 2023-08-06T20:58:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:50:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123575/ryuu-lion-or-danmachi-lora
|
LarryAIDraw/mudrock-03
|
LarryAIDraw
| 2023-08-06T20:57:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:49:45Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123709/mudrock-or-arknights-or-lora
|
LarryAIDraw/HorikitaLora-12
|
LarryAIDraw
| 2023-08-06T20:57:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:49:21Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123805/suzune-horikita-classroom-of-the-elite-lora
|
simonycl/roberta-large-sst-2-16-13-smoothed
|
simonycl
| 2023-08-06T20:55:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T20:50:09Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-sst-2-16-13-smoothed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst-2-16-13-smoothed
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6487
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 75
- label_smoothing_factor: 0.45
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.7106 | 0.5 |
| No log | 2.0 | 2 | 0.7104 | 0.5 |
| No log | 3.0 | 3 | 0.7100 | 0.5 |
| No log | 4.0 | 4 | 0.7094 | 0.5 |
| No log | 5.0 | 5 | 0.7087 | 0.5 |
| No log | 6.0 | 6 | 0.7077 | 0.5 |
| No log | 7.0 | 7 | 0.7066 | 0.5 |
| No log | 8.0 | 8 | 0.7054 | 0.5 |
| No log | 9.0 | 9 | 0.7040 | 0.5 |
| 0.7172 | 10.0 | 10 | 0.7026 | 0.5 |
| 0.7172 | 11.0 | 11 | 0.7011 | 0.5 |
| 0.7172 | 12.0 | 12 | 0.6995 | 0.5 |
| 0.7172 | 13.0 | 13 | 0.6980 | 0.5 |
| 0.7172 | 14.0 | 14 | 0.6965 | 0.5312 |
| 0.7172 | 15.0 | 15 | 0.6951 | 0.5312 |
| 0.7172 | 16.0 | 16 | 0.6936 | 0.5312 |
| 0.7172 | 17.0 | 17 | 0.6921 | 0.5312 |
| 0.7172 | 18.0 | 18 | 0.6906 | 0.5312 |
| 0.7172 | 19.0 | 19 | 0.6895 | 0.5312 |
| 0.6997 | 20.0 | 20 | 0.6884 | 0.5312 |
| 0.6997 | 21.0 | 21 | 0.6874 | 0.5312 |
| 0.6997 | 22.0 | 22 | 0.6867 | 0.5625 |
| 0.6997 | 23.0 | 23 | 0.6860 | 0.5312 |
| 0.6997 | 24.0 | 24 | 0.6854 | 0.5938 |
| 0.6997 | 25.0 | 25 | 0.6846 | 0.6562 |
| 0.6997 | 26.0 | 26 | 0.6840 | 0.625 |
| 0.6997 | 27.0 | 27 | 0.6832 | 0.6562 |
| 0.6997 | 28.0 | 28 | 0.6826 | 0.6875 |
| 0.6997 | 29.0 | 29 | 0.6815 | 0.6875 |
| 0.6874 | 30.0 | 30 | 0.6804 | 0.6875 |
| 0.6874 | 31.0 | 31 | 0.6790 | 0.6875 |
| 0.6874 | 32.0 | 32 | 0.6772 | 0.6875 |
| 0.6874 | 33.0 | 33 | 0.6762 | 0.6562 |
| 0.6874 | 34.0 | 34 | 0.6753 | 0.6562 |
| 0.6874 | 35.0 | 35 | 0.6738 | 0.6875 |
| 0.6874 | 36.0 | 36 | 0.6725 | 0.6875 |
| 0.6874 | 37.0 | 37 | 0.6696 | 0.6875 |
| 0.6874 | 38.0 | 38 | 0.6687 | 0.6875 |
| 0.6874 | 39.0 | 39 | 0.6665 | 0.6875 |
| 0.6594 | 40.0 | 40 | 0.6643 | 0.6875 |
| 0.6594 | 41.0 | 41 | 0.6674 | 0.6875 |
| 0.6594 | 42.0 | 42 | 0.6733 | 0.6875 |
| 0.6594 | 43.0 | 43 | 0.6804 | 0.6875 |
| 0.6594 | 44.0 | 44 | 0.6731 | 0.6875 |
| 0.6594 | 45.0 | 45 | 0.6701 | 0.6875 |
| 0.6594 | 46.0 | 46 | 0.6687 | 0.6875 |
| 0.6594 | 47.0 | 47 | 0.6687 | 0.6562 |
| 0.6594 | 48.0 | 48 | 0.6757 | 0.625 |
| 0.6594 | 49.0 | 49 | 0.6739 | 0.6875 |
| 0.6089 | 50.0 | 50 | 0.6766 | 0.6875 |
| 0.6089 | 51.0 | 51 | 0.6724 | 0.6875 |
| 0.6089 | 52.0 | 52 | 0.6662 | 0.6875 |
| 0.6089 | 53.0 | 53 | 0.6664 | 0.6875 |
| 0.6089 | 54.0 | 54 | 0.6602 | 0.6875 |
| 0.6089 | 55.0 | 55 | 0.6505 | 0.6875 |
| 0.6089 | 56.0 | 56 | 0.6468 | 0.75 |
| 0.6089 | 57.0 | 57 | 0.6370 | 0.75 |
| 0.6089 | 58.0 | 58 | 0.6285 | 0.7812 |
| 0.6089 | 59.0 | 59 | 0.6267 | 0.7812 |
| 0.5694 | 60.0 | 60 | 0.6279 | 0.7812 |
| 0.5694 | 61.0 | 61 | 0.6364 | 0.7812 |
| 0.5694 | 62.0 | 62 | 0.6443 | 0.75 |
| 0.5694 | 63.0 | 63 | 0.6518 | 0.7812 |
| 0.5694 | 64.0 | 64 | 0.6634 | 0.7188 |
| 0.5694 | 65.0 | 65 | 0.6647 | 0.7188 |
| 0.5694 | 66.0 | 66 | 0.6679 | 0.7188 |
| 0.5694 | 67.0 | 67 | 0.6669 | 0.7188 |
| 0.5694 | 68.0 | 68 | 0.6626 | 0.7188 |
| 0.5694 | 69.0 | 69 | 0.6624 | 0.75 |
| 0.5618 | 70.0 | 70 | 0.6614 | 0.7188 |
| 0.5618 | 71.0 | 71 | 0.6592 | 0.75 |
| 0.5618 | 72.0 | 72 | 0.6571 | 0.75 |
| 0.5618 | 73.0 | 73 | 0.6541 | 0.75 |
| 0.5618 | 74.0 | 74 | 0.6499 | 0.75 |
| 0.5618 | 75.0 | 75 | 0.6487 | 0.75 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster04_partitioned_v3_standardized_04
|
HydraLM
| 2023-08-06T20:21:35Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:53:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
CristoJV/q-FrozenLake-v1-4x4-noSlippery
|
CristoJV
| 2023-08-06T19:52:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:52:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the saved Q-learning model dictionary.
model = load_from_hub(repo_id="CristoJV/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
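The downloaded artifact is a pickled dictionary; besides `env_id` it typically carries the learned Q-table. A minimal, hedged sketch of greedy action selection follows (the `"qtable"` key name is an assumption about the dictionary layout):
```python
import numpy as np

# Assumes the pickled dict exposes the learned table under "qtable".
qtable = np.array(model["qtable"])

state = 0                                # FrozenLake 4x4 states are integers 0..15
action = int(np.argmax(qtable[state]))   # greedy action for that state
print(f"Greedy action in state {state}: {action}")
```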
|
BauyrjanQ/whisper-kk-sp2n-b16-ms1600-s
|
BauyrjanQ
| 2023-08-06T19:46:25Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T22:16:53Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-kk-sp2n-b16-ms1600-s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-kk-sp2n-b16-ms1600-s
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2659
- Wer: 274.5556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9302 | 0.18 | 800 | 0.3010 | 229.3416 |
| 0.83 | 0.37 | 1600 | 0.2659 | 274.5556 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
alexeynoskov/dqn-SpaceInvadersNoFrameskip-v4
|
alexeynoskov
| 2023-08-06T19:44:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:44:11Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 652.00 +/- 106.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alexeynoskov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alexeynoskov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alexeynoskov
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
s3nh/ziya-llama-13b-medical-merged-GGML
|
s3nh
| 2023-08-06T19:31:36Z | 0 | 6 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T15:48:44Z |
---
license: openrail
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/shibing624/ziya-llama-13b-medical-merged).
### inference
```python
from ctransformers import AutoModelForCausalLM

# `output_dir` and `ggml_file` are placeholders for the local directory and
# the GGML filename downloaded from this repository.
llm = AutoModelForCausalLM.from_pretrained(output_dir, model_file=ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
# Medical LLaMA-13B Model
A Chinese-English medical question-answering model based on LLaMA-13B.
`shibing624/ziya-llama-13b-medical-merged` evaluated on test data:
The overall performance of the model on the QA **test** set:
|input_text|predict|
|:-- |:--- |
|一岁宝宝发烧能吃啥药?|孩子的身体是很脆弱的,如果不注意保护好自己就会出现各种疾病。有些家长在看到孩子生病后都非常着急,想要立刻给他们用药治疗,但其实并没有那么简单,因为每个人对于药物的敏感度和耐受性都是不同的,所以我们需要根据具体情况来选择合适的方法进行治疗才可以达到最佳效果。1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少,但也存在过量使用引起胃肠道损伤甚至消化道溃疡出血的风险;2、还有部分小儿退热药中含有扑尔敏等抗组胺药,它们本身属于处方药,应该严格按照医嘱服用,切勿擅自加减剂量或者延长用药时间;3、此外,某些小儿退热药中含有维生素C及葡萄糖酸钙等营养补充剂,只有当患儿伴随脱水症状(如口渴、尿色深)时才考虑添加,否则容易造成电解质紊乱。|
The model performs very well on open Chinese test sets, inheriting two strengths: 1) the fine-tuning base is Ziya-LLaMA-13B, a strong Chinese-English foundation model; 2) fine-tuning used a high-quality set of 2.4 million Chinese-English medical instructions together with several general instruction datasets. After fine-tuning, the model's medical-domain answers reach a leading level, while its ability on general questions is no weaker than LLaMA-13B.
## Training details
training args:
```json
{"per_device_train_batch_size": 8, "per_device_eval_batch_size": 8, "per_gpu_train_batch_size": null, "per_gpu_eval_batch_size": null, "gradient_accumulation_steps": 1, "eval_accumulation_steps": null, "eval_delay": 0, "learning_rate": 2e-05, "weight_decay": 0.0, "adam_beta1": 0.9, "adam_beta2": 0.999, "adam_epsilon": 1e-08, "max_grad_norm": 1.0, "num_train_epochs": 10.0, "max_steps": -1, "lr_scheduler_type": "linear", "warmup_ratio": 0.0, "warmup_steps": 50, "log_level": "passive", "log_level_replica": "warning", "log_on_each_node": true, "logging_dir": "outputs-ziya-llama-13b-sft-med-v2/logs", "logging_strategy": "steps", "logging_first_step": false, "logging_steps": 50, "logging_nan_inf_filter": true, "save_strategy": "steps", "save_steps": 50, "save_total_limit": 3, "save_safetensors": false, "save_on_each_node": false, "no_cuda": false, "use_mps_device": false, "seed": 42, "data_seed": null, "jit_mode_eval": false, "use_ipex": false, "bf16": false, "fp16": true, "fp16_opt_level": "O1", "half_precision_backend": "cuda_amp", "bf16_full_eval": false, "fp16_full_eval": false, "tf32": null, "local_rank": 0, "xpu_backend": null, "tpu_num_cores": null, "tpu_metrics_debug": false, "debug": [], "dataloader_drop_last": false, "eval_steps": 50, "dataloader_num_workers": 0, "past_index": -1, "run_name": "outputs-ziya-llama-13b-sft-med-v2", "disable_tqdm": false, "remove_unused_columns": false, "label_names": null, "load_best_model_at_end": true, "metric_for_best_model": "loss", "greater_is_better": false, "ignore_data_skip": false, "sharded_ddp": [], "fsdp": [], "fsdp_min_num_params": 0, "fsdp_config": { "fsdp_min_num_params": 0, "xla": false, "xla_fsdp_grad_ckpt": false }, "fsdp_transformer_layer_cls_to_wrap": null, "deepspeed": null, "label_smoothing_factor": 0.0, "optim": "adamw_torch", "optim_args": null, "adafactor": false, "group_by_length": false, "length_column_name": "length", "report_to": [ "tensorboard" ], "ddp_find_unused_parameters": false, "ddp_bucket_cap_mb": null, "dataloader_pin_memory": true, "skip_memory_metrics": true, "use_legacy_prediction_loop": false, "push_to_hub": false, "resume_from_checkpoint": null, "hub_model_id": null, "hub_strategy": "every_save", "hub_token": "<hub_token>", "hub_private_repo": false, "gradient_checkpointing": false, "include_inputs_for_metrics": false, "fp16_backend": "auto", "push_to_hub_model_id": null, "push_to_hub_organization": null, "push_to_hub_token": "<push_to_hub_token>", "mp_parameters": "", "auto_find_batch_size": false, "full_determinism": false, "torchdynamo": null, "ray_scope": "last", "ddp_timeout": 1800, "torch_compile": false, "torch_compile_backend": null, "torch_compile_mode": null }
```
Training loss:
<img src="https://huggingface.co/shibing624/ziya-llama-13b-medical-merged/resolve/main/trainloss.png" alt="trainloss">
Evaluation loss:
<img src="https://huggingface.co/shibing624/ziya-llama-13b-medical-merged/resolve/main/evalloss.png" alt="evalloss">
## Usage
This project is open-sourced in the following GitHub repos:
- [shibing624/textgen](https://github.com/shibing624/textgen)
- [shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
With the [textgen](https://github.com/shibing624/textgen) library, you can call the LLaMA model:
Install package:
```shell
pip install -U textgen
```
```python
from textgen import GptModel
def generate_prompt(instruction):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:{instruction}\n\n### Response: """
model = GptModel("llama", "shibing624/ziya-llama-13b-medical-merged")
predict_sentence = generate_prompt("一岁宝宝发烧能吃啥药?")
r = model.predict([predict_sentence])
print(r) # ["1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少..."]
```
## Usage (HuggingFace Transformers)
Without [textgen](https://github.com/shibing624/textgen), you can use the model like this:
First, you pass your input through the transformer model, then you get the generated sentence.
Install package:
```
pip install transformers
```
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained("shibing624/ziya-llama-13b-medical-merged", device_map='auto')
tokenizer = LlamaTokenizer.from_pretrained("shibing624/ziya-llama-13b-medical-merged")
device = "cuda" if torch.cuda.is_available() else "cpu"

def generate_prompt(instruction):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:{instruction}\n\n### Response: """

sents = ['一岁宝宝发烧能吃啥药', "who are you?"]
for s in sents:
    q = generate_prompt(s)
    inputs = tokenizer(q, return_tensors="pt").to(device)
    generate_ids = model.generate(
        **inputs,
        max_new_tokens=120,
        do_sample=True,
        top_p=0.85,
        temperature=1.0,
        repetition_penalty=1.0,
        eos_token_id=tokenizer.eos_token_id,
        bos_token_id=tokenizer.bos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
    output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
    print(output)
    print()
```
output:
```shell
一岁宝宝发烧能吃啥药
孩子的身体是很脆弱的,如果不注意保护好自己就会出现各种疾病。有些家长在看到孩子生病后都非常着急,想要立刻给他们用药治疗,但其实并没有那么简单,因为每个人对于药物的敏感度和耐受性都是不同的,所以我们需要根据具体情况来选择合适的方法进行治疗才可以达到最佳效果。1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少,但也存在过量使用引起胃肠道损伤甚至消化道溃疡出血的风险;2、还有部分小儿退热药中含有扑尔敏等抗组胺药,它们本身属于处方药,应该严格按照医嘱服用,切勿擅自加减剂量或者延长用药时间;3、此外,某些小儿退热药中含有维生素C及葡萄糖酸钙等营养补充剂,只有当患儿伴随脱水症状(如口渴、尿色深)时才考虑添加,否则容易造成电解质紊乱。
```
Model files:
```
ziya-llama-13b-medical-merged
|-- added_tokens.json
|-- config.json
|-- generation_config.json
|-- pytorch_model-00001-of-00003.bin
|-- pytorch_model-00002-of-00003.bin
|-- pytorch_model-00003-of-00003.bin
|-- pytorch_model.bin.index.json
|-- README.md
|-- special_tokens_map.json
|-- tokenizer_config.json
`-- tokenizer.model
```
### Inference Examples
| Human | Assistant |
|------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 小孩发烧怎么办 | 宝宝的抵抗力是比较差的,所以很容易就会出现感冒或者其他疾病。如果宝宝在平时没有注意保暖,那么也可能导致身体受凉而引起了感冒等症状,这样不仅对于健康造成影响,还可能让宝宝出现高热的情况。因此当家长们发现自己的宝宝出现了高热的问题之后一定要及早进行治疗和护理才好。<br/>1、首先我们应该给宝宝服用退烧药物来帮助降温,但是千万不要盲目使用退烧药物,否则只会加重宝宝的病情,最好带着宝宝到医院去检查清楚具体原因之后再针对性地采取措施处理。<br/>2、另外大人们也需要做好预防工作,尤其是天气变化的时候更应该多关心下宝宝的身体状态,避免宝宝受寒受凉从而诱发各种疾病,同时也要尽量减少宝宝接触细菌病毒的机会,增强宝宝的免疫力。<br/>3、除了上面两点之外,大人们还可以通过按摩推拿穴位的方式来缓解宝宝的病痛,促进血液循环,提升新陈代谢速度,达到快速消炎止疼的效果 |
| 一岁宝宝发烧能吃啥药? | 孩子的身体是很脆弱的,如果不注意保护好自己就会出现各种疾病。有些家长在看到孩子生病后都非常着急,想要立刻给他们用药治疗,但其实并没有那么简单,因为每个人对于药物的敏感度和耐受性都是不同的,所以我们需要根据具体情况来选择合适的方法进行治疗才可以达到最佳效果。1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少,但也存在过量使用引起胃肠道损伤甚至消化道溃疡出血的风险;2、还有部分小儿退热药中含有扑尔敏等抗组胺药,它们本身属于处方药,应该严格按照医嘱服用,切勿擅自加减剂量或者延长用药时间;3、此外,某些小儿退热药中含有维生素C及葡萄糖酸钙等营养补充剂,只有当患儿伴随脱水症状(如口渴、尿色深)时才考虑添加,否则容易造成电解质紊乱。 |
| 肛门病变可能是什么疾病的症状 | 痔疮、直肠息肉等。 |
| 医生,您好白带增多泛黄怎么回事? | 你这种情况考虑可能会是阴道炎症引起的建议平时留意卫生清洁,勤换洗衣物内裤,必要时去正规公立医院妇科复诊做分泌物常规等相关检查明确病因后对症治疗,不同原因所致的疾病在临床表现上也存在差异性,如果患者出现了类似于的症状应当及早到专业、权威的医院进行详细的检验和救治 |
| 药流后有巧克力白带是没流干净吗 | 在日常生活中,我们要或多或少了解一些常见疾病的相关知识,不仅能起到增加知识水平的作用,更能让自己更好地预防此类疾病的产生。很多女性都会出现月经紊乱、腹痛等症状,这时候就需要考虑妇科炎症可能,比如说宫颈糜烂和盆腔积液。那么下面小编来给大家介绍一种情况:药物流产后白带呈巧克力色怎么回事?1、首先,对于药物流产之后发现阴道分泌物异常,应该及时去医院做检查确诊具体原因,然后再根据结果进行治疗调理。2、其次,由于每个人身体素质以及患病程度存在差别,所以采取的措施也不尽相同,但无论选择哪种方式处理,都必须严格按照规定剂量服用,并且注意观察副作用反映情况,若出现明显不适感觉,则应立即停止使用。3、最后,建议你保持外阴清洁卫生,勤换内裤,避免吃辛辣刺激食品,而且还要禁忌房事,否则容易引起交叉感染影响恢复健康。 |
| 维胺酯维E乳膏能治理什么疾病 | 皮肤淀粉样变性;脂溢性角化症 |
| 痤疮是什么原因导致的? | 痤疮,又称青春痘、粉刺。它主要发生在面部和胸背部位,表现为黑头粉刺、白头粉刺、丘疹、脓疱、结节等多种形式。其中以黑头粉刺最常见,约占全身皮肤病患者的70%左右。那么,痤疮是什么原因引起的呢?下面就给大家介绍一些相关内容。1、雄性激素分泌过旺:男女都会长痘痘,但是女孩子比较喜欢用化妆品来掩盖脸上的问题,而且很多人不知道这样做反而加重了自己的毛囊炎症状,所以说平时应该少使用含有雌激素类的产品。2、饮食习惯差:经常吃辛辣油腻的东西或者甜食也可能造成体内毒素堆积,从而诱发痤疮。3、压力太大:如果你总是处于紧张焦虑的情绪之中,同时还没有及时调整好心态,那么很容易出现内分泌失调的情况,进而影响到肌肤健康。4、睡眠质量差:熬夜对我们的身体伤害非常严重,尤其是对于年轻人群更甚,他们需要保证充足的睡眠才行,否则很容易让肌肤变得暗沉无光泽。 |
| 脸上长青春痘怎么办,能吃啥药,需要注意啥 | 你好,根据您的描述考虑是痤疮。建议外用维胺酯乳膏或者阿达帕林凝胶进行治疗看看,平时禁忌辛辣和刺激性食物,如海鲜、狗肉、羊肉等“发”动品,以及葱蒜韭姜等刺激性调料。多喝水,少吃油腻高热量食物。 |
| 由于输精管堵塞会不会导致男性不育?,医生您好,我同事这几天不开心,我问她怎么了,她说她老公输精管堵塞,所以一直没孩子,不是不想要,请问输精管堵塞会不会导致男性不育? | 你好!根据你的叙述及症状推测:如果有输精管阻塞或者狭窄等情况都可能会引来无法排出来精液而影响怀上。建议到正规专业的快孕医院实施进行咨询和治疗为宜。对于输精管患者来说,及时救治是可以减低病发,如果不及时救治会影响到受孕情况。因此患者要及时药物救治,并且日常生活中要多看重自身护理 |
| 心肌梗塞能治疗吗? | 心脏病是一种危害性极大的疾病,我们要对它有足够多的了解。在生活中很多人都会患上这个疾病,而且还不知道怎么去预防和护理,所以就导致自己的身体越来越差,那么下面小编给你介绍几点关于心脏病的常识吧!1、冠状动脉粥样硬化性心脏病:又称为冠心病或缺血型心脏病(IHD),是世界各地最重要的死亡原因之一。2、风湿热:主要侵犯心脏瓣膜,可引起二尖瓣狭窄及其他类型的心脏瓣膜病变;3、先天性心脏病:包括房间隔缺损、室间隔缺损等;4、高脂蛋白血症:由于胆固醇与低密度脂蛋白升高,使得动脉壁增厚,管腔缩小,易形成斑块并脱落造成急性栓塞;5、肥胖:过量摄入食物,特别是含糖饮料,加速脂肪堆积,促进动脉粥样硬化发展;6、吸烟:尼古丁刺激交感神经兴奋,使外周阻力增强,收缩压升高,舒张期减短,从而降低左右心室射血分数,诱发心律失常;7、酗酒:长期嗜酒者,肝功受到影响,代谢紊乱,容易出现脂质代谢异常。 |
| 怎么让自己精力充沛,列5点建议 | 1、保证睡眠质量:每天晚上7-8小时的高质量睡眠是提升身体和大脑活跃度最有效的方法。2、合理饮食:均衡营养摄入对于维持人体正常运转至关重要,多吃蔬菜水果,少油腻辛辣刺激性食物。3、适当锻炼:坚持一定强度的锻炼可以促进血液循环,增加肌肉耐力,改善心肺功能等。4、学习新知识:不断接触新事物,开拓视野,思考问题,都会使得我们更具创造力与灵感。5、放松心情:通过冥想或者瑜伽来缓解压力,也可以听音乐或看电影来调节情绪。 |
| 写个春天的诗 | 春风吹拂,花儿绽放。<br/>阳光明媚,鸟语鸣叫。<br/>树木抽芽,草地变绿。<br/>万物复苏,生机勃发。 |
| 南极洲为啥没有北极熊? |南极是世界上最大的冰盖,覆盖着98%以上的地球表面积。它位于南纬60度至74度之间,环绕在海洋中心周围。由于其独特的气候条件和生态系统,南极被称为“真正的荒野”。南极洲拥有丰富多样的动植物资源,但这里并不存在北极熊。 那么,为什么南极洲没有北极熊呢?首先,我们需要了解一下两者栖息地的差异。北极熊主要分布在北半球寒冷的高纬度地区,如加拿大、格陵兰岛等地。而南极洲则处于南半球,属于温带或寒带气候类型。虽然南极洲也有很低的平均气温(-25℃左右),但与北极相比还是太热了。因此,即使北极熊能够适应更严酷的气候条件,它们也无法在南极找到合适的栖息地。另外,南极洲缺乏陆地哺乳动物食物来源,包括鱼类、鲸鱼和企鹅等。尽管南极洲的水域中也有各种鱼类,但数量远少于北极圈内。同时,南极洲的土著居民——企鹅群体繁殖季节期间会消耗掉大部分可用的食物资源,导致当地的鱼类数量减少甚至枯竭。|
### Training Datasets
- 500k Chinese ChatGPT-style instructions (Belle dataset): [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
- 1M Chinese ChatGPT-style instructions (Belle dataset): [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
- 50k English ChatGPT-style instructions (Alpaca dataset): [50k English Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca#data-release)
- 20k Chinese ChatGPT-style instructions (Alpaca dataset): [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh)
- 690k Chinese instructions (Guanaco dataset: 500k Belle + 190k Guanaco): [Chinese-Vicuna/guanaco_belle_merge_v1.0](https://huggingface.co/datasets/Chinese-Vicuna/guanaco_belle_merge_v1.0)
- 2.4M Chinese medical samples (including pre-training data and instruction fine-tuning data): [shibing624/medical](https://huggingface.co/datasets/shibing624/medical)
If you want to train ChatGLM/LLaMA/BLOOM models, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen)
## Citation
```latex
@software{textgen,
author = {Ming Xu},
title = {textgen: Implementation of language model finetune},
year = {2023},
url = {https://github.com/shibing624/textgen},
}
```
|
s3nh/MedLLaMA_13B-GGML
|
s3nh
| 2023-08-06T19:30:00Z | 0 | 4 |
transformers
|
[
"transformers",
"text-generation",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T15:46:34Z |
---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/chaoyi-wu/MedLLaMA_13B).
### inference
```python
from ctransformers import AutoModelForCausalLM

# `output_dir` and `ggml_file` are placeholders for the local directory and
# the GGML filename downloaded from this repository.
llm = AutoModelForCausalLM.from_pretrained(output_dir, model_file=ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
|
BigSyal/keisya
|
BigSyal
| 2023-08-06T19:28:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T19:26:26Z |
---
license: creativeml-openrail-m
---
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster00_partitioned_v3_standardized_00
|
HydraLM
| 2023-08-06T19:23:47Z | 10 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:51:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
MattStammers/Bipedal_Walker_v3_Hardcore_Flat_Optimised
|
MattStammers
| 2023-08-06T19:15:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:14:56Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: -85.95 +/- 18.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
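Since the snippet above is left as a TODO, here is a minimal hedged sketch of how such a checkpoint is usually loaded with `huggingface_sb3`; the zip filename is an assumption, and `gymnasium` with Box2D support is assumed to be installed.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is hypothetical) and load it.
checkpoint = load_from_hub(
    repo_id="MattStammers/Bipedal_Walker_v3_Hardcore_Flat_Optimised",
    filename="ppo-BipedalWalker-v3.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy.
env = gym.make("BipedalWalker-v3")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```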
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster01_partitioned_v3_standardized_01
|
HydraLM
| 2023-08-06T19:13:00Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:46:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
ThuyNT03/xlm-roberta-base-finetuned-panx-all
|
ThuyNT03
| 2023-08-06T19:12:32Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T18:49:24Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1764
- F1: 0.8572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.297 | 1.0 | 835 | 0.1950 | 0.8093 |
| 0.1555 | 2.0 | 1670 | 0.1687 | 0.8455 |
| 0.1 | 3.0 | 2505 | 0.1764 | 0.8572 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Robayet2023/esm2_t12_35M_UR50D-finetuned-localization
|
Robayet2023
| 2023-08-06T19:10:45Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t12_35M_UR50D",
"base_model:finetune:facebook/esm2_t12_35M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T22:55:53Z |
---
license: mit
base_model: facebook/esm2_t12_35M_UR50D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: esm2_t12_35M_UR50D-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0331
- Accuracy: 0.4835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.042 | 1.0 | 23758 | 0.0388 | 0.4835 |
| 0.0325 | 2.0 | 47516 | 0.0351 | 0.4835 |
| 0.0259 | 3.0 | 71274 | 0.0331 | 0.4835 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.3
- Tokenizers 0.13.3
|
aivance/rebranding-to-aistrova
|
aivance
| 2023-08-06T19:10:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-06T19:08:54Z |
We're moving to https://huggingface.co/aistrova
|
strnam/instruction-bloom-7b1
|
strnam
| 2023-08-06T18:52:54Z | 8 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T18:52:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: True
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Peniis2/Airplane
|
Peniis2
| 2023-08-06T18:43:04Z | 0 | 0 | null |
[
"en",
"dataset:databricks/databricks-dolly-15k",
"region:us"
] | null | 2023-08-06T18:41:29Z |
---
datasets:
- databricks/databricks-dolly-15k
language:
- en
---
|
ThuyNT03/xlm-roberta-base-finetuned-panx-fr
|
ThuyNT03
| 2023-08-06T18:42:38Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T18:37:41Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8441295546558704
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2787
- F1: 0.8441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 191 | 0.3171 | 0.7910 |
| No log | 2.0 | 382 | 0.2828 | 0.8081 |
| No log | 3.0 | 573 | 0.2787 | 0.8441 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Lilsunx/llama2-qlora-finetunined-french
|
Lilsunx
| 2023-08-06T18:29:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T18:28:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
ichacon/dqn-SpaceInvadersNoFrameskio-v4-0.0.1
|
ichacon
| 2023-08-06T18:21:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T18:21:12Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 337.00 +/- 83.16
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ichacon -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ichacon -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ichacon
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HasanErdin/ppo-Huggy
|
HasanErdin
| 2023-08-06T18:14:39Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-06T18:14:34Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: HasanErdin/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
textgain/allnli-GroNLP-bert-base-dutch-cased
|
textgain
| 2023-08-06T18:09:12Z | 553 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"nl",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-01-16T13:17:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- nl
widget:
- source_sentence: "De kat slaapt op het bed."
sentences:
- "De poes rust op het matras."
- "De hond slaapt naast het bed."
- "Het bed is gemaakt van hout."
---
# allnli-GroNLP-bert-base-dutch-cased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["De kat slaapt op het bed.", "De poes rust op het matras."]
model = SentenceTransformer('textgain/allnli-GroNLP-bert-base-dutch-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["De kat slaapt op het bed.", "De poes rust op het matras."]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('textgain/allnli-GroNLP-bert-base-dutch-cased')
model = AutoModel.from_pretrained('textgain/allnli-GroNLP-bert-base-dutch-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=textgain/allnli-GroNLP-bert-base-dutch-cased)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4388 with parameters:
```
{'batch_size': 128}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 438,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 439,
"weight_decay": 0.01
}
```
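Putting the pieces above together, training would look roughly like the sketch below. The base checkpoint (`GroNLP/bert-base-dutch-cased`) and the placeholder triples are assumptions; the real run used Dutch AllNLI-style triples and the similarity evaluator listed above.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Base model wrapped with mean pooling (assumption: GroNLP Dutch BERT base).
model = SentenceTransformer("GroNLP/bert-base-dutch-cased")

# Placeholder (anchor, positive, negative) triples standing in for Dutch AllNLI.
train_examples = [
    InputExample(texts=[f"anker {i}", f"positief {i}", f"negatief {i}"])
    for i in range(256)
]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=128)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=439,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```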
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ishwarbb23/t52
|
ishwarbb23
| 2023-08-06T17:53:05Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThomasSimonini/t5-end2end-question-generation",
"base_model:finetune:ThomasSimonini/t5-end2end-question-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-05T18:12:16Z |
---
license: apache-2.0
base_model: ThomasSimonini/t5-end2end-question-generation
tags:
- generated_from_trainer
model-index:
- name: t52
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t52
This model is a fine-tuned version of [ThomasSimonini/t5-end2end-question-generation](https://huggingface.co/ThomasSimonini/t5-end2end-question-generation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2217 | 0.65 | 100 | 2.9125 |
| 2.9732 | 1.3 | 200 | 2.8349 |
| 2.8996 | 1.95 | 300 | 2.7879 |
| 2.8009 | 2.59 | 400 | 2.7614 |
| 2.7532 | 3.24 | 500 | 2.7406 |
| 2.6964 | 3.89 | 600 | 2.7208 |
| 2.6462 | 4.54 | 700 | 2.7153 |
| 2.6265 | 5.19 | 800 | 2.7037 |
| 2.6089 | 5.84 | 900 | 2.6968 |
| 2.5522 | 6.49 | 1000 | 2.6944 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-last_2_layer_randomized
|
roa7n
| 2023-08-06T17:39:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T17:39:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Pauitbid/llama2-qlora-finetunined-french
|
Pauitbid
| 2023-08-06T17:39:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T17:38:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
li-ping/summary_llama_1_epoch
|
li-ping
| 2023-08-06T17:19:55Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T17:12:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
ailabturkiye/Lilith
|
ailabturkiye
| 2023-08-06T17:07:29Z | 0 | 0 | null |
[
"diabloV",
"diablo v",
"lilith",
"villain",
"license:openrail",
"region:us"
] | null | 2023-08-06T16:38:09Z |
---
license: openrail
metrics:
- character
tags:
- diabloV
- diablo v
- lilith
- villain
---
Lilith -Diablo V-
Lilith is the main villain of the game Diablo V. The model was trained for 500 epochs and sits at s4500.
The TRAIN and DATASET of this model belong to me. Using it without permission is forbidden. If permission is granted, the model owner must be credited in the "Cast" section on whichever social media platform you share it on.
Discord: Alastor#3115
YouTube: https://www.youtube.com/@NahParti
|
roa7n/gpt2-human_nontata_promoters-last_layer_randomized
|
roa7n
| 2023-08-06T17:00:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T17:00:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ASAHIMM/ASA
|
ASAHIMM
| 2023-08-06T16:58:31Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"aa",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] | null | 2023-08-06T16:57:28Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- accuracy
library_name: adapter-transformers
---
|
miyao-haruto/distilbert-base-uncased-finetuned-emotion
|
miyao-haruto
| 2023-08-06T16:54:20Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T16:34:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9415
- name: F1
type: f1
value: 0.9414152998744435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1548
- Accuracy: 0.9415
- F1: 0.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.2005 | 0.9285 | 0.9292 |
| 0.2278 | 2.0 | 250 | 0.1661 | 0.9305 | 0.9313 |
| 0.2278 | 3.0 | 375 | 0.1505 | 0.9355 | 0.9359 |
| 0.113 | 4.0 | 500 | 0.1447 | 0.9415 | 0.9410 |
| 0.113 | 5.0 | 625 | 0.1469 | 0.9375 | 0.9375 |
| 0.0814 | 6.0 | 750 | 0.1407 | 0.9385 | 0.9384 |
| 0.0814 | 7.0 | 875 | 0.1469 | 0.9395 | 0.9395 |
| 0.0612 | 8.0 | 1000 | 0.1545 | 0.941 | 0.9405 |
| 0.0612 | 9.0 | 1125 | 0.1537 | 0.9385 | 0.9388 |
| 0.0492 | 10.0 | 1250 | 0.1548 | 0.9415 | 0.9414 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.12.1+cu102
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TearGosling/model-playground
|
TearGosling
| 2023-08-06T16:29:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-06T16:28:42Z |
Just toying around with ideas for custom models
|
tlano/ToraFurryMix
|
tlano
| 2023-08-06T16:24:25Z | 0 | 12 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-18T16:06:16Z |
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- stable-diffusion
---
# モデル説明 / Model Description
This model was tuned with the use of the "furry" tag in mind.<br>
Output without the "furry" tag has not been tested much.<br>
<br>
Because being license-free and making the merge sources explicit are among the goals of this model,<br>
the excellent furry models of earlier creators were reluctantly excluded from the merge candidates.<br>
<br>
As a result, furry fidelity is closer to the SD 1.5 base, and the model can be rather temperamental.<br>
Combining it with LoRA/TI and adjusting tag weights is recommended.<br>
**v20** <br>
Rendering feels a bit more stable, and face distortion at medium range and beyond seems reduced.<br>
Hand rendering is still unstable, but it can be improved with TIs such as badhand (though results lean toward a humanoid shape).<br>
Eyes tend to look dull, but specifying colors etc. in the prompt stabilizes them.<br>
<br>
# マージしたモデル / Merged Models
<details open><summary>v10</summary>
- sdhk_v40.safetensors (https://civitai.com/models/82813/sdhk)
- dreamshaper_631BakedVae.safetensors (https://civitai.com/models/4384)
- iroiro-lora (agomaru / faceage / hohoaka) (https://huggingface.co/2vXpSwA7/iroiro-lora)
- Tora-NijiFurry-v4.safetensors (https://huggingface.co/tlano/Tora-NijiFurry-LoRA)
</details>
<details open><summary>v20</summary>
- sdhk_v40.safetensors (https://civitai.com/models/82813/sdhk)
- fluffyrock_e92TerminalSnrE65.safetensors (https://civitai.com/models/92450)
- dreamshaper_8.safetensors (https://civitai.com/models/4384)
- iroiro-lora (hohoaka / eye-no_highlight) (https://huggingface.co/2vXpSwA7/iroiro-lora)
- Tora-NijiFurry-v4.safetensors (https://huggingface.co/tlano/Tora-NijiFurry-LoRA)
</details>
<br>
# ライセンス / License
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
CreativeML OpenRAIL-M ライセンス / CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<br>
# 作例 / Examples
<details><summary>v10</summary>
# 作例1 / Example 1

```
[Positive prompt]
furry girl,rabbit ears,green dress,in forest, sea of flowers,
cowboy shot,smile, hair ornament, looking at viewer, one hand up, head tilt
[Negative prompt]
(worst quality, low quality:2)
[Sampling]
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7
Seed: 2459419738
Size: 512x768
Clip skip: 2
[Hires.fix]
Denoising strength: 0.5
Hires upscale: 2.5
Hires steps: 15
Hires upscaler: ESRGAN_4x
[VAE]
vae-ft-mse-840000-ema-pruned.safetensors
```
<br>
# 作例2 / Example 2

```
Removed "furry" from the [Positive prompt] of Example 1
Everything else is identical
```
<br>
</details>
<details><summary>v20</summary>
# 作例1 / Example 1

```
[Positive prompt]
furry, orange fur, red fur, two tone fur, full body,smile, in forest, dressing
[Negative prompt]
(worst quality, low quality:2)
[Sampling]
Steps: 30
Sampler: DPM++ 2M Karras
CFG scale: 7
Seed: 102423334
Size: 512x768
Clip skip: 2
[Hires.fix]
Denoising strength: 0.4
Hires upscale: 2.5
Hires steps: 25
Hires upscaler: R-ESRGAN 4x+ Anime6B
[VAE]
vae-ft-mse-840000-ema-pruned.safetensors
```
<br>
</details>
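For those who prefer diffusers over the A1111 WebUI, here is a rough, hedged sketch that approximates the v20 Example 1 settings above. The local file name is hypothetical, clip skip and Hires.fix are not reproduced, and A1111-style prompt weighting is not interpreted natively by diffusers:
```py
# Hedged sketch: assumes a recent diffusers version with from_single_file and a
# locally downloaded checkpoint. DPM++ 2M Karras roughly corresponds to
# DPMSolverMultistepScheduler with Karras sigmas.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "ToraFurryMix_v20.safetensors",  # hypothetical local file name
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "furry, orange fur, red fur, two tone fur, full body, smile, in forest, dressing",
    negative_prompt="(worst quality, low quality:2)",  # weighting syntax is kept only for reference
    num_inference_steps=30,
    guidance_scale=7.0,
    width=512,
    height=768,
    generator=torch.Generator("cuda").manual_seed(102423334),
).images[0]
image.save("example.png")
```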
<br>
**作者 / Author**<br>
 twitter: [@TlanoAI](https://twitter.com/TlanoAI)<br>
<br>
|
Muhammadreza/mann-e-artistic-2
|
Muhammadreza
| 2023-08-06T16:11:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T16:07:47Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mann-e_artistic-2 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
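To try the concept locally, a minimal hedged loading sketch, assuming the repository contains a standard `StableDiffusionPipeline` layout (as the tags suggest); the prompt is only an illustration:
```py
# Hedged sketch: loads the Dreambooth checkpoint with the generic diffusers API.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Muhammadreza/mann-e-artistic-2", torch_dtype=torch.float16
).to("cuda")

image = pipe("a dreamlike painting of a city at dusk").images[0]  # illustrative prompt
image.save("sample.png")
```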
Sample pictures of this concept:
|
andyP/ro-sentiment-02
|
andyP
| 2023-08-06T16:08:35Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:readerbench/RoBERT-base",
"base_model:finetune:readerbench/RoBERT-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T14:26:46Z |
---
base_model: readerbench/RoBERT-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ro-sentiment-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ro-sentiment-02
This model is a fine-tuned version of [readerbench/RoBERT-base](https://huggingface.co/readerbench/RoBERT-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4093
- Accuracy: 0.8312
- Precision: 0.8488
- Recall: 0.8866
- F1: 0.8673
- F1 Weighted: 0.8298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.3e-05
- train_batch_size: 96
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------:|
| 0.4289 | 1.0 | 1086 | 0.4168 | 0.8303 | 0.8868 | 0.8570 | 0.8717 | 0.8317 |
| 0.3807 | 2.0 | 2172 | 0.3926 | 0.8424 | 0.8933 | 0.8680 | 0.8804 | 0.8434 |
| 0.3306 | 3.0 | 3258 | 0.4093 | 0.8312 | 0.8488 | 0.8866 | 0.8673 | 0.8298 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
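A hedged inference sketch for `andyP/ro-sentiment-02` using the `transformers` text-classification pipeline; the card does not document the label-to-sentiment mapping, so the labels in the output are left as returned by the model:
```py
from transformers import pipeline

classifier = pipeline("text-classification", model="andyP/ro-sentiment-02")
# Illustrative Romanian input; interpreting the returned label as a sentiment
# class is an assumption not documented in this card.
print(classifier("Produsul a ajuns repede și calitatea este excelentă."))
```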
|
Allenpai/Recc-A
|
Allenpai
| 2023-08-06T15:57:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-05T21:40:07Z |
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
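The quantization settings listed above correspond roughly to the following `BitsAndBytesConfig`; a hedged sketch only, since the base model is not named in this card:
```py
# Hedged sketch of the quantization config listed above; values mirror the card.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
# model = AutoModelForCausalLM.from_pretrained(<base model, not named here>, quantization_config=bnb_config)
```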
|
Penisek/mortalcio
|
Penisek
| 2023-08-06T15:49:09Z | 0 | 0 | null |
[
"music",
"pl",
"region:us"
] | null | 2023-08-06T15:44:25Z |
---
language:
- pl
tags:
- music
---
|
lhy/char-bert-base-uncased
|
lhy
| 2023-08-06T15:48:31Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: char-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# char-bert-base-uncased
This model is a fine-tuned version of [char-bert-base-uncased/checkpoint-1840240](https://huggingface.co/char-bert-base-uncased/checkpoint-1840240) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 0.8329 | 1.0 | 92012 | 0.4066 |
| 0.4066 | 2.0 | 184024 | 0.3223 |
| 0.3422 | 3.0 | 276036 | 0.2803 |
| 0.3044 | 4.0 | 368048 | 0.2560 |
| 0.2782 | 5.0 | 460060 | 0.2399 |
| 0.2593 | 6.0 | 552072 | 0.2265 |
| 0.2693 | 7.0 | 644084 | 0.2366 |
| 0.2559 | 8.0 | 736096 | 0.2228 |
| 0.2431 | 9.0 | 828108 | 0.2112 |
| 0.2334 | 10.0 | 920120 | 0.2103 |
| 0.2453 | 11.0 | 1012132 | 0.2164 |
| 0.2372 | 12.0 | 1104144 | 0.2113 |
| 0.2288 | 13.0 | 1196156 | 0.2004 |
| 0.2208 | 14.0 | 1288168 | 0.2002 |
| 0.2152 | 15.0 | 1380180 | 0.1941 |
| 0.2241 | 16.0 | 1472192 | 0.1940 |
| 0.2188 | 17.0 | 1564204 | 0.1954 |
| 0.2132 | 18.0 | 1656216 | 0.1968 |
| 0.2077 | 19.0 | 1748228 | 0.1887 |
| 0.2036 | 20.0 | 1840240 | 0.1863 |
| 0.2109 | 21.0 | 1932252 | 0.2009 |
| 0.2075 | 22.0 | 2024264 | 0.1840 |
| 0.2031 | 23.0 | 2116276 | 0.1884 |
| 0.1992 | 24.0 | 2208288 | 0.1902 |
| 0.196 | 25.0 | 2300300 | 0.1760 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
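A minimal fill-mask sketch, heavily hedged: the exact input format for this character-level model is not documented here, so the space-separated-characters input below is only a guess:
```py
from transformers import pipeline

fill = pipeline("fill-mask", model="lhy/char-bert-base-uncased")
# Hypothetical input format; with a character-level vocabulary each [MASK]
# presumably stands for a single character.
print(fill("h e l l [MASK]"))
```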
|
arhamk/Reinforce-Pixelcopter-PLE-v0
|
arhamk
| 2023-08-06T15:47:02Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T14:59:31Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.20 +/- 10.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
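For context, a hedged sketch of the REINFORCE (policy-gradient) update this kind of agent is trained with, as covered in Unit 4 of the course; the network and environment details behind this particular checkpoint are not reproduced here:
```py
# Hedged sketch: return-weighted log-probability loss used by REINFORCE.
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: list of log pi(a_t | s_t) tensors; rewards: list of floats."""
    returns, g = [], 0.0
    for r in reversed(rewards):                 # discounted return G_t, computed back to front
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)   # variance reduction
    return -(torch.stack(log_probs) * returns).sum()                # ascend E[sum_t G_t * log pi]
```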
|
TevinWang/mixart-gen
|
TevinWang
| 2023-08-06T15:44:22Z | 0 | 0 | null |
[
"musicgen",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-08-06T15:28:47Z |
---
inference: false
tags:
- musicgen
license: cc-by-nc-4.0
---
# MusicGen - Melody - 1.5B
Audiocraft provides the code and models for MusicGen, a simple and controllable model for music generation.
MusicGen is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
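As a rough illustration (not the actual audiocraft implementation), the delay pattern can be pictured as shifting each codebook stream by its index, so that one autoregressive step advances all four codebooks at once:
```py
# Conceptual sketch only: codebook k is delayed by k steps, so at decoding step t
# the model emits frame t for codebook 0, frame t-1 for codebook 1, and so on.
import numpy as np

n_codebooks, n_frames = 4, 8
codes = np.arange(n_codebooks * n_frames).reshape(n_codebooks, n_frames)

delayed = np.full((n_codebooks, n_frames + n_codebooks - 1), -1)  # -1 marks padding
for k in range(n_codebooks):
    delayed[k, k:k + n_frames] = codes[k]
print(delayed)  # each column is one autoregressive step
```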
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [large](https://huggingface.co/facebook/musicgen-large)
- [**melody** (this checkpoint)](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
- <a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
- <a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in Hugging Face"/>
</a>
- You can run MusicGen locally as well:
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained('melody')
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ['happy rock', 'energetic EDM', 'sad jazz']
melody, sr = torchaudio.load('./assets/bach.mp3')
# generates using the melody from the given audio and the provided descriptions.
wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr)
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters ; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details**:
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Experimental Setup section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
**Mitigations:** All vocals have been removed from the data source using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). The model is therefore not able to produce vocals.
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
|
kartashoffv/vashkontrol-sentiment-rubert
|
kartashoffv
| 2023-08-06T15:44:16Z | 242 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"sentiment",
"ru",
"dataset:kartashoffv/vash_kontrol_reviews",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-29T21:10:22Z |
---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
- sentiment
metrics:
- f1
model-index:
- name: vashkontrol-sentiment-rubert
results: []
license: mit
datasets:
- kartashoffv/vash_kontrol_reviews
language:
- ru
pipeline_tag: text-classification
widget:
- text: "Отзывчивые и понимающие работники, обслуживание очень понравилось, специалист проявила большое терпение чтобы восстановить пароль от Госуслуг. Спасибо!"
---
# Sentiment assessment of reviews from the "VashKontrol" portal
The model is designed to evaluate the tone of reviews from the [VashKontrol portal](https://vashkontrol.ru/).
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the following dataset: [kartashoffv/vash_kontrol_reviews](https://huggingface.co/datasets/kartashoffv/vash_kontrol_reviews).
It achieves the following results on the evaluation set:
- Loss: 0.1085
- F1: 0.9461
## Model description
The model predicts a sentiment label (positive, neutral, negative) for a submitted text review.
## Training and evaluation data
The model was trained on a corpus of reviews from the [VashKontrol portal](https://vashkontrol.ru/) left by users between 2020 and 2022 inclusive.
The corpus contains 17,385 reviews in total. Sentiment labels were assigned manually by the author, who split the dataset into positive, neutral, and negative reviews.
The resulting classes:
- 0 (positive): 13045
- 1 (neutral): 1196
- 2 (negative): 3144

Class weighting was used to address the class imbalance.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0992 | 1.0 | 1391 | 0.0737 | 0.9337 |
| 0.0585 | 2.0 | 2782 | 0.0616 | 0.9384 |
| 0.0358 | 3.0 | 4173 | 0.0787 | 0.9441 |
| 0.0221 | 4.0 | 5564 | 0.0918 | 0.9488 |
| 0.0106 | 5.0 | 6955 | 0.1085 | 0.9461 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
### Usage
```
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('kartashoffv/vashkontrol-sentiment-rubert')
model = AutoModelForSequenceClassification.from_pretrained('kartashoffv/vashkontrol-sentiment-rubert', return_dict=True)
@torch.no_grad()
def predict(review):
inputs = tokenizer(review, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
pred_label = torch.argmax(predicted, dim=1).numpy()
return pred_label
```
### Labels
```
0: POSITIVE
1: NEUTRAL
2: NEGATIVE
```
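For example (the review text and the expected output are illustrative):
```
review = "Отзывчивые и понимающие работники, обслуживание очень понравилось!"
print(predict(review))  # e.g. array([0]) -> POSITIVE
```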
|