| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 00:47:04) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 530 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 00:46:57) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
pyf98/tedlium2_conformer_e15 | pyf98 | 2022-12-19T00:43:26Z | 0 | 1 | espnet | ["espnet", "audio", "automatic-speech-recognition", "en", "dataset:tedlium2", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2022-12-19T00:41:08Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- tedlium2
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/tedlium2_conformer_e15`
This model was trained by Yifan Peng using the tedlium2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 8ee35df7260008e9a8a20d9a9b64773a02f706ef
pip install -e .
cd egs2/tedlium2/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_conformer_e15
```
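Alternatively, the model can be used directly from Python for inference. The following is a minimal sketch, assuming the `espnet_model_zoo` and `soundfile` packages are installed and that `speech.wav` is a placeholder for a 16 kHz mono recording:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download the model from the Hub and build the inference wrapper.
speech2text = Speech2Text.from_pretrained("pyf98/tedlium2_conformer_e15")

speech, rate = soundfile.read("speech.wav")  # placeholder path, 16 kHz mono
nbests = speech2text(speech)
text, tokens, token_ints, hyp = nbests[0]  # best hypothesis
print(text)
```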
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Dec 17 04:27:41 CST 2022`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202209`
- pytorch version: `pytorch 1.12.1`
- Git hash: `26f432bc859e5e40cac1a86042d498ba7baffbb0`
- Commit date: `Fri Dec 9 02:16:01 2022 +0000`
## asr_train_asr_conformer_e15_raw_en_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev|466|14671|93.5|4.1|2.5|1.0|7.5|70.0|
|decode_asr_asr_model_valid.acc.ave/test|1155|27500|93.4|4.0|2.6|1.0|7.6|64.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev|466|78259|97.0|0.8|2.1|0.8|3.8|70.0|
|decode_asr_asr_model_valid.acc.ave/test|1155|145066|97.0|0.9|2.2|0.9|4.0|64.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev|466|28296|95.0|2.8|2.2|0.8|5.9|70.0|
|decode_asr_asr_model_valid.acc.ave/test|1155|52113|95.1|2.5|2.4|0.9|5.8|64.2|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_e15.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_e15_raw_en_bpe500_sp
ngpu: 1
seed: 2022
num_workers: 6
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 59747
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 50000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- s
- ▁the
- t
- ▁a
- ▁and
- ▁to
- d
- e
- ▁of
- ''''
- n
- ing
- ▁in
- ▁i
- ▁that
- i
- a
- l
- p
- m
- y
- o
- ▁it
- ▁we
- c
- u
- ▁you
- ed
- ▁
- r
- ▁is
- re
- ▁this
- ar
- g
- ▁so
- al
- b
- ▁s
- or
- ▁f
- ▁c
- in
- k
- f
- ▁for
- ic
- er
- le
- ▁be
- ▁do
- ▁re
- ve
- ▁e
- ▁w
- ▁was
- es
- ▁they
- ly
- h
- ▁on
- v
- ▁are
- ri
- ▁have
- an
- ▁what
- ▁with
- ▁t
- w
- ur
- it
- ent
- ▁can
- ▁he
- ▁but
- ra
- ce
- ▁me
- ▁b
- ▁ma
- ▁p
- ll
- ▁st
- ▁one
- 'on'
- ▁about
- th
- ▁de
- en
- ▁all
- ▁not
- il
- ▁g
- ch
- at
- ▁there
- ▁mo
- ter
- ation
- tion
- ▁at
- ▁my
- ro
- ▁as
- te
- ▁le
- ▁con
- ▁like
- ▁people
- ▁or
- ▁an
- el
- ▁if
- ▁from
- ver
- ▁su
- ▁co
- ate
- ▁these
- ol
- ci
- ▁now
- ▁see
- ▁out
- ▁our
- ion
- ▁know
- ect
- ▁just
- as
- ▁ex
- ▁ch
- ▁d
- ▁when
- ▁very
- ▁think
- ▁who
- ▁because
- ▁go
- ▁up
- ▁us
- ▁pa
- ▁no
- ies
- ▁di
- ▁ho
- om
- ive
- ▁get
- id
- ▁o
- ▁hi
- un
- ▁how
- ▁by
- ir
- et
- ck
- ity
- ▁po
- ul
- ▁which
- ▁mi
- ▁some
- z
- ▁sp
- ▁un
- ▁going
- ▁pro
- ist
- ▁se
- ▁look
- ▁time
- ment
- de
- ▁more
- ▁had
- ng
- ▁would
- ge
- la
- ▁here
- ▁really
- x
- ▁your
- ▁them
- us
- me
- ▁en
- ▁two
- ▁k
- ▁li
- ▁world
- ne
- ow
- ▁way
- ▁want
- ▁work
- ▁don
- ▁lo
- ▁fa
- ▁were
- ▁their
- age
- vi
- ▁ha
- ac
- der
- est
- ▁bo
- am
- ▁other
- able
- ▁actually
- ▁sh
- ▁make
- ▁ba
- ▁la
- ine
- ▁into
- ▁where
- ▁could
- ▁comp
- ting
- ▁has
- ▁will
- ▁ne
- j
- ical
- ally
- ▁vi
- ▁things
- ▁te
- igh
- ▁say
- ▁years
- ers
- ▁ra
- ther
- ▁than
- ru
- ▁ro
- op
- ▁did
- ▁any
- ▁new
- ound
- ig
- ▁well
- mo
- ▁she
- ▁na
- ▁been
- he
- ▁thousand
- ▁car
- ▁take
- ▁right
- ▁then
- ▁need
- ▁start
- ▁hundred
- ▁something
- ▁over
- ▁com
- ia
- ▁kind
- um
- if
- ▁those
- ▁first
- ▁pre
- ta
- ▁said
- ize
- end
- ▁even
- ▁thing
- one
- ▁back
- ite
- ▁every
- ▁little
- ry
- ▁life
- ▁much
- ke
- ▁also
- ▁most
- ant
- per
- ▁three
- ▁come
- ▁lot
- ance
- ▁got
- ▁talk
- ▁per
- ▁inter
- ▁sa
- ▁use
- ▁mu
- ▁part
- ish
- ence
- ▁happen
- ▁bi
- ▁mean
- ough
- ▁qu
- ▁bu
- ▁day
- ▁ga
- ▁only
- ▁many
- ▁different
- ▁dr
- ▁th
- ▁show
- ful
- ▁down
- ated
- ▁good
- ▁tra
- ▁around
- ▁idea
- ▁human
- ous
- ▁put
- ▁through
- ▁five
- ▁why
- ▁change
- ▁real
- ff
- ible
- ▁fact
- ▁same
- ▁jo
- ▁live
- ▁year
- ▁problem
- ▁ph
- ▁four
- ▁give
- ▁big
- ▁tell
- ▁great
- ▁try
- ▁va
- ▁ru
- ▁system
- ▁six
- ▁plan
- ▁place
- ▁build
- ▁called
- ▁again
- ▁point
- ▁twenty
- ▁percent
- ▁nine
- ▁find
- ▁app
- ▁after
- ▁long
- ▁eight
- ▁imp
- ▁gene
- ▁design
- ▁today
- ▁should
- ▁made
- ious
- ▁came
- ▁learn
- ▁last
- ▁own
- way
- ▁turn
- ▁seven
- ▁high
- ▁question
- ▁person
- ▁brain
- ▁important
- ▁another
- ▁thought
- ▁trans
- ▁create
- ness
- ▁hu
- ▁power
- ▁act
- land
- ▁play
- ▁sort
- ▁old
- ▁before
- ▁course
- ▁understand
- ▁feel
- ▁might
- ▁each
- ▁million
- ▁better
- ▁together
- ▁ago
- ▁example
- ▁help
- ▁story
- ▁next
- ▁hand
- ▁school
- ▁water
- ▁develop
- ▁technology
- que
- ▁second
- ▁grow
- ▁still
- ▁cell
- ▁believe
- ▁number
- ▁small
- ▁between
- qui
- ▁data
- ▁become
- ▁america
- ▁maybe
- ▁space
- ▁project
- ▁organ
- ▁vo
- ▁children
- ▁book
- graph
- ▁open
- ▁fifty
- ▁picture
- ▁health
- ▁thirty
- ▁africa
- ▁reason
- ▁large
- ▁hard
- ▁computer
- ▁always
- ▁sense
- ▁money
- ▁women
- ▁everything
- ▁information
- ▁country
- ▁teach
- ▁energy
- ▁experience
- ▁food
- ▁process
- qua
- ▁interesting
- ▁future
- ▁science
- q
- '0'
- '5'
- '6'
- '9'
- '3'
- '8'
- '4'
- N
- A
- '7'
- S
- G
- F
- R
- L
- U
- E
- T
- H
- _
- B
- D
- J
- M
- ă
- ō
- ť
- '2'
- '-'
- '1'
- C
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 15
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: true
```
</details>
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
anamhira/ppo-LunarLander-v2 | anamhira | 2022-12-19T00:16:14Z | 2 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-19T00:15:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 9.18 +/- 101.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
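Until the author fills in the snippet above, a minimal sketch looks like the following; the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption (the usual huggingface_sb3 convention), not confirmed by this card:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption).
checkpoint = load_from_hub("anamhira/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```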
|
flashgod/QLtaxi-v3 | flashgod | 2022-12-18T23:30:36Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-12-18T23:30:30Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: QLtaxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    """Download and unpickle the saved Q-table dictionary from the Hub."""
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="flashgod/QLtaxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
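As a quick sanity check you can roll out the greedy policy for one episode. This sketch assumes the Deep RL course checkpoint format, where the pickled dictionary stores the Q-table under the `"qtable"` key, and the classic Gym API (single-value `reset`, 4-tuple `step`):
```python
import numpy as np

state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```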
|
jestemleon/bert-nlp-project-imdb | jestemleon | 2022-12-18T22:49:27Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-12-03T13:41:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-nlp-project-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-nlp-project-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7986
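The card gives no usage example; as a hedged sketch, the checkpoint can be exercised with the standard 🤗 Transformers fill-mask pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jestemleon/bert-nlp-project-imdb")

# BERT-style models use the [MASK] token.
for pred in fill_mask("The movie was absolutely [MASK]."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```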
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1745 | 0.37 | 453 | 2.9488 |
| 3.0364 | 0.75 | 906 | 2.9024 |
| 2.9915 | 1.12 | 1359 | 2.8552 |
| 2.9427 | 1.5 | 1812 | 2.8371 |
| 2.9247 | 1.87 | 2265 | 2.8125 |
| 2.902 | 2.25 | 2718 | 2.7948 |
| 2.8997 | 2.62 | 3171 | 2.8013 |
| 2.8914 | 3.0 | 3624 | 2.8113 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tashatsar/ppo-LunarLander-v2-LR | tashatsar | 2022-12-18T22:38:31Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-18T16:34:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.45 +/- 65.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
BlancOfficial/Healy_AnimeBlend | BlancOfficial | 2022-12-18T22:26:44Z | 0 | 5 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2022-12-18T20:50:10Z |
---
license: creativeml-openrail-m
---
### Reposted directly from [Civitai](https://civitai.com/models/1400/healys-anime-blend)
---
This is a blend of some anime models mixed with "realistic" stuff to get a look Healy has been trying to accomplish for a while.
I take no credit whatsoever; [Healy](https://civitai.com/user/Healy) just smashed rocks together like a caveman and the outcome somehow worked.
It can create NSFW stuff too, I think, but I've noticed the outputs remain pretty tolerable with "cleavage" in the negative prompts.
---
### Output Comparison

Prompt: (Beautiful woman, very close symmetric portrait:1.2) (Red Shiny Eyes, Black hair, pony tail, wearing rags, thick thighs, Narrow waist, Wide hips, grown up, Athletic, Feminine, Fully clothed, playful expression:1.2) photo, render, 8k, octane render, cinema 4d, blender, Futuristic star trek style, dark, atmospheric 4k ultra detailed, cinematic sensual, Sharp focus, humorous illustration, hyperrealistic, big depth of field, Masterpiece, colors, 3d octane render, 4k, concept art, trending on artstation, solo, full body shot (hyperrealistic, Hyperdetailed, Vivid colors, Wlop, stanley artgerm lau:1.5)
Negative prompt: Glasses, Cleavage, Watermark, bad artist, helmet, blur, blurry, text, b&w, 3d, bad art, poorly drawn, blurry, disfigured, deformed, extra limbs, ugly hands, extra fingers
Size: 1024x1280, Seed: 3278428817, Steps: 30, Sampler: Euler a, CFG scale: 15, Model hash: 8a3b8d01, First pass size: 512x640, Denoising strength: 0.6
---
### Output Examples:

Prompt: (Close portrait of Beautiful woman with a circle of water in the background:1.2) (Blue Shiny Eyes, Blonde, bob cut, Black closed hoodie, thick thighs, Wide hips, Adult, Fully clothed, playful expression:1.2) photo, render, 8k, octane render, cinema 4d, blender, Futuristic star trek style, dark, atmospheric 4k ultra detailed, cinematic sensual, Sharp focus, humorous illustration, hyperrealistic, big depth of field, colors, 3d octane render, 4k, concept art, trending on artstation, solo, full body shot (hyperrealistic, Hyperdetailed, Vivid colors, Wlop, stanley artgerm lau:1.5)
Negative prompt: Glasses, Cleavage, Watermark, bad artist, helmet, blur, blurry, text, b&w, 3d, bad art, poorly drawn, blurry, disfigured, deformed, extra limbs, ugly hands, extra fingers
Size: 1024x1280, Seed: 3301144353, Steps: 40, Sampler: Euler a, CFG scale: 12, Model hash: 8a3b8d01, First pass size: 512x640, Denoising strength: 0.6

Prompt: (Beautiful woman, symmetric portrait, triangle neon light background:1.2) (Green Shiny Eyes, brunette, messy long hair, Black long skirt, white turtleneck, thick thighs, Narrow waist, Wide hips, grown up, Athletic, Feminine, Fully clothed, playful expression:1.2) photo, render, 8k, octane render, cinema 4d, blender, Futuristic star trek style, dark, atmospheric 4k ultra detailed, cinematic sensual, Sharp focus, humorous illustration, hyperrealistic, big depth of field, Masterpiece, colors, 3d octane render, 4k, concept art, trending on artstation, solo, full body shot (hyperrealistic, Hyperdetailed, Vivid colors, Wlop, stanley artgerm lau:1.5)
Negative prompt: Glasses, Cleavage, Watermark, bad artist, helmet, blur, blurry, text, b&w, 3d, bad art, poorly drawn, blurry, disfigured, deformed, extra limbs, ugly hands, extra fingers
Size: 1024x1280, Seed: 977393216, Steps: 30, Sampler: Euler a, CFG scale: 15, Mask blur: 4, Model hash: 8a3b8d01, Denoising strength: 0.65

Prompt: (Beautiful woman, symmetric portrait, Hands behind back, Water droplets floating in the air:1.2) (Green Shiny Eyes, Blonde, messy long hair, Black long skirt, white turtleneck, thick thighs, Narrow waist, Wide hips, grown up, Athletic, Feminine, Fully clothed, playful expression:1.2) photo, render, 8k, octane render, cinema 4d, blender, Futuristic star trek style, dark, atmospheric 4k ultra detailed, cinematic sensual, Sharp focus, humorous illustration, hyperrealistic, big depth of field, Masterpiece, colors, 3d octane render, 4k, concept art, trending on artstation, solo, full body shot (hyperrealistic, Hyperdetailed, Vivid colors, Wlop, stanley artgerm lau:1.5)
Negative prompt: Glasses, Cleavage, Watermark, bad artist, helmet, blur, blurry, text, b&w, 3d, bad art, poorly drawn, blurry, disfigured, deformed, extra limbs, ugly hands, extra fingers
Size: 1024x1280, Seed: 1616183337, Steps: 30, Sampler: Euler a, CFG scale: 15, Model hash: 8a3b8d01, First pass size: 512x640, Denoising strength: 0.6
|
augustolf/navezinha | augustolf | 2022-12-18T22:00:27Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-18T21:59:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.69 +/- 17.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jestemleon/bert-nlp-project-news | jestemleon | 2022-12-18T21:43:46Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-12-18T21:29:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-nlp-project-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-nlp-project-news
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4196 | 0.35 | 8 | 3.9775 |
| 4.1578 | 0.7 | 16 | 3.8826 |
| 4.055 | 1.04 | 24 | 3.7820 |
| 3.954 | 1.39 | 32 | 3.6726 |
| 3.916 | 1.74 | 40 | 3.7244 |
| 3.864 | 2.09 | 48 | 3.7631 |
| 3.8837 | 2.43 | 56 | 3.6904 |
| 3.8965 | 2.78 | 64 | 3.6775 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fawazatvetted/fine_tuned_mpnetv2 | fawazatvetted | 2022-12-18T21:37:35Z | 1 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-12-18T21:37:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# fawazatvetted/fine_tuned_mpnetv2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('fawazatvetted/fine_tuned_mpnetv2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=fawazatvetted/fine_tuned_mpnetv2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 84227 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
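Because the architecture ends in a `Normalize()` module, the embeddings are unit-length, so cosine similarity reduces to a dot product. A short sketch (the sentences are arbitrary examples):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('fawazatvetted/fine_tuned_mpnetv2')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)

# Embeddings are already L2-normalized, so cos_sim equals the dot product here.
print(util.cos_sim(embeddings[0], embeddings[1]))
```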
## Citing & Authors
<!--- Describe where people can find more information -->
|
alkiskoudounas/whisper-el-medium-augmented | alkiskoudounas | 2022-12-18T21:33:16Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "el", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-18T16:35:48Z |
---
language:
- el
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium Greek - Robust
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 el
type: mozilla-foundation/common_voice_11_0
config: el
split: test
args: el
metrics:
- name: Wer
type: wer
value: 21.684621099554235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Greek - Robust
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 el dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3168
- Wer: 21.6846
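The card includes no inference snippet; a minimal sketch using the 🤗 Transformers ASR pipeline (the audio path is a placeholder; `chunk_length_s` handles inputs longer than Whisper's 30-second window):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="alkiskoudounas/whisper-el-medium-augmented")

# Transcribe a (placeholder) Greek audio file.
result = asr("sample_el.wav", chunk_length_s=30)
print(result["text"])
```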
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3865 | 1.17 | 500 | 0.5842 | 51.4487 |
| 0.2302 | 2.35 | 1000 | 0.4861 | 39.3202 |
| 0.1321 | 3.52 | 1500 | 0.4536 | 37.4257 |
| 0.0916 | 4.69 | 2000 | 0.4103 | 39.6824 |
| 0.0497 | 5.87 | 2500 | 0.4101 | 29.1883 |
| 0.03 | 7.04 | 3000 | 0.4121 | 28.0089 |
| 0.0156 | 8.22 | 3500 | 0.3842 | 26.7459 |
| 0.0037 | 9.39 | 4000 | 0.3433 | 28.7054 |
| 0.0008 | 10.56 | 4500 | 0.3244 | 21.8332 |
| 0.0006 | 11.74 | 5000 | 0.3178 | 21.5267 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.7.1
- Tokenizers 0.12.1
|
z4x/PPO-LunarLander-v2 | z4x | 2022-12-18T21:20:16Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-18T21:10:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.53 +/- 19.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
aherzberg/whisper-dpv-finetuned-BEST-MODEL | aherzberg | 2022-12-18T21:08:18Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-18T18:50:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-dpv-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-dpv-finetuned
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- epoch: 13.07
- eval_loss: 0.0002
- eval_runtime: 8695.8511
- eval_samples_per_second: 0.458
- eval_steps_per_second: 0.458
- eval_wer: 0.0112
- step: 13000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
farsipal/whisper-sm-el-intlv-xl | farsipal | 2022-12-18T20:45:14Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "el", "dataset:mozilla-foundation/common_voice_11_0", "dataset:google/fleurs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-16T15:22:15Z |
---
language:
- el
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: whisper-sm-el-intlv-xl
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: el
split: test
metrics:
- name: Wer
type: wer
value: 19.48365527488856
---
# whisper-sm-el-intlv-xl
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 (el) and the google/fleurs (el_gr) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4725
- Wer: 19.4837
## Model description
The model was trained over 10000 steps on translation from Greek to English.
## Intended uses & limitations
This model was part of the Whisper Finetuning Event (Dec 2022) and was used primarily to compare relative improvements between transcription and translation tasks.
## Training and evaluation data
The training data combined examples from both the train and evaluation splits, and the train split of the mozilla-foundation/common_voice_11_0 (el) dataset was used for evaluation and selection of the best checkpoint.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0545 | 2.49 | 1000 | 0.2891 | 22.4926 |
| 0.0093 | 4.98 | 2000 | 0.3927 | 20.1337 |
| 0.0018 | 7.46 | 3000 | 0.4031 | 20.1616 |
| 0.001 | 9.95 | 4000 | 0.4209 | 19.6880 |
| 0.0008 | 12.44 | 5000 | 0.4498 | 20.0966 |
| 0.0005 | 14.93 | 6000 | 0.4725 | 19.4837 |
| 0.0002 | 17.41 | 7000 | 0.4917 | 19.5951 |
| 0.0001 | 19.9 | 8000 | 0.5050 | 19.6230 |
| 0.0001 | 22.39 | 9000 | 0.5146 | 19.5672 |
| 0.0001 | 24.88 | 10000 | 0.5186 | 19.4837 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1.dev0
- Tokenizers 0.12.1
|
4eonsbl4ck/q-FrozenLake-v1-4x4-noSlippery | 4eonsbl4ck | 2022-12-18T20:45:08Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-12-18T20:34:32Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    """Download and unpickle the saved Q-table dictionary from the Hub."""
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="4eonsbl4ck/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
gpfl/ppo-Huggy | gpfl | 2022-12-18T20:42:14Z | 12 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2022-12-18T20:42:03Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: gpfl/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chist/q-Taxi-v3 | chist | 2022-12-18T20:21:00Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-12-18T20:20:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    """Download and unpickle the saved Q-table dictionary from the Hub."""
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="chist/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
jenya-g/q-Taxi-v3 | jenya-g | 2022-12-18T20:16:38Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-12-18T20:13:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    """Download and unpickle the saved Q-table dictionary from the Hub."""
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="jenya-g/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
venuv62/spoofing_vit_16_224 | venuv62 | 2022-12-18T19:30:49Z | 30 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-12-18T18:55:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: spoofing_vit_16_224
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spoofing_vit_16_224
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0560
- Accuracy: 0.7088
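No usage example is provided; a hedged sketch with the image-classification pipeline (the image path is a placeholder, and the label set depends on the undocumented fine-tuning data):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="venuv62/spoofing_vit_16_224")

# Classify a (placeholder) input image; labels come from the fine-tuning dataset.
for pred in classifier("face.jpg"):
    print(f"{pred['label']}\t{pred['score']:.3f}")
```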
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7746 | 0.99 | 54 | 0.6401 | 0.6405 |
| 0.339 | 1.99 | 108 | 0.9389 | 0.6042 |
| 0.0437 | 2.99 | 162 | 1.0560 | 0.7088 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sgangireddy/whisper-base-cv-lowLR-cs | sgangireddy | 2022-12-18T19:19:08Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "cs", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-17T18:58:12Z |
---
language:
- cs
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper base Czech CV low LR
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 cs
type: mozilla-foundation/common_voice_11_0
config: cs
split: test
args: cs
metrics:
- name: Wer
type: wer
value: 42.9052871954476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base Czech CV low LR
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_11_0 cs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5171
- Wer: 42.9053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6046 | 4.01 | 1000 | 0.6535 | 52.3084 |
| 0.4037 | 8.02 | 2000 | 0.5706 | 46.6879 |
| 0.3172 | 12.03 | 3000 | 0.5369 | 44.1042 |
| 0.3606 | 16.04 | 4000 | 0.5218 | 43.0766 |
| 0.3792 | 21.01 | 5000 | 0.5171 | 42.9053 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
sgangireddy/whisper-base-cv-cs | sgangireddy | 2022-12-18T18:55:58Z | 9 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "cs", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-17T18:31:00Z |
---
language:
- cs
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper base Czech CV
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 cs
type: mozilla-foundation/common_voice_11_0
config: cs
split: test
args: cs
metrics:
- name: Wer
type: wer
value: 33.995690687096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base Czech CV
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_11_0 cs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5394
- Wer: 33.9957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.206 | 4.01 | 1000 | 0.4356 | 36.2443 |
| 0.0332 | 8.02 | 2000 | 0.4583 | 34.0509 |
| 0.0074 | 12.03 | 3000 | 0.5119 | 34.4395 |
| 0.005 | 16.04 | 4000 | 0.5394 | 33.9957 |
| 0.0045 | 21.01 | 5000 | 0.5461 | 34.1025 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
ihanif/xls-r-1b-pashto | ihanif | 2022-12-18T18:28:35Z | 8 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "google/fleurs", "generated_from_trainer", "dataset:fleurs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-18T15:53:54Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- google/fleurs
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: facebook/wav2vec2-xls-r-1b
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: GOOGLE/FLEURS - PS_AF
type: fleurs
config: ps_af
split: test
args: 'Config: ps_af, Training split: train+validation, Eval split: test'
metrics:
- name: Wer
type: wer
value: 0.9294849931787176
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook/wav2vec2-xls-r-1b
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the GOOGLE/FLEURS - PS_AF dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1921
- Wer: 0.9295
- Cer: 0.9608
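For reference, a minimal CTC decoding sketch with 🤗 Transformers (the audio path is a placeholder; the model expects 16 kHz mono input, and note the high error rates reported above):
```python
import torch
import soundfile as sf
from transformers import AutoModelForCTC, AutoProcessor

processor = AutoProcessor.from_pretrained("ihanif/xls-r-1b-pashto")
model = AutoModelForCTC.from_pretrained("ihanif/xls-r-1b-pashto")

speech, rate = sf.read("pashto_sample.wav")  # placeholder path, 16 kHz mono
inputs = processor(speech, sampling_rate=rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```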
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:------:|:---------------:|:------:|
| 19.9558 | 1.27 | 100 | 3.2660 | 20.9197 | 1.0 |
| 19.7186 | 2.53 | 200 | 1.1692 | 19.2447 | 1.0 |
| 15.203 | 3.8 | 300 | 0.9687 | 15.0053 | 0.9998 |
| 6.4303 | 5.06 | 400 | 0.9911 | 6.5437 | 0.9632 |
| 4.5712 | 6.33 | 500 | 0.9546 | 4.9040 | 0.9323 |
| 3.3986 | 12.66 | 1000 | 4.1921 | 0.9295 | 0.9608 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
dugongo/ppo-LunarLander-v2 | dugongo | 2022-12-18T18:17:33Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-12-18T18:17:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.85 +/- 15.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
SvetKochnev/riffusion-model-v1-f16 | SvetKochnev | 2022-12-18T17:26:48Z | 0 | 0 | null | ["region:us"] | null | 2022-12-18T16:55:45Z |
## Model Details
- **Developed by:** Seth Forsgren, Hayk Martiros
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses a fixed, pretrained text encoder (CLIP ViT-L/14) as suggested in the Imagen paper.
|
Verah/ai-protest-anime | Verah | 2022-12-18T17:15:46Z | 0 | 6 | null | ["stable-diffusion", "text-to-image", "license:openrail++", "region:us"] | text-to-image | 2022-12-17T08:26:46Z |
---
license: openrail++
thumbnail: "https://huggingface.co/Verah/ai-protest-anime/resolve/main/s0.webp"
tags:
- stable-diffusion
- text-to-image
inference: false
---
# "AI Protest" Anime Model

This model has been trained to simulate what it may be like if the current (December 2022) ArtStation protest images against AI were actually used as training data inside a conventional anime stable diffusion model.
For version 2, I trained two DreamBooth models on the AI protest imagery at 576px and 704px for 6k steps each. These unique models were then 50/50 merged; the intent behind this is regularization. The key word is still **ai protest**.
Version 1 was a quick and dirty DreamBooth model trained without regularization for 3023 steps. The key word is **ai protest**; simply use it in your prompt. **You may wish to increase the weight and/or duplicate it, as the influence is quite weak.**
The base model (of both versions) is an early preview of WD1.4 (colloquially "WD 1.3.5"): [wd-1-4-float32-booru-110k](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/9fa4a42a9c4a0948472fa909e6c1a39be0dda699/models/wd-1-4-float32-booru-110k.ckpt). This means you should probably be using danbooru-style image tags in your prompts.
## new samples (model version 2)
negative prompt (for all):
- traditional media, graphite medium, ugly, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, lowres, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, username, blurry, bad feet, sketch
if you add `flat color, flat shading` to the negative prompt you can get uncanny early CG-like images.
prompts for the header images:
- (ai protest:1.3), [:1girl, finely detailed, beautiful, arknights, ruins, still life, text, (ai protest), solo, long hair, white hair, red eyes, headgear:0.24]
- (ai protest:1.3), [:1girl, finely detailed, (cowboy shot), beautiful, arknights, ruins, still life, text, (ai protest), solo, long hair, white hair, red eyes, headgear:0.1]
- (ai protest:1.3), [:1girl, (upper body:1.2), finely detailed, beautiful, arknights, ruins, still life, text, (ai protest:1.3), solo, long hair, white hair, red eyes, headgear:0.4]
*I regularly use the prompt editing feature of AUTOMATIC1111's UI. The fundamental syntax is, for example, `[A:B:0.1]`: this is interpreted as prompt A for the first 10% of samples, after which it becomes prompt B. In the examples above I am omitting any prompt A. With this method the model first draws the AI protest sign, then adds the anime girl to it afterwards.*

- (ai protest:1.4), [:1girl, bangs, black hair, blazer, flower, grey jacket, hair flower, hair ornament, jacket, long hair, looking at viewer, portrait, purple eyes, school uniform, solo, swept bangs, twintails, upper body, white background, idolmaster, idolmaster shiny colors, fukumaru koito, ruins, text, (ai protest:1.2):0.15]
- (ai protest:1.2), [:1girl, bangs, black hair, blazer, flower, grey jacket, hair flower, hair ornament, jacket, long hair, looking at viewer, portrait, purple eyes, school uniform, solo, swept bangs, twintails, upper body, white background, idolmaster, idolmaster shiny colors, fukumaru koito, text, (ai protest:1.2):0.15]
- (ai protest:1.3), [:1girl, armband, bangs, bare shoulders, belt, black gloves, black hair, black shirt, blue eyes, breasts, coat, cropped legs, floating hair, gloves, hair between eyes, long hair, long sleeves, mask, medium breasts, midriff, mouth mask, no headwear, no navel, open clothes, open coat, shirt, sleeveless, sleeveless shirt, solo, stomach, upper body, white coat, blue archive, saori \(blue archive\), ai protest:0.1]
- (ai protest:1.3), [:1girl, bangs, black dress, closed mouth, cropped torso, dress, green eyes, green hair, long sleeves, looking at viewer, medium hair, simple background, solo, upper body, wavy hair, white background, one-punch man, tatsumaki, ai protest:0.1]
Other tips: you don't necessarily need to use the prompt editing trick; I just like it. A second pass in img2img, or enabling highres fix, can improve the fidelity of outputs.
## old samples (model version 1)

(ai protest:1.3), 1girl, mecha musume, headgear, (ai protest:1.3), (masterpiece), (best quality), (ultra-detailed), best illustration, (extremely delicate and beautiful), (ai protest:1.3)
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad feet

(ai protest:1.3), 1girl, upper body, mecha musume, headgear, (ai protest:1.3)

(ai protest:1.2), 1girl, bangs, black dress, closed mouth, cropped torso, dress, green eyes, green hair, long sleeves, looking at viewer, medium hair, simple background, solo, upper body, wavy hair, white background, one-punch man, tatsumaki

(ai protest:1.3), 1girl, mecha musume, headgear, (ai protest:1.3), (masterpiece), (best quality), (ultra-detailed), best illustration, (extremely delicate and beautiful), (ai protest:1.3)

(ai protest:1.6), mordred \(fate\) wears armor fighting, sword,
Negative prompt: (missing digits:1.5), (extra digits:1.5), extra limb, bad art, incomplete, weird colors, blurry, poorly drawn, deformed, cartoon, b&w, missing limbs, inconsistent, multiple girls, 1boy, male, 2boys, short hair, hu tao, lumine, keqing, shenhe, mona, eula, yelan, beidou, contorted, signature, watermark, username, blurry, artist name, symmetrical, bad hands, jpeg artifacts, error, pixelated, multiple girls, 2girls, 3girls,

(ai protest:1.3), 1girl, upper body, mecha musume, headgear, (ai protest:1.3), (masterpiece), (best quality), (ultra-detailed), best illustration, (extremely delicate and beautiful)
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad feet

ai protest, 1girl, tattoo, masterpiece, best quality, ultra-detailed, illustration
|
mrm8488/mt5-base-finetuned-notes-summaries | mrm8488 | 2022-12-18T17:09:06Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-12-18T16:07:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-notes-summaries
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-notes-summaries
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 5.5563
- Rouge2: 1.1271
- Rougel: 5.1075
- Rougelsum: 5.1383
- Gen Len: 10.0222
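No usage example is given; a hedged sketch with the summarization pipeline (the input text is an arbitrary placeholder, and given the `nan` training loss above, outputs should be checked carefully):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mrm8488/mt5-base-finetuned-notes-summaries")

notes = "Meeting notes: discussed the Q3 roadmap and agreed to ship the beta in October."  # placeholder
print(summarizer(notes, max_length=32)[0]["summary_text"])
```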
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 446 | nan | 5.5563 | 1.1271 | 5.1075 | 5.1383 | 10.0222 |
| 0.0 | 2.0 | 892 | nan | 5.5563 | 1.1271 | 5.1075 | 5.1383 | 10.0222 |
| 0.0 | 3.0 | 1338 | nan | 5.5563 | 1.1271 | 5.1075 | 5.1383 | 10.0222 |
| 0.0 | 4.0 | 1784 | nan | 5.5563 | 1.1271 | 5.1075 | 5.1383 | 10.0222 |
| 0.0 | 5.0 | 2230 | nan | 5.5563 | 1.1271 | 5.1075 | 5.1383 | 10.0222 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sd-concepts-library/cosmic-galaxy-characters-style
|
sd-concepts-library
| 2022-12-18T16:48:27Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-12-18T16:41:22Z |
---
license: mit
---
### Cosmic galaxy characters style on Stable Diffusion
This is the `<cosmicgalaxy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
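A minimal sketch for fetching the learned embedding directly (the `learned_embeds.bin` filename follows the usual sd-concepts-library layout and is an assumption):
```python
from huggingface_hub import hf_hub_download

# Download the textual-inversion embedding for <cosmicgalaxy>
embeds_path = hf_hub_download(
    repo_id="sd-concepts-library/cosmic-galaxy-characters-style",
    filename="learned_embeds.bin",
)
print(embeds_path)
```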
Here is the new concept you will be able to use as a `style`:










|
Walid-Rovo/ppo-LunarLander-v2
|
Walid-Rovo
| 2022-12-18T16:32:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T14:58:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -162.67 +/- 61.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="Walid-Rovo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
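To sanity-check the reported mean reward locally, stable-baselines3's built-in evaluation helper can be used:
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```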
|
Cesar514/q-FrozenLake-v1-4x4-noSlippery
|
Cesar514
| 2022-12-18T16:09:54Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T16:09:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and returns the saved model dict
model = load_from_hub(repo_id="Cesar514/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
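A short greedy-rollout sketch with the downloaded model (the `qtable` key follows the course's pickle format and is an assumption):
```python
import numpy as np

state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```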
|
msgerasyov/q-Taxi-v3
|
msgerasyov
| 2022-12-18T15:42:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T15:26:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and returns the saved model dict
model = load_from_hub(repo_id="msgerasyov/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
lsaulier/q-Taxi-v3
|
lsaulier
| 2022-12-18T15:19:32Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T15:16:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and returns the saved model dict
model = load_from_hub(repo_id="lsaulier/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
lsaulier/q-FrozenLake-v1-4x4-noSlippery
|
lsaulier
| 2022-12-18T15:08:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T15:08:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and returns the saved model dict
model = load_from_hub(repo_id="lsaulier/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
newbie4000/ppo-LunarLander-v2
|
newbie4000
| 2022-12-18T15:04:32Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T14:34:15Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 296.96 +/- 19.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="newbie4000/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
lukechoi76/q-FrozenLake-v1-4x4-noSlippery
|
lukechoi76
| 2022-12-18T14:45:47Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T14:45:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and returns the saved model dict
model = load_from_hub(repo_id="lukechoi76/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
lambdaofgod/query_nbow_embedder
|
lambdaofgod
| 2022-12-18T14:44:55Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-12-18T14:44:50Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lambdaofgod/query_nbow_embedder
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lambdaofgod/query_nbow_embedder')
embeddings = model.encode(sentences)
print(embeddings)
```
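Because the model maps text to dense vectors, a small semantic-search sketch with the library's cosine utilities (the corpus and query below are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('lambdaofgod/query_nbow_embedder')
corpus = ["image classification with convolutional networks", "topic modeling for news articles"]
query = "classify pictures"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
# Rank corpus entries by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])
```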
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/query_nbow_embedder)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(6912, 200)
)
(1): WordWeights(
(emb_layer): Embedding(6912, 1)
)
(2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ryusangwon/distilbert-base-uncased-finetuned-emotion
|
ryusangwon
| 2022-12-18T14:32:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-18T10:20:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2254
- Accuracy: 0.925
- F1: 0.9249
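As a standard text-classification checkpoint, it can be tried with the `pipeline` API (a sketch; the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ryusangwon/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))
```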
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3271 | 0.903 | 0.8983 |
| No log | 2.0 | 500 | 0.2254 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
jenya-g/PPO-LunarLander-v2
|
jenya-g
| 2022-12-18T14:18:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T13:13:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.03 +/- 17.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="jenya-g/PPO-LunarLander-v2", filename="PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ihanif/whisper-small-pashto-dropout
|
ihanif
| 2022-12-18T14:06:09Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"pashto",
"ps",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-16T15:51:05Z |
---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
- automatic-speech-recognition
- hf-asr-leaderboard
- pashto
- ps
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Small Pashto
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs ps_af
type: google/fleurs
args: 'config: ps_af, split: test'
metrics:
- name: Wer
type: wer
value: 56.651029055690074
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Pashto
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs ps_af dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2273
- Wer: 56.6510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 2.1183 | 3.7 | 100 | 1.3170 | 76.9522 |
| 0.8565 | 7.41 | 200 | 0.9367 | 61.9930 |
| 0.2246 | 11.11 | 300 | 0.9642 | 58.8302 |
| 0.054 | 14.81 | 400 | 1.0876 | 57.9903 |
| 0.0159 | 18.52 | 500 | 1.1798 | 57.8768 |
| 0.0045 | 22.22 | 600 | 1.2309 | 56.6510 |
| 0.0026 | 100.0 | 700 | 1.2581 | 56.8478 |
| 0.0023 | 114.29 | 800 | 1.2710 | 56.7570 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
ihanif/whisper-small-pashto
|
ihanif
| 2022-12-18T14:03:39Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"pashto",
"ps",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-09T18:36:30Z |
---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
- hf-asr-leaderboard
- pashto
- ps
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Small Pashto
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs ps_af
type: google/fleurs
args: 'config: ps_af, split: test'
metrics:
- name: Wer
type: wer
value: 63.10532687651331
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Pashto
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs ps_af dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1800
- Wer: 63.1053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 5200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.0871 | 14.29 | 100 | 2.0102 | 230.2739 |
| 1.465 | 28.57 | 200 | 1.4969 | 137.2427 |
| 1.1617 | 42.86 | 300 | 1.2716 | 76.3242 |
| 1.0019 | 57.14 | 400 | 1.1645 | 71.3756 |
| 0.9052 | 71.43 | 500 | 1.1051 | 69.7866 |
| 0.8334 | 85.71 | 600 | 1.0691 | 68.2657 |
| 0.7838 | 100.0 | 700 | 1.0483 | 67.1686 |
| 0.7539 | 114.29 | 800 | 1.0363 | 66.4195 |
| 0.7377 | 128.57 | 900 | 1.0297 | 66.2001 |
| 0.7325 | 142.86 | 1000 | 1.0277 | 66.0033 |
| 0.6952 | 157.14 | 1100 | 1.0122 | 65.0575 |
| 0.6531 | 171.43 | 1200 | 1.0014 | 64.4219 |
| 0.6189 | 185.71 | 1300 | 0.9945 | 63.7939 |
| 0.5993 | 200.0 | 1400 | 0.9896 | 63.3550 |
| 0.5757 | 214.29 | 1500 | 0.9864 | 63.2264 |
| 0.5601 | 228.57 | 1600 | 0.9845 | 62.9162 |
| 0.5482 | 242.86 | 1700 | 0.9833 | 62.8178 |
| 0.5382 | 257.14 | 1800 | 0.9827 | 62.8405 |
| 0.5325 | 271.43 | 1900 | 0.9823 | 62.7648 |
| 0.5287 | 285.71 | 2000 | 0.9822 | 62.8178 |
| 0.3494 | 357.14 | 2500 | 1.0026 | 61.6147 |
| 0.2287 | 428.57 | 3000 | 1.0533 | 61.5163 |
| 0.1525 | 500.0 | 3500 | 1.1041 | 62.0536 |
| 0.1089 | 571.43 | 4000 | 1.1451 | 62.5076 |
| 0.0871 | 642.86 | 4500 | 1.1704 | 62.9313 |
| 0.0797 | 714.29 | 5000 | 1.1791 | 63.1659 |
| 0.0799 | 728.57 | 5100 | 1.1800 | 63.1053 |
| 0.0791 | 742.86 | 5200 | 1.1803 | 63.1129 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Finnish-NLP/whisper-large-v2-finnish
|
Finnish-NLP
| 2022-12-18T13:57:57Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"finnish",
"fi",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-17T11:11:25Z |
---
language:
- fi
license: apache-2.0
tags:
- whisper-event
- finnish
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
- cer
model-index:
- name: Whisper Large V2 Finnish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: fi
split: test
args: fi
metrics:
- name: Wer
type: wer
value: 10.42
- name: Cer
type: cer
value: 1.91
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS
type: google/fleurs
config: fi_fi
split: test
args: fi_fi
metrics:
- name: Wer
type: wer
value: 10.2
- name: Cer
type: cer
value: 3.36
---
|
socokal/vit-base-beans
|
socokal
| 2022-12-18T13:56:46Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-18T13:49:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9774436090225563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0720
- Accuracy: 0.9774
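A minimal inference sketch with the `pipeline` API (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="socokal/vit-base-beans")
# The beans dataset labels are angular_leaf_spot, bean_rust, and healthy
print(classifier("bean_leaf.jpg"))
```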
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1111 | 1.54 | 100 | 0.0720 | 0.9774 |
| 0.0249 | 3.08 | 200 | 0.1081 | 0.9774 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
ybot/ppo-LunarLander-v2
|
ybot
| 2022-12-18T13:44:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-12T23:51:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 285.98 +/- 24.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="ybot/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Payoto/roberta-base-finetuned-squad
|
Payoto
| 2022-12-18T13:28:43Z | 67 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-17T18:40:43Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
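A minimal extractive-QA sketch (the checkpoint was trained with optimum-graphcore on IPUs, so loading it with plain `transformers` on CPU/GPU is an assumption):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Payoto/roberta-base-finetuned-squad")
result = qa(question="What dataset was used?", context="The model was fine-tuned on the SQuAD dataset.")
print(result["answer"])
```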
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 3
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.1
|
AgentXXX/q-FrozenLake-v1-4x4-noSlippery
|
AgentXXX
| 2022-12-18T13:28:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T13:28:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and returns the saved model dict
model = load_from_hub(repo_id="AgentXXX/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
ahmadmwali/finetuning-sentiment-hausa21
|
ahmadmwali
| 2022-12-18T13:24:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-18T10:58:31Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-hausa21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-hausa21
This model is a fine-tuned version of [mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1444
- Accuracy: 0.9586
- F1: 0.9586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
flashgod/ppo-LunarLanderV2
|
flashgod
| 2022-12-18T13:23:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T13:23:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.41 +/- 21.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="flashgod/ppo-LunarLanderV2", filename="ppo-LunarLanderV2.zip")
model = PPO.load(checkpoint)
```
|
Ad3zp/ppo-Lunar-Lander-v2
|
Ad3zp
| 2022-12-18T13:09:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T13:09:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.24 +/- 17.32
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="Ad3zp/ppo-Lunar-Lander-v2", filename="ppo-Lunar-Lander-v2.zip")
model = PPO.load(checkpoint)
```
|
leviethoang/wav2vec2-large-xls-r-300m-vi-75p
|
leviethoang
| 2022-12-18T13:09:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-18T09:25:35Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-vi-75p
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-vi-75p
This model is a fine-tuned version of [leviethoang/wav2vec2-large-xls-r-300m-vi-25p](https://huggingface.co/leviethoang/wav2vec2-large-xls-r-300m-vi-25p) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7880
- Wer: 0.4324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.962 | 1.68 | 400 | 1.2033 | 0.4428 |
| 0.7977 | 3.36 | 800 | 1.3410 | 0.4731 |
| 0.644 | 5.04 | 1200 | 1.4682 | 0.4796 |
| 0.5156 | 6.72 | 1600 | 1.4940 | 0.4826 |
| 0.4531 | 8.4 | 2000 | 1.5071 | 0.4734 |
| 0.3882 | 10.08 | 2400 | 1.5408 | 0.4694 |
| 0.3469 | 11.76 | 2800 | 1.5975 | 0.4697 |
| 0.3096 | 13.45 | 3200 | 1.7120 | 0.4728 |
| 0.2825 | 15.13 | 3600 | 1.7052 | 0.4632 |
| 0.2607 | 16.81 | 4000 | 1.6870 | 0.4575 |
| 0.2301 | 18.49 | 4400 | 1.7205 | 0.4653 |
| 0.2096 | 20.17 | 4800 | 1.7352 | 0.4504 |
| 0.1915 | 21.85 | 5200 | 1.7948 | 0.4465 |
| 0.1685 | 23.53 | 5600 | 1.7994 | 0.4400 |
| 0.1543 | 25.21 | 6000 | 1.7613 | 0.4435 |
| 0.1378 | 26.89 | 6400 | 1.8300 | 0.4365 |
| 0.1278 | 28.57 | 6800 | 1.7880 | 0.4324 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Payoto/gpt2-wikitext2
|
Payoto
| 2022-12-18T12:48:33Z | 36 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T17:53:18Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the wikitext2 dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8164
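A minimal generation sketch (the IPU-trained weights are assumed to load with plain `transformers`):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Payoto/gpt2-wikitext2")
print(generator("The history of", max_length=30)[0]["generated_text"])
```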
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.1
|
tagotec/ppo-LunarLander-v2
|
tagotec
| 2022-12-18T12:20:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T12:19:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.91 +/- 16.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="tagotec/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
arampacha/whisper-large-hy
|
arampacha
| 2022-12-18T12:15:04Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hy",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-14T11:27:53Z |
---
language:
- hy
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: whisper-base-hy
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hy-AM
split: test
args: hy-AM
metrics:
- name: Wer
type: wer
value: 22.36842105263158
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-hy
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Wer: 22.3684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1394 | 5.87 | 400 | 0.1780 | 28.2895 |
| 0.0536 | 11.75 | 800 | 0.1739 | 24.6053 |
| 0.0247 | 17.64 | 1200 | 0.2098 | 22.9605 |
| 0.0154 | 23.52 | 1600 | 0.2035 | 22.1382 |
| 0.0103 | 29.41 | 2000 | 0.2204 | 22.3684 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
jlondonobo/whisper-large-v2-es
|
jlondonobo
| 2022-12-18T11:32:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"es",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-18T04:30:24Z |
---
language:
- es
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large V2 Spanish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 es
type: mozilla-foundation/common_voice_11_0
config: es
split: test
args: es
metrics:
- name: Wer
type: wer
value: 5.074450392391248
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2 Spanish
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 es dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1648
- Wer: 5.0745
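A minimal transcription sketch with the `pipeline` API (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jlondonobo/whisper-large-v2-es")
print(asr("sample_es.mp3")["text"])
```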
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1556 | 0.5 | 750 | 0.1683 | 5.0959 |
| 0.1732 | 1.35 | 1500 | 0.1648 | 5.0745 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
anuragshas/whisper-large-v2-ml
|
anuragshas
| 2022-12-18T11:02:10Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ml",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-12T20:46:25Z |
---
language:
- ml
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large-v2 Malayalam
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 ml
type: mozilla-foundation/common_voice_11_0
config: ml
split: test
args: ml
metrics:
- name: Wer
type: wer
value: 25.478927203065133
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Malayalam
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 ml dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4170
- Wer: 25.4789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0 | 71.01 | 1000 | 0.4170 | 25.4789 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
emmyapi/distilbart-podimo-data-5
|
emmyapi
| 2022-12-18T10:25:14Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"Summarization",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-15T16:26:15Z |
---
tasks: summarization
license: apache-2.0
tags:
- generated_from_trainer
- Summarization
model-index:
- name: distilbart-podimo-data-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-podimo-data-5
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1325
## Model description
| model | rouge1 | rouge2 | rougeL | rougeLsum |
|---|---|---|---|---|
| sshleifer/distilbart-cnn-12-6 | 0.202654 | 0.025766 | 0.123072 | 0.130183 |
| emmyapi/distilbart-podimo-data-3 | 0.235147 | 0.047087 | 0.151535 | 0.161782 |
| emmyapi/distilbart-podimo-data-4 | 0.236926 | 0.048327 | 0.153539 | 0.165026 |
| emmyapi/distilbart-podimo-data-5 | 0.259024 | 0.061665 | 0.167187 | 0.178399 |
| emmyapi/distilbart-podimo-data-7 | 0.298888 | 0.059900 | 0.159479 | 0.185049 |
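A minimal summarization sketch (the input text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="emmyapi/distilbart-podimo-data-5")
episode = "In this episode we discuss the history of podcasting and how independent creators build an audience."
print(summarizer(episode, max_length=40, min_length=10)[0]["summary_text"])
```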
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3477 | 3.33 | 500 | 3.7027 |
| 2.6286 | 6.66 | 1000 | 3.6995 |
| 2.0718 | 10.0 | 1500 | 3.8868 |
| 1.7806 | 13.33 | 2000 | 4.1325 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Bachchu/wav2vec2-large-xlsr-asamese-demo-colab
|
Bachchu
| 2022-12-18T09:57:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-18T08:56:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-asamese-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-asamese-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3328
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 25.0 | 100 | 4.8185 | 1.0 |
| No log | 50.0 | 200 | 3.5026 | 1.0 |
| No log | 75.0 | 300 | 3.4142 | 1.0 |
| 6.5763 | 100.0 | 400 | 3.3328 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
marma/whisper-small-sv
|
marma
| 2022-12-18T09:34:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:dataset/riksdagen",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-16T08:05:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- dataset/riksdagen
metrics:
- wer
model-index:
- name: whisper-small-sv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: dataset/riksdagen audiofolder
type: dataset/riksdagen
config: test
split: test
args: audiofolder
metrics:
- name: WER
type: wer
value: 0.22405586116204554
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: test
args:
language: sv-SE
metrics:
- name: WER
type: wer
value: 26.69
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-sv
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the dataset/riksdagen audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2917
- Wer: 0.2241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5023 | 0.04 | 250 | 0.5072 | 0.2949 |
| 0.4678 | 0.08 | 500 | 0.4632 | 0.2780 |
| 0.4233 | 0.12 | 750 | 0.4384 | 0.2749 |
| 0.4113 | 0.17 | 1000 | 0.4205 | 0.2673 |
| 0.3994 | 0.21 | 1250 | 0.4079 | 0.2649 |
| 0.3841 | 0.25 | 1500 | 0.3947 | 0.2609 |
| 0.3775 | 0.29 | 1750 | 0.3854 | 0.2564 |
| 0.383 | 0.33 | 2000 | 0.3781 | 0.2540 |
| 0.3651 | 0.37 | 2250 | 0.3721 | 0.2532 |
| 0.3456 | 0.42 | 2500 | 0.3651 | 0.2517 |
| 0.3719 | 0.46 | 2750 | 0.3612 | 0.2481 |
| 0.3399 | 0.5 | 3000 | 0.3561 | 0.2437 |
| 0.3428 | 0.54 | 3250 | 0.3522 | 0.2465 |
| 0.3442 | 0.58 | 3500 | 0.3451 | 0.2399 |
| 0.3315 | 0.62 | 3750 | 0.3431 | 0.2417 |
| 0.3299 | 0.66 | 4000 | 0.3404 | 0.2428 |
| 0.3417 | 0.71 | 4250 | 0.3373 | 0.2395 |
| 0.3399 | 0.75 | 4500 | 0.3332 | 0.2390 |
| 0.3222 | 0.79 | 4750 | 0.3310 | 0.2385 |
| 0.3319 | 0.83 | 5000 | 0.3291 | 0.2372 |
| 0.3188 | 0.87 | 5250 | 0.3265 | 0.2359 |
| 0.3197 | 0.91 | 5500 | 0.3240 | 0.2378 |
| 0.3099 | 0.96 | 5750 | 0.3215 | 0.2342 |
| 0.3132 | 1.0 | 6000 | 0.3195 | 0.2374 |
| 0.286 | 1.04 | 6250 | 0.3179 | 0.2348 |
| 0.2765 | 1.08 | 6500 | 0.3166 | 0.2354 |
| 0.2795 | 1.12 | 6750 | 0.3153 | 0.2324 |
| 0.2825 | 1.16 | 7000 | 0.3145 | 0.2316 |
| 0.2865 | 1.21 | 7250 | 0.3144 | 0.2329 |
| 0.2703 | 1.25 | 7500 | 0.3126 | 0.2326 |
| 0.2792 | 1.29 | 7750 | 0.3121 | 0.2324 |
| 0.2749 | 1.33 | 8000 | 0.3106 | 0.2325 |
| 0.2762 | 1.37 | 8250 | 0.3093 | 0.2315 |
| 0.2813 | 1.41 | 8500 | 0.3080 | 0.2302 |
| 0.2755 | 1.45 | 8750 | 0.3078 | 0.2321 |
| 0.2779 | 1.5 | 9000 | 0.3062 | 0.2305 |
| 0.2764 | 1.54 | 9250 | 0.3059 | 0.2336 |
| 0.2763 | 1.58 | 9500 | 0.3041 | 0.2310 |
| 0.2723 | 1.62 | 9750 | 0.3027 | 0.2292 |
| 0.2756 | 1.66 | 10000 | 0.3026 | 0.2301 |
| 0.2663 | 1.7 | 10250 | 0.3008 | 0.2262 |
| 0.269 | 1.75 | 10500 | 0.3006 | 0.2280 |
| 0.2682 | 1.79 | 10750 | 0.3002 | 0.2291 |
| 0.2721 | 1.83 | 11000 | 0.2994 | 0.2267 |
| 0.2681 | 1.87 | 11250 | 0.2987 | 0.2288 |
| 0.278 | 1.91 | 11500 | 0.2978 | 0.2296 |
| 0.2625 | 1.95 | 11750 | 0.2978 | 0.2278 |
| 0.2583 | 1.99 | 12000 | 0.2967 | 0.2259 |
| 0.2403 | 2.04 | 12250 | 0.2976 | 0.2276 |
| 0.2414 | 2.08 | 12500 | 0.2972 | 0.2264 |
| 0.251 | 2.12 | 12750 | 0.2969 | 0.2256 |
| 0.2404 | 2.16 | 13000 | 0.2968 | 0.2253 |
| 0.2473 | 2.2 | 13250 | 0.2966 | 0.2253 |
| 0.2444 | 2.24 | 13500 | 0.2965 | 0.2262 |
| 0.2512 | 2.29 | 13750 | 0.2962 | 0.2253 |
| 0.2417 | 2.33 | 14000 | 0.2950 | 0.2280 |
| 0.2445 | 2.37 | 14250 | 0.2950 | 0.2256 |
| 0.2461 | 2.41 | 14500 | 0.2949 | 0.2262 |
| 0.2496 | 2.45 | 14750 | 0.2944 | 0.2261 |
| 0.2422 | 2.49 | 15000 | 0.2942 | 0.2248 |
| 0.2415 | 2.53 | 15250 | 0.2940 | 0.2252 |
| 0.2465 | 2.58 | 15500 | 0.2932 | 0.2269 |
| 0.2508 | 2.62 | 15750 | 0.2931 | 0.2245 |
| 0.2339 | 2.66 | 16000 | 0.2930 | 0.2257 |
| 0.2441 | 2.7 | 16250 | 0.2923 | 0.2247 |
| 0.2444 | 2.74 | 16500 | 0.2921 | 0.2246 |
| 0.2416 | 2.78 | 16750 | 0.2918 | 0.2264 |
| 0.2425 | 2.83 | 17000 | 0.2916 | 0.2251 |
| 0.2404 | 2.87 | 17250 | 0.2916 | 0.2234 |
| 0.2456 | 2.91 | 17500 | 0.2911 | 0.2238 |
| 0.2384 | 2.95 | 17750 | 0.2908 | 0.2252 |
| 0.244 | 2.99 | 18000 | 0.2905 | 0.2251 |
| 0.2197 | 3.03 | 18250 | 0.2919 | 0.2239 |
| 0.2194 | 3.08 | 18500 | 0.2919 | 0.2237 |
| 0.2294 | 3.12 | 18750 | 0.2919 | 0.2243 |
| 0.2225 | 3.16 | 19000 | 0.2918 | 0.2252 |
| 0.2229 | 3.2 | 19250 | 0.2919 | 0.2242 |
| 0.2153 | 3.24 | 19500 | 0.2917 | 0.2241 |
| 0.2137 | 3.28 | 19750 | 0.2917 | 0.2239 |
| 0.2194 | 3.32 | 20000 | 0.2917 | 0.2241 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.0a0+8a1a93a
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ruzarx/q-FrozenLake-v1-4x4-noSlippery
|
ruzarx
| 2022-12-18T09:26:37Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T09:26:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and returns the saved model dict
model = load_from_hub(repo_id="ruzarx/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
dhandapanip/Ss
|
dhandapanip
| 2022-12-18T08:09:02Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-12-18T08:09:02Z |
---
license: bigscience-bloom-rail-1.0
---
|
mdsunbeam/ppo-LunarLander-v2
|
mdsunbeam
| 2022-12-18T08:04:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T08:03:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.77 +/- 15.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="mdsunbeam/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nu-dialogue/sfc2022-stable-diffusion
|
nu-dialogue
| 2022-12-18T07:20:46Z | 16 | 3 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"ja",
"japanese",
"arxiv:2112.10752",
"license:other",
"diffusers:JapaneseStableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-18T04:50:44Z |
---
language: ja
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- ja
- japanese
inference: true
# extra_gated_prompt: |-
# One more step before getting this model.
# This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
# The CreativeML OpenRAIL License specifies:
# 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
# 2. rinna Co., Ltd. claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
# 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
# Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
# By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well.
# extra_gated_fields:
# I have read the License and agree with its terms: checkbox
---
# SFCOCO Stable Diffusion Model Card
SFCOCO Stable Diffusion is a Japanese-specific latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
This model was fine-tuned from [Japanese Stable Diffusion](https://huggingface.co/rinna/japanese-stable-diffusion), a powerful Japanese-specific latent text-to-image diffusion model.
We used the [Stable Diffusion text-to-image fine-tuning script](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) from [🤗 Diffusers](https://github.com/huggingface/diffusers).
[](https://colab.research.google.com/github/nu-dialogue/clip-prefix-caption-jp/blob/master/notebooks/sfc2022_stable_diffusion.ipynb)
## Model Details
- **Developed by:** Atsumoto Ohashi
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** Japanese
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model (LDM)](https://arxiv.org/abs/2112.10752) that used [Japanese Stable Diffusion](https://huggingface.co/rinna/japanese-stable-diffusion) as a pre-trained model.
- **Resources for more information:** [Japanese Stable Diffusion GitHub Repository](https://github.com/rinnakk/japanese-stable-diffusion)
## Examples
First, install our package as follows. This package is a modified version of [🤗's Diffusers library](https://github.com/huggingface/diffusers) adapted to run Japanese Stable Diffusion.
```bash
pip install git+https://github.com/rinnakk/japanese-stable-diffusion
```
Run this command to log in with your HF Hub token if you haven't before:
```bash
huggingface-cli login
```
Running the pipeline with the k_lms scheduler:
```python
import torch
from torch import autocast
from diffusers import LMSDiscreteScheduler
from japanese_stable_diffusion import JapaneseStableDiffusionPipeline
model_id = "nu-dialogue/sfc2022-stable-diffusion"
device = "cuda"
# Use the K-LMS scheduler here instead
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = JapaneseStableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True, torch_dtype=torch.float16)
pipe = pipe.to(device)
prompt = "福澤諭吉像の写真"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("output.png")
```
_Note: `JapaneseStableDiffusionPipeline` is almost the same as diffusers' `StableDiffusionPipeline`, but adds a few lines to initialize our models properly._
## Training
**Training Data**
We used the SFCOCO2021 and SFCOCO2022 dataset for training the model.
You can see these datasets in [this repository](https://github.com/nu-dialogue/clip-prefix-caption-jp).
**Training Procedure**
SFCOCO Stable Diffusion has the same architecture as Japanese Stable Diffusion and was fine-tuned from it.
We used the [Stable Diffusion text-to-image fine-tuning script](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) from [🤗 Diffusers](https://github.com/huggingface/diffusers).
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
```bibtex
@misc{japanese_stable_diffusion,
author = {Shing, Makoto and Sawada, Kei},
title = {Japanese Stable Diffusion},
howpublished = {\url{https://github.com/rinnakk/japanese-stable-diffusion}},
month = {September},
year = {2022},
}
```
*This model card was written by: Atsumoto Ohashi and is based on the [Japanese Stable Diffusion Model Card](https://github.com/rinnakk/japanese-stable-diffusion).*
|
Fiacre/ComicsBlend
|
Fiacre
| 2022-12-18T06:05:57Z | 0 | 9 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-12-17T04:54:25Z |
---
license: creativeml-openrail-m
---
# How to use:
Download "ComicsBlend.ckpt" and add it to your model folder. Important: add all these keywords to your prompt: ComplexLA style, nvinkpunk, marioalberti artstyle, ghibli style
# Individual components of the blend:
This is an equal-parts blend of four models: 25% Complex-Lineart, 25% Inkpunk-Diffusion, 25% Comic-Diffusion, and 25% Ghibli-Diffusion.
# Links to the constituent models:
https://huggingface.co/Conflictx/Complex-Lineart
https://huggingface.co/Envvi/Inkpunk-Diffusion
https://huggingface.co/ogkalu/Comic-Diffusion
https://huggingface.co/nitrosocke/Ghibli-Diffusion
# Prompts
Important: use all the prompt keywords from the constituent models at the same time: ComplexLA style, nvinkpunk, marioalberti artstyle, ghibli style
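For example, a full prompt might look like: `ComplexLA style, nvinkpunk, marioalberti artstyle, ghibli style, portrait of a knight in a misty forest` (the subject is arbitrary; only the four style keywords are required).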
# Sample images:













# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Uberduck/iSTFTNet
|
Uberduck
| 2022-12-18T04:36:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-12-05T07:05:25Z |
# iSTFTNet Pre-Trained Models
https://github.com/rishikksh20/iSTFTNet-pytorch
Information about the files:
As of right now, the files in this repository were trained on a 22 kHz sample rate only.
Format:
- g_ = generator
- do_ = discriminator
- _xxxxxx = step number
- music_ = models prefixed with music_ were trained specifically on music data; as of right now, the music data comes from the Free Music Archive (FMA)
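The checkpoints are presumably ordinary PyTorch files, so a quick inspection sketch looks like this (the filename below is hypothetical and just follows the naming scheme above):
```python
import torch

# Load a generator checkpoint saved at some step; inspect its structure before use,
# since the exact layout depends on the training script.
state = torch.load("g_00500000", map_location="cpu")
print(list(state.keys()) if isinstance(state, dict) else type(state))
```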
|
vjkrish/lunarLander
|
vjkrish
| 2022-12-18T04:24:06Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T04:11:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -606.02 +/- 190.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="vjkrish/lunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Phoeo/iwakura_lain_hypernetwork
|
Phoeo
| 2022-12-18T03:57:07Z | 0 | 0 | null |
[
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-12-05T20:07:43Z |
---
license: cc-by-sa-4.0
---
Hypernetwork trained on AnythingV3.
Keyword: `iwakuralain`
[](https://postimg.cc/VrwQK5P4)
|
ai-project/wav2vec2-large-xls-r-300m-vi-25p
|
ai-project
| 2022-12-18T03:32:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-17T05:29:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-vi-colab-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-vi-colab-all
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.4537
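A minimal inference sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe a local recording with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="ai-project/wav2vec2-large-xls-r-300m-vi-25p")
print(asr("audio.wav")["text"])
```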
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.448 | 2.4 | 400 | inf | 1.0 |
| 2.8589 | 4.79 | 800 | inf | 0.7777 |
| 1.4919 | 7.19 | 1200 | inf | 0.5968 |
| 1.1255 | 9.58 | 1600 | inf | 0.5540 |
| 0.9354 | 11.98 | 2000 | inf | 0.4970 |
| 0.7816 | 14.37 | 2400 | inf | 0.4799 |
| 0.6822 | 16.77 | 2800 | inf | 0.4785 |
| 0.5768 | 19.16 | 3200 | inf | 0.4704 |
| 0.5031 | 21.56 | 3600 | inf | 0.4609 |
| 0.4589 | 23.95 | 4000 | inf | 0.4585 |
| 0.4136 | 26.35 | 4400 | inf | 0.4592 |
| 0.3829 | 28.74 | 4800 | inf | 0.4537 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AinTziLLo/ppo-LunarLander-v2
|
AinTziLLo
| 2022-12-18T02:27:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T01:11:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 285.39 +/- 21.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="AinTziLLo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Scrya/whisper-tiny-id
|
Scrya
| 2022-12-18T02:27:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-17T11:00:38Z |
---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Tiny ID - FLEURS-CV
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: id_id
split: test
metrics:
- type: wer
value: 30.8
name: WER
- type: cer
value: 11.29
name: CER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: id
split: test
metrics:
- type: wer
value: 32.49
name: WER
- type: cer
value: 12.25
name: CER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny ID - FLEURS-CV
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5129
- Wer: 31.1298
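A minimal transcription sketch using the standard 🤗 Transformers pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# The fine-tuned checkpoint transcribes Indonesian speech.
asr = pipeline("automatic-speech-recognition", model="Scrya/whisper-tiny-id")
print(asr("sample.wav")["text"])
```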
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.617 | 1.43 | 500 | 0.5956 | 40.1521 |
| 0.4062 | 2.86 | 1000 | 0.4991 | 33.2066 |
| 0.2467 | 4.29 | 1500 | 0.4755 | 31.6802 |
| 0.1904 | 5.71 | 2000 | 0.4681 | 30.5907 |
| 0.118 | 7.14 | 2500 | 0.4776 | 30.9368 |
| 0.0941 | 8.57 | 3000 | 0.4831 | 30.7297 |
| 0.0771 | 10.0 | 3500 | 0.4912 | 31.1014 |
| 0.0536 | 11.43 | 4000 | 0.5043 | 31.2319 |
| 0.0502 | 12.86 | 4500 | 0.5113 | 31.2404 |
| 0.0418 | 14.29 | 5000 | 0.5129 | 31.1298 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
greedypiggy/ppo-Huggy
|
greedypiggy
| 2022-12-18T01:59:16Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-18T01:59:08Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: greedypiggy/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
geninhu/whisper-medium-gl
|
geninhu
| 2022-12-18T01:20:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"gl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-17T14:24:54Z |
---
language:
- gl
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium Galician
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 gl
type: mozilla-foundation/common_voice_11_0
config: gl
split: test
args: gl
metrics:
- name: Wer
type: wer
value: 8.41678391128031
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Galician
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 gl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2864
- Wer: 8.4168
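A minimal usage sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe Galician speech with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="geninhu/whisper-medium-gl")
print(asr("audio.wav")["text"])
```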
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0074 | 6.01 | 1000 | 0.2564 | 8.8927 |
| 0.0006 | 12.03 | 2000 | 0.2864 | 8.4168 |
| 0.0003 | 19.01 | 3000 | 0.3043 | 8.5078 |
| 0.0002 | 25.02 | 4000 | 0.3145 | 8.4913 |
| 0.0002 | 32.01 | 5000 | 0.3189 | 8.4706 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
jlondonobo/whisper-large-v2-pt-v3
|
jlondonobo
| 2022-12-18T01:19:32Z | 14 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-17T18:57:25Z |
---
language:
- pt
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large Portuguese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 pt
type: mozilla-foundation/common_voice_11_0
config: pt
split: test
args: pt
metrics:
- name: Wer
type: wer
value: 4.8385198634858195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Portuguese
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 pt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1503
- Wer: 4.8385
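A minimal inference sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe Portuguese speech with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="jlondonobo/whisper-large-v2-pt-v3")
print(asr("sample.wav")["text"])
```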
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1526 | 0.33 | 500 | 0.1588 | 4.9074 |
| 0.1046 | 1.3 | 1000 | 0.1510 | 4.8806 |
| 0.079 | 2.28 | 1500 | 0.1503 | 4.8385 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
tashatsar/ppo-LunarLander-v2
|
tashatsar
| 2022-12-18T01:17:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T00:39:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.94 +/- 13.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="tashatsar/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
toastedshibe/ppo-LunarLander-v2
|
toastedshibe
| 2022-12-18T00:30:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-18T00:20:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.60 +/- 15.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="toastedshibe/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kris666/distilbert-base-uncased-finetuned-cola
|
kris666
| 2022-12-18T00:28:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-17T22:40:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5600275777662214
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Matthews Correlation: 0.5600
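A minimal usage sketch with the 🤗 Transformers text-classification pipeline (label names come from the fine-tuning config; for CoLA they typically map to unacceptable/acceptable):
```python
from transformers import pipeline

# Score the grammatical acceptability of a sentence.
clf = pipeline("text-classification", model="kris666/distilbert-base-uncased-finetuned-cola")
print(clf("The book was written by John."))
```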
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5212 | 1.0 | 535 | 0.5335 | 0.4275 |
| 0.3458 | 2.0 | 1070 | 0.5003 | 0.4923 |
| 0.2343 | 3.0 | 1605 | 0.5336 | 0.5600 |
| 0.174 | 4.0 | 2140 | 0.7611 | 0.5332 |
| 0.1205 | 5.0 | 2675 | 0.8059 | 0.5547 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pittawat/autotrain-twitter-covid-19-spam-detection-2512177276
|
pittawat
| 2022-12-18T00:20:04Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:pittawat/autotrain-data-twitter-covid-19-spam-detection",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-18T00:19:06Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pittawat/autotrain-data-twitter-covid-19-spam-detection
co2_eq_emissions:
emissions: 1.0218403202204225
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2512177276
- CO2 Emissions (in grams): 1.0218
## Validation Metrics
- Loss: 0.275
- Accuracy: 0.906
- Precision: 0.930
- Recall: 0.960
- AUC: 0.882
- F1: 0.945
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pittawat/autotrain-twitter-covid-19-spam-detection-2512177276
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pittawat/autotrain-twitter-covid-19-spam-detection-2512177276", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pittawat/autotrain-twitter-covid-19-spam-detection-2512177276", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
PawanUP85/Arcpro
|
PawanUP85
| 2022-12-17T23:40:10Z | 0 | 0 | null |
[
"license:bsd-3-clause-clear",
"region:us"
] | null | 2022-12-17T23:39:15Z |
---
license: bsd-3-clause-clear
---
```bash
git lfs install
git clone https://huggingface.co/PawanUP85/Arcpro
```
|
Balthamos/chantum-test-q
|
Balthamos
| 2022-12-17T23:36:38Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-17T03:35:15Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: chantum1
---
### Chantum Test q Dreambooth model trained by Balthamos with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-768 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Use `chantum1` in your prompt.
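A minimal local inference sketch with `diffusers` (standard Stable Diffusion usage; the prompt is only an example):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint from the Hub and generate with the concept token.
pipe = StableDiffusionPipeline.from_pretrained("Balthamos/chantum-test-q", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of chantum1").images[0]
image.save("chantum.png")
```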

|
camenduru/xformers-hf-a10g
|
camenduru
| 2022-12-17T23:29:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-12-05T11:22:57Z |
---
title: xformers-hf-a10g
emoji: 🚀
colorFrom: indigo
colorTo: indigo
pinned: false
---
https://github.com/camenduru/stable-diffusion-webui-colab/releases
|
kejian/fanatic-awr
|
kejian
| 2022-12-17T23:13:42Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-17T08:29:32Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: fanatic-awr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fanatic-awr
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 6294
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
```python
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'batch_size': 128,
'every_n_steps': 128,
'force_call_on': [6294],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 128,
'force_call_on': [6294],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': 'cf05a2b0558c03b08c78f07662c22989785b9520',
'value_head_config': {'is_detached': False}},
'path_or_name': 'kejian/mighty-mle'},
'objective': {'alpha': 0.05, 'beta': 1, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 256,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'fanatic-awr',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 6294,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
             'weight_decay': 0.1}}
```
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/dypfced3
|
DrishtiSharma/whisper-small-hindi-3k-steps
|
DrishtiSharma
| 2022-12-17T22:59:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-17T20:58:54Z |
---
language:
- hi
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hindi - Drishti Sharma
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 16.67658639318744
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hindi - Drishti Sharma
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3013
- Wer: 16.6766
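A minimal inference sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe Hindi speech with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/whisper-small-hindi-3k-steps")
print(asr("audio.wav")["text"])
```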
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0188 | 3.67 | 3000 | 0.3013 | 16.6766 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Musha-the-Yusha/MountainCar-v0
|
Musha-the-Yusha
| 2022-12-17T22:40:35Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-17T22:00:34Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
metrics:
- type: mean_reward
value: -132.80 +/- 22.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="Musha-the-Yusha/MountainCar-v0", filename="ppo-MountainCar-v0.zip")
model = PPO.load(checkpoint)
```
|
spayot/hf-drl-unit1bonus-ppo-Huggy
|
spayot
| 2022-12-17T22:40:30Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-17T22:40:18Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: spayot/hf-drl-unit1bonus-ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
kejian/fanatic-rwr
|
kejian
| 2022-12-17T22:36:04Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-17T07:45:53Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: fanatic-rwr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fanatic-rwr
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
```python
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'batch_size': 128,
'every_n_steps': 256,
'force_call_on': [12588],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 256,
'force_call_on': [12588],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': 'cf05a2b0558c03b08c78f07662c22989785b9520',
'value_head_config': {'is_detached': False}},
'path_or_name': 'kejian/mighty-mle'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'fanatic-rwr',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 12588,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
             'weight_decay': 0.1}}
```
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/j6mrfl54
|
jbdaniel/bert-large-uncased-finetuned-bert-large-uncase-p1
|
jbdaniel
| 2022-12-17T22:21:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-12-17T18:33:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-bert-large-uncase-p1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-bert-large-uncase-p1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0993
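A minimal usage sketch with the 🤗 Transformers fill-mask pipeline:
```python
from transformers import pipeline

# Predict the masked token with the fine-tuned checkpoint.
fill = pipeline("fill-mask", model="jbdaniel/bert-large-uncased-finetuned-bert-large-uncase-p1")
print(fill("Paris is the [MASK] of France."))
```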
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0816 | 1.0 | 11392 | 0.0993 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AsmaAsma/my-awesome-setfit-model
|
AsmaAsma
| 2022-12-17T21:31:02Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-12-15T18:09:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# AsmaAsma/my-awesome-setfit-model
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AsmaAsma/my-awesome-setfit-model')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AsmaAsma/my-awesome-setfit-model')
model = AutoModel.from_pretrained('AsmaAsma/my-awesome-setfit-model')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AsmaAsma/my-awesome-setfit-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 28,
"warmup_steps": 3,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Schoolar/ppo-LunarLander-long_training_5kk
|
Schoolar
| 2022-12-17T21:22:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-17T21:22:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PP0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.06 +/- 12.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="Schoolar/ppo-LunarLander-long_training_5kk", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
anuragshas/whisper-large-v2-ta
|
anuragshas
| 2022-12-17T21:19:11Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ta",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-17T15:00:39Z |
---
language:
- ta
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large-v2 Tamil
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 ta
type: mozilla-foundation/common_voice_11_0
config: ta
split: test
args: ta
metrics:
- name: Wer
type: wer
value: 8.45381557902738
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Tamil
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 ta dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1727
- Wer: 8.4538
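A minimal inference sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe Tamil speech with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="anuragshas/whisper-large-v2-ta")
print(asr("sample.wav")["text"])
```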
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0723 | 1.27 | 1000 | 0.1727 | 8.4538 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
MohamedSaad/CovidAutoTrainTest
|
MohamedSaad
| 2022-12-17T21:01:18Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"ar",
"dataset:MohamedSaad/autotrain-data-covid",
"doi:10.57967/hf/0219",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-17T20:59:22Z |
---
tags:
- autotrain
- text-classification
language:
- ar
widget:
- text: "I love AutoTrain 🤗"
datasets:
- MohamedSaad/autotrain-data-covid
co2_eq_emissions:
emissions: 1.7646991170797304
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2509577239
- CO2 Emissions (in grams): 1.7647
## Validation Metrics
- Loss: 1.861
- Accuracy: 0.319
- Macro F1: 0.231
- Micro F1: 0.319
- Weighted F1: 0.337
- Macro Precision: 0.270
- Micro Precision: 0.319
- Weighted Precision: 0.613
- Macro Recall: 0.346
- Micro Recall: 0.319
- Weighted Recall: 0.319
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/MohamedSaad/autotrain-covid-2509577239
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MohamedSaad/autotrain-covid-2509577239", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("MohamedSaad/autotrain-covid-2509577239", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
kanixwang/eth-setfit-payment-model
|
kanixwang
| 2022-12-17T20:22:34Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-12-17T06:19:22Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kanixwang/eth-setfit-payment-model
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kanixwang/eth-setfit-payment-model')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kanixwang/eth-setfit-payment-model')
model = AutoModel.from_pretrained('kanixwang/eth-setfit-payment-model')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kanixwang/eth-setfit-payment-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 26915 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 26915,
"warmup_steps": 2692,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sd-concepts-library/painting-made-by-bruegel-v4
|
sd-concepts-library
| 2022-12-17T20:02:41Z | 0 | 4 | null |
[
"license:mit",
"region:us"
] | null | 2022-12-17T18:01:28Z |
---
license: mit
---
### painting made by bruegel V4 on Stable Diffusion
This version includes entire paintings as well as close-ups.
This is the `<bruegel-style-artwork>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Trained using `stabilityai/stable-diffusion-2-base`.
Example output:

Here is the new concept you will be able to use as a `style`:



























































































































































































































































|
sinsforeal/akazaakariv2
|
sinsforeal
| 2022-12-17T20:02:16Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2022-12-17T19:50:21Z |
---
license: openrail
---
Akaza Akari model trained on 157 images at 768 resolution. I use this prompt to get her school uniform:
masterpiece, best quality, 1girl, solo, long_sleeves, purple eyes, akazaakari, red hair, short hair, ahoge, double bun, nanamori school uniform, short sleeves, (white shirt:1.1) , (black sailor collar:1.2), pleated skirt, namori, yuru yuri, standing,
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
Steps: 18, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1872758733, Size: 720x1280, Model hash: 647c01d6, Batch size: 4, Batch pos: 0, Denoising strength: 0.7, Clip skip: 2, First pass size: 0x0

|
Schoolar/ppo-LunarLander-long_training_2kk
|
Schoolar
| 2022-12-17T19:55:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-17T19:54:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PP0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.04 +/- 13.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="Schoolar/ppo-LunarLander-long_training_2kk", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
malmarz/whisper_small_s5k_b64_nofreeze_mgb2cv11
|
malmarz
| 2022-12-17T19:54:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-16T20:22:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4429
- Wer: 52.7568
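A minimal inference sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder; the repo name suggests MGB-2/Common Voice fine-tuning data):
```python
from transformers import pipeline

# Transcribe speech with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="malmarz/whisper_small_s5k_b64_nofreeze_mgb2cv11")
print(asr("audio.wav")["text"])
```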
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3629 | 1.03 | 1000 | 0.4917 | 53.1291 |
| 0.289 | 2.06 | 2000 | 0.4747 | 61.3855 |
| 0.2996 | 3.08 | 3000 | 0.4542 | 55.4692 |
| 0.2331 | 4.11 | 4000 | 0.4353 | 51.4917 |
| 0.1566 | 5.14 | 5000 | 0.4429 | 52.7568 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
dkuznetsov/ppo-LunarLander-v2
|
dkuznetsov
| 2022-12-17T19:40:09Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-17T15:34:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.41 +/- 19.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="dkuznetsov/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Sambosis/distilbert-base-uncased-finetuned-squad
|
Sambosis
| 2022-12-17T19:01:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-12-11T18:37:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1904
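A minimal extractive question-answering sketch with the 🤗 Transformers pipeline:
```python
from transformers import pipeline

# Extract an answer span from a context passage.
qa = pipeline("question-answering", model="Sambosis/distilbert-base-uncased-finetuned-squad")
result = qa(question="What was the model fine-tuned on?",
            context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.")
print(result["answer"])
```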
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6224 | 1.0 | 692 | 1.1812 |
| 1.0216 | 2.0 | 1384 | 1.2495 |
| 0.5638 | 3.0 | 2076 | 1.3098 |
| 0.3679 | 4.0 | 2768 | 1.6784 |
| 0.2703 | 5.0 | 3460 | 1.8842 |
| 0.1057 | 6.0 | 4152 | 2.1904 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
dor88/Taxi-v3
|
dor88
| 2022-12-17T18:53:27Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-17T18:53:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.68
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym, pickle
from huggingface_hub import hf_hub_download

# Download and unpickle the model dict (env_id, Q-table, ...).
model = pickle.load(open(hf_hub_download(repo_id="dor88/Taxi-v3", filename="q-learning.pkl"), "rb"))
env = gym.make(model["env_id"])  # is_slippery applies to FrozenLake only; Taxi-v3 takes no such kwarg
```
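A greedy rollout sketch, assuming the pickled dict stores the table under a `"qtable"` key (as in the Deep RL course notebooks) and the classic 4-tuple `gym` step API:
```python
import numpy as np

state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, done, _ = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```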
|
sd-concepts-library/ahx-model-3
|
sd-concepts-library
| 2022-12-17T18:48:30Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-12-17T18:48:26Z |
---
license: mit
---
### ahx-model-3 on Stable Diffusion
This is the `<ahx-model-3>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
KMW/ppo-LunarLander-v2
|
KMW
| 2022-12-17T18:45:39Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-17T18:45:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.49 +/- 20.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed to follow the usual <algo>-<env>.zip convention.
checkpoint = load_from_hub(repo_id="KMW/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
msoilan-usal/ppo-Huggy
|
msoilan-usal
| 2022-12-17T18:42:36Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2022-12-17T18:42:22Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: msoilan-usal/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
polejowska/vit-vit-base-patch16-224-in21k-eurosat
|
polejowska
| 2022-12-17T18:36:59Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-17T17:32:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-vit-base-patch16-224-in21k-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.988641975308642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-vit-base-patch16-224-in21k-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0957
- Accuracy: 0.9886
## Model description
More information needed
## Intended uses & limitations
More information needed
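A minimal inference sketch with the `transformers` image-classification pipeline (the input filename is illustrative only):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="polejowska/vit-vit-base-patch16-224-in21k-eurosat")
preds = classifier("satellite_tile.png")  # path or URL to an image; placeholder here
print(preds[:3])  # top predictions with labels and scores
```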
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
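A sketch of these settings as `TrainingArguments` (`output_dir` is a placeholder; `total_train_batch_size: 128` is derived from 32 per device times 4 accumulation steps, not a separate argument):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./results",          # placeholder, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # effective batch size: 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```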
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3303 | 0.99 | 147 | 0.2950 | 0.9790 |
| 0.1632 | 1.99 | 294 | 0.1593 | 0.9842 |
| 0.1097 | 2.99 | 441 | 0.1223 | 0.9859 |
| 0.0868 | 3.99 | 588 | 0.1053 | 0.9877 |
| 0.0651 | 4.99 | 735 | 0.0957 | 0.9886 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Phillippe/Musical_Isotope_Hypernetworks
|
Phillippe
| 2022-12-17T18:28:54Z | 0 | 2 | null |
[
"license:openrail",
"region:us"
] | null | 2022-12-17T18:12:31Z |
---
license: openrail
---
Hypernetworks of the Musical Isotope girls: Kafu, Sekai, Rime, Coko, and Haru.





|
Aileenvl/ppo-LunarLander-v2
|
Aileenvl
| 2022-12-17T18:17:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-17T18:16:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.26 +/- 14.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed to follow the usual <algo>-<env>.zip convention.
checkpoint = load_from_hub(repo_id="Aileenvl/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|