| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-12 12:31:00) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 555 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-12 12:28:53) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
zwtharry/Taxiiv3
|
zwtharry
| 2023-07-12T00:35:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T06:49:17Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxiiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the Deep RL course uses Gymnasium; classic `gym` also works with minor API differences

# `load_from_hub` is the helper defined in the Deep RL course notebook
# (it downloads the pickle from the Hub and returns the saved model dict)
model = load_from_hub(repo_id="zwtharry/Taxiiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
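For reference, a greedy rollout with the downloaded Q-table might look like the sketch below; the `qtable` key and the Gymnasium-style 5-tuple `step()` return are assumptions based on the Deep RL course format, so adjust them to your setup.
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action (key name assumed)
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```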
|
WasuratS/distilhubert-finetuned-gtzan
|
WasuratS
| 2023-07-12T00:27:18Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-08T14:11:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set at the best epoch:
- Loss: 0.7305
- Accuracy: 0.9
## Model description
DistilHuBERT is a distilled version of [HuBERT](https://huggingface.co/docs/transformers/model_doc/hubert), pretrained on 16 kHz audio data. <br/>
This model's architecture uses CTC (Connectionist Temporal Classification), a technique used with encoder-only transformers. <br/>
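A minimal inference sketch with the 🤗 `pipeline` API is shown below; the audio path is a placeholder, and clips should be resampled to 16 kHz as in GTZAN.
```python
from transformers import pipeline

# Load the fine-tuned genre classifier from the Hub
classifier = pipeline("audio-classification", model="WasuratS/distilhubert-finetuned-gtzan")

# "song.wav" is a placeholder path; GTZAN-style ~30 s clips at 16 kHz work best
predictions = classifier("song.wav")
print(predictions)  # list of {"label": <genre>, "score": <probability>} dicts
```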
## Training and evaluation data
The training and evaluation data is GTZAN, a popular dataset of 999 songs for music genre classification. <br/>
Each song is a 30-second clip from one of 10 genres of music, spanning disco to metal.<br/>
The training set contains 899 songs and the evaluation set the remaining 100 songs.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 35
- mixed_precision_training: Native AMP
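Expressed as 🤗 `TrainingArguments`, the settings above correspond roughly to the sketch below; the output directory is a placeholder and the multi-GPU/distributed setup is omitted.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=35,
    fp16=True,  # "Native AMP" mixed precision
)
```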
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1728 | 1.0 | 225 | 2.0896 | 0.42 |
| 1.4211 | 2.0 | 450 | 1.4951 | 0.55 |
| 1.2155 | 3.0 | 675 | 1.0669 | 0.72 |
| 1.0175 | 4.0 | 900 | 0.8862 | 0.69 |
| 0.3516 | 5.0 | 1125 | 0.6265 | 0.83 |
| 0.6135 | 6.0 | 1350 | 0.6485 | 0.78 |
| 0.0807 | 7.0 | 1575 | 0.6567 | 0.78 |
| 0.0303 | 8.0 | 1800 | 0.7615 | 0.83 |
| 0.2663 | 9.0 | 2025 | 0.6612 | 0.86 |
| 0.0026 | 10.0 | 2250 | 0.8354 | 0.85 |
| 0.0337 | 11.0 | 2475 | 0.6768 | 0.87 |
| 0.0013 | 12.0 | 2700 | 0.7718 | 0.87 |
| 0.001 | 13.0 | 2925 | 0.7570 | 0.88 |
| 0.0008 | 14.0 | 3150 | 0.8170 | 0.89 |
| 0.0006 | 15.0 | 3375 | 0.7920 | 0.89 |
| 0.0005 | 16.0 | 3600 | 0.9859 | 0.83 |
| 0.0004 | 17.0 | 3825 | 0.8190 | 0.9 |
| 0.0003 | 18.0 | 4050 | 0.7305 | 0.9 |
| 0.0003 | 19.0 | 4275 | 0.8025 | 0.88 |
| 0.0002 | 20.0 | 4500 | 0.8208 | 0.87 |
| 0.0003 | 21.0 | 4725 | 0.7358 | 0.88 |
| 0.0002 | 22.0 | 4950 | 0.8681 | 0.87 |
| 0.0002 | 23.0 | 5175 | 0.7831 | 0.9 |
| 0.0003 | 24.0 | 5400 | 0.8583 | 0.88 |
| 0.0002 | 25.0 | 5625 | 0.8138 | 0.88 |
| 0.0002 | 26.0 | 5850 | 0.7871 | 0.89 |
| 0.0002 | 27.0 | 6075 | 0.8893 | 0.88 |
| 0.0002 | 28.0 | 6300 | 0.8284 | 0.89 |
| 0.0001 | 29.0 | 6525 | 0.8388 | 0.89 |
| 0.0001 | 30.0 | 6750 | 0.8305 | 0.9 |
| 0.0001 | 31.0 | 6975 | 0.8377 | 0.88 |
| 0.0153 | 32.0 | 7200 | 0.8496 | 0.88 |
| 0.0001 | 33.0 | 7425 | 0.8381 | 0.88 |
| 0.0001 | 34.0 | 7650 | 0.8440 | 0.88 |
| 0.0001 | 35.0 | 7875 | 0.8458 | 0.88 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
macapa/emotion-text-classification
|
macapa
| 2023-07-12T00:24:51Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-07-12T00:24:34Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
yusha17/ppo-LunarLander-v2
|
yusha17
| 2023-07-12T00:14:17Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T02:56:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 315.81 +/- 9.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
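Until the author adds their code, a minimal loading sketch might look like the following; the filename is an assumption based on the usual `<algo>-<env>.zip` convention, so check the repository's file list.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="yusha17/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```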
|
conorjudge/distilbert-base-uncased-finetuned-sprint-meds
|
conorjudge
| 2023-07-12T00:11:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-25T13:08:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sprint-meds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sprint-meds
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8427
- Accuracy: 0.8790
- F1: 0.8630
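The card includes no usage code; a hedged inference sketch with the `pipeline` API is shown below (the input sentence is a placeholder, and the label set is defined by the unnamed training dataset).
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="conorjudge/distilbert-base-uncased-finetuned-sprint-meds")
print(classifier("Example medication-related sentence goes here."))  # placeholder input
```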
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.8256 | 1.0 | 21 | 1.9309 | 0.6868 | 0.5992 |
| 1.7067 | 2.0 | 42 | 1.8220 | 0.6993 | 0.6190 |
| 1.5327 | 3.0 | 63 | 1.7250 | 0.7189 | 0.6489 |
| 1.4475 | 4.0 | 84 | 1.6374 | 0.7509 | 0.6903 |
| 1.3108 | 5.0 | 105 | 1.5627 | 0.7438 | 0.6843 |
| 1.1881 | 6.0 | 126 | 1.4905 | 0.7669 | 0.7135 |
| 1.1726 | 7.0 | 147 | 1.4287 | 0.7847 | 0.7379 |
| 1.0681 | 8.0 | 168 | 1.3705 | 0.7829 | 0.7368 |
| 0.9392 | 9.0 | 189 | 1.3214 | 0.7954 | 0.7513 |
| 0.9603 | 10.0 | 210 | 1.2741 | 0.8043 | 0.7613 |
| 0.8349 | 11.0 | 231 | 1.2415 | 0.8185 | 0.7793 |
| 0.8094 | 12.0 | 252 | 1.2028 | 0.8256 | 0.7883 |
| 0.787 | 13.0 | 273 | 1.1673 | 0.8310 | 0.7951 |
| 0.7128 | 14.0 | 294 | 1.1412 | 0.8381 | 0.8056 |
| 0.6821 | 15.0 | 315 | 1.1091 | 0.8399 | 0.8074 |
| 0.6177 | 16.0 | 336 | 1.0906 | 0.8399 | 0.8098 |
| 0.633 | 17.0 | 357 | 1.0645 | 0.8434 | 0.8170 |
| 0.5734 | 18.0 | 378 | 1.0415 | 0.8470 | 0.8199 |
| 0.5181 | 19.0 | 399 | 1.0233 | 0.8416 | 0.8153 |
| 0.4926 | 20.0 | 420 | 1.0076 | 0.8470 | 0.8209 |
| 0.4773 | 21.0 | 441 | 0.9896 | 0.8434 | 0.8184 |
| 0.4361 | 22.0 | 462 | 0.9768 | 0.8470 | 0.8216 |
| 0.4385 | 23.0 | 483 | 0.9624 | 0.8505 | 0.8261 |
| 0.3962 | 24.0 | 504 | 0.9520 | 0.8559 | 0.8309 |
| 0.392 | 25.0 | 525 | 0.9392 | 0.8577 | 0.8339 |
| 0.4095 | 26.0 | 546 | 0.9331 | 0.8577 | 0.8359 |
| 0.3389 | 27.0 | 567 | 0.9242 | 0.8577 | 0.8348 |
| 0.3296 | 28.0 | 588 | 0.9117 | 0.8577 | 0.8344 |
| 0.3527 | 29.0 | 609 | 0.9026 | 0.8665 | 0.8465 |
| 0.315 | 30.0 | 630 | 0.9008 | 0.8648 | 0.8431 |
| 0.2891 | 31.0 | 651 | 0.8923 | 0.8648 | 0.8433 |
| 0.3283 | 32.0 | 672 | 0.8818 | 0.8701 | 0.8507 |
| 0.2967 | 33.0 | 693 | 0.8799 | 0.8683 | 0.8479 |
| 0.2657 | 34.0 | 714 | 0.8750 | 0.8683 | 0.8479 |
| 0.3015 | 35.0 | 735 | 0.8727 | 0.8719 | 0.8526 |
| 0.2847 | 36.0 | 756 | 0.8656 | 0.8754 | 0.8575 |
| 0.2614 | 37.0 | 777 | 0.8630 | 0.8772 | 0.8589 |
| 0.26 | 38.0 | 798 | 0.8604 | 0.8754 | 0.8598 |
| 0.2557 | 39.0 | 819 | 0.8588 | 0.8772 | 0.8612 |
| 0.2389 | 40.0 | 840 | 0.8562 | 0.8790 | 0.8619 |
| 0.2464 | 41.0 | 861 | 0.8529 | 0.8790 | 0.8615 |
| 0.2304 | 42.0 | 882 | 0.8529 | 0.8772 | 0.8613 |
| 0.2356 | 43.0 | 903 | 0.8514 | 0.8790 | 0.8636 |
| 0.2291 | 44.0 | 924 | 0.8479 | 0.8790 | 0.8631 |
| 0.2323 | 45.0 | 945 | 0.8457 | 0.8790 | 0.8631 |
| 0.2281 | 46.0 | 966 | 0.8454 | 0.8790 | 0.8638 |
| 0.2163 | 47.0 | 987 | 0.8432 | 0.8790 | 0.8633 |
| 0.226 | 48.0 | 1008 | 0.8433 | 0.8790 | 0.8631 |
| 0.229 | 49.0 | 1029 | 0.8431 | 0.8790 | 0.8631 |
| 0.2388 | 50.0 | 1050 | 0.8427 | 0.8790 | 0.8630 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_simkd_CEKD_tNone_aNone_tNone_gNone
|
jordyvl
| 2023-07-12T00:02:54Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T22:30:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_simkd_CEKD_tNone_aNone_tNone_gNone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_simkd_CEKD_tNone_aNone_tNone_gNone
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0689
- Accuracy: 0.6
- Brier Loss: 0.6433
- Nll: 2.4057
- F1 Micro: 0.6
- F1 Macro: 0.6101
- Ece: 0.3353
- Aurc: 0.1685
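A hedged inference sketch with the image-classification `pipeline` (the image path is a placeholder; the labels come from the RVL-CDIP subset used for fine-tuning):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jordyvl/vit-small_rvl_cdip_100_examples_per_class_simkd_CEKD_tNone_aNone_tNone_gNone",
)
print(classifier("document.png"))  # placeholder path to a scanned document image
```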
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 0.0859 | 0.0675 | 0.9373 | 7.3238 | 0.0675 | 0.0163 | 0.1099 | 0.9351 |
| No log | 2.0 | 50 | 0.0810 | 0.0675 | 0.9372 | 7.0436 | 0.0675 | 0.0153 | 0.1067 | 0.9365 |
| No log | 3.0 | 75 | 0.0804 | 0.0725 | 0.9368 | 6.5507 | 0.0725 | 0.0268 | 0.1041 | 0.9438 |
| No log | 4.0 | 100 | 0.0800 | 0.0725 | 0.9362 | 6.2816 | 0.0725 | 0.0293 | 0.1056 | 0.9404 |
| No log | 5.0 | 125 | 0.0797 | 0.0775 | 0.9352 | 6.1624 | 0.0775 | 0.0225 | 0.1125 | 0.9037 |
| No log | 6.0 | 150 | 0.0793 | 0.0875 | 0.9337 | 6.0364 | 0.0875 | 0.0376 | 0.1173 | 0.8572 |
| No log | 7.0 | 175 | 0.0788 | 0.13 | 0.9307 | 4.5728 | 0.13 | 0.0918 | 0.1430 | 0.7693 |
| No log | 8.0 | 200 | 0.0781 | 0.2325 | 0.9246 | 3.6321 | 0.2325 | 0.1958 | 0.2225 | 0.5621 |
| No log | 9.0 | 225 | 0.0770 | 0.31 | 0.9103 | 3.3593 | 0.31 | 0.2693 | 0.2782 | 0.4570 |
| No log | 10.0 | 250 | 0.0755 | 0.34 | 0.8830 | 2.9550 | 0.34 | 0.2911 | 0.2951 | 0.4131 |
| No log | 11.0 | 275 | 0.0740 | 0.4075 | 0.8559 | 2.6844 | 0.4075 | 0.3802 | 0.3347 | 0.3241 |
| No log | 12.0 | 300 | 0.0730 | 0.47 | 0.8216 | 2.7315 | 0.47 | 0.4439 | 0.3582 | 0.2707 |
| No log | 13.0 | 325 | 0.0720 | 0.4925 | 0.7913 | 2.6641 | 0.4925 | 0.4606 | 0.3561 | 0.2588 |
| No log | 14.0 | 350 | 0.0717 | 0.4725 | 0.7854 | 2.7229 | 0.4725 | 0.4565 | 0.3296 | 0.2732 |
| No log | 15.0 | 375 | 0.0708 | 0.5125 | 0.7515 | 2.4866 | 0.5125 | 0.4890 | 0.3445 | 0.2379 |
| No log | 16.0 | 400 | 0.0704 | 0.5375 | 0.7424 | 2.4355 | 0.5375 | 0.5131 | 0.3525 | 0.2259 |
| No log | 17.0 | 425 | 0.0702 | 0.545 | 0.7259 | 2.5234 | 0.545 | 0.5227 | 0.3427 | 0.2199 |
| No log | 18.0 | 450 | 0.0696 | 0.545 | 0.7253 | 2.5796 | 0.545 | 0.5318 | 0.3471 | 0.2118 |
| No log | 19.0 | 475 | 0.0697 | 0.56 | 0.7163 | 2.3050 | 0.56 | 0.5547 | 0.3494 | 0.2048 |
| 0.0745 | 20.0 | 500 | 0.0692 | 0.565 | 0.7044 | 2.4019 | 0.565 | 0.5669 | 0.3598 | 0.1869 |
| 0.0745 | 21.0 | 525 | 0.0690 | 0.5775 | 0.6983 | 2.3271 | 0.5775 | 0.5805 | 0.3615 | 0.1906 |
| 0.0745 | 22.0 | 550 | 0.0689 | 0.58 | 0.6855 | 2.2368 | 0.58 | 0.5808 | 0.3572 | 0.1851 |
| 0.0745 | 23.0 | 575 | 0.0690 | 0.56 | 0.6905 | 2.4557 | 0.56 | 0.5709 | 0.3387 | 0.1925 |
| 0.0745 | 24.0 | 600 | 0.0688 | 0.57 | 0.6895 | 2.3632 | 0.57 | 0.5736 | 0.3516 | 0.1912 |
| 0.0745 | 25.0 | 625 | 0.0686 | 0.5775 | 0.6826 | 2.3272 | 0.5775 | 0.5838 | 0.3376 | 0.1802 |
| 0.0745 | 26.0 | 650 | 0.0689 | 0.5625 | 0.6886 | 2.2696 | 0.5625 | 0.5754 | 0.3445 | 0.1917 |
| 0.0745 | 27.0 | 675 | 0.0687 | 0.575 | 0.6765 | 2.3387 | 0.575 | 0.5800 | 0.3511 | 0.1861 |
| 0.0745 | 28.0 | 700 | 0.0689 | 0.5775 | 0.6785 | 2.3039 | 0.5775 | 0.5821 | 0.3546 | 0.1860 |
| 0.0745 | 29.0 | 725 | 0.0685 | 0.6 | 0.6720 | 2.4176 | 0.6 | 0.6013 | 0.3606 | 0.1750 |
| 0.0745 | 30.0 | 750 | 0.0685 | 0.5925 | 0.6690 | 2.2827 | 0.5925 | 0.5962 | 0.3646 | 0.1750 |
| 0.0745 | 31.0 | 775 | 0.0685 | 0.5825 | 0.6682 | 2.2957 | 0.5825 | 0.5885 | 0.3476 | 0.1771 |
| 0.0745 | 32.0 | 800 | 0.0687 | 0.585 | 0.6700 | 2.2669 | 0.585 | 0.5914 | 0.3428 | 0.1797 |
| 0.0745 | 33.0 | 825 | 0.0685 | 0.59 | 0.6652 | 2.3359 | 0.59 | 0.5927 | 0.3429 | 0.1775 |
| 0.0745 | 34.0 | 850 | 0.0686 | 0.5825 | 0.6717 | 2.3900 | 0.5825 | 0.5919 | 0.3453 | 0.1790 |
| 0.0745 | 35.0 | 875 | 0.0685 | 0.5875 | 0.6721 | 2.3131 | 0.5875 | 0.5932 | 0.3579 | 0.1799 |
| 0.0745 | 36.0 | 900 | 0.0686 | 0.5925 | 0.6625 | 2.3435 | 0.5925 | 0.6005 | 0.3441 | 0.1728 |
| 0.0745 | 37.0 | 925 | 0.0685 | 0.5875 | 0.6649 | 2.4475 | 0.5875 | 0.5885 | 0.3550 | 0.1756 |
| 0.0745 | 38.0 | 950 | 0.0685 | 0.5925 | 0.6607 | 2.2842 | 0.5925 | 0.5962 | 0.3410 | 0.1732 |
| 0.0745 | 39.0 | 975 | 0.0685 | 0.6 | 0.6605 | 2.2073 | 0.6 | 0.6083 | 0.3414 | 0.1708 |
| 0.0599 | 40.0 | 1000 | 0.0685 | 0.575 | 0.6578 | 2.3075 | 0.575 | 0.5788 | 0.3341 | 0.1773 |
| 0.0599 | 41.0 | 1025 | 0.0685 | 0.5975 | 0.6598 | 2.1562 | 0.5975 | 0.6067 | 0.3462 | 0.1685 |
| 0.0599 | 42.0 | 1050 | 0.0685 | 0.5925 | 0.6592 | 2.3363 | 0.5925 | 0.5999 | 0.3262 | 0.1733 |
| 0.0599 | 43.0 | 1075 | 0.0683 | 0.5925 | 0.6545 | 2.2970 | 0.5925 | 0.5975 | 0.3413 | 0.1741 |
| 0.0599 | 44.0 | 1100 | 0.0686 | 0.5975 | 0.6590 | 2.2220 | 0.5975 | 0.6061 | 0.3425 | 0.1698 |
| 0.0599 | 45.0 | 1125 | 0.0684 | 0.585 | 0.6563 | 2.2507 | 0.585 | 0.5876 | 0.3214 | 0.1795 |
| 0.0599 | 46.0 | 1150 | 0.0684 | 0.5975 | 0.6578 | 2.2677 | 0.5975 | 0.6082 | 0.3374 | 0.1712 |
| 0.0599 | 47.0 | 1175 | 0.0684 | 0.5925 | 0.6531 | 2.3091 | 0.5925 | 0.5974 | 0.3362 | 0.1716 |
| 0.0599 | 48.0 | 1200 | 0.0685 | 0.5825 | 0.6539 | 2.3803 | 0.5825 | 0.5901 | 0.3098 | 0.1790 |
| 0.0599 | 49.0 | 1225 | 0.0685 | 0.59 | 0.6518 | 2.1855 | 0.59 | 0.6001 | 0.3229 | 0.1759 |
| 0.0599 | 50.0 | 1250 | 0.0685 | 0.595 | 0.6513 | 2.3357 | 0.595 | 0.6004 | 0.3307 | 0.1711 |
| 0.0599 | 51.0 | 1275 | 0.0684 | 0.59 | 0.6499 | 2.3253 | 0.59 | 0.5968 | 0.3298 | 0.1708 |
| 0.0599 | 52.0 | 1300 | 0.0684 | 0.61 | 0.6500 | 2.3352 | 0.61 | 0.6196 | 0.3692 | 0.1687 |
| 0.0599 | 53.0 | 1325 | 0.0685 | 0.595 | 0.6518 | 2.2189 | 0.595 | 0.6036 | 0.3278 | 0.1735 |
| 0.0599 | 54.0 | 1350 | 0.0684 | 0.6025 | 0.6501 | 2.3238 | 0.6025 | 0.6114 | 0.3410 | 0.1668 |
| 0.0599 | 55.0 | 1375 | 0.0684 | 0.595 | 0.6479 | 2.2696 | 0.595 | 0.6022 | 0.3341 | 0.1719 |
| 0.0599 | 56.0 | 1400 | 0.0685 | 0.595 | 0.6496 | 2.3172 | 0.595 | 0.6008 | 0.3239 | 0.1720 |
| 0.0599 | 57.0 | 1425 | 0.0684 | 0.595 | 0.6476 | 2.2983 | 0.595 | 0.6023 | 0.3310 | 0.1667 |
| 0.0599 | 58.0 | 1450 | 0.0684 | 0.605 | 0.6483 | 2.2607 | 0.605 | 0.6140 | 0.3563 | 0.1660 |
| 0.0599 | 59.0 | 1475 | 0.0685 | 0.5975 | 0.6491 | 2.3956 | 0.5975 | 0.6091 | 0.3222 | 0.1691 |
| 0.0576 | 60.0 | 1500 | 0.0685 | 0.5925 | 0.6476 | 2.2049 | 0.5925 | 0.6032 | 0.3240 | 0.1716 |
| 0.0576 | 61.0 | 1525 | 0.0685 | 0.6 | 0.6482 | 2.3095 | 0.6 | 0.6068 | 0.3276 | 0.1703 |
| 0.0576 | 62.0 | 1550 | 0.0685 | 0.6025 | 0.6448 | 2.2755 | 0.6025 | 0.6101 | 0.3303 | 0.1673 |
| 0.0576 | 63.0 | 1575 | 0.0685 | 0.6 | 0.6480 | 2.3857 | 0.6 | 0.6078 | 0.3358 | 0.1687 |
| 0.0576 | 64.0 | 1600 | 0.0685 | 0.59 | 0.6465 | 2.3280 | 0.59 | 0.5990 | 0.3198 | 0.1705 |
| 0.0576 | 65.0 | 1625 | 0.0684 | 0.605 | 0.6438 | 2.3484 | 0.605 | 0.6125 | 0.3346 | 0.1651 |
| 0.0576 | 66.0 | 1650 | 0.0686 | 0.6 | 0.6462 | 2.2443 | 0.6 | 0.6084 | 0.3371 | 0.1706 |
| 0.0576 | 67.0 | 1675 | 0.0685 | 0.6025 | 0.6449 | 2.3717 | 0.6025 | 0.6115 | 0.3317 | 0.1674 |
| 0.0576 | 68.0 | 1700 | 0.0685 | 0.595 | 0.6449 | 2.3396 | 0.595 | 0.6003 | 0.3292 | 0.1676 |
| 0.0576 | 69.0 | 1725 | 0.0686 | 0.595 | 0.6460 | 2.3315 | 0.595 | 0.6047 | 0.3339 | 0.1683 |
| 0.0576 | 70.0 | 1750 | 0.0687 | 0.5975 | 0.6480 | 2.3967 | 0.5975 | 0.6070 | 0.3404 | 0.1702 |
| 0.0576 | 71.0 | 1775 | 0.0686 | 0.6 | 0.6456 | 2.3870 | 0.6 | 0.6095 | 0.3215 | 0.1689 |
| 0.0576 | 72.0 | 1800 | 0.0686 | 0.59 | 0.6455 | 2.3966 | 0.59 | 0.5985 | 0.3273 | 0.1691 |
| 0.0576 | 73.0 | 1825 | 0.0686 | 0.5875 | 0.6472 | 2.3619 | 0.5875 | 0.5975 | 0.3465 | 0.1711 |
| 0.0576 | 74.0 | 1850 | 0.0686 | 0.595 | 0.6436 | 2.4181 | 0.595 | 0.6054 | 0.3183 | 0.1706 |
| 0.0576 | 75.0 | 1875 | 0.0686 | 0.6 | 0.6440 | 2.4160 | 0.6 | 0.6077 | 0.3285 | 0.1677 |
| 0.0576 | 76.0 | 1900 | 0.0687 | 0.6025 | 0.6446 | 2.4184 | 0.6025 | 0.6111 | 0.3408 | 0.1685 |
| 0.0576 | 77.0 | 1925 | 0.0686 | 0.6025 | 0.6440 | 2.4208 | 0.6025 | 0.6111 | 0.3323 | 0.1670 |
| 0.0576 | 78.0 | 1950 | 0.0687 | 0.5975 | 0.6438 | 2.4236 | 0.5975 | 0.6063 | 0.3298 | 0.1689 |
| 0.0576 | 79.0 | 1975 | 0.0687 | 0.5975 | 0.6438 | 2.4521 | 0.5975 | 0.6057 | 0.3328 | 0.1692 |
| 0.0565 | 80.0 | 2000 | 0.0687 | 0.6 | 0.6448 | 2.4213 | 0.6 | 0.6088 | 0.3368 | 0.1682 |
| 0.0565 | 81.0 | 2025 | 0.0688 | 0.5975 | 0.6444 | 2.4257 | 0.5975 | 0.6076 | 0.3179 | 0.1681 |
| 0.0565 | 82.0 | 2050 | 0.0687 | 0.6 | 0.6446 | 2.4225 | 0.6 | 0.6102 | 0.3392 | 0.1673 |
| 0.0565 | 83.0 | 2075 | 0.0687 | 0.6 | 0.6437 | 2.4571 | 0.6 | 0.6091 | 0.3281 | 0.1681 |
| 0.0565 | 84.0 | 2100 | 0.0688 | 0.595 | 0.6439 | 2.4360 | 0.595 | 0.6042 | 0.3256 | 0.1685 |
| 0.0565 | 85.0 | 2125 | 0.0688 | 0.6 | 0.6436 | 2.4396 | 0.6 | 0.6104 | 0.3318 | 0.1683 |
| 0.0565 | 86.0 | 2150 | 0.0688 | 0.6 | 0.6434 | 2.3977 | 0.6 | 0.6095 | 0.3273 | 0.1675 |
| 0.0565 | 87.0 | 2175 | 0.0688 | 0.595 | 0.6432 | 2.4303 | 0.595 | 0.6053 | 0.3146 | 0.1687 |
| 0.0565 | 88.0 | 2200 | 0.0688 | 0.5975 | 0.6431 | 2.4222 | 0.5975 | 0.6071 | 0.3326 | 0.1686 |
| 0.0565 | 89.0 | 2225 | 0.0688 | 0.6 | 0.6440 | 2.4042 | 0.6 | 0.6108 | 0.3303 | 0.1678 |
| 0.0565 | 90.0 | 2250 | 0.0688 | 0.6 | 0.6433 | 2.3998 | 0.6 | 0.6096 | 0.3301 | 0.1679 |
| 0.0565 | 91.0 | 2275 | 0.0689 | 0.6 | 0.6434 | 2.4026 | 0.6 | 0.6108 | 0.3362 | 0.1680 |
| 0.0565 | 92.0 | 2300 | 0.0689 | 0.5975 | 0.6435 | 2.4037 | 0.5975 | 0.6083 | 0.3335 | 0.1680 |
| 0.0565 | 93.0 | 2325 | 0.0689 | 0.5975 | 0.6434 | 2.4060 | 0.5975 | 0.6077 | 0.3344 | 0.1679 |
| 0.0565 | 94.0 | 2350 | 0.0689 | 0.6 | 0.6433 | 2.4024 | 0.6 | 0.6106 | 0.3204 | 0.1683 |
| 0.0565 | 95.0 | 2375 | 0.0689 | 0.595 | 0.6432 | 2.4060 | 0.595 | 0.6052 | 0.3423 | 0.1684 |
| 0.0565 | 96.0 | 2400 | 0.0689 | 0.6 | 0.6432 | 2.4044 | 0.6 | 0.6101 | 0.3404 | 0.1684 |
| 0.0565 | 97.0 | 2425 | 0.0689 | 0.6 | 0.6434 | 2.4042 | 0.6 | 0.6101 | 0.3349 | 0.1683 |
| 0.0565 | 98.0 | 2450 | 0.0689 | 0.6 | 0.6432 | 2.4055 | 0.6 | 0.6101 | 0.3390 | 0.1684 |
| 0.0565 | 99.0 | 2475 | 0.0689 | 0.6 | 0.6433 | 2.4056 | 0.6 | 0.6101 | 0.3393 | 0.1685 |
| 0.056 | 100.0 | 2500 | 0.0689 | 0.6 | 0.6433 | 2.4057 | 0.6 | 0.6101 | 0.3353 | 0.1685 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ben-xl8/wmt22-cometkiwi-da
|
ben-xl8
| 2023-07-11T23:59:39Z | 0 | 1 | null |
[
"translation",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-11T20:03:20Z |
---
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
pipeline_tag: translation
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: cc-by-nc-sa-4.0
---
This is a [COMET](https://github.com/Unbabel/COMET) quality estimation model by Unbabel: It receives a source sentence and the respective translation and returns a score that reflects the quality of the translation.
# Paper
[CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task](https://aclanthology.org/2022.wmt-1.60) (Rei et al., WMT 2022)
# License:
cc-by-nc-sa-4.0
# Usage for Inference Endpoint
```python
import json
import requests
API_URL = ""
API_TOKEN="MY_API_KEY"
headers = {
"Authorization": f"Bearer {API_TOKEN}",
"Content-Type": "application/json",
}
def query(url, headers, payload):
data = json.dumps(payload)
response = requests.request("POST", url, headers=headers, data=data)
return json.loads(response.content.decode("utf-8"))
payload = {
"inputs": {
"batch_size": 8,
"workers": None,
"data": [
{
"src": "Youll be picking fruit and generally helping us do all the usual farm work",
"mt": "당신은 과일을 따기도 하고 대체로 우리가 하는 일상적인 농장 일을 돕게 될 겁니다",
},{
"src": "Youll be picking fruit and generally helping us do all the usual farm work",
"mt": "당신은 과일을 따기도 하고 대체로 우리가 하는 일상적인 농장 일을 돕게 될 겁니다",
},{
"src": "Youll be picking fruit and generally helping us do all the usual farm work",
"mt": "당신은 과일을 따기도 하고 대체로 우리가 하는 일상적인 농장 일을 돕게 될 겁니다",
},{
"src": "Youll be picking fruit and generally helping us do all the usual farm work",
"mt": "당신은 과일을 따기도 하고 대체로 우리가 하는 일상적인 농장 일을 돕게 될 겁니다",
},{
"src": "Youll be picking fruit and generally helping us do all the usual farm work",
"mt": "당신은 과일을 따기도 하고 대체로 우리가 하는 일상적인 농장 일을 돕게 될 겁니다",
},{
"src": "Youll be picking fruit and generally helping us do all the usual farm work",
"mt": "당신은 과일을 따기도 하고 대체로 우리가 하는 일상적인 농장 일을 돕게 될 겁니다",
},{
"src": "Youll be picking fruit and generally helping us do all the usual farm work",
"mt": "당신은 과일을 따기도 하고 대체로 우리가 하는 일상적인 농장 일을 돕게 될 겁니다",
},
]
}
}
scores = query(API_URL, headers, payload)
```
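For local scoring outside an Inference Endpoint, the upstream [COMET](https://github.com/Unbabel/COMET) library can be used. The sketch below assumes the original gated `Unbabel/wmt22-cometkiwi-da` checkpoint id; this mirror's files may instead require pointing `load_from_checkpoint` at a downloaded `.ckpt` path.
```python
from comet import download_model, load_from_checkpoint

# Download and load the quality-estimation checkpoint (requires accepting the license on the Hub)
model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)

data = [{
    "src": "Youll be picking fruit and generally helping us do all the usual farm work",
    "mt": "당신은 과일을 따기도 하고 대체로 우리가 하는 일상적인 농장 일을 돕게 될 겁니다",
}]
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores)        # per-segment quality scores
print(output.system_score)  # corpus-level average
```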
# Intended uses
Unbabel's model is intended to be used for **reference-free MT evaluation**.
Given a source text and its translation, it outputs a single score between 0 and 1, where 1 represents a perfect translation.
# Languages Covered:
This model builds on top of InfoXLM, which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
|
KnutJaegersberg/wikipedia_categories_setfit
|
KnutJaegersberg
| 2023-07-11T23:52:45Z | 2 | 1 |
setfit
|
[
"setfit",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"e5",
"dataset:KnutJaegersberg/wikipedia_categories",
"dataset:KnutJaegersberg/wikipedia_categories_labels",
"license:mit",
"region:us"
] |
sentence-similarity
| 2023-07-11T11:01:19Z |
---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- setfit
- e5
license: mit
datasets:
- KnutJaegersberg/wikipedia_categories
- KnutJaegersberg/wikipedia_categories_labels
---
This English model (based on e5-large) predicts Wikipedia categories (roughly 37 labels). It was trained in a few-shot setting on the concatenated headlines of lower-level category articles (i.e. 8 subcategories, each represented by its headline concatenation, per level-2 category).
Accuracy on the test split is 85%.
Note that this number is only an indicator that training worked; performance will differ in production settings, which is why this classifier is meant for corpus exploration.
Use the wikipedia_categories_labels dataset as the key for mapping predictions to label names.
```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("KnutJaegersberg/wikipedia_categories_setfit")

# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
|
Evan-Lin/Bart-RL-many-entailment-keywordmax-attractive
|
Evan-Lin
| 2023-07-11T23:11:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-11T06:12:39Z |
attractive 1
keyword 1/4
entailment 1
mul 10
normalization
|
sl8425/troubleshooting_steps_printer
|
sl8425
| 2023-07-11T23:07:39Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T20:49:26Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: sl8425/troubleshooting_steps_printer
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sl8425/troubleshooting_steps_printer
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8644
- Validation Loss: 0.8744
- Train Accuracy: 0.7457
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 369, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.6729 | 1.1343 | 0.6428 | 0 |
| 1.0262 | 0.9056 | 0.7366 | 1 |
| 0.8644 | 0.8744 | 0.7457 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
talaugust/sci-writing-strategies
|
talaugust
| 2023-07-11T23:05:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-11T22:42:06Z |
RoBERTa Science writing strategy classifiers
This is a finetuned BART Large model from the paper:
"Writing Strategies for Science Communication: Data and Computational Analysis",
By Tal August, Lauren Kim, Katharina Reinecke, and Noah A. Smith
Published at the Conference on Empirical Methods in Natural Language Processing (EMNLP) 2020.
Abstract: Communicating complex scientific ideas without misleading or overwhelming the public is challenging. While science communication guides exist, they rarely offer empirical evidence for how their strategies are used in practice. Writing strategies that can be automatically recognized could greatly support science communication efforts by enabling tools to detect and suggest strategies for writers. We compile a set of writing strategies drawn from a wide range of prescriptive sources and develop an annotation scheme allowing humans to recognize them. We collect a corpus of 128k science writing documents in English and annotate a subset of this corpus. We use the annotations to train transformer-based classifiers and measure the strategies’ use in the larger corpus. We find that the use of strategies, such as storytelling and emphasizing the most important findings, varies significantly across publications with different reader audiences.
Description
The model is finetuned on the task of identifying if a given sentence from a science news article is using a particular writing strategy (e.g., emphasizing the real world impact of the scientific findings).
The intended use of this model is to identify common science communication writing strategies.
The model is trained on annotated sentences drawn from science news articles. The URLs for the original news articles are at [https://github.com/talaugust/scientific-writing-strategies].
Biases & Limitations
The goal of this model is to enable a wider audience of readers to understand and engage with scientific writing. A risk, though, is that such attempts might instead widen the gap to accessing scientific information. The texts in the datasets we train our models on are in General or Academic American English. Many people, especially those who have been historically underrepresented in STEM disciplines and medicine, may not be comfortable with this dialect of English. This risks further alienating the readers we hope to serve. An important and exciting direction in NLP is making models more flexible to dialects and low-resource languages.
|
jovi848/autotrain-eng-ta-json-73876139369
|
jovi848
| 2023-07-11T23:05:03Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:jovi848/autotrain-data-eng-ta-json",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-11T22:16:50Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- jovi848/autotrain-data-eng-ta-json
co2_eq_emissions:
emissions: 33.5213011411702
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 73876139369
- CO2 Emissions (in grams): 33.5213
## Validation Metrics
- Loss: 0.000
- SacreBLEU: 0.001
- Gen len: 19.000
|
SHENMU007/neunit_BASE_V11.4
|
SHENMU007
| 2023-07-11T22:52:37Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-11T20:08:34Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
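The card has no usage example; a hedged sketch following the standard SpeechT5 recipe from the Transformers docs is shown below. The speaker embedding is taken from an English x-vector set and may not suit this Chinese fine-tune, so treat it as a placeholder.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V11.4")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V11.4")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好，世界", return_tensors="pt")

# Placeholder speaker embedding (CMU Arctic x-vectors, as in the Transformers docs example)
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```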
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
crowbarmassage/ppo-Pyramids
|
crowbarmassage
| 2023-07-11T22:48:42Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-11T22:48:40Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: crowbarmassage/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
fgeyer/a2c-AntBulletEnv-v0
|
fgeyer
| 2023-07-11T22:40:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T22:24:37Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2384.01 +/- 64.45
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
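As with the other Stable-Baselines3 template cards, a minimal loading sketch might look like this; the filename and the `pybullet_envs` registration import are assumptions.
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="fgeyer/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
env = gym.make("AntBulletEnv-v0")
```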
|
jordyvl/vit-small_tobacco3482_kd_NKD_t1.0_g1.5
|
jordyvl
| 2023-07-11T22:34:14Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-11T21:57:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_NKD_t1.0_g1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_NKD_t1.0_g1.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9399
- Accuracy: 0.82
- Brier Loss: 0.3024
- Nll: 1.1952
- F1 Micro: 0.82
- F1 Macro: 0.7964
- Ece: 0.1494
- Aurc: 0.0548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 4.9730 | 0.205 | 0.8886 | 5.4337 | 0.205 | 0.1356 | 0.2736 | 0.7527 |
| No log | 2.0 | 14 | 4.6039 | 0.355 | 0.8122 | 3.5880 | 0.3550 | 0.2120 | 0.3197 | 0.5132 |
| No log | 3.0 | 21 | 4.2754 | 0.515 | 0.7054 | 2.0283 | 0.515 | 0.4046 | 0.3275 | 0.2966 |
| No log | 4.0 | 28 | 4.0263 | 0.6 | 0.5799 | 1.5709 | 0.6 | 0.5157 | 0.3062 | 0.1890 |
| No log | 5.0 | 35 | 3.8749 | 0.725 | 0.4857 | 1.5338 | 0.7250 | 0.6949 | 0.3181 | 0.1194 |
| No log | 6.0 | 42 | 3.7023 | 0.765 | 0.3925 | 1.2926 | 0.765 | 0.6948 | 0.2394 | 0.0908 |
| No log | 7.0 | 49 | 3.7728 | 0.78 | 0.3668 | 1.3007 | 0.78 | 0.7355 | 0.2478 | 0.0754 |
| No log | 8.0 | 56 | 3.7328 | 0.785 | 0.3459 | 1.2487 | 0.785 | 0.7501 | 0.2261 | 0.0804 |
| No log | 9.0 | 63 | 3.7092 | 0.77 | 0.3289 | 1.0921 | 0.7700 | 0.7672 | 0.2019 | 0.0767 |
| No log | 10.0 | 70 | 3.6273 | 0.795 | 0.3150 | 1.0342 | 0.795 | 0.7690 | 0.1927 | 0.0716 |
| No log | 11.0 | 77 | 3.5677 | 0.83 | 0.2754 | 1.3837 | 0.83 | 0.7933 | 0.1697 | 0.0532 |
| No log | 12.0 | 84 | 3.5668 | 0.815 | 0.2816 | 1.1304 | 0.815 | 0.7934 | 0.1563 | 0.0617 |
| No log | 13.0 | 91 | 3.6080 | 0.83 | 0.2723 | 0.9515 | 0.83 | 0.8088 | 0.1648 | 0.0543 |
| No log | 14.0 | 98 | 3.6095 | 0.815 | 0.3050 | 1.2020 | 0.815 | 0.8207 | 0.1523 | 0.0633 |
| No log | 15.0 | 105 | 3.6685 | 0.805 | 0.3060 | 1.2725 | 0.805 | 0.7920 | 0.1618 | 0.0676 |
| No log | 16.0 | 112 | 3.5523 | 0.825 | 0.2832 | 0.9447 | 0.825 | 0.8163 | 0.1614 | 0.0569 |
| No log | 17.0 | 119 | 3.5294 | 0.805 | 0.2752 | 0.9918 | 0.805 | 0.7636 | 0.1537 | 0.0549 |
| No log | 18.0 | 126 | 3.5382 | 0.8 | 0.2870 | 1.2294 | 0.8000 | 0.7885 | 0.1603 | 0.0583 |
| No log | 19.0 | 133 | 3.5541 | 0.82 | 0.2905 | 1.2181 | 0.82 | 0.8204 | 0.1400 | 0.0618 |
| No log | 20.0 | 140 | 3.4717 | 0.835 | 0.2606 | 1.1119 | 0.835 | 0.8146 | 0.1382 | 0.0531 |
| No log | 21.0 | 147 | 3.6074 | 0.79 | 0.3099 | 1.2144 | 0.79 | 0.7771 | 0.1419 | 0.0599 |
| No log | 22.0 | 154 | 3.5448 | 0.805 | 0.2868 | 1.2075 | 0.805 | 0.7761 | 0.1439 | 0.0581 |
| No log | 23.0 | 161 | 3.6070 | 0.805 | 0.3057 | 1.2908 | 0.805 | 0.7831 | 0.1393 | 0.0627 |
| No log | 24.0 | 168 | 3.5289 | 0.81 | 0.2716 | 1.1844 | 0.81 | 0.7879 | 0.1358 | 0.0550 |
| No log | 25.0 | 175 | 3.5502 | 0.82 | 0.2827 | 1.1141 | 0.82 | 0.7908 | 0.1460 | 0.0554 |
| No log | 26.0 | 182 | 3.5747 | 0.82 | 0.2829 | 1.1727 | 0.82 | 0.8027 | 0.1330 | 0.0565 |
| No log | 27.0 | 189 | 3.6091 | 0.83 | 0.2787 | 1.1040 | 0.83 | 0.8067 | 0.1347 | 0.0563 |
| No log | 28.0 | 196 | 3.5917 | 0.82 | 0.2837 | 1.1775 | 0.82 | 0.7975 | 0.1513 | 0.0564 |
| No log | 29.0 | 203 | 3.6087 | 0.815 | 0.2875 | 1.1448 | 0.815 | 0.7998 | 0.1339 | 0.0542 |
| No log | 30.0 | 210 | 3.6018 | 0.815 | 0.2819 | 1.1613 | 0.815 | 0.8027 | 0.1507 | 0.0535 |
| No log | 31.0 | 217 | 3.6350 | 0.815 | 0.2845 | 1.2278 | 0.815 | 0.7866 | 0.1401 | 0.0537 |
| No log | 32.0 | 224 | 3.6290 | 0.82 | 0.2815 | 1.1528 | 0.82 | 0.7950 | 0.1424 | 0.0520 |
| No log | 33.0 | 231 | 3.6642 | 0.815 | 0.2865 | 1.1504 | 0.815 | 0.7946 | 0.1379 | 0.0542 |
| No log | 34.0 | 238 | 3.6778 | 0.815 | 0.2929 | 1.2116 | 0.815 | 0.7890 | 0.1437 | 0.0538 |
| No log | 35.0 | 245 | 3.6867 | 0.82 | 0.2869 | 1.1547 | 0.82 | 0.7904 | 0.1404 | 0.0529 |
| No log | 36.0 | 252 | 3.6931 | 0.795 | 0.2946 | 1.1478 | 0.795 | 0.7694 | 0.1494 | 0.0543 |
| No log | 37.0 | 259 | 3.7166 | 0.82 | 0.2921 | 1.2109 | 0.82 | 0.7904 | 0.1489 | 0.0534 |
| No log | 38.0 | 266 | 3.7024 | 0.81 | 0.2889 | 1.1516 | 0.81 | 0.7888 | 0.1508 | 0.0536 |
| No log | 39.0 | 273 | 3.7353 | 0.81 | 0.2943 | 1.2088 | 0.81 | 0.7812 | 0.1466 | 0.0537 |
| No log | 40.0 | 280 | 3.7198 | 0.82 | 0.2891 | 1.1515 | 0.82 | 0.8014 | 0.1285 | 0.0536 |
| No log | 41.0 | 287 | 3.7413 | 0.815 | 0.2899 | 1.2124 | 0.815 | 0.7959 | 0.1471 | 0.0537 |
| No log | 42.0 | 294 | 3.7272 | 0.82 | 0.2896 | 1.2071 | 0.82 | 0.8002 | 0.1414 | 0.0532 |
| No log | 43.0 | 301 | 3.7609 | 0.815 | 0.2925 | 1.2100 | 0.815 | 0.7868 | 0.1486 | 0.0528 |
| No log | 44.0 | 308 | 3.7589 | 0.815 | 0.2922 | 1.2074 | 0.815 | 0.7877 | 0.1398 | 0.0537 |
| No log | 45.0 | 315 | 3.7820 | 0.815 | 0.2961 | 1.2078 | 0.815 | 0.7874 | 0.1499 | 0.0535 |
| No log | 46.0 | 322 | 3.7663 | 0.82 | 0.2926 | 1.2053 | 0.82 | 0.8014 | 0.1369 | 0.0532 |
| No log | 47.0 | 329 | 3.7850 | 0.82 | 0.2944 | 1.2079 | 0.82 | 0.7904 | 0.1374 | 0.0532 |
| No log | 48.0 | 336 | 3.7802 | 0.82 | 0.2935 | 1.2025 | 0.82 | 0.7981 | 0.1483 | 0.0537 |
| No log | 49.0 | 343 | 3.7954 | 0.82 | 0.2937 | 1.2068 | 0.82 | 0.7900 | 0.1354 | 0.0528 |
| No log | 50.0 | 350 | 3.7974 | 0.815 | 0.2954 | 1.2020 | 0.815 | 0.7907 | 0.1491 | 0.0534 |
| No log | 51.0 | 357 | 3.8081 | 0.815 | 0.2965 | 1.2035 | 0.815 | 0.7907 | 0.1533 | 0.0533 |
| No log | 52.0 | 364 | 3.8171 | 0.815 | 0.2982 | 1.2033 | 0.815 | 0.7907 | 0.1466 | 0.0537 |
| No log | 53.0 | 371 | 3.8136 | 0.815 | 0.2961 | 1.2035 | 0.815 | 0.7907 | 0.1399 | 0.0531 |
| No log | 54.0 | 378 | 3.8244 | 0.815 | 0.2977 | 1.2024 | 0.815 | 0.7907 | 0.1586 | 0.0538 |
| No log | 55.0 | 385 | 3.8265 | 0.815 | 0.2963 | 1.2004 | 0.815 | 0.7907 | 0.1506 | 0.0537 |
| No log | 56.0 | 392 | 3.8376 | 0.82 | 0.2980 | 1.2011 | 0.82 | 0.7964 | 0.1471 | 0.0536 |
| No log | 57.0 | 399 | 3.8428 | 0.82 | 0.2982 | 1.1994 | 0.82 | 0.7964 | 0.1562 | 0.0535 |
| No log | 58.0 | 406 | 3.8418 | 0.82 | 0.2973 | 1.2004 | 0.82 | 0.7964 | 0.1484 | 0.0537 |
| No log | 59.0 | 413 | 3.8507 | 0.82 | 0.2984 | 1.2009 | 0.82 | 0.7931 | 0.1563 | 0.0538 |
| No log | 60.0 | 420 | 3.8560 | 0.82 | 0.2989 | 1.2001 | 0.82 | 0.7964 | 0.1579 | 0.0540 |
| No log | 61.0 | 427 | 3.8563 | 0.82 | 0.2974 | 1.1997 | 0.82 | 0.7964 | 0.1560 | 0.0536 |
| No log | 62.0 | 434 | 3.8648 | 0.815 | 0.2986 | 1.1995 | 0.815 | 0.7907 | 0.1532 | 0.0540 |
| No log | 63.0 | 441 | 3.8682 | 0.82 | 0.2991 | 1.1991 | 0.82 | 0.7964 | 0.1570 | 0.0536 |
| No log | 64.0 | 448 | 3.8735 | 0.82 | 0.2989 | 1.1984 | 0.82 | 0.7964 | 0.1481 | 0.0539 |
| No log | 65.0 | 455 | 3.8794 | 0.82 | 0.3000 | 1.1981 | 0.82 | 0.7964 | 0.1496 | 0.0543 |
| No log | 66.0 | 462 | 3.8824 | 0.82 | 0.3002 | 1.1980 | 0.82 | 0.7964 | 0.1567 | 0.0539 |
| No log | 67.0 | 469 | 3.8842 | 0.82 | 0.3005 | 1.1983 | 0.82 | 0.7964 | 0.1438 | 0.0542 |
| No log | 68.0 | 476 | 3.8866 | 0.82 | 0.3001 | 1.1978 | 0.82 | 0.7964 | 0.1418 | 0.0540 |
| No log | 69.0 | 483 | 3.8912 | 0.82 | 0.3003 | 1.1977 | 0.82 | 0.7964 | 0.1570 | 0.0541 |
| No log | 70.0 | 490 | 3.8959 | 0.82 | 0.3008 | 1.1971 | 0.82 | 0.7964 | 0.1445 | 0.0544 |
| No log | 71.0 | 497 | 3.8964 | 0.82 | 0.3002 | 1.1977 | 0.82 | 0.7964 | 0.1366 | 0.0543 |
| 3.4649 | 72.0 | 504 | 3.9021 | 0.82 | 0.3009 | 1.1969 | 0.82 | 0.7964 | 0.1471 | 0.0543 |
| 3.4649 | 73.0 | 511 | 3.9052 | 0.82 | 0.3015 | 1.1976 | 0.82 | 0.7964 | 0.1532 | 0.0546 |
| 3.4649 | 74.0 | 518 | 3.9043 | 0.82 | 0.3002 | 1.1973 | 0.82 | 0.7964 | 0.1371 | 0.0544 |
| 3.4649 | 75.0 | 525 | 3.9096 | 0.82 | 0.3004 | 1.1966 | 0.82 | 0.7964 | 0.1417 | 0.0543 |
| 3.4649 | 76.0 | 532 | 3.9099 | 0.82 | 0.3010 | 1.1965 | 0.82 | 0.7964 | 0.1428 | 0.0545 |
| 3.4649 | 77.0 | 539 | 3.9151 | 0.82 | 0.3016 | 1.1963 | 0.82 | 0.7964 | 0.1460 | 0.0548 |
| 3.4649 | 78.0 | 546 | 3.9143 | 0.82 | 0.3010 | 1.1970 | 0.82 | 0.7964 | 0.1447 | 0.0543 |
| 3.4649 | 79.0 | 553 | 3.9164 | 0.82 | 0.3014 | 1.1966 | 0.82 | 0.7964 | 0.1436 | 0.0545 |
| 3.4649 | 80.0 | 560 | 3.9198 | 0.82 | 0.3018 | 1.1965 | 0.82 | 0.7964 | 0.1520 | 0.0545 |
| 3.4649 | 81.0 | 567 | 3.9218 | 0.82 | 0.3015 | 1.1959 | 0.82 | 0.7964 | 0.1440 | 0.0546 |
| 3.4649 | 82.0 | 574 | 3.9236 | 0.82 | 0.3018 | 1.1961 | 0.82 | 0.7964 | 0.1439 | 0.0546 |
| 3.4649 | 83.0 | 581 | 3.9248 | 0.82 | 0.3017 | 1.1959 | 0.82 | 0.7964 | 0.1440 | 0.0546 |
| 3.4649 | 84.0 | 588 | 3.9267 | 0.82 | 0.3018 | 1.1958 | 0.82 | 0.7964 | 0.1442 | 0.0545 |
| 3.4649 | 85.0 | 595 | 3.9286 | 0.82 | 0.3019 | 1.1959 | 0.82 | 0.7964 | 0.1443 | 0.0546 |
| 3.4649 | 86.0 | 602 | 3.9300 | 0.82 | 0.3020 | 1.1958 | 0.82 | 0.7964 | 0.1444 | 0.0545 |
| 3.4649 | 87.0 | 609 | 3.9320 | 0.82 | 0.3022 | 1.1956 | 0.82 | 0.7964 | 0.1446 | 0.0546 |
| 3.4649 | 88.0 | 616 | 3.9327 | 0.82 | 0.3022 | 1.1957 | 0.82 | 0.7964 | 0.1446 | 0.0545 |
| 3.4649 | 89.0 | 623 | 3.9340 | 0.82 | 0.3022 | 1.1955 | 0.82 | 0.7964 | 0.1436 | 0.0546 |
| 3.4649 | 90.0 | 630 | 3.9346 | 0.82 | 0.3022 | 1.1956 | 0.82 | 0.7964 | 0.1447 | 0.0546 |
| 3.4649 | 91.0 | 637 | 3.9360 | 0.82 | 0.3023 | 1.1953 | 0.82 | 0.7964 | 0.1438 | 0.0546 |
| 3.4649 | 92.0 | 644 | 3.9368 | 0.82 | 0.3023 | 1.1954 | 0.82 | 0.7964 | 0.1438 | 0.0546 |
| 3.4649 | 93.0 | 651 | 3.9374 | 0.82 | 0.3023 | 1.1954 | 0.82 | 0.7964 | 0.1437 | 0.0548 |
| 3.4649 | 94.0 | 658 | 3.9380 | 0.82 | 0.3023 | 1.1953 | 0.82 | 0.7964 | 0.1438 | 0.0548 |
| 3.4649 | 95.0 | 665 | 3.9385 | 0.82 | 0.3023 | 1.1953 | 0.82 | 0.7964 | 0.1494 | 0.0549 |
| 3.4649 | 96.0 | 672 | 3.9391 | 0.82 | 0.3024 | 1.1952 | 0.82 | 0.7964 | 0.1494 | 0.0548 |
| 3.4649 | 97.0 | 679 | 3.9393 | 0.82 | 0.3024 | 1.1952 | 0.82 | 0.7964 | 0.1495 | 0.0548 |
| 3.4649 | 98.0 | 686 | 3.9396 | 0.82 | 0.3024 | 1.1952 | 0.82 | 0.7964 | 0.1494 | 0.0548 |
| 3.4649 | 99.0 | 693 | 3.9398 | 0.82 | 0.3024 | 1.1952 | 0.82 | 0.7964 | 0.1494 | 0.0548 |
| 3.4649 | 100.0 | 700 | 3.9399 | 0.82 | 0.3024 | 1.1952 | 0.82 | 0.7964 | 0.1494 | 0.0548 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
0xMaka/based-bert-sc
|
0xMaka
| 2023-07-11T22:28:41Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:0xMaka/trading-candles-subset-sc-format",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T17:56:55Z |
---
datasets:
- 0xMaka/trading-candles-subset-sc-format
language:
- en
metrics:
- accuracy
- f1
widget:
- text: 'identify candle: 17284.58,17264.41,17284.58,17264.41'
example_title: Bear
- text: 'identify candle: open: 17343.43, close: 17625.18, high: 17804.68, low: 17322.15'
example_title: Bull
license: gpl
---
# Based Bert for sequence classification
This model is a POC and shouldn't be used for any production task.
## Model description
Based Bert SC is a text classification model for binary classification of a trading candle's opening and closing prices.
## Uses and limitations
This model can reliably return the bullish or bearish status of a candle given the opening, closing, high, and low prices in the formats shown.
It will have trouble if the order of the numbers changes (even if tags are included).
### How to use
You can use this model directly with a pipeline
```python
>>> from transformers import pipeline
>>> pipe = pipeline("text-classification", model="0xMaka/based-bert-sc")
>>> text = "identify candle: open: 21788.19, close: 21900, high: 21965.23, low: 21788.19"
>>> pipe(text)
[{'label': 'Bullish', 'score': 0.9999682903289795}]
```
## Finetuning
For parameters: https://github.com/0xMaka/based-bert-sc/blob/main/trainer.py
This model was fine-tuned on an RTX-3060-Mobile
```
// BUS_WIDTH = 192
// CLOCK_RATE = 1750
// DDR_MULTI = 8 // DDR6
// BWTheoretical = (((CLOCK_RATE * (10 ** 6)) * (BUS_WIDTH/8)) * DDR_MULTI) / (10 ** 9)
// BWTheoretical == 336 GB/s
```
Self-measured effective (GB/s): 316.280736
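A quick check of the arithmetic above in Python:
```python
clock_rate, bus_width, ddr_multi = 1750, 192, 8
bw_theoretical = (clock_rate * 10**6) * (bus_width / 8) * ddr_multi / 10**9
print(bw_theoretical)  # 336.0 GB/s theoretical vs. ~316.3 GB/s measured
```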
|
AACEE/textual_inversion_cat
|
AACEE
| 2023-07-11T22:10:09Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-08T07:11:18Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - AACEE/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
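A hedged loading sketch with 🧨 diffusers is shown below; the placeholder token string is an assumption, so check the repository's learned embedding / token name before use.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned embedding from this repository
pipe.load_textual_inversion("AACEE/textual_inversion_cat")

# "<cat-toy>" is an assumed placeholder token from the textual-inversion tutorial
image = pipe("a photo of a <cat-toy> sitting on a beach").images[0]
image.save("cat.png")
```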
|
crowbarmassage/ppo-SnowballTarget
|
crowbarmassage
| 2023-07-11T22:08:09Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-11T22:08:08Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: crowbarmassage/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kyungsukim-ai/distilbert-base-uncased-finetuned-squad
|
kyungsukim-ai
| 2023-07-11T22:05:45Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-09T23:24:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
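A short question-answering sketch with the `pipeline` API (question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kyungsukim-ai/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```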
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ysmaicon/distilbert-base-uncased-finetuned-cola
|
ysmaicon
| 2023-07-11T22:04:26Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T21:15:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ysmaicon/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ysmaicon/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1941
- Validation Loss: 0.5355
- Train Matthews Correlation: 0.5256
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5184 | 0.4689 | 0.4919 | 0 |
| 0.3229 | 0.4772 | 0.5191 | 1 |
| 0.1941 | 0.5355 | 0.5256 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
epicmobile181/huggingface_sequence_classification
|
epicmobile181
| 2023-07-11T22:03:01Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T18:00:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: huggingface_sequence_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huggingface_sequence_classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-tiny_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone
|
jordyvl
| 2023-07-11T21:56:36Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-11T21:05:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0396
- Accuracy: 0.735
- Brier Loss: 0.7729
- Nll: 1.4473
- F1 Micro: 0.735
- F1 Macro: 0.6948
- Ece: 0.5886
- Aurc: 0.0947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 0.0553 | 0.085 | 0.8991 | 5.2518 | 0.085 | 0.0595 | 0.1614 | 0.8792 |
| No log | 2.0 | 50 | 0.0488 | 0.035 | 0.9007 | 7.4288 | 0.035 | 0.0069 | 0.1218 | 0.9410 |
| No log | 3.0 | 75 | 0.0481 | 0.045 | 0.8999 | 6.0525 | 0.045 | 0.0087 | 0.1349 | 0.9308 |
| No log | 4.0 | 100 | 0.0478 | 0.05 | 0.8991 | 5.6444 | 0.0500 | 0.0149 | 0.1378 | 0.9211 |
| No log | 5.0 | 125 | 0.0475 | 0.14 | 0.8981 | 5.8239 | 0.14 | 0.0863 | 0.1987 | 0.8452 |
| No log | 6.0 | 150 | 0.0471 | 0.305 | 0.8964 | 5.7469 | 0.305 | 0.1652 | 0.3097 | 0.5016 |
| No log | 7.0 | 175 | 0.0466 | 0.305 | 0.8950 | 4.9568 | 0.305 | 0.1899 | 0.3035 | 0.5165 |
| No log | 8.0 | 200 | 0.0461 | 0.315 | 0.8931 | 4.8214 | 0.315 | 0.1811 | 0.3152 | 0.4687 |
| No log | 9.0 | 225 | 0.0455 | 0.315 | 0.8907 | 4.7406 | 0.315 | 0.2028 | 0.3225 | 0.4671 |
| No log | 10.0 | 250 | 0.0449 | 0.35 | 0.8862 | 4.7538 | 0.35 | 0.1972 | 0.3364 | 0.4263 |
| No log | 11.0 | 275 | 0.0443 | 0.37 | 0.8793 | 4.6283 | 0.37 | 0.2106 | 0.3455 | 0.4084 |
| No log | 12.0 | 300 | 0.0438 | 0.4 | 0.8731 | 3.9664 | 0.4000 | 0.2443 | 0.3685 | 0.3731 |
| No log | 13.0 | 325 | 0.0434 | 0.425 | 0.8628 | 3.9702 | 0.425 | 0.2574 | 0.3842 | 0.3601 |
| No log | 14.0 | 350 | 0.0430 | 0.465 | 0.8586 | 3.8630 | 0.465 | 0.3226 | 0.4112 | 0.3024 |
| No log | 15.0 | 375 | 0.0428 | 0.46 | 0.8488 | 3.9046 | 0.46 | 0.2693 | 0.4082 | 0.2854 |
| No log | 16.0 | 400 | 0.0424 | 0.475 | 0.8430 | 3.2916 | 0.4750 | 0.2802 | 0.4183 | 0.2626 |
| No log | 17.0 | 425 | 0.0421 | 0.555 | 0.8439 | 2.7780 | 0.555 | 0.4109 | 0.4760 | 0.2123 |
| No log | 18.0 | 450 | 0.0418 | 0.575 | 0.8317 | 2.8629 | 0.575 | 0.4399 | 0.4869 | 0.2123 |
| No log | 19.0 | 475 | 0.0415 | 0.665 | 0.8329 | 2.5145 | 0.665 | 0.5077 | 0.5655 | 0.1361 |
| 0.0491 | 20.0 | 500 | 0.0412 | 0.635 | 0.8121 | 2.7489 | 0.635 | 0.5155 | 0.5235 | 0.1686 |
| 0.0491 | 21.0 | 525 | 0.0410 | 0.655 | 0.8221 | 1.7853 | 0.655 | 0.5182 | 0.5509 | 0.1545 |
| 0.0491 | 22.0 | 550 | 0.0406 | 0.685 | 0.8045 | 1.5894 | 0.685 | 0.5486 | 0.5627 | 0.1305 |
| 0.0491 | 23.0 | 575 | 0.0405 | 0.68 | 0.7984 | 1.7241 | 0.68 | 0.5489 | 0.5545 | 0.1296 |
| 0.0491 | 24.0 | 600 | 0.0402 | 0.725 | 0.7959 | 1.5667 | 0.7250 | 0.6156 | 0.5926 | 0.1055 |
| 0.0491 | 25.0 | 625 | 0.0402 | 0.68 | 0.7927 | 1.4334 | 0.68 | 0.5853 | 0.5453 | 0.1239 |
| 0.0491 | 26.0 | 650 | 0.0401 | 0.705 | 0.7808 | 1.8114 | 0.705 | 0.5856 | 0.5735 | 0.1109 |
| 0.0491 | 27.0 | 675 | 0.0399 | 0.71 | 0.7859 | 1.6101 | 0.7100 | 0.6176 | 0.5679 | 0.1034 |
| 0.0491 | 28.0 | 700 | 0.0399 | 0.715 | 0.7808 | 1.3423 | 0.715 | 0.6612 | 0.5582 | 0.1130 |
| 0.0491 | 29.0 | 725 | 0.0398 | 0.705 | 0.7789 | 1.3921 | 0.705 | 0.6477 | 0.5615 | 0.1175 |
| 0.0491 | 30.0 | 750 | 0.0397 | 0.73 | 0.7767 | 1.5801 | 0.7300 | 0.6758 | 0.5741 | 0.1069 |
| 0.0491 | 31.0 | 775 | 0.0397 | 0.72 | 0.7774 | 1.3193 | 0.72 | 0.6653 | 0.5790 | 0.1004 |
| 0.0491 | 32.0 | 800 | 0.0396 | 0.745 | 0.7729 | 1.4864 | 0.745 | 0.6931 | 0.5941 | 0.0933 |
| 0.0491 | 33.0 | 825 | 0.0396 | 0.74 | 0.7736 | 1.5161 | 0.74 | 0.6901 | 0.5828 | 0.0934 |
| 0.0491 | 34.0 | 850 | 0.0396 | 0.745 | 0.7754 | 1.5432 | 0.745 | 0.6963 | 0.5911 | 0.0857 |
| 0.0491 | 35.0 | 875 | 0.0396 | 0.74 | 0.7744 | 1.4773 | 0.74 | 0.6936 | 0.5966 | 0.0896 |
| 0.0491 | 36.0 | 900 | 0.0397 | 0.715 | 0.7762 | 1.3769 | 0.715 | 0.6827 | 0.5675 | 0.1048 |
| 0.0491 | 37.0 | 925 | 0.0396 | 0.72 | 0.7744 | 1.3882 | 0.72 | 0.6780 | 0.5689 | 0.0970 |
| 0.0491 | 38.0 | 950 | 0.0396 | 0.72 | 0.7762 | 1.4098 | 0.72 | 0.6874 | 0.5701 | 0.1016 |
| 0.0491 | 39.0 | 975 | 0.0395 | 0.74 | 0.7728 | 1.3890 | 0.74 | 0.6894 | 0.5861 | 0.0902 |
| 0.0386 | 40.0 | 1000 | 0.0396 | 0.74 | 0.7724 | 1.5265 | 0.74 | 0.6936 | 0.5906 | 0.0881 |
| 0.0386 | 41.0 | 1025 | 0.0396 | 0.725 | 0.7730 | 1.3516 | 0.7250 | 0.6768 | 0.5784 | 0.0942 |
| 0.0386 | 42.0 | 1050 | 0.0396 | 0.73 | 0.7728 | 1.3633 | 0.7300 | 0.6847 | 0.5899 | 0.0945 |
| 0.0386 | 43.0 | 1075 | 0.0396 | 0.735 | 0.7730 | 1.3670 | 0.735 | 0.6874 | 0.5830 | 0.0940 |
| 0.0386 | 44.0 | 1100 | 0.0395 | 0.73 | 0.7727 | 1.4707 | 0.7300 | 0.6850 | 0.5914 | 0.0930 |
| 0.0386 | 45.0 | 1125 | 0.0396 | 0.725 | 0.7721 | 1.4269 | 0.7250 | 0.6810 | 0.5770 | 0.0934 |
| 0.0386 | 46.0 | 1150 | 0.0396 | 0.72 | 0.7730 | 1.3567 | 0.72 | 0.6793 | 0.5717 | 0.0976 |
| 0.0386 | 47.0 | 1175 | 0.0396 | 0.715 | 0.7731 | 1.3708 | 0.715 | 0.6757 | 0.5717 | 0.0974 |
| 0.0386 | 48.0 | 1200 | 0.0396 | 0.735 | 0.7724 | 1.4118 | 0.735 | 0.6874 | 0.5791 | 0.0923 |
| 0.0386 | 49.0 | 1225 | 0.0396 | 0.72 | 0.7729 | 1.3647 | 0.72 | 0.6837 | 0.5711 | 0.0965 |
| 0.0386 | 50.0 | 1250 | 0.0396 | 0.725 | 0.7727 | 1.3773 | 0.7250 | 0.6820 | 0.5740 | 0.0963 |
| 0.0386 | 51.0 | 1275 | 0.0396 | 0.73 | 0.7736 | 1.3286 | 0.7300 | 0.6847 | 0.5766 | 0.0939 |
| 0.0386 | 52.0 | 1300 | 0.0396 | 0.725 | 0.7732 | 1.3810 | 0.7250 | 0.6817 | 0.5830 | 0.0944 |
| 0.0386 | 53.0 | 1325 | 0.0396 | 0.725 | 0.7725 | 1.3568 | 0.7250 | 0.6820 | 0.5763 | 0.0948 |
| 0.0386 | 54.0 | 1350 | 0.0396 | 0.73 | 0.7731 | 1.3693 | 0.7300 | 0.6847 | 0.5768 | 0.0941 |
| 0.0386 | 55.0 | 1375 | 0.0396 | 0.745 | 0.7728 | 1.3631 | 0.745 | 0.7112 | 0.5842 | 0.0928 |
| 0.0386 | 56.0 | 1400 | 0.0396 | 0.715 | 0.7731 | 1.4175 | 0.715 | 0.6712 | 0.5600 | 0.0976 |
| 0.0386 | 57.0 | 1425 | 0.0396 | 0.725 | 0.7725 | 1.3668 | 0.7250 | 0.6929 | 0.5738 | 0.0962 |
| 0.0386 | 58.0 | 1450 | 0.0396 | 0.73 | 0.7734 | 1.3903 | 0.7300 | 0.6958 | 0.5868 | 0.0963 |
| 0.0386 | 59.0 | 1475 | 0.0396 | 0.725 | 0.7729 | 1.4120 | 0.7250 | 0.6765 | 0.5756 | 0.0945 |
| 0.0373 | 60.0 | 1500 | 0.0396 | 0.725 | 0.7732 | 1.3655 | 0.7250 | 0.6820 | 0.5754 | 0.0951 |
| 0.0373 | 61.0 | 1525 | 0.0396 | 0.745 | 0.7727 | 1.3676 | 0.745 | 0.7038 | 0.5913 | 0.0921 |
| 0.0373 | 62.0 | 1550 | 0.0396 | 0.72 | 0.7729 | 1.3629 | 0.72 | 0.6797 | 0.5762 | 0.0969 |
| 0.0373 | 63.0 | 1575 | 0.0396 | 0.725 | 0.7730 | 1.4242 | 0.7250 | 0.6865 | 0.5811 | 0.0950 |
| 0.0373 | 64.0 | 1600 | 0.0396 | 0.725 | 0.7735 | 1.3658 | 0.7250 | 0.6923 | 0.5750 | 0.0959 |
| 0.0373 | 65.0 | 1625 | 0.0396 | 0.73 | 0.7731 | 1.4296 | 0.7300 | 0.6958 | 0.5769 | 0.0954 |
| 0.0373 | 66.0 | 1650 | 0.0396 | 0.735 | 0.7727 | 1.4780 | 0.735 | 0.6980 | 0.5851 | 0.0938 |
| 0.0373 | 67.0 | 1675 | 0.0396 | 0.725 | 0.7725 | 1.3669 | 0.7250 | 0.6824 | 0.5715 | 0.0938 |
| 0.0373 | 68.0 | 1700 | 0.0396 | 0.725 | 0.7730 | 1.4327 | 0.7250 | 0.6804 | 0.5741 | 0.0940 |
| 0.0373 | 69.0 | 1725 | 0.0396 | 0.73 | 0.7728 | 1.3811 | 0.7300 | 0.6961 | 0.5806 | 0.0963 |
| 0.0373 | 70.0 | 1750 | 0.0396 | 0.735 | 0.7727 | 1.3812 | 0.735 | 0.7081 | 0.5765 | 0.0952 |
| 0.0373 | 71.0 | 1775 | 0.0396 | 0.73 | 0.7730 | 1.4263 | 0.7300 | 0.6961 | 0.5739 | 0.0953 |
| 0.0373 | 72.0 | 1800 | 0.0396 | 0.73 | 0.7731 | 1.4280 | 0.7300 | 0.6953 | 0.5803 | 0.0956 |
| 0.0373 | 73.0 | 1825 | 0.0396 | 0.735 | 0.7729 | 1.3676 | 0.735 | 0.6988 | 0.5889 | 0.0953 |
| 0.0373 | 74.0 | 1850 | 0.0396 | 0.735 | 0.7727 | 1.4358 | 0.735 | 0.6985 | 0.5828 | 0.0940 |
| 0.0373 | 75.0 | 1875 | 0.0396 | 0.735 | 0.7727 | 1.4306 | 0.735 | 0.6965 | 0.5786 | 0.0940 |
| 0.0373 | 76.0 | 1900 | 0.0396 | 0.73 | 0.7729 | 1.4343 | 0.7300 | 0.6957 | 0.5802 | 0.0958 |
| 0.0373 | 77.0 | 1925 | 0.0396 | 0.73 | 0.7726 | 1.4259 | 0.7300 | 0.6961 | 0.5795 | 0.0962 |
| 0.0373 | 78.0 | 1950 | 0.0396 | 0.74 | 0.7731 | 1.4246 | 0.74 | 0.7080 | 0.5879 | 0.0941 |
| 0.0373 | 79.0 | 1975 | 0.0396 | 0.735 | 0.7730 | 1.4414 | 0.735 | 0.6980 | 0.5914 | 0.0945 |
| 0.0372 | 80.0 | 2000 | 0.0396 | 0.74 | 0.7727 | 1.4285 | 0.74 | 0.7103 | 0.5915 | 0.0939 |
| 0.0372 | 81.0 | 2025 | 0.0396 | 0.735 | 0.7731 | 1.4379 | 0.735 | 0.6980 | 0.5826 | 0.0942 |
| 0.0372 | 82.0 | 2050 | 0.0396 | 0.735 | 0.7729 | 1.4308 | 0.735 | 0.6963 | 0.5827 | 0.0942 |
| 0.0372 | 83.0 | 2075 | 0.0396 | 0.735 | 0.7728 | 1.4329 | 0.735 | 0.6968 | 0.5896 | 0.0946 |
| 0.0372 | 84.0 | 2100 | 0.0396 | 0.735 | 0.7728 | 1.4343 | 0.735 | 0.6948 | 0.5889 | 0.0947 |
| 0.0372 | 85.0 | 2125 | 0.0396 | 0.735 | 0.7727 | 1.4320 | 0.735 | 0.6948 | 0.5988 | 0.0945 |
| 0.0372 | 86.0 | 2150 | 0.0396 | 0.735 | 0.7730 | 1.4366 | 0.735 | 0.6963 | 0.5883 | 0.0949 |
| 0.0372 | 87.0 | 2175 | 0.0396 | 0.73 | 0.7728 | 1.4825 | 0.7300 | 0.6888 | 0.5878 | 0.0945 |
| 0.0372 | 88.0 | 2200 | 0.0396 | 0.735 | 0.7731 | 1.4339 | 0.735 | 0.6945 | 0.5828 | 0.0948 |
| 0.0372 | 89.0 | 2225 | 0.0396 | 0.735 | 0.7729 | 1.4383 | 0.735 | 0.6948 | 0.5917 | 0.0946 |
| 0.0372 | 90.0 | 2250 | 0.0396 | 0.735 | 0.7729 | 1.4471 | 0.735 | 0.6948 | 0.5867 | 0.0944 |
| 0.0372 | 91.0 | 2275 | 0.0396 | 0.735 | 0.7728 | 1.4402 | 0.735 | 0.6948 | 0.5892 | 0.0946 |
| 0.0372 | 92.0 | 2300 | 0.0396 | 0.735 | 0.7729 | 1.4412 | 0.735 | 0.6948 | 0.5952 | 0.0948 |
| 0.0372 | 93.0 | 2325 | 0.0396 | 0.735 | 0.7729 | 1.4709 | 0.735 | 0.6948 | 0.5917 | 0.0948 |
| 0.0372 | 94.0 | 2350 | 0.0396 | 0.735 | 0.7728 | 1.4413 | 0.735 | 0.6948 | 0.5858 | 0.0947 |
| 0.0372 | 95.0 | 2375 | 0.0396 | 0.735 | 0.7729 | 1.4422 | 0.735 | 0.6948 | 0.5917 | 0.0946 |
| 0.0372 | 96.0 | 2400 | 0.0396 | 0.735 | 0.7729 | 1.4527 | 0.735 | 0.6948 | 0.5917 | 0.0946 |
| 0.0372 | 97.0 | 2425 | 0.0396 | 0.735 | 0.7729 | 1.4441 | 0.735 | 0.6948 | 0.5917 | 0.0946 |
| 0.0372 | 98.0 | 2450 | 0.0396 | 0.735 | 0.7729 | 1.4423 | 0.735 | 0.6948 | 0.5917 | 0.0946 |
| 0.0372 | 99.0 | 2475 | 0.0396 | 0.735 | 0.7729 | 1.4457 | 0.735 | 0.6948 | 0.5886 | 0.0948 |
| 0.0372 | 100.0 | 2500 | 0.0396 | 0.735 | 0.7729 | 1.4473 | 0.735 | 0.6948 | 0.5886 | 0.0947 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
a9i/scarlett-7b
|
a9i
| 2023-07-11T21:37:46Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text2text-generation",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-11T20:53:43Z |
---
license: cc-by-nc-nd-4.0
language:
- en
pipeline_tag: text2text-generation
---
|
JuS2/ppo-Huggy
|
JuS2
| 2023-07-11T21:28:22Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T21:28:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JuS2/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ethannhzhouu/gpt2-generator
|
ethannhzhouu
| 2023-07-11T21:11:54Z | 209 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T21:11:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-generator
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 5.3997 |
| No log | 2.0 | 2 | 4.9524 |
| No log | 3.0 | 3 | 4.7855 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Belphegor/dqn-SpaceInvadersNoFrameskip-v4
|
Belphegor
| 2023-07-11T21:11:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T21:10:44Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 581.50 +/- 92.36
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Belphegor -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Belphegor -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Belphegor
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jordyvl/vit-small_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone
|
jordyvl
| 2023-07-11T21:05:03Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T22:41:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0379
- Accuracy: 0.8
- Brier Loss: 0.6938
- Nll: 1.3290
- F1 Micro: 0.8000
- F1 Macro: 0.7859
- Ece: 0.5869
- Aurc: 0.0931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 0.0506 | 0.09 | 0.8991 | 6.5155 | 0.09 | 0.0484 | 0.1622 | 0.8986 |
| No log | 2.0 | 50 | 0.0468 | 0.22 | 0.8982 | 4.6950 | 0.22 | 0.1025 | 0.2491 | 0.7656 |
| No log | 3.0 | 75 | 0.0463 | 0.29 | 0.8969 | 3.3099 | 0.29 | 0.1676 | 0.2924 | 0.6888 |
| No log | 4.0 | 100 | 0.0459 | 0.37 | 0.8954 | 3.2920 | 0.37 | 0.1891 | 0.3517 | 0.4208 |
| No log | 5.0 | 125 | 0.0455 | 0.395 | 0.8929 | 3.2550 | 0.395 | 0.2299 | 0.3759 | 0.3617 |
| No log | 6.0 | 150 | 0.0449 | 0.49 | 0.8885 | 2.9109 | 0.49 | 0.3135 | 0.4396 | 0.2804 |
| No log | 7.0 | 175 | 0.0441 | 0.495 | 0.8796 | 2.8950 | 0.495 | 0.3248 | 0.4360 | 0.2721 |
| No log | 8.0 | 200 | 0.0430 | 0.545 | 0.8619 | 2.5199 | 0.545 | 0.3771 | 0.4777 | 0.2129 |
| No log | 9.0 | 225 | 0.0418 | 0.62 | 0.8382 | 2.2126 | 0.62 | 0.4291 | 0.5298 | 0.1659 |
| No log | 10.0 | 250 | 0.0409 | 0.645 | 0.8137 | 2.2525 | 0.645 | 0.4947 | 0.5293 | 0.1552 |
| No log | 11.0 | 275 | 0.0401 | 0.68 | 0.7863 | 2.4423 | 0.68 | 0.5145 | 0.5433 | 0.1215 |
| No log | 12.0 | 300 | 0.0392 | 0.68 | 0.7628 | 1.9779 | 0.68 | 0.5373 | 0.5402 | 0.1172 |
| No log | 13.0 | 325 | 0.0385 | 0.745 | 0.7350 | 1.8986 | 0.745 | 0.6126 | 0.5806 | 0.0843 |
| No log | 14.0 | 350 | 0.0384 | 0.735 | 0.7268 | 1.9922 | 0.735 | 0.6451 | 0.5466 | 0.0997 |
| No log | 15.0 | 375 | 0.0381 | 0.745 | 0.7180 | 1.6965 | 0.745 | 0.6627 | 0.5586 | 0.0761 |
| No log | 16.0 | 400 | 0.0377 | 0.805 | 0.7031 | 1.2564 | 0.805 | 0.7353 | 0.6034 | 0.0713 |
| No log | 17.0 | 425 | 0.0389 | 0.745 | 0.7303 | 1.5063 | 0.745 | 0.7192 | 0.5779 | 0.0705 |
| No log | 18.0 | 450 | 0.0387 | 0.765 | 0.7219 | 1.5776 | 0.765 | 0.7703 | 0.5815 | 0.0923 |
| No log | 19.0 | 475 | 0.0383 | 0.805 | 0.7213 | 1.3953 | 0.805 | 0.7906 | 0.6159 | 0.0667 |
| 0.0432 | 20.0 | 500 | 0.0377 | 0.835 | 0.6952 | 1.3075 | 0.835 | 0.8271 | 0.6116 | 0.0799 |
| 0.0432 | 21.0 | 525 | 0.0381 | 0.795 | 0.7018 | 1.6184 | 0.795 | 0.7723 | 0.5851 | 0.0880 |
| 0.0432 | 22.0 | 550 | 0.0378 | 0.81 | 0.6984 | 1.4292 | 0.81 | 0.7950 | 0.6103 | 0.0673 |
| 0.0432 | 23.0 | 575 | 0.0380 | 0.805 | 0.6976 | 1.4852 | 0.805 | 0.7951 | 0.5942 | 0.0808 |
| 0.0432 | 24.0 | 600 | 0.0377 | 0.825 | 0.6907 | 1.4501 | 0.825 | 0.8103 | 0.6020 | 0.0774 |
| 0.0432 | 25.0 | 625 | 0.0377 | 0.83 | 0.6920 | 1.4509 | 0.83 | 0.8148 | 0.6038 | 0.0759 |
| 0.0432 | 26.0 | 650 | 0.0377 | 0.825 | 0.6927 | 1.4113 | 0.825 | 0.8114 | 0.6072 | 0.0765 |
| 0.0432 | 27.0 | 675 | 0.0377 | 0.825 | 0.6924 | 1.4044 | 0.825 | 0.8114 | 0.6057 | 0.0757 |
| 0.0432 | 28.0 | 700 | 0.0377 | 0.82 | 0.6932 | 1.4521 | 0.82 | 0.8061 | 0.6017 | 0.0815 |
| 0.0432 | 29.0 | 725 | 0.0377 | 0.82 | 0.6932 | 1.3593 | 0.82 | 0.8080 | 0.5983 | 0.0794 |
| 0.0432 | 30.0 | 750 | 0.0377 | 0.82 | 0.6926 | 1.3437 | 0.82 | 0.8069 | 0.6042 | 0.0819 |
| 0.0432 | 31.0 | 775 | 0.0377 | 0.815 | 0.6932 | 1.3453 | 0.815 | 0.8027 | 0.5988 | 0.0815 |
| 0.0432 | 32.0 | 800 | 0.0377 | 0.82 | 0.6930 | 1.3384 | 0.82 | 0.8029 | 0.6044 | 0.0855 |
| 0.0432 | 33.0 | 825 | 0.0377 | 0.81 | 0.6928 | 1.3969 | 0.81 | 0.7927 | 0.5929 | 0.0835 |
| 0.0432 | 34.0 | 850 | 0.0378 | 0.805 | 0.6927 | 1.3995 | 0.805 | 0.7886 | 0.5961 | 0.0855 |
| 0.0432 | 35.0 | 875 | 0.0377 | 0.81 | 0.6927 | 1.3705 | 0.81 | 0.7979 | 0.5910 | 0.0887 |
| 0.0432 | 36.0 | 900 | 0.0378 | 0.805 | 0.6930 | 1.3566 | 0.805 | 0.7886 | 0.5850 | 0.0817 |
| 0.0432 | 37.0 | 925 | 0.0377 | 0.82 | 0.6927 | 1.3537 | 0.82 | 0.8022 | 0.5936 | 0.0847 |
| 0.0432 | 38.0 | 950 | 0.0377 | 0.815 | 0.6930 | 1.3574 | 0.815 | 0.7978 | 0.5976 | 0.0854 |
| 0.0432 | 39.0 | 975 | 0.0377 | 0.815 | 0.6932 | 1.4599 | 0.815 | 0.7978 | 0.5955 | 0.0864 |
| 0.035 | 40.0 | 1000 | 0.0377 | 0.815 | 0.6926 | 1.4147 | 0.815 | 0.7978 | 0.5990 | 0.0869 |
| 0.035 | 41.0 | 1025 | 0.0377 | 0.81 | 0.6931 | 1.4065 | 0.81 | 0.7943 | 0.5966 | 0.0844 |
| 0.035 | 42.0 | 1050 | 0.0378 | 0.81 | 0.6929 | 1.4678 | 0.81 | 0.7961 | 0.5902 | 0.0891 |
| 0.035 | 43.0 | 1075 | 0.0378 | 0.81 | 0.6927 | 1.4164 | 0.81 | 0.7971 | 0.5951 | 0.0897 |
| 0.035 | 44.0 | 1100 | 0.0378 | 0.81 | 0.6930 | 1.4646 | 0.81 | 0.7961 | 0.5948 | 0.0875 |
| 0.035 | 45.0 | 1125 | 0.0378 | 0.815 | 0.6921 | 1.4660 | 0.815 | 0.8004 | 0.6024 | 0.0895 |
| 0.035 | 46.0 | 1150 | 0.0378 | 0.81 | 0.6929 | 1.4098 | 0.81 | 0.7961 | 0.5987 | 0.0831 |
| 0.035 | 47.0 | 1175 | 0.0378 | 0.815 | 0.6928 | 1.4634 | 0.815 | 0.8004 | 0.5963 | 0.0911 |
| 0.035 | 48.0 | 1200 | 0.0378 | 0.81 | 0.6932 | 1.4648 | 0.81 | 0.7961 | 0.5841 | 0.0877 |
| 0.035 | 49.0 | 1225 | 0.0378 | 0.81 | 0.6928 | 1.4635 | 0.81 | 0.7961 | 0.5955 | 0.0898 |
| 0.035 | 50.0 | 1250 | 0.0378 | 0.805 | 0.6935 | 1.4688 | 0.805 | 0.7882 | 0.5795 | 0.0902 |
| 0.035 | 51.0 | 1275 | 0.0378 | 0.805 | 0.6928 | 1.4665 | 0.805 | 0.7882 | 0.5848 | 0.0916 |
| 0.035 | 52.0 | 1300 | 0.0378 | 0.81 | 0.6925 | 1.4249 | 0.81 | 0.7961 | 0.5869 | 0.0926 |
| 0.035 | 53.0 | 1325 | 0.0378 | 0.815 | 0.6926 | 1.4150 | 0.815 | 0.8021 | 0.5934 | 0.0913 |
| 0.035 | 54.0 | 1350 | 0.0378 | 0.81 | 0.6929 | 1.4155 | 0.81 | 0.7961 | 0.5943 | 0.0913 |
| 0.035 | 55.0 | 1375 | 0.0378 | 0.805 | 0.6928 | 1.4141 | 0.805 | 0.7882 | 0.5934 | 0.0964 |
| 0.035 | 56.0 | 1400 | 0.0378 | 0.805 | 0.6930 | 1.4124 | 0.805 | 0.7882 | 0.5926 | 0.0958 |
| 0.035 | 57.0 | 1425 | 0.0378 | 0.81 | 0.6935 | 1.4116 | 0.81 | 0.7934 | 0.6002 | 0.0895 |
| 0.035 | 58.0 | 1450 | 0.0378 | 0.805 | 0.6928 | 1.4059 | 0.805 | 0.7882 | 0.5890 | 0.0937 |
| 0.035 | 59.0 | 1475 | 0.0378 | 0.805 | 0.6929 | 1.4141 | 0.805 | 0.7882 | 0.5918 | 0.0967 |
| 0.0348 | 60.0 | 1500 | 0.0378 | 0.81 | 0.6935 | 1.4086 | 0.81 | 0.7934 | 0.5915 | 0.0934 |
| 0.0348 | 61.0 | 1525 | 0.0378 | 0.81 | 0.6930 | 1.4105 | 0.81 | 0.7941 | 0.5954 | 0.0961 |
| 0.0348 | 62.0 | 1550 | 0.0378 | 0.81 | 0.6933 | 1.4166 | 0.81 | 0.7941 | 0.5889 | 0.0954 |
| 0.0348 | 63.0 | 1575 | 0.0378 | 0.81 | 0.6933 | 1.4109 | 0.81 | 0.7934 | 0.5963 | 0.0975 |
| 0.0348 | 64.0 | 1600 | 0.0378 | 0.81 | 0.6932 | 1.4131 | 0.81 | 0.7934 | 0.5980 | 0.0953 |
| 0.0348 | 65.0 | 1625 | 0.0378 | 0.81 | 0.6937 | 1.4182 | 0.81 | 0.7934 | 0.5956 | 0.0970 |
| 0.0348 | 66.0 | 1650 | 0.0378 | 0.805 | 0.6933 | 1.4125 | 0.805 | 0.7893 | 0.5905 | 0.0966 |
| 0.0348 | 67.0 | 1675 | 0.0378 | 0.81 | 0.6937 | 1.4136 | 0.81 | 0.7934 | 0.5965 | 0.0975 |
| 0.0348 | 68.0 | 1700 | 0.0379 | 0.81 | 0.6935 | 1.4137 | 0.81 | 0.7934 | 0.5994 | 0.0971 |
| 0.0348 | 69.0 | 1725 | 0.0378 | 0.805 | 0.6935 | 1.4196 | 0.805 | 0.7893 | 0.5913 | 0.0946 |
| 0.0348 | 70.0 | 1750 | 0.0379 | 0.805 | 0.6933 | 1.4129 | 0.805 | 0.7893 | 0.5877 | 0.0945 |
| 0.0348 | 71.0 | 1775 | 0.0379 | 0.805 | 0.6933 | 1.4172 | 0.805 | 0.7893 | 0.5921 | 0.0951 |
| 0.0348 | 72.0 | 1800 | 0.0379 | 0.805 | 0.6931 | 1.4136 | 0.805 | 0.7893 | 0.5851 | 0.0953 |
| 0.0348 | 73.0 | 1825 | 0.0379 | 0.805 | 0.6929 | 1.4168 | 0.805 | 0.7893 | 0.5846 | 0.0971 |
| 0.0348 | 74.0 | 1850 | 0.0379 | 0.805 | 0.6939 | 1.4185 | 0.805 | 0.7893 | 0.5892 | 0.0950 |
| 0.0348 | 75.0 | 1875 | 0.0379 | 0.805 | 0.6935 | 1.4171 | 0.805 | 0.7893 | 0.5946 | 0.0938 |
| 0.0348 | 76.0 | 1900 | 0.0379 | 0.805 | 0.6934 | 1.4217 | 0.805 | 0.7893 | 0.5939 | 0.0959 |
| 0.0348 | 77.0 | 1925 | 0.0379 | 0.8 | 0.6932 | 1.4162 | 0.8000 | 0.7859 | 0.5826 | 0.0954 |
| 0.0348 | 78.0 | 1950 | 0.0379 | 0.8 | 0.6935 | 1.4172 | 0.8000 | 0.7859 | 0.5912 | 0.0950 |
| 0.0348 | 79.0 | 1975 | 0.0379 | 0.8 | 0.6933 | 1.4169 | 0.8000 | 0.7859 | 0.5885 | 0.0964 |
| 0.0348 | 80.0 | 2000 | 0.0379 | 0.8 | 0.6935 | 1.4196 | 0.8000 | 0.7859 | 0.5865 | 0.0957 |
| 0.0348 | 81.0 | 2025 | 0.0379 | 0.8 | 0.6937 | 1.4213 | 0.8000 | 0.7859 | 0.5880 | 0.0962 |
| 0.0348 | 82.0 | 2050 | 0.0379 | 0.8 | 0.6939 | 1.4201 | 0.8000 | 0.7859 | 0.5910 | 0.0962 |
| 0.0348 | 83.0 | 2075 | 0.0379 | 0.8 | 0.6938 | 1.3762 | 0.8000 | 0.7859 | 0.5883 | 0.0945 |
| 0.0348 | 84.0 | 2100 | 0.0379 | 0.8 | 0.6938 | 1.4218 | 0.8000 | 0.7859 | 0.5947 | 0.0950 |
| 0.0348 | 85.0 | 2125 | 0.0379 | 0.8 | 0.6935 | 1.3657 | 0.8000 | 0.7859 | 0.5857 | 0.0912 |
| 0.0348 | 86.0 | 2150 | 0.0379 | 0.8 | 0.6938 | 1.3278 | 0.8000 | 0.7859 | 0.5892 | 0.0929 |
| 0.0348 | 87.0 | 2175 | 0.0379 | 0.8 | 0.6938 | 1.3831 | 0.8000 | 0.7859 | 0.5856 | 0.0946 |
| 0.0348 | 88.0 | 2200 | 0.0379 | 0.8 | 0.6938 | 1.3761 | 0.8000 | 0.7859 | 0.5892 | 0.0955 |
| 0.0348 | 89.0 | 2225 | 0.0379 | 0.8 | 0.6938 | 1.3296 | 0.8000 | 0.7859 | 0.5870 | 0.0947 |
| 0.0348 | 90.0 | 2250 | 0.0379 | 0.8 | 0.6939 | 1.3667 | 0.8000 | 0.7859 | 0.5909 | 0.0926 |
| 0.0348 | 91.0 | 2275 | 0.0379 | 0.8 | 0.6940 | 1.3346 | 0.8000 | 0.7859 | 0.5906 | 0.0930 |
| 0.0348 | 92.0 | 2300 | 0.0379 | 0.8 | 0.6938 | 1.3268 | 0.8000 | 0.7859 | 0.5870 | 0.0936 |
| 0.0348 | 93.0 | 2325 | 0.0379 | 0.8 | 0.6937 | 1.3320 | 0.8000 | 0.7859 | 0.5919 | 0.0939 |
| 0.0348 | 94.0 | 2350 | 0.0379 | 0.8 | 0.6939 | 1.3324 | 0.8000 | 0.7859 | 0.5870 | 0.0928 |
| 0.0348 | 95.0 | 2375 | 0.0379 | 0.8 | 0.6937 | 1.3289 | 0.8000 | 0.7859 | 0.5869 | 0.0932 |
| 0.0348 | 96.0 | 2400 | 0.0379 | 0.8 | 0.6938 | 1.3264 | 0.8000 | 0.7859 | 0.5869 | 0.0931 |
| 0.0348 | 97.0 | 2425 | 0.0379 | 0.8 | 0.6938 | 1.3280 | 0.8000 | 0.7859 | 0.5870 | 0.0932 |
| 0.0348 | 98.0 | 2450 | 0.0379 | 0.8 | 0.6938 | 1.3297 | 0.8000 | 0.7859 | 0.5869 | 0.0930 |
| 0.0348 | 99.0 | 2475 | 0.0379 | 0.8 | 0.6938 | 1.3304 | 0.8000 | 0.7859 | 0.5869 | 0.0929 |
| 0.0347 | 100.0 | 2500 | 0.0379 | 0.8 | 0.6938 | 1.3290 | 0.8000 | 0.7859 | 0.5869 | 0.0931 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Jonathaniu/alpaca-breast-cancer-13b
|
Jonathaniu
| 2023-07-11T21:01:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-11T20:37:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
davidmunechika/coreml-genshin-landscape-diffusion
|
davidmunechika
| 2023-07-11T21:01:14Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-30T17:28:52Z |
---
license: creativeml-openrail-m
---
|
Finnfalter/ppo-LunarLander-v2
|
Finnfalter
| 2023-07-11T20:46:31Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T20:46:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.83 +/- 16.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
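A minimal loading sketch, assuming the checkpoint is stored as a zipped SB3 model (the filename is an assumption; check the repository's files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename below is an assumption.
checkpoint = load_from_hub(repo_id="Finnfalter/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```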
|
ALM-AHME/beit-large-patch16-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30
|
ALM-AHME
| 2023-07-11T20:46:08Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-11T12:39:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-large-patch16-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 0.9805094130675526
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0474
- Accuracy: 0.9805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` mapping is sketched after this list):
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 5
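As a rough illustration, these settings correspond to a `transformers.TrainingArguments` configuration along the following lines (a sketch only; the output directory and any options not listed above are assumptions, not the exact training script):
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; unlisted options are assumptions.
training_args = TrainingArguments(
    output_dir="beit-large-patch16-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30",
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    warmup_ratio=0.5,
    num_train_epochs=5,
)
```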
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2312 | 0.99 | 93 | 0.1822 | 0.9453 |
| 0.3817 | 1.99 | 187 | 0.2106 | 0.9183 |
| 0.2217 | 3.0 | 281 | 0.1902 | 0.9285 |
| 0.1667 | 4.0 | 375 | 0.1127 | 0.9584 |
| 0.0572 | 4.96 | 465 | 0.0474 | 0.9805 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
carova/q-FrozenLake-v1-4x4-noSlippery
|
carova
| 2023-07-11T20:42:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T20:42:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="carova/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
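Note that `load_from_hub` above is not a library import; in the Deep RL course it is a small helper defined in the notebook. A possible sketch, assuming the repository stores the model as a pickled dictionary (Q-table, env_id, hyperparameters):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    """Download a pickled model dictionary from the Hub and load it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```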
|
autopilot-ai/EthicalEye
|
autopilot-ai
| 2023-07-11T20:11:30Z | 269 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"en",
"fr",
"hi",
"gu",
"bn",
"ml",
"mr",
"pa",
"it",
"es",
"kn",
"as",
"af",
"ru",
"ro",
"sq",
"ar",
"am",
"az",
"bs",
"bh",
"bg",
"bo",
"ca",
"ce",
"zh",
"cr",
"hr",
"cs",
"da",
"de",
"nl",
"el",
"et",
"eo",
"fi",
"fj",
"fa",
"gl",
"ga",
"ha",
"ht",
"he",
"hu",
"hy",
"id",
"is",
"ja",
"jv",
"ka",
"kk",
"km",
"ko",
"ks",
"ku",
"ky",
"la",
"lb",
"lt",
"lv",
"mk",
"mn",
"ms",
"mi",
"mt",
"ne",
"no",
"or",
"om",
"ps",
"pl",
"pt",
"qu",
"sa",
"sm",
"gd",
"sr",
"sn",
"sd",
"si",
"sk",
"sl",
"so",
"su",
"sw",
"sv",
"tg",
"ta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-01T10:36:08Z |
---
license: apache-2.0
requirements:
- sentencepiece: >-
(if not installed, install using `pip install sentencepiece`, and restart
runtime)
library_name: transformers
pipeline_tag: text-classification
language:
- en
- fr
- hi
- gu
- bn
- ml
- mr
- pa
- it
- es
- kn
- as
- af
- ru
- ro
- sq
- ar
- am
- az
- bs
- bh
- bg
- bo
- ca
- ce
- zh
- cr
- hr
- cs
- da
- de
- nl
- el
- et
- eo
- fi
- fj
- fa
- gl
- ga
- ha
- ht
- he
- hu
- hy
- id
- is
- ja
- jv
- ka
- kk
- km
- ko
- ks
- ku
- ky
- la
- lb
- lt
- lv
- mk
- mn
- ms
- mi
- mt
- ne
- 'no'
- or
- om
- ps
- pl
- pt
- qu
- sa
- sm
- gd
- sr
- sn
- sd
- si
- sk
- sl
- so
- su
- sw
- sv
- tg
- ta
---
## Details
- Model Name: Ethical Eye
- Description: Ethical Eye is an open-source AI model developed by AutopilotAI. It is designed to flag and analyze user-generated content for harmful or unethical behavior, providing a last layer of decision-making to assist AI systems in promoting ethical and moral actions. The model leverages advanced techniques such as text classification, toxicity analysis, and cross-lingual NLP to detect abuse, obscene language, and harmful or unethical comments in multiple languages.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("autopilot-ai/EthicalEye")
model = AutoModelForSequenceClassification.from_pretrained("autopilot-ai/EthicalEye")
```
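A minimal classification sketch building on the snippet above (the example sentence is arbitrary, and the label names should be read from the model config rather than assumed):
```python
import torch

text = "You are a wonderful person!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # label mapping comes from the model config
```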
## Intended Use
- Primary Use Case: The Ethical Eye model is primarily intended to be used as a tool to flag or block users exhibiting harmful or unethical behavior on various platforms. It aims to assist developers, especially those with limited experience in NLP, in enforcing ethical standards and creating a safer environment for users.
- User Expertise: The model is designed to be accessible to developers with various levels of NLP expertise, including those with limited experience in the field.
- Limitations: While Ethical Eye provides valuable insights and analysis, it is essential to note that it should be used as an aid and not as the sole determinant of ethical decision-making. It may have limitations in understanding context-specific nuances and may require continuous improvement and customization for specific domains or languages.
## Model Details
- Architecture: Ethical Eye is built using PyTorch and utilizes the Transformers library. It employs the XLM-Roberta architecture, which enables cross-lingual understanding and transfer learning.
- Developed by: [Khush Patel](https://www.linkedin.com/in/khush-patel-kp/), [Jayveersinh Raj](https://www.linkedin.com/in/jayveersinh-raj-67694222a/)
- License: The Ethical Eye model is released under the Apache 2.0 license, granting users the freedom to use, modify, and distribute the model according to the terms of the license.
## Use Cases
- Content Moderation: Ethical Eye can be integrated into content moderation systems to automatically flag and block user-generated content that contains abusive language, hate speech, or other forms of harmful or unethical behavior. It helps platforms maintain a safe and respectful environment for their users.
- Social Media Platforms: Social media platforms can utilize Ethical Eye to automatically detect and filter out toxic comments, obscenities, and offensive content in multiple languages. This helps to create a more positive and inclusive online community.
- Chatbots and Virtual Assistants: By incorporating Ethical Eye into chatbots and virtual assistants, AI systems can ensure that their responses align with ethical guidelines. It helps prevent AI agents from engaging in inappropriate or offensive conversations with users.
- Online Forums and Discussion Boards: Ethical Eye can be applied to online forums and discussion boards to monitor user interactions and identify potential instances of harassment, bullying, or unethical behavior. This allows moderators to take appropriate actions to maintain a healthy and respectful environment.
- E-commerce Platforms: E-commerce platforms can utilize Ethical Eye to automatically identify and block reviews or comments that contain false information, spam, or unethical practices. This helps maintain the integrity of the platform and ensures honest and reliable user feedback.
- Educational Platforms: Ethical Eye can be used in educational platforms to flag and address instances of cyberbullying, inappropriate language, or offensive content in student discussions and comments. It supports the creation of a safe and respectful learning environment.
- AI Reinforcement Learning: The Ethical Eye model can serve as a critic in reinforcement learning scenarios, providing feedback on the ethical implications of actions taken by AI agents. It assists in developing AI systems that not only optimize for task performance but also align with ethical guidelines and societal norms.
## Considerations for Deployment
- Hardware Requirements: The Ethical Eye model can be deployed on hardware configurations suitable for running deep learning models. Specific requirements may depend on the scale of deployment and the desired performance.
- Dependencies: The model relies on PyTorch, Transformers, and XLM-Roberta libraries. Refer to the model documentation for specific version requirements.
- Integration: Ethical Eye can be integrated into existing AI systems and platforms using the provided APIs and guidelines. Additional customization may be necessary to adapt the model to specific requirements.
- Ethical and Legal Considerations: While Ethical Eye aims to promote ethical behavior, it is important to acknowledge that it may have limitations and biases inherent in its training data. Developers should exercise caution and consider the legal and ethical implications of relying solely on the model's outputs without human oversight.
|
BlueAvenir/model_growth_restructuring_V_1_0
|
BlueAvenir
| 2023-07-11T20:11:27Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-11T20:10:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 95 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 95,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
au2a/whisper-base-zh-20230711
|
au2a
| 2023-07-11T20:10:47Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:-",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-11T12:05:21Z |
---
language:
- zh
license: apache-2.0
tags:
- whisper
- generated_from_trainer
datasets:
- '-'
model-index:
- name: whisper-base-zh-20230711 - au2a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-zh-20230711 - au2a
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on a Hakka audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4551
- Cer: 16.9978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.4673 | 0.65 | 1000 | 0.6526 | 25.0548 |
| 0.2203 | 1.29 | 2000 | 0.4985 | 19.8459 |
| 0.1446 | 1.94 | 3000 | 0.4557 | 18.0026 |
| 0.0956 | 2.59 | 4000 | 0.4438 | 16.9676 |
| 0.0527 | 3.24 | 5000 | 0.4450 | 17.0998 |
| 0.0423 | 3.88 | 6000 | 0.4441 | 17.7797 |
| 0.027 | 4.53 | 7000 | 0.4474 | 16.9260 |
| 0.0177 | 5.18 | 8000 | 0.4515 | 16.5861 |
| 0.0165 | 5.83 | 9000 | 0.4537 | 16.8392 |
| 0.0129 | 6.47 | 10000 | 0.4551 | 16.9978 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.11.0+cu113
- Datasets 2.13.1
- Tokenizers 0.13.3
|
datajanko/ppo-LunarLander-v2
|
datajanko
| 2023-07-11T20:08:45Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T20:08:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.35 +/- 20.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
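A hedged loading-and-evaluation sketch (the checkpoint filename and the use of `gymnasium` are assumptions; adapt them to the actual repository contents and your environment setup):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename below is an assumption; check the repository's files for the actual name.
checkpoint = load_from_hub(repo_id="datajanko/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```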
|
carova/ppo-Huggy
|
carova
| 2023-07-11T20:06:31Z | 27 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T19:17:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: carova/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
davidmunechika/coreml-future-diffusion
|
davidmunechika
| 2023-07-11T20:06:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T19:40:15Z |
---
license: creativeml-openrail-m
---
|
BlueAvenir/model_it_recruit_V_0_1
|
BlueAvenir
| 2023-07-11T20:00:17Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-11T19:59:50Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 100 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
BlueAvenir/model_growth_restructuring_V_0_2
|
BlueAvenir
| 2023-07-11T19:51:05Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-11T19:50:43Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 98 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 98,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
BlueAvenir/model_operations_V_0_2
|
BlueAvenir
| 2023-07-11T19:33:56Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-11T19:33:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 100 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
davidmunechika/oldjourney
|
davidmunechika
| 2023-07-11T19:29:28Z | 31 | 0 |
diffusers
|
[
"diffusers",
"Text-to-image",
"Diffusers",
"stable-diffusion",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-11T17:58:14Z |
---
license: creativeml-openrail-m
language:
- en
tags:
- Text-to-image
- Diffusers
- stable-diffusion
---
<b>Oldjourney</b>
Oldjourney is a finetuned Stable Diffusion 2.1 model trained on images from Midjourney 3 using Dreambooth. That older version of Midjourney was often messy and imprecise, but had a great artistic style. These two versions of Oldjourney can recreate the essence of that art style with added details, precision, and quality.
The two models, Oldjourney Ultra and Oldjourney Lite, are very similar, but they have different strengths. Ultra is better at people, while Lite is better at painterly style images.
Use the keyword <b>Oldjourney</b> to trigger the style, and set the resolution to 768 x 768 or greater. Examples and sample prompts below.
This is a model for Stable Diffusion 2.1, so make sure to download the yaml files.
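A minimal loading sketch with the diffusers library is below. It assumes the weights in this repo (davidmunechika/oldjourney) are available as a single diffusers-format pipeline; if the repo instead ships separate Lite/Ultra checkpoints plus yaml configs, load the specific file with `StableDiffusionPipeline.from_single_file` instead. The prompt is adapted from the samples below.

```python
# Hedged sketch: assumes a diffusers-format pipeline at the repo root.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "davidmunechika/oldjourney",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "Oldjourney a dog with a tiny top hat and steampunk goggles, matte painting, ultrafine details"
image = pipe(prompt, height=768, width=768, num_inference_steps=20, guidance_scale=7).images[0]
image.save("oldjourney_sample.png")
```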
<b>Rendered with Oldjourney Lite</b>

<b>Rendered with Oldjourney Ultra</b>

<b>Sample Prompts for Oldjourney Lite</b>
<b>Sample 1</b>
Oldjourney the legendary dream vortex and a dreamer, a boy laying on a bed in front of a vortex, ultrafine detailed painting, psychedelic art, watching the stars at night, pulled into the spiral vortex, videogame cover art, ... if only i could sleep, discord profile picture, time travel machine, photoshop render
<b>Negative prompt:</b> pink, ugly, tiling, out of frame, body out of frame, blurry, blurred, grainy, cut off, draft, (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), (too long neck:1:2), (extra limbs:1.2), (less than two arms:1.2), (less than two legs:1.2), disfigured, deformed,(bad anatomy:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 810775161, Size: 768x768, Model: Oldjourney Lite, ENSD: 1</i>
<b>Sample 2</b>
Oldjourney an image of a wizard with a glowing staff turned to the side, black background, light art, full of colors and rich detail, color grunge, profile picture 1024px, glowing liquid, high detailed colors, colorful fire, an old man, blacklight, discord profile picture
<b>Negative prompt:</b> ugly, tiling, out of frame, body out of frame, blurry, blurred, grainy, cut off, draft, (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), (too long neck:1:2), (extra limbs:1.2), (less than two arms:1.2), (less than two legs:1.2), disfigured, deformed,(bad anatomy:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2371590421, Size: 768x768, Model: Oldjourney Lite, ENSD: 1</i>
<b>Sample 3</b>
Oldjourney a dog with a tiny top hat and steampunk goggles on its head and a steampunk collar, matte painting, insanely detailed, ultrafine details, hyperrealism
<b>Negative prompt:</b> (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3142299054, Size: 768x768, Model: Oldjourney Lite, ENSD: 1</i>
<b>Sample Prompts for Oldjourney Ultra</b>
<b>Sample 4</b>
Oldjourney A woman facing the camera dancing aura of cosmic energy vortex of sparkling blue sand and glowing embers ((grunge)) smoke magical eerie noir lighting stars in the sky ethereal dream sandman surreal rembrandt artstation dark atmosphere 8k highly detailed atmospheric
<b>Negative prompt:</b> ugly, tiling, (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), out of frame, extra limbs, less than two arms, less than two legs, disfigured, deformed, body out of frame, blurry, (bad anatomy:1.2), blurred, grainy, cut off, draft, (overexposure:1.2), (high contrast:1.2),(cropped:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2676530026, Size: 768x768, Model: Oldjourney Ultra, ENSD: 1</i>
<b>Sample 5</b>
Oldjourney your fate revealed inside a crystal ball, crystal ball with swirling otherworldly fog reveals your fate, insanely detailed masterpiece Trending on Artstation 8k ray traced volumetric lighting ambient occlusion ultrafine details digital art painting
<b>Negative prompt:</b> ugly, tiling, out of frame, body out of frame, blurry, blurred, grainy, cut off, draft, (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), (too long neck:1:2), (extra limbs:1.2), (less than two arms:1.2), (less than two legs:1.2), disfigured, deformed,(bad anatomy:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2555061923, Size: 768x768, Model: Oldjourney Ultra, ENSD: 1</i>
<b>Sample 6</b>
Oldjourney cosmic queen, ethereal woman with a crown on her head, head and shoulders portrait, fantasy art, star sky, star sky, face illuminated, sparkle, stars, cosmos, paticles
<b>Negative prompt:</b> ugly, tiling, out of frame, body out of frame, blurry, blurred, grainy, cut off, draft, (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), (too long neck:1:2), (extra limbs:1.2), (less than two arms:1.2), (less than two legs:1.2), disfigured, deformed,(bad anatomy:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 868461039, Face restoration: GFPGAN, Size: 768x768, Model: Oldjourney Ultra, ENSD: 1</i>
|
ontel/icaaalor
|
ontel
| 2023-07-11T19:26:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T19:24:54Z |
---
license: creativeml-openrail-m
---
|
jakelcoop/Reinforce-pixelcopter
|
jakelcoop
| 2023-07-11T19:14:55Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T19:14:51Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 13.90 +/- 15.94
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
phatjk/bloomz-lora-vi-QA-NLLB-viquad
|
phatjk
| 2023-07-11T19:10:03Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-08T14:52:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
HiTZ/gpt2-eus-euscrawl
|
HiTZ
| 2023-07-11T18:54:30Z | 169 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"eu",
"dataset:HiTZ/euscrawl",
"arxiv:1910.09700",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-20T15:35:58Z |
---
license: cc
datasets:
- HiTZ/euscrawl
language:
- eu
metrics:
- perplexity
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for GPT2 Eus Euscrawl
<!-- Provide a quick summary of what the model is/does. -->
GPT-2 small model (124M parameters) pretrained on the Basque language using a causal language modeling (CLM) objective. The English version of GPT-2 was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/). The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model.
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
GPT-2 is a transformers model pretrained on a very large corpus of Basque data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Basque language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
- **Developed by:** [github.com/juletx](https://github.com/juletx)
- **Model type:** GPT2
- **Language(s) (NLP):** Basque (eu)
- **License:** cc
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [github.com/juletx/phd](https://github.com/juletx/phd)
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model directly with a pipeline for text generation.
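A minimal sketch of such a pipeline is below; the Basque prompt is only an illustrative example, not taken from the training data.

```python
from transformers import pipeline, set_seed

# Text generation with this checkpoint; the prompt is an arbitrary Basque example.
generator = pipeline('text-generation', model='HiTZ/gpt2-eus-euscrawl')
set_seed(42)
for out in generator("Kaixo, hizkuntza-eredu bat naiz,", max_length=30, num_return_sequences=3):
    print(out['generated_text'])
```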
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
You can also fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
EusCrawl (http://www.ixa.eus/euscrawl/) is a high-quality corpus for Basque comprising 12.5 million documents
and 423 million tokens, totalling 2.1 GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to
extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to
general purpose approaches. [Dataset Card](https://huggingface.co/datasets/HiTZ/euscrawl)
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing [optional]
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,304. The inputs are sequences of 1024 consecutive tokens.
### Training Hyperparameters
- **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
niquito/falcon-7b-instruct-ft-adapters
|
niquito
| 2023-07-11T18:47:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-11T18:47:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Winmodel/a2c-PandaReachDense-v2
|
Winmodel
| 2023-07-11T18:38:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T18:37:34Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.49 +/- 0.17
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
gerulata/slovakbert
|
gerulata
| 2023-07-11T18:36:33Z | 4,830 | 19 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"fill-mask",
"SlovakBERT",
"sk",
"dataset:wikipedia",
"dataset:opensubtitles",
"dataset:oscar",
"dataset:gerulatawebcrawl",
"dataset:gerulatamonitoring",
"dataset:blbec.online",
"arxiv:2109.15254",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: sk
tags:
- SlovakBERT
license: mit
datasets:
- wikipedia
- opensubtitles
- oscar
- gerulatawebcrawl
- gerulatamonitoring
- blbec.online
---
# SlovakBERT (base-sized model)
SlovakBERT pretrained model on Slovak language using a masked language modeling (MLM) objective. This model is case-sensitive: it makes a difference between slovensko and Slovensko.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
**IMPORTANT**: The model was not trained on the “ and ” (direct quote) characters, so before tokenizing the text it is advised to replace all “ and ” (direct quote marks) with a single " (double quote mark).
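A one-line sketch of that normalisation step (the example sentence is made up):

```python
# Replace curly quotes with plain double quotes before tokenization, as advised above.
text = "Povedal: “Dobrý deň” a odišiel."
text = text.replace("“", '"').replace("”", '"')
```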
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='gerulata/slovakbert')
unmasker("Deti sa <mask> na ihrisku.")
[{'sequence': 'Deti sa hrali na ihrisku.',
'score': 0.6355380415916443,
'token': 5949,
'token_str': ' hrali'},
{'sequence': 'Deti sa hrajú na ihrisku.',
'score': 0.14731724560260773,
'token': 9081,
'token_str': ' hrajú'},
{'sequence': 'Deti sa zahrali na ihrisku.',
'score': 0.05016357824206352,
'token': 32553,
'token_str': ' zahrali'},
{'sequence': 'Deti sa stretli na ihrisku.',
'score': 0.041727423667907715,
'token': 5964,
'token_str': ' stretli'},
{'sequence': 'Deti sa učia na ihrisku.',
'score': 0.01886524073779583,
'token': 18099,
'token_str': ' učia'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert')
model = RobertaModel.from_pretrained('gerulata/slovakbert')
text = "Text ktorý sa má embedovať."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert')
model = TFRobertaModel.from_pretrained('gerulata/slovakbert')
text = "Text ktorý sa má embedovať."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
Or extract information from the model like this:
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='gerulata/slovakbert')
unmasker("Slovenské národne povstanie sa uskutočnilo v roku <mask>.")
[{'sequence': 'Slovenske narodne povstanie sa uskutočnilo v roku 1944.',
'score': 0.7383289933204651,
'token': 16621,
'token_str': ' 1944'},...]
```
# Training data
The SlovakBERT model was pretrained on these datasets:
- Wikipedia (326MB of text),
- OpenSubtitles (415MB of text),
- Oscar (4.6GB of text),
- Gerulata WebCrawl (12.7GB of text),
- Gerulata Monitoring (214 MB of text),
- blbec.online (4.5GB of text)
The text was then processed with the following steps:
- URL and email addresses were replaced with special tokens ("url", "email").
- Elongated interpunction was reduced (e.g. -- to -).
- Markdown syntax was deleted.
- All text content in braces (e.g. {}) was eliminated to reduce the amount of markup and programming-language text.
We segmented the resulting corpus into sentences and removed duplicates to get 181.6M unique sentences. In total, the final corpus has 19.35GB of text.
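A rough sketch of what these cleanup steps could look like; the regular expressions are illustrative assumptions, not the exact ones used for the corpus.

```python
import re

def clean(text: str) -> str:
    text = re.sub(r"https?://\S+", "url", text)     # URLs -> special token
    text = re.sub(r"\S+@\S+\.\S+", "email", text)   # e-mail addresses -> special token
    text = re.sub(r"([!?.,-])\1+", r"\1", text)     # reduce elongated punctuation, e.g. "--" -> "-"
    text = re.sub(r"\{[^{}]*\}", "", text)          # drop content in braces (markup / code)
    return text
```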
# Pretraining
The model was trained in **fairseq** on 4 x Nvidia A100 GPUs for 300K steps with a batch size of 512 and a sequence length of 512. The optimizer used is Adam with a learning rate of 5e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), a weight decay of 0.01, dropout rate 0.1, learning rate warmup for 10k steps and linear decay of the learning rate after. We used 16-bit float precision.
## About us
<a href="https://www.gerulata.com/">
<img width="300px" src="https://www.gerulata.com/assets/images/Logo_Blue.svg">
</a>
Gerulata Technologies is a tech company on a mission to provide tools for fighting disinformation and hostile propaganda.
At Gerulata, we focus on providing state-of-the-art AI-powered tools that empower human analysts and provide them with the ability to make informed decisions.
Our tools allow for the monitoring and analysis of online activity, as well as the detection and tracking of disinformation and hostile propaganda campaigns. With our products, our clients are better equipped to identify and respond to threats in real-time.
### BibTeX entry and citation info
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2109.15254
```
@misc{pikuliak2021slovakbert,
title={SlovakBERT: Slovak Masked Language Model},
author={Matúš Pikuliak and Štefan Grivalský and Martin Konôpka and Miroslav Blšták and Martin Tamajka and Viktor Bachratý and Marián Šimko and Pavol Balážik and Michal Trnka and Filip Uhlárik},
year={2021},
eprint={2109.15254},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
SANTIAGo2005/ppo-LunarLander-v2
|
SANTIAGo2005
| 2023-07-11T18:21:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T18:17:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -116.26 +/- 67.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
RogerB/afro-xlmr-large-finetuned-kintweets
|
RogerB
| 2023-07-11T18:13:40Z | 98 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-11T18:07:30Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-finetuned-kintweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-finetuned-kintweets
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7777
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9995 | 1.0 | 90 | 1.6774 |
| 1.9176 | 2.0 | 180 | 1.6880 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Belphegor/Taxi-v3
|
Belphegor
| 2023-07-11T18:12:30Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T18:12:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Belphegor/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Belphegor/q-FrozenLake-v1-4x4-noSlippery
|
Belphegor
| 2023-07-11T18:08:55Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T18:08:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Belphegor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
markjosims/wav2vec2-large-xls-r-300m-tr-colab
|
markjosims
| 2023-07-11T18:00:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-11T00:17:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-tr-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.37473189663977124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4346
- Wer: 0.3747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9005 | 4.26 | 400 | 0.6917 | 0.7251 |
| 0.4032 | 8.51 | 800 | 0.4781 | 0.5286 |
| 0.1863 | 12.77 | 1200 | 0.4682 | 0.4690 |
| 0.1323 | 17.02 | 1600 | 0.4664 | 0.4483 |
| 0.1014 | 21.28 | 2000 | 0.4500 | 0.4124 |
| 0.0749 | 25.53 | 2400 | 0.4510 | 0.3909 |
| 0.0568 | 29.79 | 2800 | 0.4346 | 0.3747 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
RogerB/KinyaBERT-small-finetuned-kintweets
|
RogerB
| 2023-07-11T17:52:20Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-11T17:51:35Z |
---
tags:
- generated_from_trainer
model-index:
- name: KinyaBERT-small-finetuned-kintweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KinyaBERT-small-finetuned-kintweets
This model is a fine-tuned version of [jean-paul/KinyaBERT-small](https://huggingface.co/jean-paul/KinyaBERT-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4855 | 1.0 | 90 | 4.2106 |
| 4.1658 | 2.0 | 180 | 4.1444 |
| 4.0402 | 3.0 | 270 | 4.1616 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
swacks/ql-taxiv3
|
swacks
| 2023-07-11T17:41:32Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T17:41:30Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: ql-taxiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="swacks/ql-taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
swacks/q-FrozenLake-v1-4x4-noSlippery
|
swacks
| 2023-07-11T17:39:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T17:39:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="swacks/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Luke537/videomae-base-finetuned-ucf101-subset
|
Luke537
| 2023-07-11T17:30:17Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-07-11T14:12:56Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 74
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MaitreHibou/a2c-AntBulletEnv-v0
|
MaitreHibou
| 2023-07-11T17:25:59Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T17:24:54Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 732.68 +/- 43.01
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bhaskar-ruthvik/falcon7b-finance-tuned
|
bhaskar-ruthvik
| 2023-07-11T17:23:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-11T14:32:51Z |
# Falcon-7B Model Fine-Tuned on Finance Data
This is a Falcon-7B model fine-tuned on finance data using Parameter-Efficient Fine-Tuning (PEFT) and Quantized Low-Rank Adapters (QLoRA). The data includes stock prices, transaction data, tweets about stocks with their sentiment analysis, and frequently asked questions about the finance industry.
## How was the data created?
* The stock data was taken from the YFinance library, which provides up-to-date stock prices (a short sketch of such a download follows this list). Because the model cannot handle real-time data, the last known stock prices are dated 11-07-2023, the date the model was trained
* The user transaction data was hand-crafted by a group member to explore the possibilities it would unlock without the privacy concerns of using real user data
* The tweets and their sentiments were taken from a Kaggle dataset by Rutvik Nelluri
* The finance FAQs were noted down through research on the internet, and the prompts were framed using that data
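A short illustration of how stock data like that described above could be pulled with the YFinance library (the ticker is an arbitrary example):

```python
import yfinance as yf

# Download recent daily prices for one ticker; "AAPL" is only an example.
history = yf.Ticker("AAPL").history(period="1mo")
print(history[["Open", "Close", "Volume"]].tail())
```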
## Why was the training data so small?
* Although the actual data collected was much larger in scale, including at least 5,000 data points for the stock and tweet analysis, and similar prompts for the transaction data and FAQs could have been generated with the OpenAI API, a much smaller subset of the data was used to improve training time on lower-end GPUs
* This is also why PEFT and QLoRA were used for fine-tuning the model; they drastically reduce the number of trainable weights from 7 billion to roughly 432k
## How was the model trained?
The model was trained with the built-in transformers Trainer, with max_steps set to 140, which is approximately 4 epochs of training. The final training loss was 0.49.
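A hedged sketch of what this QLoRA + Trainer setup could look like is below. Apart from `max_steps=140`, all hyperparameters, the base model name, and the dataset file are assumptions, not the exact values used.

```python
import torch
import transformers
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model = "tiiuae/falcon-7b"  # assumed base checkpoint; check config.base_model_name_or_path in the adapter repo
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    task_type="CAUSAL_LM", target_modules=["query_key_value"],
))

# Hypothetical JSON file of "<human>: ... <bot>: ..." prompts built from the finance data.
data = load_dataset("json", data_files="finance_prompts.json")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512))

trainer = transformers.Trainer(
    model=model,
    train_dataset=data,
    args=transformers.TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=140,              # ~4 epochs on the small fine-tuning set
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
        output_dir="falcon7b-finance-tuned",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
trainer.train()
```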
## How to Run?
1. First run the cells to install all the libraries in their required versions (this snippet pins a custom commit of transformers, but the official release can now be used):
``` bash
pip install -Uqqq pip --progress-bar off
pip install -qqq bitsandbytes==0.39.0 --progress-bar off
pip install -qqq torch==2.0.1 --progress-bar off
pip install -qqq -U git+https://github.com/huggingface/transformers.git@e03a9cc --progress-bar off
pip install -qqq -U git+https://github.com/huggingface/peft.git@42a184f
pip install -qqq -U git+https://github.com/huggingface/accelerate.git@c9fbb71 --progress-bar off
pip install -qqq datasets==2.12.0 --progress-bar off
pip install -qqq loralib==0.1.1 --progress-bar off
pip install -qqq einops==0.6.1 --progress-bar off
```
2. Now import all the necessary libraries and set the default device to the GPU:
``` python
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import load_dataset
from huggingface_hub import notebook_login
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training,
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
```
3. Then load the model, preferably in 8-bit if the hardware allows for it, to speed up inference:
``` python
PEFT_MODEL = 'bhaskar-ruthvik/falcon7b-finance-tuned'
config = PeftConfig.from_pretrained(PEFT_MODEL)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
return_dict = True,
device_map = 'auto',
trust_remote_code = True,
load_in_8bit = True
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model,PEFT_MODEL)
```
4. Set up the generation configuration:
``` python
generation_config = model.generation_config
generation_config.max_new_tokens = 200
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id
```
5. Now declare a function to generate responses so the code can be reused:
``` python
def generate_response(question: str) -> str:
    prompt = f"""
<human>: {question}
<bot>:
""".strip()
    # Move the inputs onto the same device as the model
    encoding = tokenizer(prompt, return_tensors='pt').to(model.device)
    with torch.inference_mode():
        outputs = model.generate(
            input_ids=encoding.input_ids,
            attention_mask=encoding.attention_mask,
            generation_config=generation_config,
        )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    assistant_start = "<bot>:"
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()
```
6. Now provide the prompt to the model and wait for the inference (takes about 40 seconds):
``` python
prompt = 'What is estate planning?'
print('%.300s' % generate_response(prompt))
```
|
Noureldin2303/ProvaImg
|
Noureldin2303
| 2023-07-11T17:22:17Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-11T17:22:17Z |
---
license: bigcode-openrail-m
---
|
sanchit-gandhi/speecht5_tts_vox_nl
|
sanchit-gandhi
| 2023-07-11T17:21:55Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-11T17:19:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5225 | 4.3 | 1000 | 0.4778 |
| 0.5007 | 8.61 | 2000 | 0.4656 |
| 0.493 | 12.91 | 3000 | 0.4602 |
| 0.4902 | 17.21 | 4000 | 0.4587 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/sa_bert_12_layer_modified_complete_training_72
|
gokuls
| 2023-07-11T17:19:06Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-10T16:40:56Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa_bert_12_layer_modified_complete_training_72
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_bert_12_layer_modified_complete_training_72
This model is a fine-tuned version of [gokuls/sa_bert_12_layer_modified_complete_training_48](https://huggingface.co/gokuls/sa_bert_12_layer_modified_complete_training_48) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6236
- Accuracy: 0.5322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0311 | 0.05 | 10000 | 2.8263 | 0.5069 |
| 2.8816 | 0.11 | 20000 | 2.7833 | 0.5126 |
| 2.7734 | 0.16 | 30000 | 2.7565 | 0.5158 |
| 2.7612 | 0.22 | 40000 | 2.7284 | 0.5196 |
| 2.8843 | 0.27 | 50000 | 2.7006 | 0.5229 |
| 2.7809 | 0.33 | 60000 | 2.6765 | 0.5254 |
| 2.6683 | 0.38 | 70000 | 2.6580 | 0.5276 |
| 2.7175 | 0.44 | 80000 | 2.6270 | 0.5316 |
| 2.8903 | 0.49 | 90000 | 2.6236 | 0.5322 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
smerrill726/bank-sub
|
smerrill726
| 2023-07-11T17:15:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T17:15:35Z |
---
license: creativeml-openrail-m
---
|
mark-oppenheim/Taxi-v3-V1
|
mark-oppenheim
| 2023-07-11T17:03:55Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T17:03:53Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-V1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mark-oppenheim/Taxi-v3-V1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mark-oppenheim/q-FrozenLake-v1-4x4-Slippery
|
mark-oppenheim
| 2023-07-11T16:59:57Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T20:08:34Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.72 +/- 0.45
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mark-oppenheim/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
luzflavio/distilbert-base-uncased-finetuned-cola
|
luzflavio
| 2023-07-11T16:50:38Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T16:45:34Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: luzflavio/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# luzflavio/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1975
- Validation Loss: 0.5266
- Train Matthews Correlation: 0.5279
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5218 | 0.4601 | 0.4776 | 0 |
| 0.3330 | 0.4767 | 0.5113 | 1 |
| 0.1975 | 0.5266 | 0.5279 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
d9021001/mms-1b-l1107-nan
|
d9021001
| 2023-07-11T16:49:35Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-11T15:49:29Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: mms-1b-l1107-nan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: nan-tw
split: test
args: nan-tw
metrics:
- name: Wer
type: wer
value: 1.005720823798627
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-l1107-nan
This model was trained from scratch on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5084
- Wer: 1.0057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.5725 | 2.0 | 100 | 1.8002 | 1.0 |
| 1.5002 | 4.0 | 200 | 1.5084 | 1.0057 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
asenella/mmnist_JNFDccaconfig_resnet_seed_0_ratio_0_c
|
asenella
| 2023-07-11T16:47:28Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-11T16:47:19Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
SHENMU007/neunit_BASE_V11.2
|
SHENMU007
| 2023-07-11T16:45:51Z | 74 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-11T14:03:10Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vluz/MiniAsirraONNX
|
vluz
| 2023-07-11T16:42:29Z | 0 | 0 | null |
[
"onnx",
"license:cc0-1.0",
"region:us"
] | null | 2023-07-11T16:38:04Z |
---
license: cc0-1.0
---
A very small ONNX model, trained on the Asirra 150 dataset and intended as an example of Lobe beta.
It classifies input images as "Cat" or "Dog".
Untested; do not use in production.
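A minimal inference sketch with ONNX Runtime; the file name, input size, preprocessing and label order below are assumptions, since they are not documented here:
```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Assumed file name and input layout; check the repo files and adjust as needed
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

# Lobe-exported models typically expect an RGB image scaled to [0, 1]
image = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(image, dtype=np.float32)[None, ...] / 255.0

scores = session.run(None, {input_name: x})[0]
labels = ["Cat", "Dog"]  # label order is an assumption
print(labels[int(np.argmax(scores))])
```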
|
MaitreHibou/ppo-Pyramids
|
MaitreHibou
| 2023-07-11T16:28:23Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-11T16:28:19Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MaitreHibou/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
JTStephens/Reinforce-CartPoleV1
|
JTStephens
| 2023-07-11T16:23:23Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T16:23:15Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPoleV1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 416.60 +/- 38.82
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Desainakut/YPKuatsi
|
Desainakut
| 2023-07-11T16:09:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T06:33:39Z |
---
license: creativeml-openrail-m
---
|
1aurent/ppo-PyramidsRND
|
1aurent
| 2023-07-11T16:08:35Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-11T16:07:10Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: 1aurent/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
SANTIAGo2005/ppo-Huggy
|
SANTIAGo2005
| 2023-07-11T16:07:41Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T16:07:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SANTIAGo2005/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gokuls/sa_bert_12_layer_modified_complete_training_24
|
gokuls
| 2023-07-11T16:03:29Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-10T15:24:07Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa_bert_12_layer_modified_complete_training_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_bert_12_layer_modified_complete_training_24
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7648
- Accuracy: 0.1722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 6.5933 | 0.05 | 10000 | 6.5711 | 0.1226 |
| 6.1523 | 0.11 | 20000 | 6.3425 | 0.1396 |
| 6.1308 | 0.16 | 30000 | 6.2468 | 0.1444 |
| 6.2297 | 0.22 | 40000 | 6.1895 | 0.1468 |
| 6.1484 | 0.27 | 50000 | 6.1483 | 0.1487 |
| 6.0591 | 0.33 | 60000 | 6.1205 | 0.1492 |
| 6.0199 | 0.38 | 70000 | 6.0862 | 0.1501 |
| 5.8666 | 0.44 | 80000 | 5.8875 | 0.1600 |
| 5.9153 | 0.49 | 90000 | 5.7648 | 0.1722 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MUNDOJU/ppo-Huggy
|
MUNDOJU
| 2023-07-11T16:02:28Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T16:02:25Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MUNDOJU/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
caballeroch/FakeNewsClassifierDistilBert-uncased
|
caballeroch
| 2023-07-11T16:01:26Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"dataset:liar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T19:17:04Z |
---
datasets:
- liar
metrics:
- accuracy
- f1
- precision
- recall
---
# Fake News Classifier - Finetuned: 'distilbert-base-uncased'
#### **LIAR Dataset**
***
- This model is finetuned on a large dataset of hand-labeled short statements from politifact.com's API.
- Data went through a series of text cleaning stages such as:
1. Lower-case standardization for improved 'uncased' model performance.
2. Mixed letter/digit word removal.
3. Stopword removal.
4. Extra space trimming.
#### **DistilBERT Uncased Tokenizer**
***
- The text is tokenized using the **'distilbert-base-uncased'** HuggingFace tokenizer.
- For training, the text is cut to a block-size of 200.
- Max length padding is used to maintain consistent input data shape.
#### **DistilBERT Uncased Model**
***
- The model that is finetuned is the DistilBERT model, **'distilbert-base-uncased'**.
- This is a small and fast text classifier, perfect for real-time inference!
- 40% fewer parameters than the base BERT model.
- 60% faster while preserving 95% of the base BERT model's performance.
- This model outperforms the finetuned 'distilbert-base-cased' by over 5% average F1-score.
- This improvement comes mainly from the slower learning rate and improved data preprocessing.
- These modifications allow for a smoother training curve and convergence.
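#### **Usage Sketch**
***
A minimal example with the 🤗 Transformers `pipeline` API; the returned label names depend on the model's config and may simply be `LABEL_0` to `LABEL_5`.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="caballeroch/FakeNewsClassifierDistilBert-uncased",
)

# Lower-case the input to match the preprocessing described above
statement = "says the unemployment rate has doubled in the last two years".lower()
print(classifier(statement))
```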
|
caballeroch/FakeNewsClassifierDistilBert-cased
|
caballeroch
| 2023-07-11T16:00:55Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"dataset:liar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-08T20:42:56Z |
---
datasets:
- liar
metrics:
- accuracy
- f1
- precision
- recall
---
# Fake News Classifier - Finetuned: 'distilbert-base-cased'
#### **LIAR Dataset**
***
- This model is finetuned on a large dataset of hand-labeled short statements from politifact.com's API.
- Relevant columns of the data (speaker, statement, etc.) are concatenated and tokenized to create the model input.
#### **DistilBERT Cased Tokenizer**
***
- The text is tokenized using the **'distilbert-base-cased'** HuggingFace tokenizer.
- For training, the text is cut to a block-size of 200.
- Max length padding is used to maintain consistent input data shape.
#### **DistilBERT Cased Model**
***
- The model that is finetuned is the DistilBERT model, **'distilbert-base-cased'**.
- This is a small and fast text classifier, perfect for real-time inference!
- 40% fewer parameters than the base BERT model.
- 60% faster while preserving 95% of the base BERT model's performance.
- The intuition for using the ***cased*** model is to capture some patterns in the writing style (capitalization, punctuation).
- This information may be relevant for detecting fake news sources.
- Writing styles may be relevant (as we see in clickbait titles with capitalization).
- This model performs well in flagging misinformation (fake news), especially if the format is similar to the training distribution.
- Overall, the performance is worse than the finetuned 'distilbert-base-uncased,' as the training data is less clean.
|
fireday/ppo-Huggy
|
fireday
| 2023-07-11T16:00:33Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T16:00:29Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: fireday/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Aceituna0813/ppo-Huggy
|
Aceituna0813
| 2023-07-11T15:57:29Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T15:57:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Aceituna0813/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Corran/all-mini-v2-L6-ft
|
Corran
| 2023-07-11T15:50:19Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-11T15:50:16Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Corran/all-mini-v2-L6-ft
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Corran/all-mini-v2-L6-ft")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
TheBloke/open-llama-7B-v2-open-instruct-GGML
|
TheBloke
| 2023-07-11T15:43:30Z | 0 | 6 | null |
[
"license:other",
"region:us"
] | null | 2023-07-11T13:47:05Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# VMware's Open Llama 7B v2 Open Instruct GGML
These files are GGML format model files for [VMware's Open Llama 7B v2 Open Instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| open-llama-7b-v2-open-instruct.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB| 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| open-llama-7b-v2-open-instruct.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB| 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| open-llama-7b-v2-open-instruct.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB| 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| open-llama-7b-v2-open-instruct.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB| 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| open-llama-7b-v2-open-instruct.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB| 6.29 GB | Original quant method, 4-bit. |
| open-llama-7b-v2-open-instruct.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB| 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| open-llama-7b-v2-open-instruct.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB| 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| open-llama-7b-v2-open-instruct.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB| 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| open-llama-7b-v2-open-instruct.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB| 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| open-llama-7b-v2-open-instruct.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| open-llama-7b-v2-open-instruct.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB| 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| open-llama-7b-v2-open-instruct.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB| 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| open-llama-7b-v2-open-instruct.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| open-llama-7b-v2-open-instruct.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB| 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m open-llama-7b-v2-open-instruct.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
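To call the model from Python instead, here is a minimal sketch using `llama-cpp-python` (assuming a version that still supports GGML v3 files and that the q4_0 file above has been downloaded locally):
```python
from llama_cpp import Llama

llm = Llama(model_path="open-llama-7b-v2-open-instruct.ggmlv3.q4_0.bin", n_ctx=2048)

# Alpaca-style prompt, as described in the Prompt template section above
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: Write a story about llamas\n### Response:"
)

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```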
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: VMware's Open Llama 7B v2 Open Instruct
# VMware/open-llama-7B-v2-open-instruct
Instruction-tuned version of the fully trained Open LLama 7B v2 model. The model is open for <b>COMMERCIAL USE</b>. <br>
- This model performs better on code compared to v1 due to the improvements made on the base model by the openlm-research team.
- The instruction model is trained on an improved instruction tuning dataset compared to v1
<b> NOTE </b> : The model was trained using the Alpaca prompt template
<b> NOTE </b> : The fast tokenizer results in incorrect encoding; set the ```use_fast = False``` parameter when instantiating the tokenizer
## License
- <b>Commercially Viable </b>
- Open-instruct-v1
- Mosaic/Dolly-HHRLHF + filtered OASST1 - cc by 3.0
Subset of COT SUBMIX (FROM FLAN V2) Zeroshot examples
- ESNLI - MIT
- ECQA - CDLA 1.0 - Sharing
- Strategy - MIT
- CREAK - MIT
- gsmk8 - MIT
- aqua - MIT
- qasc - Apache 2.0
- Language Model, ([openlm-research/open_llama_v2_7b](https://huggingface.co/openlm-research/open_llama_v2_7b)) is under apache-2.0
## Nomenclature
- Model : Open-llama-v2
- Model Size: 7B parameters
- Dataset: Open-instruct (oasst, dolly, hhrlhf)
## Use in Transformers
```
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'VMware/open-llama-7b-open-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
prompt = """What is attention mechanism of a transformer model?
Write a python code to illustrate how attention works within a transformer model using numpy library. Donot use pytorch or tensorflow."""
inputt = prompt_template.format(instruction= prompt)
input_ids = tokenizer(inputt, return_tensors="pt").input_ids.to("cuda")
output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])
print(output)
'''
Sure, I can help you with that!
Attention mechanisms in transformer models are typically implemented using the attention mechanism in the self-attention layer. Self-attention allows the model to focus on different parts of the input sequence when processing it. This is achieved by computing a set of attention weights, which are used to weigh the contribution of each input element to the output.
Here's an example code using NumPy to illustrate how attention works in a transformer model:
```python
import numpy as np
def attention_weights(query, key, value, mask):
# Query, key, and value are input tensors. Mask is a tensor of zeros and ones that represents the attention mask.
# It is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
# The attention weights are the element-wise product of the query, key, and mask tensors.
# The result is a tensor of the same shape as the query tensor.
# Compute the dot product between the query tensor and the key tensor
dot = np.matmul(query, key)
# Compute the element-wise softmax of the dot product tensor
exp_dot = np.exp(dot)
# Multiply the dot product and the softmax of the dot product tensors
weights = dot * exp_dot
# Return the attention weights as a NumPy tensor
return weights
# Define the input sequence
query = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
key = np.array([[0.1, 0.2], [0.3, 0.4]])
value = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
mask = np.array([[False, True, True], [False, True, True]])
# Compute the attention weights
weights = attention_weights(query, key, value, mask)
# Print the attention weights
print(weights)
```
In this example, the `attention_weights` function takes as input the query tensor, key tensor, value tensor, and mask tensor. It computes the dot product between the query and key tensors using the `np.matmul` function, and then applies a softmax function using the `np.exp` function to the element-wise dot product tensor. It then multiplies the dot product and softmax tensors using the `np.matmul` function, and returns the result as a NumPy tensor.
The `query`, `key`, and `value` tensors represent the input sequence to the transformer model. The `mask` tensor represents the attention mask, which is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
The output of the `attention_weights` function is a NumPy tensor that represents the attention weights for the input sequence. These weights are used by the transformer model to weigh the contribution of each input element to the output.
I hope this helps!</s>
'''
```
## Finetuning details
The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning)
## Evaluation
<B>TODO</B>
|
parchiev/distilbert-base-uncased-finetuned-imdb
|
parchiev
| 2023-07-11T15:39:16Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-03T11:37:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
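As a rough illustration, a fill-mask sketch with the 🤗 Transformers `pipeline` API:
```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="parchiev/distilbert-base-uncased-finetuned-imdb")

# DistilBERT uses the BERT-style [MASK] token
print(mask_filler("This movie was an absolute [MASK]."))
```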
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
espnet/guangzhisun_librispeech100_asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix
|
espnet
| 2023-07-11T15:24:22Z | 2 | 1 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_100",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2023-07-10T23:07:45Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech_100
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/guangzhisun_librispeech100_asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix`
This model was trained by guangzhisun using librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
pip install -e .
cd egs2/librispeech_100/asr1_biasing
./run.sh --skip_data_prep false --skip_train true --download_model espnet/guangzhisun_librispeech100_asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix
```
# TCPGen in RNN-T
# RESULTS
## Environments
- date: `Wed Jul 5 02:01:19 BST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202304`
- pytorch version: `pytorch 2.0.1+cu117`
- Git hash: `6f33b9d9a999d4cd7e9bc0dcfc0ba342bdff7c17`
- Commit date: `Thu Jun 29 02:16:09 2023 +0100`
## exp/asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.loss.ave/dev_clean|2703|54402|95.7|3.9|0.4|0.6|4.9|48.0|
|decode_asr_asr_model_valid.loss.ave/dev_other|2864|50948|85.8|12.6|1.6|1.9|16.1|77.0|
|decode_asr_asr_model_valid.loss.ave/test_clean|2620|52576|95.4|4.1|0.5|0.7|5.2|49.9|
|decode_asr_asr_model_valid.loss.ave/test_other|2939|52343|86.0|12.2|1.7|1.8|15.8|78.4|
|decode_b20_nolm_avebest/test_clean|2620|52576|0.0|0.0|100.0|0.0|100.0|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.loss.ave/dev_clean|2703|288456|98.4|1.0|0.7|0.6|2.3|48.0|
|decode_asr_asr_model_valid.loss.ave/dev_other|2864|265951|93.3|4.2|2.5|2.1|8.8|77.0|
|decode_asr_asr_model_valid.loss.ave/test_clean|2620|281530|98.3|1.0|0.7|0.6|2.3|49.9|
|decode_asr_asr_model_valid.loss.ave/test_other|2939|272758|93.6|3.8|2.6|1.9|8.3|78.4|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.loss.ave/dev_clean|2703|103998|95.3|3.5|1.2|0.6|5.3|48.0|
|decode_asr_asr_model_valid.loss.ave/dev_other|2864|95172|85.2|11.8|3.0|2.5|17.3|77.0|
|decode_asr_asr_model_valid.loss.ave/test_clean|2620|102045|95.3|3.4|1.3|0.6|5.4|49.9|
|decode_asr_asr_model_valid.loss.ave/test_other|2939|98108|85.5|11.0|3.5|2.2|16.7|78.4|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_rnnt.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix
ngpu: 1
seed: 2022
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 6000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe600_spsuffix/train/speech_shape
- exp/asr_stats_raw_en_bpe600_spsuffix/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe600_spsuffix/valid/speech_shape
- exp/asr_stats_raw_en_bpe600_spsuffix/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_clean_100_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
biasing: false
deepbiasing: false
biasinglist: ''
battndim: 256
biasingsche: 0
bmaxlen: 100
bdrop: 0.0
biasingGNN: ''
optim: adam
optim_conf:
lr: 0.002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- THE▁
- C
- AND▁
- S
- OF▁
- S▁
- TO▁
- T
- A▁
- G
- I
- ED▁
- E
- RE
- D
- IN▁
- P
- R
- N
- F
- O
- IN
- B
- T▁
- L
- ING▁
- ▁
- W
- I▁
- HE▁
- WAS▁
- A
- THAT▁
- E▁
- IT▁
- AR
- U
- H
- ES▁
- M
- RI
- ''''
- HIS▁
- AN
- D▁
- Y▁
- LY▁
- ON▁
- AS▁
- HAD▁
- WITH▁
- ST
- Y
- EN
- HER▁
- YOU▁
- K
- DE
- AT▁
- FOR▁
- V
- UN
- TH
- SE
- RO
- LI
- LO
- NOT▁
- TI
- AL
- BUT▁
- IS▁
- ER▁
- SI
- OR
- CH
- ONE▁
- SHE▁
- OR▁
- ME▁
- BE▁
- K▁
- LA
- LE
- ALL▁
- HIM▁
- BE
- CON
- HO
- PO
- AT
- THEY▁
- MY▁
- ME
- 'ON'
- BY▁
- AN▁
- VE▁
- DI
- RA
- AC
- MA
- HAVE▁
- SO▁
- WERE▁
- WHICH▁
- TED▁
- AL▁
- THIS▁
- FROM▁
- AD
- SU
- FI
- AS
- SAID▁
- ER
- TH▁
- SE▁
- RY▁
- MO
- EN▁
- FOR
- HE
- EX
- NE
- M▁
- VI
- TS▁
- SH
- BO
- COM
- PRO
- EL
- ARE▁
- FE
- WE▁
- N▁
- NO▁
- ERS▁
- QU
- THERE▁
- THEIR▁
- LE▁
- WHEN▁
- TE
- TA
- TY▁
- PER
- THEM▁
- TER
- WOULD▁
- OLD▁
- PA
- CO
- IR
- IF▁
- WHO▁
- WHAT▁
- TER▁
- MAN▁
- ATION▁
- ST▁
- BEEN▁
- OUR▁
- CA
- UP▁
- OUT▁
- PRE
- AP
- TION▁
- IT
- FA
- US
- AM
- VE
- TUR
- DO
- PAR
- PE
- 'NO'
- LU
- THEN▁
- WI
- SO
- HI
- P▁
- TO
- COULD▁
- RE▁
- Z
- WILL▁
- KING▁
- EAR▁
- DIS
- EST▁
- LL▁
- SP
- HA
- ENCE▁
- TING▁
- IS
- WE
- DU
- AND
- MORE▁
- SOME▁
- US▁
- PI
- ABLE▁
- NOW▁
- VERY▁
- GU
- EM
- ITY▁
- WA
- H▁
- ATE▁
- LL
- DO▁
- NA
- DER
- ANT▁
- LEA
- PLA
- BU
- SA
- CU
- INTO▁
- OWN▁
- ET▁
- KE
- PU
- LITTLE▁
- MENT▁
- VER
- TE▁
- DID▁
- LIKE▁
- IM
- ABOUT▁
- OUR
- TRA
- TIME▁
- THAN▁
- YOUR▁
- RED▁
- MI
- OTHER▁
- HU
- ION▁
- ANCE▁
- STR
- WELL▁
- W▁
- L▁
- ES
- ANY▁
- ITS▁
- MIS
- AB
- AGE▁
- MAR
- UPON▁
- OVER▁
- TU
- DAY▁
- TEN
- CH▁
- ALLY▁
- GRA
- CAME▁
- MEN▁
- STO
- LED▁
- AM▁
- GA
- ONLY▁
- COME▁
- TWO▁
- UG
- HOW▁
- VEN
- INE▁
- NESS▁
- EL▁
- HAS▁
- BA
- LONG▁
- AFTER▁
- IC▁
- WAY▁
- CAR
- SC
- HAR
- MADE▁
- MIN
- STE
- BEFORE▁
- MOST▁
- ILL
- FO
- GE
- DOWN▁
- DER▁
- BL
- IONS▁
- SUCH▁
- THESE▁
- DE▁
- MEN
- KED▁
- TRU
- WHERE▁
- FUL▁
- BI
- CAN▁
- SEE▁
- KNOW▁
- GO▁
- JE
- GREAT▁
- LOW▁
- MUCH▁
- NEVER▁
- MISTER▁
- GOOD▁
- SHOULD▁
- EVEN▁
- ICE▁
- STA
- LESS▁
- JO
- BLE▁
- MUST▁
- AV
- DA
- ISH▁
- MON
- TRI
- KE▁
- BACK▁
- YING▁
- AIR▁
- AU
- IOUS▁
- AGAIN▁
- MU
- FIRST▁
- F▁
- GO
- EVER▁
- VA
- COR
- OUS▁
- ATED▁
- COUNT
- ROUND▁
- OVER
- LING▁
- HERE▁
- HIMSELF▁
- SHED▁
- MIL
- G▁
- THOUGH▁
- SIDE▁
- CL
- MAY▁
- JUST▁
- WENT▁
- SAY▁
- NG▁
- PASS
- HER
- NED▁
- MIGHT▁
- FR
- MAN
- HOUSE▁
- JU
- SON▁
- PEN
- THROUGH▁
- EYES▁
- MAKE▁
- TOO▁
- THOUGHT▁
- WITHOUT▁
- THINK▁
- GEN
- THOSE▁
- MANY▁
- SPEC
- INTER
- WHILE▁
- AWAY▁
- LIFE▁
- HEAD▁
- SUR
- NTLY▁
- RIGHT▁
- DON
- TAKE▁
- PORT
- EVERY▁
- NIGHT▁
- WARD▁
- WAR
- IMP
- ALL
- GET▁
- STILL▁
- BEING▁
- FOUND▁
- NOTHING▁
- LES▁
- LAST▁
- TURNED▁
- ILL▁
- YOUNG▁
- SURE▁
- INGS▁
- PEOPLE▁
- YET▁
- THREE▁
- FACE▁
- CUR
- OFF▁
- ROOM▁
- OUT
- ASKED▁
- SAW▁
- END▁
- FER
- MISSUS▁
- EACH▁
- SAME▁
- SHA
- SENT▁
- OUL
- LET▁
- SOL
- YOU
- PLACE▁
- UNDER▁
- TOOK▁
- LIGHT▁
- LEFT▁
- PER▁
- PRESS
- USE▁
- ANOTHER▁
- ONCE▁
- TELL▁
- SHALL▁
- 'OFF'
- SEEMED▁
- ALWAYS▁
- NEW▁
- ATIONS▁
- J
- CESS
- USED▁
- WHY▁
- HEARD▁
- LOOKED▁
- GIVE▁
- PUT▁
- JA
- BECAUSE▁
- THINGS▁
- BODY▁
- FATHER▁
- SOMETHING▁
- OWING▁
- LOOK▁
- ROW▁
- GOING▁
- MOTHER▁
- MIND▁
- WORK▁
- GOT▁
- CENT
- HAVING▁
- SOON▁
- KNEW▁
- HEART▁
- FAR▁
- AGAINST▁
- WORLD▁
- FEW▁
- ICAL▁
- STOOD▁
- BEGAN▁
- SIR▁
- BETTER▁
- DOOR▁
- CALLED▁
- YEARS▁
- MOMENT▁
- ENOUGH▁
- WOMAN▁
- TOGETHER▁
- LIGHT
- OWED▁
- READ▁
- WHOLE▁
- COURSE▁
- BETWEEN▁
- FELT▁
- LONG
- HALF▁
- FULLY▁
- MORNING▁
- DENT
- WOOD
- HERSELF▁
- OLD
- DAYS▁
- HOWEVER▁
- WATER▁
- WHITE▁
- PERHAPS▁
- REPLIED▁
- GIRL▁
- QUITE▁
- HUNDRED▁
- WORDS▁
- MYSELF▁
- VOICE▁
- EARLY▁
- OUGHT▁
- AIL▁
- WORD▁
- WHOM▁
- EITHER▁
- AMONG▁
- ENDED▁
- TAKEN▁
- UNTIL▁
- ANYTHING▁
- NEXT▁
- POSSIBLE▁
- KIND▁
- BROUGHT▁
- EAST▁
- LOOKING▁
- ROAD▁
- SMALL▁
- RATHER▁
- BELIEVE▁
- SINCE▁
- MONEY▁
- OPEN▁
- INDEED▁
- DOUBT
- CERTAIN▁
- TWENTY▁
- MATTER▁
- HELD▁
- EXPECT
- DIRECT
- ANSWERED▁
- THERE
- WHOSE▁
- SHIP▁
- HIGH▁
- THEMSELVES▁
- APPEARED▁
- BLACK▁
- NATURE▁
- BEHIND▁
- POWER▁
- IZED▁
- CHILD▁
- UNCLE▁
- DEATH▁
- KNOWN▁
- OFTEN▁
- LADY▁
- POSITION▁
- KEEP▁
- CHILDREN▁
- WIFE▁
- JOHN▁
- LARGE▁
- GIVEN▁
- EIGHT▁
- SHORT▁
- SAYS▁
- EVERYTHING▁
- GENERAL▁
- DOCTOR▁
- ABOVE▁
- HAPPY▁
- Q
- X
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf:
joint_space_size: 320
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram600suffix/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe600_spsuffix/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.0
report_cer: false
report_wer: false
biasinglist: data/Blist/rareword_f15.txt
bmaxlen: 500
bdrop: 0.0
battndim: 256
biasing: true
biasingsche: 30
deepbiasing: true
biasingGNN: gcn6
bpemodel: data/en_token_list/bpe_unigram600suffix/bpe.model
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 15
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transducer
decoder_conf:
rnn_type: lstm
num_layers: 1
hidden_size: 256
dropout: 0.1
dropout_embed: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202304'
distributed: false
```
</details>
### Citing TCPGen
```BibTex
@INPROCEEDINGS{9687915,
author={Sun, Guangzhi and Zhang, Chao and Woodland, Philip C.},
booktitle={2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
title={Tree-Constrained Pointer Generator for End-to-End Contextual Speech Recognition},
year={2021},
volume={},
number={},
pages={780-787},
doi={10.1109/ASRU51503.2021.9687915}
}
@inproceedings{Sun2022TreeconstrainedPG,
title={Tree-constrained Pointer Generator with Graph Neural Network Encodings for Contextual Speech Recognition},
author={Guangzhi Sun and C. Zhang and Philip C. Woodland},
booktitle={Interspeech},
year={2022}
}
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
|
vk21/a2c-PandaReachDense-v2-unit6
|
vk21
| 2023-07-11T15:23:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T15:06:09Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.39 +/- 0.17
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub("vk21/a2c-PandaReachDense-v2-unit6", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
grace-pro/afriberta-base-finetuned-igbo
|
grace-pro
| 2023-07-11T15:18:59Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-11T14:32:20Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-base-finetuned-igbo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-base-finetuned-igbo
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2159
- Precision: 0.7242
- Recall: 0.5039
- F1: 0.5943
- Accuracy: 0.9367
## Model description
More information needed
## Intended uses & limitations
More information needed
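A minimal inference sketch, assuming the model loads through the standard token-classification pipeline (the entity label set is not documented in this card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/afriberta-base-finetuned-igbo",
    aggregation_strategy="simple",
)
# Example Igbo sentence; the returned labels depend on the fine-tuned head's config.
print(ner("Chinua Achebe si Ogidi na Anambra."))
```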
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1989 | 1.0 | 2514 | 0.2020 | 0.7134 | 0.4098 | 0.5206 | 0.9285 |
| 0.1759 | 2.0 | 5028 | 0.2125 | 0.7383 | 0.4263 | 0.5405 | 0.9315 |
| 0.1417 | 3.0 | 7542 | 0.2044 | 0.7320 | 0.4736 | 0.5751 | 0.9352 |
| 0.1279 | 4.0 | 10056 | 0.2066 | 0.7341 | 0.4884 | 0.5866 | 0.9363 |
| 0.1132 | 5.0 | 12570 | 0.2159 | 0.7242 | 0.5039 | 0.5943 | 0.9367 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ashnrk/textual_inversion_sealake
|
ashnrk
| 2023-07-11T15:04:24Z | 14 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-11T14:02:17Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_sealake
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images in the following.
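A minimal loading sketch, assuming the learned embedding's placeholder token follows the repo's naming (the exact token string is not documented in this card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
# Load the learned embedding from this repo; the placeholder token below is an
# assumption -- use whichever token the learned_embeds file actually defines.
pipe.load_textual_inversion("ashnrk/textual_inversion_sealake")
image = pipe("a satellite image of <sealake>").images[0]
image.save("sealake.png")
```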
|
Belphegor/ppo-Huggy
|
Belphegor
| 2023-07-11T14:50:20Z | 46 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T14:50:17Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Belphegor/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
turkish-nlp-suite/tr_vectors_web_lg
|
turkish-nlp-suite
| 2023-07-11T14:42:54Z | 0 | 0 |
spacy
|
[
"spacy",
"floret",
"fasttext",
"feature-extraction",
"token-classification",
"tr",
"arxiv:1910.10683",
"doi:10.57967/hf/0087",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] |
token-classification
| 2022-11-02T17:30:31Z |
---
tags:
- spacy
- floret
- fasttext
- feature-extraction
- token-classification
language:
- tr
license: cc-by-sa-4.0
model-index:
- name: tr_vectors_web_lg
results:
- task:
name: NMT
type: token-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.1112
---
Large sized Turkish Floret word vectors for spaCy.
The vectors are trained on the MC4 corpus using Floret with the following hyperparameters:
```
floret cbow -dim 300 -mode floret -bucket 200000 -minn 4 -maxn 5 -minCount 100
-neg 10 -hashCount 2 -thread 12 -epoch 5
```
Vectors are published in Floret format.
| Feature | Description |
| --- | --- |
| **Name** | `tr_vectors_web_lg` |
| **Version** | `1.0` |
| **Vectors** | 200000 keys (300 dimensions) |
| **Sources** | [MC4](https://arxiv.org/abs/1910.10683) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [Duygu Altinok](https://www.onlyduygu.com/) |
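A minimal usage sketch, assuming the vector package has been pip-installed (e.g. from the project's release wheel) so it is loadable by name:
```python
import spacy

# Loads the vectors-only pipeline; similarity is computed from the Floret vectors.
nlp = spacy.load("tr_vectors_web_lg")
kedi = nlp("kedi")
kopek = nlp("köpek")
print(kedi.similarity(kopek))
```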
---
If you'd like to use the vectors in your own work, please kindly cite the paper [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/):
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
|
turkish-nlp-suite/tr_vectors_web_md
|
turkish-nlp-suite
| 2023-07-11T14:42:20Z | 0 | 0 |
spacy
|
[
"spacy",
"floret",
"fasttext",
"feature-extraction",
"token-classification",
"tr",
"arxiv:1910.10683",
"doi:10.57967/hf/0085",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] |
token-classification
| 2022-11-02T17:22:50Z |
---
tags:
- spacy
- floret
- fasttext
- feature-extraction
- token-classification
language:
- tr
license: cc-by-sa-4.0
model-index:
- name: tr_vectors_web_md
results:
- task:
name: NMT
type: token-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.1112
---
Medium sized Turkish Floret word vectors for spaCy.
The vectors are trained on the MC4 corpus using Floret with the following hyperparameters:
```
floret cbow -dim 300 -mode floret -bucket 50000 -minn 4 -maxn 5 -minCount 100
-neg 10 -hashCount 2 -thread 12 -epoch 5
```
Vectors are published in Floret format.
| Feature | Description |
| --- | --- |
| **Name** | `tr_vectors_web_md` |
| **Version** | `1.0` |
| **Vectors** | 50000 keys (300 dimensions) |
| **Sources** | [MC4](https://arxiv.org/abs/1910.10683) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [Duygu Altinok](https://www.onlyduygu.com/) |
---
If you'd like to use the vectors in your own work, please kindly cite the paper [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/):
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
|
tyavika/lr1e5-layer2-bs16-Distil-CNN256LSTM128NoBi
|
tyavika
| 2023-07-11T14:42:14Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T11:07:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: lr1e5-layer2-bs16-Distil-CNN256LSTM128NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr1e5-layer2-bs16-Distil-CNN256LSTM128NoBi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.928 | 1.0 | 3290 | 1.5478 |
| 1.1617 | 2.0 | 6580 | 1.1964 |
| 0.8463 | 3.0 | 9870 | 1.2061 |
| 0.6165 | 4.0 | 13160 | 1.2859 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
CaliPanni/natcopeter
|
CaliPanni
| 2023-07-11T14:38:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-11T14:02:57Z |
NAT CO PETER OFFICIAL MODEL!!!!! (1.0)
|
gbellamy/rl_course_vizdoom_health_gathering_supreme
|
gbellamy
| 2023-07-11T14:31:42Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T14:31:32Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.87 +/- 4.95
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r gbellamy/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment (the module path below assumes the standard Sample-Factory `sf_examples` layout):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment (again assuming the standard `sf_examples` layout):
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
matuane/distilbert-base-uncased-finetuned-cola
|
matuane
| 2023-07-11T14:22:57Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T03:58:34Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: matuane/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# matuane/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1968
- Validation Loss: 0.5472
- Train Matthews Correlation: 0.5059
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5136 | 0.4554 | 0.4712 | 0 |
| 0.3229 | 0.4651 | 0.5136 | 1 |
| 0.1968 | 0.5472 | 0.5059 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ericNguyen0132/roberta-large-Dep
|
ericNguyen0132
| 2023-07-11T14:20:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-02T12:57:45Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-Dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-Dep
This model is a fine-tuned version of [rafalposwiata/deproberta-large-depression](https://huggingface.co/rafalposwiata/deproberta-large-depression) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8107
- Accuracy: 0.8517
- F1: 0.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
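A minimal inference sketch, assuming the model loads through the standard text-classification pipeline (label names come from the fine-tuned head's config and are not documented here):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="ericNguyen0132/roberta-large-Dep")
# Returns a label/score pair per input text.
print(clf("I have not been able to get out of bed for days."))
```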
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3701 | 0.87 | 0.9264 |
| 0.4293 | 2.0 | 938 | 0.4385 | 0.865 | 0.9219 |
| 0.3302 | 3.0 | 1407 | 0.5293 | 0.85 | 0.9109 |
| 0.2784 | 4.0 | 1876 | 0.7077 | 0.8517 | 0.9118 |
| 0.1914 | 5.0 | 2345 | 0.8107 | 0.8517 | 0.9118 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PeterBrendan/Adsdistilgpt2
|
PeterBrendan
| 2023-07-11T14:20:22Z | 140 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-07T14:29:07Z |
---
license: mit
widget:
- text: "Pizza"
- text: "Nike Basketball"
- text: "Used Porche"
---
**Model:** distilgpt2 (GPT-2)
**Model name:** Adsdistilgpt2
**Model description:**
This is a fine-tuned version of the distilgpt2 model trained on a dataset of 10,000+ programmatic ad creatives. This model is designed to generate ad content given a product or a brand. For instance, when given the input "Nike Basketball", it will generate a sample ad and also suggest an ad size. The model's main purpose is to inspire ad creatives and provide a starting point for creating effective marketing content.
**Intended uses:**
This model is designed to be used as a starting point for creating ad creatives. You could use it in the early stages of your ad design process to generate creative ideas and inspiration.
**Limitations:**
This model has the potential to produce unusual or unexpected results, due to the varied and complex nature of advertising language. It should not be relied upon to produce perfect ad copy, but rather as a tool to inspire creative ideas. Also, the model might not have complete understanding of specific brand guidelines and may not adhere to them.
**How to use:**
You can use this model by providing a product or brand name as an input. For example: *Nike Air Force Ones*
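A minimal generation sketch using the standard text-generation pipeline (the sampling settings here are illustrative, not the author's):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="PeterBrendan/Adsdistilgpt2")
# Prompt with a product or brand name; the model continues with ad copy.
print(generator("Nike Air Force Ones", max_new_tokens=60, do_sample=True)[0]["generated_text"])
```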
**Training data:**
This model was trained on a dataset consisting of over 10,000 programmatic ad creatives, which included a variety of different product and brand advertisements. The data was collected from various ad platforms and represents a wide range of ad styles and formats.
**Training procedure:**
The model was fine-tuned using the distilgpt2 model with the aforementioned training data. The training loss was 0.16540415118743643.
**Evaluation results:**
As this model's primary objective is to generate creative ads, traditional evaluation metrics such as accuracy or F1 score are not applicable. However, the model's performance has been informally assessed based on the relevancy and creativity of the generated ads.
**Safety and bias considerations:**
This model shares the same safety and bias considerations as the distilgpt2 model. It may generate content that is offensive or inappropriate. Also, as the model is trained on data from the internet, it may reflect the biases present in those sources.
Users should carefully review the generated ads to ensure they align with their brand's values and guidelines before using them. The model is not intended to replace the role of a human in creating ad copy, but rather to assist and provide inspiration.
|
parandhamuduchakali/bert-finetuned-ner
|
parandhamuduchakali
| 2023-07-11T14:20:04Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-11T12:41:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: parandhamuduchakali/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# parandhamuduchakali/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1736
- Validation Loss: 0.0682
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1736 | 0.0682 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|