| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]: 2020-02-15 11:33:14 to 2025-08-31 18:27:20) | downloads (int64: 0 to 223M) | likes (int64: 0 to 11.7k) | library_name (string, 530 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC]: 2022-03-02 23:29:04 to 2025-08-31 18:27:03) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
dt-and-vanilla-ardt/ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0900-99
|
dt-and-vanilla-ardt
| 2023-08-25T10:09:44Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T08:01:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0900-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0900-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
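Adam with the betas and epsilon listed above follows the standard bias-corrected update; a scalar sketch of one step (the helper name is ours, not part of the card):

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (math.sqrt(v_hat) + eps), m, v
```

On the very first step the bias correction cancels the EMA decay exactly, so the update magnitude is close to the raw learning rate.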
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_syl_cv12_pad_lob100__0055
|
bigmorning
| 2023-08-25T10:03:47Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T10:03:38Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0055
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0055
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0017
- Train Accuracy: 0.0362
- Train Wermet: 0.3475
- Validation Loss: 0.6087
- Validation Accuracy: 0.0238
- Validation Wermet: 0.2213
- Epoch: 54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
| 0.0098 | 0.0362 | 1.1787 | 0.5985 | 0.0236 | 0.8094 | 35 |
| 0.0071 | 0.0362 | 0.9920 | 0.5992 | 0.0236 | 0.8755 | 36 |
| 0.0055 | 0.0362 | 2.4665 | 0.6047 | 0.0236 | 2.0127 | 37 |
| 0.0124 | 0.0362 | 4.2468 | 0.6089 | 0.0236 | 2.8886 | 38 |
| 0.0109 | 0.0362 | 2.0177 | 0.6097 | 0.0236 | 0.3417 | 39 |
| 0.0073 | 0.0362 | 0.9927 | 0.6057 | 0.0237 | 2.5519 | 40 |
| 0.0080 | 0.0362 | 1.7341 | 0.6099 | 0.0236 | 1.3119 | 41 |
| 0.0063 | 0.0362 | 2.4288 | 0.6058 | 0.0237 | 1.3465 | 42 |
| 0.0038 | 0.0362 | 1.4535 | 0.6022 | 0.0237 | 1.6804 | 43 |
| 0.0028 | 0.0362 | 2.2629 | 0.6001 | 0.0238 | 3.4388 | 44 |
| 0.0021 | 0.0362 | 3.5877 | 0.6018 | 0.0238 | 2.6165 | 45 |
| 0.0017 | 0.0362 | 3.0080 | 0.6043 | 0.0238 | 2.6827 | 46 |
| 0.0061 | 0.0362 | 2.5182 | 0.6545 | 0.0235 | 0.2316 | 47 |
| 0.0126 | 0.0362 | 0.2097 | 0.6206 | 0.0236 | 0.6194 | 48 |
| 0.0071 | 0.0362 | 0.3045 | 0.6047 | 0.0237 | 0.7476 | 49 |
| 0.0053 | 0.0362 | 1.2045 | 0.6010 | 0.0238 | 0.6553 | 50 |
| 0.0040 | 0.0362 | 0.2626 | 0.5964 | 0.0238 | 0.7027 | 51 |
| 0.0021 | 0.0362 | 0.5023 | 0.5950 | 0.0238 | 0.3812 | 52 |
| 0.0014 | 0.0362 | 0.7108 | 0.6233 | 0.0237 | 1.4647 | 53 |
| 0.0017 | 0.0362 | 0.3475 | 0.6087 | 0.0238 | 0.2213 | 54 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
jjimdark/distilbert-base-uncased-finetuned-cola
|
jjimdark
| 2023-08-25T10:02:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T04:35:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5290369945616428
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5219
- Matthews Correlation: 0.5290
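The Matthews correlation reported above is computed from the confusion matrix; a minimal sketch (the function name is ours, mirroring scikit-learn's):

```python
import math

def matthews_corrcoef(tp: int, fp: int, fn: int, tn: int) -> float:
    """MCC from confusion-matrix counts; 1 = perfect, 0 = chance, -1 = inverse."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

MCC is the standard metric for CoLA because its label distribution is imbalanced, so plain accuracy overstates performance.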
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.5025 | 0.4154 |
| 0.4551 | 2.0 | 536 | 0.5071 | 0.4792 |
| 0.4551 | 3.0 | 804 | 0.5219 | 0.5290 |
| 0.2312 | 4.0 | 1072 | 0.6287 | 0.5089 |
| 0.2312 | 5.0 | 1340 | 0.6631 | 0.5182 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
amritansh/replit_3B
|
amritansh
| 2023-08-25T09:56:50Z | 5 | 0 |
transformers
|
[
"transformers",
"mpt",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-25T09:51:52Z |
---
license: other
---
---
This is a [ggml](https://github.com/ggerganov/ggml/) quantized version of [Replit-v2-CodeInstruct-3B](https://huggingface.co/teknium/Replit-v2-CodeInstruct-3B), quantized to 4-bit (q4_1).
To run inference, you can use ggml directly or [ctransformers](https://github.com/marella/ctransformers).
- Memory usage of the model: **~2 GB**
- Repo to run the model using ctransformers: https://github.com/abacaj/replit-3B-inference
|
arroyadr/speecht5_finetuned_voxpopuli_it
|
arroyadr
| 2023-08-25T09:53:57Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-25T08:40:42Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_it
results:
- task:
name: text-to-speech
type: text-to-speech
dataset:
name: VOXPOPULI
type: facebook/voxpopuli
config: it
split: train
args: all
metrics:
- name: MSE
type: mse
value: 0.5028
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
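Note that the total train batch size of 32 above is train_batch_size 4 × gradient_accumulation_steps 8. The warmup-then-decay schedule can be sketched as a plain function (an approximation of HF Trainer's linear scheduler, not its exact implementation):

```python
def linear_lr(step: int, base_lr: float = 1e-5,
              warmup_steps: int = 50, total_steps: int = 1000) -> float:
    """Linear warmup from 0 to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

The learning rate peaks at exactly base_lr when warmup ends and reaches 0 at the final step.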
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5198 | 31.37 | 1000 | 0.5028 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
softaken/Softaken-OLM-to-PST-Converter
|
softaken
| 2023-08-25T09:46:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-25T09:15:59Z |
Softaken OLM to PST Converter is an advanced tool for converting Mac Outlook (OLM) emails to the Outlook PST file format. No other software needs to be installed for the conversion, and both professional and non-professional users can operate it. The application converts OLM files to PST quickly, without losing a single file, and can load files or folders of any size with no data limitations. It runs on all Windows versions, including Windows 11, 10, 8.1, 8, 7, Vista, and XP, and supports all MS Outlook versions, such as 2002, 2003, 2007, 2010, 2013, 2016, and 2019. The conversion process is fully secure, and the program offers multiple advanced features aimed at non-technical users. To check out the latest features and capabilities of the software, download the free demo version.
Read More: https://www.softaken.com/olm-to-pst-converter/
|
bigmorning/whisper_syl_cv12_pad_lob100__0045
|
bigmorning
| 2023-08-25T09:37:20Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T09:37:12Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0045
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0045
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0028
- Train Accuracy: 0.0362
- Train Wermet: 2.2629
- Validation Loss: 0.6001
- Validation Accuracy: 0.0238
- Validation Wermet: 3.4388
- Epoch: 44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
| 0.0098 | 0.0362 | 1.1787 | 0.5985 | 0.0236 | 0.8094 | 35 |
| 0.0071 | 0.0362 | 0.9920 | 0.5992 | 0.0236 | 0.8755 | 36 |
| 0.0055 | 0.0362 | 2.4665 | 0.6047 | 0.0236 | 2.0127 | 37 |
| 0.0124 | 0.0362 | 4.2468 | 0.6089 | 0.0236 | 2.8886 | 38 |
| 0.0109 | 0.0362 | 2.0177 | 0.6097 | 0.0236 | 0.3417 | 39 |
| 0.0073 | 0.0362 | 0.9927 | 0.6057 | 0.0237 | 2.5519 | 40 |
| 0.0080 | 0.0362 | 1.7341 | 0.6099 | 0.0236 | 1.3119 | 41 |
| 0.0063 | 0.0362 | 2.4288 | 0.6058 | 0.0237 | 1.3465 | 42 |
| 0.0038 | 0.0362 | 1.4535 | 0.6022 | 0.0237 | 1.6804 | 43 |
| 0.0028 | 0.0362 | 2.2629 | 0.6001 | 0.0238 | 3.4388 | 44 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
gvolovskiy/ppo-LunarLander-v2
|
gvolovskiy
| 2023-08-25T09:35:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T09:35:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.05 +/- 21.23
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file listing):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename is assumed; adjust to the actual file in the repo.
checkpoint = load_from_hub("gvolovskiy/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
larabe/testt
|
larabe
| 2023-08-25T09:21:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-23T22:01:20Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: testt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testt
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
64FC/whisper-tiny-en
|
64FC
| 2023-08-25T09:20:10Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T08:36:01Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.36186540731995276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7102
- Wer Ortho: 0.3646
- Wer: 0.3619
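The WER above is the word-level edit distance divided by the reference length; a minimal sketch (the function name is ours):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[j] holds the edit distance between ref[:i] and hyp[:j]
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,           # deletion
                        dp[j - 1] + 1,       # insertion
                        prev + (r != h))     # substitution (free if words match)
            prev = cur
    return dp[-1] / len(ref)
```

"Wer Ortho" in the results is the same computation on orthographic (unnormalized) text, which is why it is slightly higher than the normalized WER.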
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 23
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0005 | 35.71 | 500 | 0.7102 | 0.3646 | 0.3619 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AbeShinzo0708/Voicevox_SugaYoshihide
|
AbeShinzo0708
| 2023-08-25T09:18:05Z | 0 | 1 | null |
[
"菅義偉",
"Former Japanese Prime Minister",
"Suga",
"SugaYoshihide",
"Yoshihide",
"ja",
"license:openrail",
"region:us"
] | null | 2023-03-18T09:38:23Z |
---
license: openrail
language:
- ja
tags:
- 菅義偉
- Former Japanese Prime Minister
- Suga
- SugaYoshihide
- Yoshihide
---
|
caffeinatedwoof/Llama-2-7b-chat-hf-mental_health_counseling_conversations_peft
|
caffeinatedwoof
| 2023-08-25T09:10:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T07:27:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
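The list above maps onto keyword arguments of `transformers.BitsAndBytesConfig`; a minimal sketch as a plain dict (splatting it into the config class is an assumption about your transformers version):

```python
# Mirrors the bitsandbytes quantization config listed above.
bnb_kwargs = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float16",
}
# With transformers installed, this would typically be used as:
#   from transformers import BitsAndBytesConfig
#   quant_config = BitsAndBytesConfig(**bnb_kwargs)
#   model = AutoModelForCausalLM.from_pretrained(..., quantization_config=quant_config)
```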
### Framework versions
- PEFT 0.6.0.dev0
|
Isotonic/flan-t5-base-trading_candles
|
Isotonic
| 2023-08-25T09:10:33Z | 126 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:0xMaka/trading-candles-subset-qa-format",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-22T16:32:50Z |
---
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-trading_candles
results: []
datasets:
- 0xMaka/trading-candles-subset-qa-format
widget:
- text: "Context: -30811302.00,464.00,-156202.00,309984.00,276.00,7664.00,4174.00,824467.00,19741.12,19798.04,19860.18,19567.9 Question: identify candle"
- text: "Context: 867553.00,-4282049.00,6306.00,4440418.00,13.00,50962.00,101.00,59152496.00,39512.71,39477.49,39512.71,39380.74 Question: identify candle"
- text: "Context: -206.00,626162.00,-35917428.00,-49739.00,6669939.00,64.00,19988.00,7094559.00,17752.71,17752.71,17752.71,17752.71 Question: find candle: Four Price Doji"
pipeline_tag: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-trading_candles
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [0xMaka/trading-candles-subset-qa-format](https://huggingface.co/datasets/0xMaka/trading-candles-subset-qa-format) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0061
- Rouge1: 88.3665
- Rouge2: 86.86
- Rougel: 88.3651
- Rougelsum: 88.3665
- Gen Len: 18.9025
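The ROUGE-1 score above measures unigram overlap; a minimal F1 sketch (the function name is ours; the card reports scores scaled to 0–100):

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of clipped unigram precision and recall."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # per-word counts clipped to the minimum
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

ROUGE-L (also reported above) additionally rewards in-order matches via the longest common subsequence rather than bag-of-words overlap.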
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.019 | 1.0 | 70009 | 0.0089 | 88.0774 | 86.4734 | 88.0734 | 88.0748 | 18.9022 |
| 0.0095 | 2.0 | 140018 | 0.0069 | 88.3636 | 86.8542 | 88.3612 | 88.3625 | 18.9016 |
| 0.0071 | 3.0 | 210027 | 0.0061 | 88.3665 | 86.86 | 88.3651 | 88.3665 | 18.9025 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nishant-glance/model-sd-1-4-priorp-unet-1200
|
nishant-glance
| 2023-08-25T08:55:23Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-25T08:22:30Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - nishant-glance/model-sd-1-4-priorp-unet-1200
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
|
diana9m/falcon-7b-sharded-bf16-finetuned-mental-health-NUNA_reevaluated
|
diana9m
| 2023-08-25T08:47:26Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-08-24T13:06:28Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-sharded-bf16-finetuned-mental-health-NUNA_reevaluated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-sharded-bf16-finetuned-mental-health-NUNA_reevaluated
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_syl_cv12_pad_lob100__0025
|
bigmorning
| 2023-08-25T08:44:26Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T08:44:18Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0025
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0725
- Train Accuracy: 0.0359
- Train Wermet: 1.0940
- Validation Loss: 0.6102
- Validation Accuracy: 0.0234
- Validation Wermet: 1.0111
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
libin46/llama2-qlora-finetunined-french
|
libin46
| 2023-08-25T08:40:12Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T08:40:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
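The quantization config listed above can be reproduced in code when loading the base model. A minimal sketch, assuming a recent `transformers` with `BitsAndBytesConfig` (the final `from_pretrained` call and model name are illustrative, not part of this card):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes config above: 4-bit NF4, no double quant, fp16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Pass as `quantization_config=bnb_config` to AutoModelForCausalLM.from_pretrained(...)
```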
### Framework versions
- PEFT 0.6.0.dev0
|
Donnaphat/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
Donnaphat
| 2023-08-25T08:36:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T08:36:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
feliciamj/ppo-Huggy
|
feliciamj
| 2023-08-25T08:32:29Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-25T08:32:23Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:

https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: feliciamj/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Vertti/TuumaPEFTDialogue01
|
Vertti
| 2023-08-25T08:29:37Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T08:29:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
arroyadr/speecht5_finetuned_voxpopuli_nl
|
arroyadr
| 2023-08-25T08:26:45Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-08-24T21:59:06Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
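For clarity, the `total_train_batch_size` above is not an independent setting — it follows from the per-device batch size and gradient accumulation. A trivial sketch:

```python
train_batch_size = 4
gradient_accumulation_steps = 8

# Gradients are accumulated over 8 micro-batches before each optimizer step,
# so the effective (total) train batch size is their product.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```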
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6482 | 3.14 | 100 | 0.5937 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_syl_cv12_pad_lob100__0015
|
bigmorning
| 2023-08-25T08:18:04Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T08:17:55Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7388
- Train Accuracy: 0.0305
- Train Wermet: 0.2828
- Validation Loss: 0.8773
- Validation Accuracy: 0.0221
- Validation Wermet: 0.3322
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
zimhe/controlnet-wall-constrained-floorplan
|
zimhe
| 2023-08-25T08:06:23Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"dataset:zimhe/wall-constrained-floorplans-10k",
"region:us"
] | null | 2023-08-23T09:05:20Z |
---
datasets:
- zimhe/wall-constrained-floorplans-10k
---
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0648-66
|
dt-and-vanilla-ardt
| 2023-08-25T08:00:01Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T05:49:57Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0648-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0648-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
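The `linear` scheduler with 1000 warmup steps over 10000 total steps ramps the learning rate up to its peak and then decays it linearly to zero. A pure-Python sketch of that shape (an illustration of the schedule, not the trainer's implementation):

```python
def linear_schedule_with_warmup(step, base_lr=1e-4, warmup_steps=1000, total_steps=10000):
    """Linearly ramp up to base_lr over warmup_steps, then decay linearly to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_with_warmup(500))    # halfway through warmup
print(linear_schedule_with_warmup(1000))   # peak learning rate
print(linear_schedule_with_warmup(10000))  # fully decayed: 0.0
```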
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LibrAI/longformer-action-ro
|
LibrAI
| 2023-08-25T07:58:30Z | 2,940 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T12:33:56Z |
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: longformer-action-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-action-ro
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1084
- Accuracy: 0.964
- Precision: 0.961
- Recall: 0.936
- F1: 0.946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-----:|
| No log | 1.0 | 89 | 0.2301 | 0.926 | 0.933 | 0.861 | 0.883 |
| No log | 2.0 | 178 | 0.1487 | 0.964 | 0.968 | 0.915 | 0.937 |
| No log | 3.0 | 267 | 0.1084 | 0.964 | 0.961 | 0.936 | 0.946 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Koantek/dolly_llama-v4
|
Koantek
| 2023-08-25T07:56:08Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T07:56:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Intel/whisper-base-int8-dynamic-inc
|
Intel
| 2023-08-25T07:55:37Z | 4 | 1 |
transformers
|
[
"transformers",
"onnx",
"whisper",
"automatic-speech-recognition",
"int8",
"ONNX",
"PostTrainingDynamic",
"Intel® Neural Compressor",
"neural-compressor",
"dataset:librispeech_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T07:50:51Z |
---
license: apache-2.0
datasets:
- librispeech_asr
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- automatic-speech-recognition
- int8
- ONNX
- PostTrainingDynamic
- Intel® Neural Compressor
- neural-compressor
library_name: transformers
---
## Model Details: INT8 Whisper base
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning.
This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor) and the fp32 model can be exported with below command:
```shell
optimum-cli export onnx --model openai/whisper-base whisper-base-with-past/ --task automatic-speech-recognition-with-past --opset 13
```
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | August 25, 2023 |
| Version | 1 |
| Type | Speech Recognition |
| Paper or Other Resources | - |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/whisper-base-int8-dynamic/discussions)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for automatic speech recognition inference |
| Primary intended users | Anyone doing automatic speech recognition inference |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Download the model by cloning the repository:
```shell
git clone https://huggingface.co/Intel/whisper-base-int8-dynamic
```
Evaluate the model with below code:
```python
import os

from datasets import load_dataset
from evaluate import load
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import PretrainedConfig, WhisperProcessor

model_name = 'openai/whisper-base'
model_path = 'whisper-base-int8-dynamic'

processor = WhisperProcessor.from_pretrained(model_name)
model_config = PretrainedConfig.from_pretrained(model_name)
wer = load("wer")
librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")

# Load the int8 ONNX encoder/decoder sessions from the cloned repository
sessions = ORTModelForSpeechSeq2Seq.load_model(
    os.path.join(model_path, 'encoder_model.onnx'),
    os.path.join(model_path, 'decoder_model.onnx'),
    os.path.join(model_path, 'decoder_with_past_model.onnx'))
model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2])

predictions = []
references = []
for batch in librispeech_test_clean:
    audio = batch["audio"]
    input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
    references.append(processor.tokenizer._normalize(batch["text"]))
    predicted_ids = model.generate(input_features)[0]
    transcription = processor.decode(predicted_ids)
    predictions.append(processor.tokenizer._normalize(transcription))

wer_result = wer.compute(references=references, predictions=predictions)
print(f"WER: {wer_result * 100}")
accuracy = 1 - wer_result
print("Accuracy: %.5f" % accuracy)
```
## Metrics (Model Performance):
| Model | Model Size (GB) | wer |
|---|:---:|:---:|
| FP32 |0.95|5.04|
| INT8 |0.17|5.31|
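The WER figures above come from the `evaluate` library's `wer` metric. As a rough illustration of the metric itself, here is a minimal pure-Python word-level edit-distance version (a sketch for intuition, not the library implementation):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat"))  # 0.0
print(word_error_rate("the cat sat", "the bat sat"))  # one substitution out of three words
```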
|
chrisrtt/gbert-multi-class-german-hate
|
chrisrtt
| 2023-08-25T07:53:41Z | 645 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-25T07:39:21Z |
# Model Card for German Hate Speech Classifier
## Model Details
### Introduction
This model was developed to explore the potential of German language models for multi-class classification of hate speech in German online journals. It is a fine-tuned version of the GBERT model (Chan, Schweter, and Möller, 2020).
### Dataset
The dataset used for training is a consolidation of three pre-existing German hate speech datasets:
- **RP (Assenmacher et al., 2021)**
- **DeTox (Demus et al., 2022)**
- **Twitter dataset (Glasenbach, 2022)**
The combined dataset underwent cleaning to minimize biases and remove redundant data.
## Performance
Our experiments delivered promising results, with the model reliably classifying comments into:
- **No Hate Speech**
- **Other Hate Speech (Threat, Insult, Profanity)**
- **Political Hate Speech**
- **Racist Hate Speech**
- **Sexist Hate Speech**
The model achieved a macro F1-score of 0.775. However, further improvements are needed to reduce misclassifications; in particular, short comments are disproportionately classified as Sexist Hate Speech.
|
bigmorning/whisper_syl_cv12_pad_lob100__0005
|
bigmorning
| 2023-08-25T07:51:33Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T07:51:23Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6239
- Train Accuracy: 0.0152
- Train Wermet: 0.7866
- Validation Loss: 3.0647
- Validation Accuracy: 0.0136
- Validation Wermet: 0.8282
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
JennnDexter/Translation
|
JennnDexter
| 2023-08-25T07:40:49Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"fr",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-25T07:12:41Z |
---
language:
- en
- fr
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: Translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Translation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books en-fr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LibrAI/bert-action-ro
|
LibrAI
| 2023-08-25T07:39:45Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T12:10:09Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-action-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-action-ro
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1567
- Accuracy: 0.958
- Precision: 0.949
- Recall: 0.941
- F1: 0.944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-----:|
| No log | 1.0 | 89 | 0.3700 | 0.876 | 0.836 | 0.809 | 0.815 |
| No log | 2.0 | 178 | 0.2057 | 0.936 | 0.927 | 0.924 | 0.924 |
| No log | 3.0 | 267 | 0.1567 | 0.958 | 0.949 | 0.941 | 0.944 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
natsusakiyomi/AsagaoMix
|
natsusakiyomi
| 2023-08-25T07:32:49Z | 13 | 7 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-29T03:42:53Z |
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
library_name: diffusers
---
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
|
MateiCv/spa-eng-pos-tagging-v6
|
MateiCv
| 2023-08-25T07:27:55Z | 180 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-25T07:27:22Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: spa-eng-pos-tagging-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spa-eng-pos-tagging-v6
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3128
- Accuracy: 0.9056
- Precision: 0.9032
- Recall: 0.8293
- F1: 0.8345
- Hamming Loss: 0.0944
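As a note on the metrics above: for single-label token classification, the Hamming loss is simply the fraction of positions where prediction and truth disagree, i.e. 1 − accuracy (0.0944 = 1 − 0.9056 here). A minimal sketch:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label positions where prediction and truth disagree."""
    assert len(y_true) == len(y_pred)
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

# One wrong tag out of four: Hamming loss 0.25, accuracy 0.75
y_true = ["NOUN", "VERB", "DET", "NOUN"]
y_pred = ["NOUN", "VERB", "DET", "ADJ"]
print(hamming_loss(y_true, y_pred))  # 0.25
```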
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Hamming Loss |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:------------:|
| 1.0141 | 1.0 | 1744 | 0.7804 | 0.7158 | 0.7328 | 0.6183 | 0.6345 | 0.2842 |
| 0.6292 | 2.0 | 3488 | 0.5384 | 0.7973 | 0.8111 | 0.7029 | 0.7213 | 0.2027 |
| 0.4438 | 3.0 | 5232 | 0.4236 | 0.8462 | 0.8346 | 0.7762 | 0.7732 | 0.1538 |
| 0.3626 | 4.0 | 6976 | 0.3856 | 0.8651 | 0.8524 | 0.7933 | 0.7903 | 0.1349 |
| 0.3141 | 5.0 | 8720 | 0.3697 | 0.8712 | 0.8688 | 0.7998 | 0.8028 | 0.1288 |
| 0.2575 | 6.0 | 10464 | 0.3689 | 0.8751 | 0.8758 | 0.8003 | 0.8058 | 0.1249 |
| 0.2117 | 7.0 | 12208 | 0.3329 | 0.8890 | 0.8832 | 0.8169 | 0.8184 | 0.1110 |
| 0.1864 | 8.0 | 13952 | 0.3235 | 0.9010 | 0.8946 | 0.8278 | 0.8293 | 0.0990 |
| 0.1555 | 9.0 | 15696 | 0.3128 | 0.9056 | 0.9032 | 0.8293 | 0.8345 | 0.0944 |
| 0.1322 | 10.0 | 17440 | 0.3311 | 0.9088 | 0.9010 | 0.8376 | 0.8377 | 0.0912 |
| 0.1111 | 11.0 | 19184 | 0.3394 | 0.9101 | 0.9081 | 0.8319 | 0.8383 | 0.0899 |
| 0.0874 | 12.0 | 20928 | 0.3472 | 0.9148 | 0.9100 | 0.8407 | 0.8440 | 0.0852 |
| 0.0659 | 13.0 | 22672 | 0.3635 | 0.9131 | 0.9072 | 0.8400 | 0.8422 | 0.0869 |
| 0.0608 | 14.0 | 24416 | 0.3560 | 0.9187 | 0.9140 | 0.8452 | 0.8482 | 0.0813 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
Datactive/BERT_sud_queries_classification
|
Datactive
| 2023-08-25T07:14:42Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T17:02:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Datactive/BERT_sud_queries_classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Datactive/BERT_sud_queries_classification
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0277
- Validation Loss: 0.0188
- Train F1: 0.9958
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1419, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
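The `PolynomialDecay` schedule in the optimizer config above uses `power: 1.0` and `cycle: False`, which reduces to a straight linear ramp from 2e-05 down to 0 over 1419 steps. A minimal sketch of its shape (an illustration, not the Keras implementation):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=1419, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay without cycling: interpolate initial_lr -> end_lr."""
    step = min(step, decay_steps)
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))     # initial learning rate, 2e-05
print(polynomial_decay(1419))  # fully decayed: 0.0
```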
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.0277 | 0.0188 | 0.9958 | 0 |
### Framework versions
- Transformers 4.29.0.dev0
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
raygx/BERT-NepSA-domainAdapt
|
raygx
| 2023-08-25T06:55:43Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:raygx/BertClassifier4NepaliNews",
"base_model:finetune:raygx/BertClassifier4NepaliNews",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T14:47:00Z |
---
license: mit
base_model: raygx/BertClassifier4NepaliNews
tags:
- generated_from_keras_callback
model-index:
- name: BERT-NepSA-domainAdapt
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT-NepSA-domainAdapt
This model is a fine-tuned version of [raygx/BertClassifier4NepaliNews](https://huggingface.co/raygx/BertClassifier4NepaliNews) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 9.99e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bogeumkim/polyglot-1.3b-qlora-emotion-classification
|
bogeumkim
| 2023-08-25T06:35:41Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T06:23:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
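The quantization settings listed above can be reproduced when reloading the base model for this adapter. A minimal sketch using `transformers`' `BitsAndBytesConfig` (the base checkpoint to pair with this adapter is not stated in this card, so only the config construction is shown):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings used during training (see list above)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass this as `quantization_config=bnb_config` to
# AutoModelForCausalLM.from_pretrained(...) before attaching the PEFT adapter.
```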
### Framework versions
- PEFT 0.5.0
|
rachit221195/rachit-trained-xl-colab
|
rachit221195
| 2023-08-25T06:27:16Z | 6 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-25T06:04:55Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks human
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rachit221195/rachit-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks human using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
nishant-glance/model-sd-1-4-priorp-lowlr-unet
|
nishant-glance
| 2023-08-25T06:19:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-25T05:41:42Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - nishant-glance/model-sd-1-4-priorp-lowlr-unet
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
|
eunbi-jeong/gpt2
|
eunbi-jeong
| 2023-08-25T06:19:07Z | 0 | 0 | null |
[
"translation",
"en",
"dataset:hellaswag",
"region:us"
] |
translation
| 2023-08-25T06:17:58Z |
---
datasets:
- hellaswag
language:
- en
pipeline_tag: translation
---
|
VkStyle/roma
|
VkStyle
| 2023-08-25T06:18:05Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-22T20:25:11Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
54data/xlm-roberta-base-finetuned-panx-de-fr
|
54data
| 2023-08-25T06:16:07Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-25T06:03:26Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- F1: 0.8588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2908 | 1.0 | 715 | 0.1909 | 0.8125 |
| 0.1466 | 2.0 | 1430 | 0.1613 | 0.8492 |
| 0.0945 | 3.0 | 2145 | 0.1658 | 0.8588 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nithiroj/wav2vec2-base-finetuned-gtzan
|
nithiroj
| 2023-08-25T06:07:33Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-25T03:44:15Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6608
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9578 | 1.0 | 113 | 1.8537 | 0.28 |
| 1.4644 | 2.0 | 226 | 1.5867 | 0.5 |
| 0.9624 | 3.0 | 339 | 1.1706 | 0.66 |
| 0.8329 | 4.0 | 452 | 0.8807 | 0.76 |
| 0.5047 | 5.0 | 565 | 0.9421 | 0.73 |
| 0.4525 | 6.0 | 678 | 0.7879 | 0.73 |
| 0.5111 | 7.0 | 791 | 0.6493 | 0.79 |
| 0.1836 | 8.0 | 904 | 0.5938 | 0.85 |
| 0.1806 | 9.0 | 1017 | 0.5787 | 0.84 |
| 0.1338 | 10.0 | 1130 | 0.6608 | 0.81 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
hihisu1231/mbti_230825_newdata
|
hihisu1231
| 2023-08-25T06:03:01Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-25T04:08:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: polyglot-1.3b-koalpaca-v1.1a-rtx3090_0825_newdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polyglot-1.3b-koalpaca-v1.1a-rtx3090_0825_newdata
This model is a fine-tuned version of [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
cx-olquinjica/angXLMR
|
cx-olquinjica
| 2023-08-25T05:47:00Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"umb",
"lua",
"cjk",
"kmb",
"kg",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-25T03:21:00Z |
---
language:
- umb
- lua
- cjk
- kmb
- kg
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
darkbloodevil/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
darkbloodevil
| 2023-08-25T05:43:48Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T05:43:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
64FC/whisper-small-dv
|
64FC
| 2023-08-25T05:30:50Z | 84 | 1 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T03:35:28Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Francois Chaumet
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 11.703237472615363
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Francois Chaumet
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- Wer Ortho: 59.2103
- Wer: 11.7032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
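The hyperparameters above map directly onto `Seq2SeqTrainingArguments`; a hedged sketch of how this run might be configured (the `output_dir` and any options not listed above are assumptions):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-dv",   # hypothetical output directory
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=1000,                  # training_steps above
    seed=42,
)
```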
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1231 | 1.63 | 500 | 0.1689 | 61.9054 | 13.0142 |
| 0.046 | 3.26 | 1000 | 0.1658 | 59.2103 | 11.7032 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkimds/ppo-LunarLander-v2
|
dkimds
| 2023-08-25T05:17:39Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-01T04:24:12Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -132.01 +/- 71.74
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'dkimds/ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
ccore/smart-gpt2-test
|
ccore
| 2023-08-25T04:46:08Z | 0 | 3 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-08-21T20:05:01Z |
---
license: gpl-3.0
---
```bash
./gpt-2 -m ggml-model.bin -p "[INSTRUCTION] your prompt [RESPONSE]" -n 1000 --top_p 1
```
GGML model. More information about the framework: https://github.com/ggerganov/ggml/tree/master/examples/gpt-2
|
vodkaslime/codellama-7b-hf
|
vodkaslime
| 2023-08-25T04:09:00Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"code",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-25T04:03:43Z |
---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 7B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
Make sure to use this temporary branch of transformers until support is fully merged and released.
```bash
pip install git+https://github.com/huggingface/transformers.git@refs/pull/25740/head accelerate
```
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base model of 7B parameters.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
zhang-yice/spt-absa-bert-10k
|
zhang-yice
| 2023-08-25T04:06:13Z | 33 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T10:36:20Z |
---
license: cc-by-4.0
---
## SPT-ABSA
We continue to pre-train BERT-base via sentiment-enhanced pre-training (SPT).
- Title: An Empirical Study of Sentiment-Enhanced Pre-Training for Aspect-Based Sentiment Analysis
- Author: Yice Zhang, Yifan Yang, Bin Liang, Shiwei Chen, Bing Qin, and Ruifeng Xu
- Conference: Findings of ACL 2023 (Long)
GitHub Repository: https://github.com/HITSZ-HLT/SPT-ABSA
### What Did We Do?
Aspect-Based Sentiment Analysis (ABSA) is an important problem in sentiment analysis.
Its goal is to recognize opinions and sentiments towards specific aspects from user-generated content.
Many research efforts leverage pre-training techniques to learn sentiment-aware representations and achieve significant gains in various ABSA tasks.
We conduct an empirical study of SPT-ABSA to systematically investigate and analyze the effectiveness of the existing approaches.
We mainly concentrate on the following questions:
- (a) what impact do different types of sentiment knowledge have on downstream ABSA tasks?;
- (b) which knowledge integration method is most effective?; and
- (c) does injecting non-sentiment-specific linguistic knowledge (e.g., part-of-speech tags and syntactic relations) into pre-training have positive impacts?
Based on the experimental investigation of these questions, we eventually obtain a powerful sentiment-enhanced pre-trained model.
This model comes in two versions, namely [zhang-yice/spt-absa-bert-400k](https://huggingface.co/zhang-yice/spt-absa-bert-400k) and [zhang-yice/spt-absa-bert-10k](https://huggingface.co/zhang-yice/spt-absa-bert-10k), which integrate three types of knowledge:
- aspect words: masking aspects' context and predicting them.
- review's rating score: rating prediction.
- syntax knowledge:
- part-of-speech,
- dependency direction,
- dependency distance.
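As a rough illustration of the first knowledge type (aspect masking), the pre-training input can be pictured like this. This is only a sketch of the idea, not the repository's actual preprocessing:

```python
# Toy illustration of aspect-word masking for sentiment-enhanced pre-training.
# NOT the repository's actual preprocessing -- just a sketch of the idea.
review = "The battery life is great but the screen is too dim ."
aspects = ["battery life", "screen"]   # aspect terms assumed to be pre-extracted

masked = review
for aspect in aspects:
    # Replace each aspect term with one [MASK] token per word, BERT-style;
    # the model is then trained to predict the masked aspect words.
    masked = masked.replace(aspect, " ".join(["[MASK]"] * len(aspect.split())))

print(masked)
# The [MASK] [MASK] is great but the [MASK] is too dim .
```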
### Experimental Results
<img width="75%" alt="image" src="https://github.com/HITSZ-HLT/SPT-ABSA/assets/9134454/38fc2db0-6ccf-47a7-a93c-cf54667e1a23">
<img width="75%" alt="image" src="https://github.com/HITSZ-HLT/SPT-ABSA/assets/9134454/20c5a976-014e-433f-a2ec-4bb259e5a382">
|
LarryAIDraw/rosaria1
|
LarryAIDraw
| 2023-08-25T03:58:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-25T03:54:54Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/106421/rosaria-genshin-impact
|
LarryAIDraw/rosaria
|
LarryAIDraw
| 2023-08-25T03:58:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-25T03:52:53Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/101711/rosaria-genshin-impact-or-goofy-ai
|
peteryushunli/codeparrot-ds
|
peteryushunli
| 2023-08-25T03:45:36Z | 143 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-25T02:41:41Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
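For intuition, the `cosine` scheduler with 1,000 warmup steps can be sketched as linear warmup followed by cosine decay. This is a simplified illustration, not the exact Trainer implementation, and the total step count below is an assumption:

```python
import math

def cosine_lr(step, max_lr=5e-4, warmup_steps=1000, total_steps=10_000):
    """Linear warmup to max_lr, then cosine decay to zero (a sketch of the
    `cosine` scheduler with `lr_scheduler_warmup_steps: 1000`)."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0))       # 0.0 at the start of warmup
print(cosine_lr(1000))    # peak learning rate: 5e-4
print(cosine_lr(10_000))  # decays to 0.0 at the end of training
```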
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AdanLee/Reinforce-CartPole-v1
|
AdanLee
| 2023-08-25T03:45:01Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T03:44:49Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
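At the core of REINFORCE is the discounted return computed for each timestep of an episode. A minimal sketch:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the return G_t for each timestep, iterating backwards."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

These per-step returns are what weight the log-probabilities in the policy-gradient update.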
|
rohn132/ppo-Pyramids
|
rohn132
| 2023-08-25T03:40:18Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-25T03:37:43Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rohn132/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_train_halfcheetah_high-2508_0228-99
|
dt-and-vanilla-ardt
| 2023-08-25T03:36:57Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T01:29:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_train_halfcheetah_high-2508_0228-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_train_halfcheetah_high-2508_0228-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
spear1/q-Taxi-v3
|
spear1
| 2023-08-25T03:31:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T03:31:56Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="spear1/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
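Once loaded, a Q-table is typically used with (epsilon-)greedy action selection. A toy sketch with made-up Q-values (not the actual Taxi-v3 table):

```python
import random

# Toy Q-table for a 4-state, 2-action problem (illustrative values only).
qtable = [
    [0.1, 0.9],
    [0.8, 0.2],
    [0.5, 0.5],
    [0.0, 1.0],
]

def act(state, epsilon=0.0):
    """Epsilon-greedy action selection over a Q-table row."""
    if random.random() < epsilon:
        return random.randrange(len(qtable[state]))   # explore
    row = qtable[state]
    return row.index(max(row))                        # exploit: greedy action

print(act(0))  # 1 -- greedy action when epsilon=0
```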
|
spear1/q-FrozenLake-v1-4x4-noSlippery
|
spear1
| 2023-08-25T03:30:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T03:30:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="spear1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DataMonke/bert-base-uncased-finetuned-review-sentiment-analysis
|
DataMonke
| 2023-08-25T03:27:20Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:amazon_us_reviews",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-21T14:55:06Z |
---
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
datasets:
- amazon_us_reviews
---
# E-Commerce Product Sentiment Analysis
This model classifies texts into star ratings ranging from 1 to 5. It uses a BERT base, further fine-tuned on Amazon e-commerce clothing product reviews.
|
AdanLee/dqn-SpaceInvadersNoFrameskip-v4
|
AdanLee
| 2023-08-25T03:15:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T03:15:07Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 690.50 +/- 213.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AdanLee -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AdanLee -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AdanLee
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
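For intuition, the `exploration_fraction` and `exploration_final_eps` values above define a linear epsilon decay over the first 10% of training. A sketch of that schedule (not SB3's internal code):

```python
def exploration_eps(step, n_timesteps=1_000_000,
                    exploration_fraction=0.1, exploration_final_eps=0.01,
                    initial_eps=1.0):
    """Linear decay from initial_eps to final_eps over the first
    `exploration_fraction` of training, then constant."""
    progress = min(step / (exploration_fraction * n_timesteps), 1.0)
    return initial_eps + progress * (exploration_final_eps - initial_eps)

print(exploration_eps(0))        # 1.0 -- fully random at the start
print(exploration_eps(50_000))   # 0.505 -- halfway through the decay window
print(exploration_eps(500_000))  # 0.01 -- decay finished after 100k steps
```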
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
DunnBC22/trocr-base-handwritten-OCR-handwriting_recognition_v2
|
DunnBC22
| 2023-08-25T03:15:17Z | 487 | 14 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"image-to-text",
"en",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-04-17T00:13:38Z |
---
tags:
- generated_from_trainer
model-index:
- name: trocr-base-handwritten-OCR-handwriting_recognition_v2
results: []
language:
- en
metrics:
- cer
pipeline_tag: image-to-text
---
# trocr-base-handwritten-OCR-handwriting_recognition_v2
This model is a fine-tuned version of [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten).
It achieves the following results on the evaluation set:
- Loss: 0.2470
- CER: 0.0360
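The CER above is the character-level edit distance between prediction and reference, divided by the reference length. A minimal sketch (the `jiwer`/`evaluate` implementations used in practice handle more edge cases):

```python
def cer(prediction: str, reference: str) -> float:
    """Character Error Rate: Levenshtein edit distance over reference length."""
    m, n = len(prediction), len(reference)
    dp = list(range(n + 1))          # one-row dynamic-programming table
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            if prediction[i - 1] == reference[j - 1]:
                dp[j] = prev                          # match: no edit
            else:
                dp[j] = 1 + min(prev, dp[j], dp[j - 1])  # sub / del / ins
            prev = cur
    return dp[n] / n

print(cer("hallo", "hello"))  # 0.2 -- one substitution over five characters
```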
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Optical%20Character%20Recognition%20(OCR)/Handwriting%20Recognition/Handwriting%20Recognition_v2/Mini%20Handwriting%20OCR%20Project.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology. You are welcome to test and experiment with this model, but it is at your own risk/peril.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/ssarkar445/handwriting-recognitionocr
_Character Length for Training Dataset:_ (image: `Input Character Length Distribution for Training Dataset.png`, from the project repository linked above)

_Character Length for Evaluation Dataset:_ (image: `Input Characgter Length Distribution for Evaluation Dataset.png`, from the project repository linked above)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4292 | 1.0 | 2500 | 0.4332 | 0.0679 |
| 0.2521 | 2.0 | 5000 | 0.2767 | 0.0483 |
| 0.1049 | 3.0 | 7500 | 0.2470 | 0.0360 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.12.1
|
DunnBC22/trocr-large-printed-cmc7_tesseract_MICR_ocr
|
DunnBC22
| 2023-08-25T03:15:01Z | 77 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"image-to-text",
"en",
"base_model:microsoft/trocr-large-printed",
"base_model:finetune:microsoft/trocr-large-printed",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-07-23T18:53:50Z |
---
base_model: microsoft/trocr-large-printed
tags:
- generated_from_trainer
model-index:
- name: trocr-large-printed-cmc7_tesseract_MICR_ocr
results: []
license: bsd-3-clause
language:
- en
metrics:
- cer
pipeline_tag: image-to-text
---
# trocr-large-printed-cmc7_tesseract_MICR_ocr
This model is a fine-tuned version of [microsoft/trocr-large-printed](https://huggingface.co/microsoft/trocr-large-printed).
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Optical%20Character%20Recognition%20(OCR)/Tesseract%20MICR%20(CMC7%20Dataset)/TrOCR_cmc7_tesseractMICR.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology. You are welcome to test and experiment with this model, but it is at your own risk/peril.
## Training and evaluation data
Dataset Source: https://github.com/DoubangoTelecom/tesseractMICR/tree/master/datasets/cmc7
**Histogram of Label Character Lengths**

(image: `Histogram of Label Character Length.png`, from the project repository linked above)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
The Character Error Rate (CER) for this model is 0.004970720413999727.
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Chirayu/nl2kql
|
Chirayu
| 2023-08-25T03:06:43Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"code",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-11T23:00:42Z |
---
license: mit
tags:
- code
---
# What does this model do?
This model converts a natural language input into a Kusto (KQL) query. It is a fine-tuned CodeT5+ 220M and is part of the nl2query repository at https://github.com/Chirayu-Tripathi/nl2query.
You can use this model via the GitHub repository or with the following code. More information can be found in the repository.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model = AutoModelForSeq2SeqLM.from_pretrained("Chirayu/nl2kql")
tokenizer = AutoTokenizer.from_pretrained("Chirayu/nl2kql")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
textual_query = '''kusto: find the session ids which have duration greater than 10 and having Manoj Raheja as the owner | conferencesessions : conference, sessionid, session_title, session_type, owner, participants, URL, level, session_location, starttime, duration, time_and_duration, kusto_affinity'''
def generate_query(
    textual_query: str,
    num_beams: int = 10,
    max_length: int = 128,
    repetition_penalty: float = 2.5,
    length_penalty: float = 1.0,
    early_stopping: bool = True,
    top_p: float = 0.95,
    top_k: int = 50,
    num_return_sequences: int = 1,
) -> str:
input_ids = tokenizer.encode(
textual_query, return_tensors="pt", add_special_tokens=True
)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_ids = input_ids.to(device)
generated_ids = model.generate(
input_ids=input_ids,
num_beams=num_beams,
max_length=max_length,
repetition_penalty=repetition_penalty,
length_penalty=length_penalty,
early_stopping=early_stopping,
top_p=top_p,
top_k=top_k,
num_return_sequences=num_return_sequences,
)
query = [
tokenizer.decode(
generated_id,
skip_special_tokens=True,
clean_up_tokenization_spaces=True,
)
for generated_id in generated_ids
][0]
    return query

# Generate a KQL query for the textual query defined above
print(generate_query(textual_query))
```
|
tyayoi/xlm-roberta-base-finetuned-panx-all
|
tyayoi
| 2023-08-25T03:05:19Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T11:01:34Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1761
- F1: 0.8555
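The F1 above comes from named-entity evaluation (typically seqeval). As a simplified illustration, F1 over sets of predicted vs. gold entity spans can be computed as follows; the spans below are made up:

```python
def f1_score(predicted, gold):
    """F1 over sets of predicted vs. gold entity spans."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)           # exact span-and-label matches
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {("PER", 0, 2), ("LOC", 5, 6)}
pred = {("PER", 0, 2), ("LOC", 4, 6)}    # one span matches, one has a wrong boundary
print(f1_score(pred, gold))  # 0.5
```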
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.303 | 1.0 | 835 | 0.1887 | 0.8212 |
| 0.1582 | 2.0 | 1670 | 0.1708 | 0.8409 |
| 0.1034 | 3.0 | 2505 | 0.1761 | 0.8555 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
Nilkanth014/ppo-LunarLander-v2
|
Nilkanth014
| 2023-08-25T03:00:07Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T21:16:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.56 +/- 15.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
tyayoi/xlm-roberta-base-finetuned-panx-de-fr
|
tyayoi
| 2023-08-25T02:53:58Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T09:18:03Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1642
- F1: 0.8561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2932 | 1.0 | 715 | 0.1829 | 0.8220 |
| 0.1486 | 2.0 | 1430 | 0.1612 | 0.8463 |
| 0.0925 | 3.0 | 2145 | 0.1642 | 0.8561 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
rohn132/ppo-SnowballTarget
|
rohn132
| 2023-08-25T02:49:11Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-25T02:49:03Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rohn132/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lukelarue/dqn-SpaceInvadersNoFrameskip-v4
|
lukelarue
| 2023-08-25T02:48:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T02:48:25Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 626.50 +/- 166.78
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lukelarue -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lukelarue -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lukelarue
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
debadas/dog
|
debadas
| 2023-08-25T02:35:28Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-25T02:28:07Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - debadas/dog
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
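For intuition, LoRA reparameterizes a weight update as a low-rank product ΔW = B·A. A minimal numpy sketch (dimensions are illustrative, not those of the UNet):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2                  # r << d: the low-rank bottleneck

W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection, init to zero

delta = B @ A                             # rank-<=r update
W_adapted = W + delta

# With B initialised to zero, the adapted layer starts out identical to the base.
print(np.allclose(W_adapted, W))  # True
```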
|
ad019el/m2m100_418M-finetuned-tq-to-ar-1
|
ad019el
| 2023-08-25T02:24:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:ad019el/m2m100_418M-finetuned-tq-to-ar",
"base_model:finetune:ad019el/m2m100_418M-finetuned-tq-to-ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-23T02:51:54Z |
---
base_model: ad019el/m2m100_418M-finetuned-tq-to-ar
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-tq-to-ar-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-tq-to-ar-1
This model is a fine-tuned version of [ad019el/m2m100_418M-finetuned-tq-to-ar](https://huggingface.co/ad019el/m2m100_418M-finetuned-tq-to-ar) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2002
- Bleu: 3.6349
- Gen Len: 35.5271
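The BLEU score above is computed by the evaluation pipeline (typically sacrebleu, at corpus level). As a simplified single-reference, sentence-level illustration without smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: clipped n-gram precisions,
    geometric mean, and brevity penalty. No smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # any zero precision collapses the geometric mean
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return brevity * geo_mean

print(bleu("the cat sat on the mat".split(), "the cat sat on the mat".split()))  # 1.0
```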
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.7537 | 0.71 | 500 | 2.2710 | 4.2969 | 35.4312 |
| 2.6442 | 1.42 | 1000 | 2.2373 | 4.0784 | 35.1062 |
| 2.6329 | 2.13 | 1500 | 2.2257 | 3.8894 | 36.225 |
| 2.564 | 2.84 | 2000 | 2.2210 | 3.5513 | 36.076 |
| 2.5352 | 3.56 | 2500 | 2.2151 | 3.7339 | 35.0885 |
| 2.4991 | 4.27 | 3000 | 2.2078 | 3.4662 | 36.3333 |
| 2.4782 | 4.98 | 3500 | 2.2100 | 3.3332 | 36.4062 |
| 2.4363 | 5.69 | 4000 | 2.2085 | 3.3587 | 36.3135 |
| 2.4411 | 6.4 | 4500 | 2.2034 | 3.8744 | 34.5073 |
| 2.4002 | 7.11 | 5000 | 2.2036 | 3.6693 | 36.3448 |
| 2.3841 | 7.82 | 5500 | 2.2030 | 3.7486 | 35.076 |
| 2.3619 | 8.53 | 6000 | 2.1970 | 3.5687 | 35.8271 |
| 2.3627 | 9.25 | 6500 | 2.2016 | 3.5394 | 35.3583 |
| 2.3451 | 9.96 | 7000 | 2.1996 | 3.5863 | 34.9271 |
| 2.3323 | 10.67 | 7500 | 2.2002 | 3.6349 | 35.5271 |
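The Bleu column above is reported on sacrebleu's 0-100 scale. As a rough illustration of what the metric measures — not the exact scorer used by the trainer — here is a simplified, smoothed sentence-level BLEU sketch in plain Python (clipped n-gram precisions, geometric mean, brevity penalty):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(hypothesis, reference, max_n=4):
    # Clipped n-gram precisions for n = 1..4, geometric mean, brevity penalty.
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum(min(count, ref_counts[gram]) for gram, count in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smoothing to avoid log(0)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return brevity * geo_mean
```

A perfect match scores 1.0 (i.e. 100 on the percent scale); disjoint sentences score near 0.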
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
tyayoi/xlm-roberta-base-finetuned-panx-de
|
tyayoi
| 2023-08-25T02:13:35Z | 135 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T09:01:45Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8606487530534567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1409
- F1: 0.8606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
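The `linear` scheduler above decays the learning rate from its initial value to zero over the total number of optimizer steps (525 steps per epoch × 3 epochs = 1575 here, per the results table). A minimal sketch of that schedule, assuming no warmup as configured:

```python
def linear_lr(step, total_steps, base_lr=5e-05, warmup_steps=0):
    """HF-style 'linear' schedule: optional linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / max(warmup_steps, 1)
    remaining = (total_steps - step) / max(total_steps - warmup_steps, 1)
    return base_lr * max(0.0, remaining)

TOTAL_STEPS = 525 * 3  # optimizer steps per epoch x number of epochs
```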
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2572 | 1.0 | 525 | 0.1538 | 0.8187 |
| 0.1233 | 2.0 | 1050 | 0.1475 | 0.8492 |
| 0.0796 | 3.0 | 1575 | 0.1409 | 0.8606 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
TaeLeeKyung/KoSimCSE-roberta-multitask-marketing-lms
|
TaeLeeKyung
| 2023-08-25T02:09:51Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"sentence-similarity",
"marketing",
"sts",
"nli",
"ko",
"dataset:TaeLeeKyung/ko_marketing_lms_dataset",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-25T01:58:20Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- marketing
- sts
- nli
datasets:
- TaeLeeKyung/ko_marketing_lms_dataset
language:
- ko
---
# TaeLeeKyung/KoSimCSE-roberta-multitask-marketing-lms
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('TaeLeeKyung/KoSimCSE-roberta-multitask-marketing-lms')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TaeLeeKyung/KoSimCSE-roberta-multitask-marketing-lms')
model = AutoModel.from_pretrained('TaeLeeKyung/KoSimCSE-roberta-multitask-marketing-lms')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
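Either way, the resulting embeddings are typically compared with cosine similarity for semantic search; a minimal pure-Python sketch of that comparison:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```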
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=TaeLeeKyung/KoSimCSE-roberta-multitask-marketing-lms)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 702 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 141,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': True}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Unmand/procare_referrer_org_build2
|
Unmand
| 2023-08-25T02:04:44Z | 0 | 0 |
spacy
|
[
"spacy",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-08-25T01:36:02Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_procare_referrer_organisation
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_procare_referrer_organisation` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.4,<3.6.0` |
| **Default Pipeline** | `textcat_multilabel` |
| **Components** | `textcat_multilabel` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (726 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat_multilabel`** | `H D PROJECTS PTY LTD`, `McCabe Curwood`, `Dept of Education, Skills & Employment`, `StateCover Mutual Limited`, `Perth Orthopaedic & Sports Medicine`, `Queensland Child Care Service Pty Ltd Ttee`, `Allianz Australia Insurance Limited c/- Jensen McConaghy Lawyers`, `Catholic Care Diocese of Broken Bay`, `Helping Hand New Aged Care`, `Suncorp Life`, `Qantas Airways Limited`, `Department of Defence`, `Master Builders Association of SA`, `HWL Ebsworth Lawyers`, `Alexander Watson`, `Zoetis`, `RSL Care`, `P&N Bank`, `University of NSW`, `Uber Technologies, Inc.`, `Finlay Plumbing Services Pty Ltd`, `Hays Specialist Recruitment`, `KENNARDS HIRE PTY LIMITED`, `Carer Solutions Australia`, `Unitingcare`, `No. 1 Riverside Quay Proprietary Limited`, `Gallagher Basset`, `Department of the Chief MInister and Cabinet`, `CHEP Australia`, `Minda Incorporated`, `The Star`, `Tas Water`, `Feros Care`, `Roshana Group`, `Atradius Crédito y Caución S.A de Seguros y Reaseguros`, `Services Australia`, `RT Consulting`, `The Australian Electoral Commission`, `Federal Court of Australia`, `NRMA INSURANCE`, `Catholic Education Office`, `Svitzer Australia Pty Ltd`, `QBE acting as the agent of NSW Self Insurance Corporation`, `LAWRENCE & HANSON`, `UnitingCare Queensland`, `LibertyGFG`, `Australian Tax Office`, `Alvaro Transport Pty Ltd`, `GIO Workers Compensation ACT`, `Cso Diocese Of Broken Bay`, `Glencore`, `EASTERN HOSPITAL`, `BOC Limited, a member of the Linde Group`, `INVOCARE AUSTRALIA PTY LIMITED`, `UNITRANS ASIA PACIFIC PTY LTD`, `Services Australia (Dept of Human Services)`, `VEOLIA ENVIRONMENTAL SERVICES (AUSTRALIA) PTY LTD `, `Vickilynn Pty Ltd`, `Coles Team Cover`, `MLC Life Insurance`, `Sparke Helmore Lawyers`, `RSL Lifecare Limited`, `QBE Workers Compensation TAS`, `Kimberley Clark Australia`, `The Personnel Group Ltd`, `Insurance Australia Group`, `Canberra Sand & Gravel`, `Viva Energy Australia Pty Ltd`, `Moran Aged Care Engadine`, `Australian 
Taxation Office`, `Youis Group Pty Ltd`, `Cleanaway`, `Mosaic Brands (Rockmans)`, `Children Hospital Foundation`, `Civil Aviation Safety Authority`, `QBE Workers Compensation WA`, `United Protestant Association`, `PSC Capital Insurance Brokers`, `Woolworths Group Limited`, `Kilcoy Global Foods`, `American Express Australia Limited`, `Palios Meegan Nicholson`, `Uniting`, `Coles Group Supply Chain Pty Ltd`, `QBE`, `OBE Organic`, `Cyprium Metals Limited`, `Kincare Health Services Pty Ltd`, `StateCover Mutual Ltd`, `FIRE RESCUE VICTORIA`, `N2N Claims Solutions`, `WesFarmers – Group TeamCover`, `NDIS Quality and Safeguards Commission`, `HD Projects Pty Ltd`, `St Finn Barr's Catholic Primary School - Lanceston`, `Power and Water Corporation`, `EML VIC Pty Ltd`, `Wanton Kearney`, `Kmart Australia Ltd`, `Territory Families – Housing & Communities`, `Calvary Community Care`, `Sedgwick`, `Leonora Contracting P/L`, `NSW Health Pathology`, `Kilcoy Pastoral Company Ltd`, `GIO CTP ACT`, `DXC Claims Management Services - VIC`, `Schindler Lifts Australia Pty Ltd`, `Meridian Lawyers`, `GIO Workers Compensation WA`, `AUB Group Limited`, `Coateshire`, `Aurizon`, `JWLand`, `Trusted Support Coordination`, `Gosford Quarries Pty Ltd`, `GIO NSW Workers Compensation`, `DESE`, `Busways Group`, `Gallagher Bassett Workers Compensation NSW`, `Allianz Australia Insurance Limited C/- McInnes Wilson Lawyers`, `oOh!Media`, `West Gate Tunnel Project`, `KOMATSU MARKETING SUPPORT AUST`, `Mills Oakley Lawyers`, `Hall & Wilcox`, `Skybridge Group Pty Limited`, `Retirement Living Business & Financial Services`, `Allianz Workers Compensation NT`, `Environmental Industries Pty Ltd`, `EML Workers Insurance NSW`, `Department of Agriculture, Water and the Environment`, `MS Australia`, `CSIRO`, `Orange Health Service`, `AHI Insurance`, `Bupa`, `Allianz Australia Workers Compensation (Victoria) Ltd`, `Cappello Civil Contracting Services Pty Ltd`, `LAF Group`, `RTozerconsulting`, `St Michaels College`, 
`Gallagher Bassett for Opal Healthcare`, `Department of Families, Fairness and Housing`, `WESTHAVEN LIMITED`, `Integrity Care`, `GPC Asia Pacific`, `Department of Primary Industries`, `Mosaic Brands Limited`, `QBE Workers Compensation NT`, `Coredev`, `South Western Sydney Local Health District`, `CGU Workers Compensation ACT`, `Tas Prison Service`, `Sonic Healthcare`, `Workcover C/BT Lawyers`, `PSC WCS`, `CPB Contractors Pty Ltd`, `Cookie Steelfixing and Construction`, `Warner Bros`, `CGU Workers Compensation NT`, `CMET`, `AnglicareSA`, `St Vincent’s Care Services Carseldine`, `Tasmanian Catholic Education Office`, `Allianz Australia Insurance Ltd`, `Roussos Legal Advisory`, `BGIS Technical Services`, `AAMI NSW CTP`, `Wotton Kearney`, `Galllgher Bassett Workers Compensation VIC`, `Brisbane Fire Pty Ltd`, `QBE Workers Compensation NSW`, `Sunshine Coast Hospital and Health Service`, `Oaks Hotels & Resorts Limited - 9004`, `Ausgrid`, `Boral Limited`, `Aerison Pty Ltd`, `Cooper Grace Ward Lawyers`, `Hsswa Pty Ltd`, `Weir Minerals Australia Ltd`, `Labour Force Pty Ltd`, `Barry Nilsson Lawyers`, `Liberty Oil Australia Pty Ltd`, `ABPhillips`, `Austral Risk`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer`, `OCEAN GARDENS INC`, `Roshana Group Pty Ltd`, `GIO CTP NSW`, `Lachlan Shire Council`, `Allianz Workers Compensation WA`, `United Equipment Pty Ltd`, `PFD FOOD SERVICES PTY LTD`, `Phoenix Insurance Brokers`, `Blumers`, `Department of Home Affairs`, `Anglo Coal (Grosvenor Management) Pty Ltd c/- Ashurst Australia`, `Anglicare Southern QLD`, `Lifetime Support`, `The Trustee for The Roshana Family Trust`, `Zurich Australian Insurance Ltd`, `Dept of Education & Training - School Cleaners`, `DXC Claims Management Services`, `The Medical Clinic Millicent`, `Melbourne Water`, `COMPASS GROUP AUSTRALIA PTY LTD`, `Andreasens Green NSW Andreasens Green QLD`, `Astridge and Murray`, `EML Plus`, `Philips Electronics P/L`, `ISS Facility Services 
Australia Ltd`, `Busy Bees Early Learning Australia Pty Ltd`, `Coates Hire`, `Sydney Trains`, `Catholic Schools Parramatta Diocese Limited`, `CGU Workers Compensation TAS`, `Mercer`, `COFFS HARBOUR SUPPORT SERVICES LTD`, `I-MED GROUP`, `One Path`, `Transport Accident Commission`, `Department of Corporate and Digital Development Northern Territory Government`, `Boral Insurance Pty Limited`, `Department of Justice`, `AB Phillips Pty Ltd`, `Irwin & Hartshorn`, `Pacific Labour Facility`, `Suncorp Staff Pty Ltd`, `Vilis Bakery`, `NRMA`, `The Hospitals Contribution Fund Of Australia Ltd`, `SCE Group`, `Our Lady of Mercy College Parramatta`, `DOSER Freight Forwarding`, `Employers Mutual NSW Limited`, `Cappello Hydraulics & Civil Pty Ltd`, `Buderim Kindergarten`, `ACT Recycling Pty Ltd`, `Bupa Medical Visa Services`, `Allianz CTP SA`, `Auspost`, `Liverpool Plains Shire Council`, `Corporate Services Network Pty Ltd`, `DP World Australia Pty Ltd`, `Complete Personnel Recruitment`, `DXC Integrated Services`, `QBE Workers Compensation - ACT`, `BINGO PTY LTD`, `The Arnott’s Group`, `EML Agent for icare Workers Insurance`, `IHG Irwin Hartshorn Group`, `Civilmart`, `ORAMS Agencies`, `Liberty GFG`, `QBE NSW Treasury Managed Fund`, `EML (NSW Treasury Managed Fund)`, `Hays Recruitment`, `Mosaic Group Ltd Pty`, `BlueCare`, `Gallagher Bassett Services`, `Ernst & Young (EY)`, `Cootharinga North Queensland`, `BUPA AGED CARE AUSTRALIA P/L`, `Toll Self Insurance`, `Corporate Services Network`, `ACT GOV`, `SA Health Northern Adelaide Local Health Network`, `Inghams Enterprises Pty Ltd`, `Centrewest Insurance Brokers`, `Department of Foreign Affairs and Trade (DFAT)`, `RSL Life Care`, `Star of the Sea School`, `Chubb`, `Suncorp CTP QLD`, `JACANA ENERGY`, `Toll Group`, `Corporeal Health`, `Mosaic Brands (Noni B Limited)`, `QBE CTP Insurance`, `Q Super`, `Bartier Perry Lawyers`, `Queensland Government`, `Department of Health and Human Services Tasmania`, `Hall and Wilcox Lawyers`, `Griffin 
Coal`, `Cappello Commercial Hydraulics and Civil Pty Ltd`, `Bolton Clarke`, `Australian Unity`, `Gallagher Bassett Services Pty Ltd`, `St John Ambulance Western Australia Ltd`, `Geocon Group Pty Ltd`, `Allianz Australia Insurance Limited c/ Jensen McConaghy Lawyers`, `UAA Pty Ltd`, `Tamex Transport Services Pty Ltd`, `WFI Insurance Limited`, `Programmed Skilled Workforce Limited`, `Bartier Perry`, `Australian Competition & Consumer Commission`, `Queensland Health`, `Holcim (Australia) Pty Ltd`, `Southern NSW Local Health District`, `Blue Care`, `Gallagher Bassett Workers Compensation VIC`, `Point Insurance`, `Workers Compensation & Risk Specialists (WCRS) services render for Philips electronics P/L`, `Country Wide Insurance Brokers (CWIB)`, `Allianz Australia Insurance Ltd C/ - Moray and Agnew Lawyers`, `CHUBB AUSTRALASIA`, `Sirius Support & Industrious People`, `BORG MANUFACTURING P/L`, `Department of Climate Change, Energy, the Environment and Water`, `Hireup Pty. Ltd.`, `Workcover QLD`, `Greenham Tasmania `, `Fantastic Furniture Ltd`, `CGU Workers Compensation VIC`, `Lawson Risk Management Services Pty Ltd`, `SGP Civil`, `Moray & Agnew`, `Edwards Michael Lawyers`, `Jensen McConarchy`, `Cyprium Metals`, `Hunter New England Local Health District`, `EML TMF, Insurance for NSW`, `RACQ Insurance`, `Blue Care ATF The Uniting Church in Aust. 
Property Trust (Q)`, `ENERGYAUSTRALIA SERVICES P/L`, `AAMI CTP`, `Bupa Asia Pacific`, `The Good Shepherd Home`, `Department of Corporate and Digital Development`, `Allianz CTP Claims NSW`, `Sedgwick Australia`, `Racing NSW`, `GCI Group`, `Australia Post`, `Coles Group Limited`, `Minter Ellison`, `MCCOLL'S OPERATIONS P/L`, `Apprenticeship Support Australia`, `AIA Australia Limited`, `Ernst & Young Services Pty Limited`, `North Metropolitan Health Service`, `St Vincent de Paul Society Canberra/Goulburn (Inc)`, `DP WORLD AUSTRALIA FREMANTLE TERMINAL`, `Moray and Agnew`, `Mosaic Group`, `Ovato`, `ACT Formwork Pty Ltd`, `DORMAKABA AUSTRALIA PTY LTD`, `Jones Harley Toole`, `QBE Accident and Health`, `Crawford Legal`, `REA Group Ltd`, `Amadeus IT Pacific Pty Ltd`, `DXC Integrated Services Victoria Pty Ltd`, `Vellex Pty Ltd`, `3M Australia`, `RTC Consulting`, `Somerset College Ltd`, `Bupa Care Services`, `IKEA North Lakes`, `Australian Criminal Intelligence Commission`, `McInnes Wilson Lawyers`, `UnitingCare Queensland `, `Anglican Community Care Incorporated (trading as ac.care)`, `Electrolux Home Products Pty Ltd`, `Gen Leads`, `FUSE RECRUITMENT MELBOURNE P/L`, `Zurich Financial Services Australia Limited`, `Wesfarmers Group TeamCover`, `Connect Infrastructure`, `Oji Fibre Solutions (Aus) Pty Ltd`, `Quality Bakers Australia Pty Limited`, `Workers Compensation & Risk Specialists`, `Civil Aviation Safety Authority (CASA)`, `Endeavour Foundation`, `The Territory Boundless Possible`, `Territory Families – Housing & Communities`, `Ampol Australia Petroleum Pty Ltd`, `Seven Network (Operations) Ltd`, `HopgoodGanim Lawyers`, `Coal Mines Insurance`, `QBE Insurance Australia`, `UGL Limited`, `QBE Accident and Health `, `C.INC`, `Ikea Logan`, `VERO`, `Geodis Australia`, `McCabes Lawyers`, `Programmed`, `UNSW Canberra`, `EML, Agent for ReturnToWorkSA`, `TEST ORG 2. 
EML Workers Insurance NSW`, `Kings Group`, `Maney Transport`, `South Western Sydney Lhd`, `Force Fire and Safety Pty Ltd`, `Astridge & Murray Solicitors `, `Rankin Ellison Lawyers`, `EML Insurance`, `ACCC/AER`, `Facilities First`, `Turks Legal`, `Jenson McConaghy Lawyers`, `CGU Insurance`, `AAI Limited trading as GIO`, `BP Australia Limited C/ Collin Biggers & Paisley Lawyers`, `O’Neill & Brown Electrical Services Pty Ltd`, `St Kilda PCYC`, `Justice Services Pty Ltd`, `American Express International Inc`, `Gillis Delaney Lawyers`, `Cabra Dominican College Ltd.`, `Trident Services Cleaning Pty Ltd`, `Hicksons Lawyers`, `Healthscope Operations Pty Ltd`, `GSK CX Healthcare Pty Ltd`, `ACT Government`, `AJ Bush & Sons Pty Ltd`, `OMB Solicitors`, `EML Self Insurance`, `Cooper Grace Ward`, `GC Legal`, `Centacare Catholic Family Services`, `Etex Australia Pty Ltd`, `Allianz Australia Ltd`, `Envirolab Service`, `Ikea `, `Allianz Australia Insurance Limited`, `WorkCover Queensland`, `Allianz Workers Compensation ACT`, `GIO Workers Compensation NSW`, `GenesisCare`, `Rocklea Pressed Metal Pty Ltd `, `Australian Digital Health Agency`, `HWL Ebsworth`, `Museum and Art Gallery Northern Territory (MAGNT)`, `CSR`, `Connell`, `4cRisk`, `HBA Legal`, `Coles Supermarkets Australia Pty Ltd`, `The University of Queensland`, `VENTIA SERVICES GROUP P/L,VENT`, `Point Underwriting Agency Pty Ltd`, `Youi CTP SA`, `Allianz Workers Compensation NSW`, `Detmold Packaging Pty Ltd`, `KENNARDS HIRE PTY LTD`, `QBE CTP QLD`, `Insurance House Group`, `Kilcoy Pastoral Company Limited`, `SRG Global Mining (Australia) Pty Ltd`, `Hunter Imaging Group`, `Park Hyatt Melbourne`, `Enviro Lab`, `QBE Australia Insurance Limited`, `EML c/o Moray`, `Catholic Church Insurance Limited`, `NV EMPLOYMENT PTY LTD`, `IP Australia`, `Qantas`, `Wesfarmer Limited`, `Melton City Council`, `Workcover Employer For Special Policies`, `Allianz Australia Workers Compensation (NSW) Ltd.`, `Uniting Care Health`, `Staff Australia 
Payroll Services Pty Ltd`, `WN Group`, `Infrabuild`, `Western NSW Local Health District`, `APS Group`, `DXC Claims Management Services - VIC`, `GIO`, `Northern Adelaide Local Health Network `, `Austbrokers Canberra`, `Department of Treasury and Finance Northern Territory Government`, `PSC Workers Compensation & Consulting`, `Alinta Energy`, `Sunline ACT Pty Ltd`, `Allianz Australia Workers' Compensation (Victoria)`, `Suncorp`, `JW Land Construction`, `Comcare - VIC`, `IKEA Pty Limited`, `KENNARDS HIRE`, `IRI Worldwide`, `RFI Technology Solutions`, `Engage TSS Internal Resources`, `St Vincent’s Care Services Mitchelton`, `Cappello Concreting Services Pty Ltd`, `Correct Care Australasia P/L`, `Coal Services`, `VELLA TRANSPORT ADMINISTRATION PTY LTD`, `CGU Workers Compensation WA`, `CORPORATE SERVICE NETWORK`, `BGIS`, `SCENTRE LIMITED`, `Employers Mutual Limited`, `RAPE & DOMESTIC VIOLENCE SERVICES AUSTRALIA`, `PSC Insurance`, `Allianz Australia Insurance Ltd ACT`, `Big W`, `Coverforce Pty Ltd`, `AAMI SA CTP Claims`, `EML Workers Insurance`, `Emjay Insurance Brokers`, `EML Victoria`, `WorkSafe Claims and Recovery Support team`, `Adcor`, `Territory Families, Housing and Communities (TFHC)`, `Nazareth Catholic Community`, `Gallagher Bassett Workers Compensation SA`, `INVOCARE AUSTRALIA P/L`, `Hardman Risk Management`, `The Sydney Childrens Hospital Network`, `The Junction Works Limited`, `PEM DEMO`, `Queensland Ambulance Service`, `Fel Child Care Centres 1 Pty Ltd`, `Allianz CTP QLD`, `Moray & Agnew Lawyers`, `Programmed Maintenance Services Ltd (Self Insured)`, `iag`, `Barnardos`, `eReports `, `Youi Pty Ltd`, `HM Focus Pty Ltd`, `Allianz Workers Compensation VIC`, `iCare Workers Insurance`, `Procare Group`, `Kemp & Co Lawyers`, `AAMI Insurance`, `Combined Insurance`, `STAWELL GOLD MINES P/L`, `QBE CTP NSW`, `SA Health`, `Gilshenan & Luton Legal Practice`, `Genesis Care`, `SOUTH AUSTRALIA POLICE`, `Wollongong City Council`, `TUTT BRYANT GROUP LTD`, `Endeavour Energy`, 
`Tasmanian Health Service`, `IC Formwork Services Pty Ltd`, `Humdrum`, `Comcare`, `The Gowrie (Qld) Inc`, `Australian Government Department of Education, Skills and Employment`, `Gair Legal`, `Dept of Territory Families, Housing and Communities`, `McArthur River Mining PTY Ltd`, `Kincare Management Pty Ltd`, `CFA`, `Department of Territory Families, Housing and Communities Division Library & Archives NT`, `Department for Education and Child Development`, `Core Building Group Pty Ltd`, `ACH Group`, `Busy Bees Australia Operations Pty Ltd.`, `Wesfarmers Ltd`, `JBC Corporate`, `NULL`, `No Employer - ADL`, `BT Lawyers`, `InfraBuild Steel Centre`, `Kimberly-Clark`, `Tas TAFE`, `EML National Self Insurance`, `National Disability Insurance Agency`, `Colin Biggers & Paisley Pty`, `DP World Brisbane Pty Ltd`, `Australian Trade and Investment Commission (Austrade)`, `Allianz Australia Limited c/- McInnes Wilson Lawyers`, `Community Solutions`, `RFI`, `RACQ Insurance Limited ABN 50 009 704 152`, `AAI Limited trading as GIO`, `Gallagher Bassett Services Workers Compensation Vic Pty Ltd`, `Department of Infrastructure, Transport and Regional Development`, `PSC Insurance Group`, `Allianz CTP NSW`, `CSR Limited`, `Kimberly-Clark Australia P/L`, `Hall and Willcox Lawyers`, `Page Seager Lawyers`, `Iconic Hotels Management`, `St John Medical Centre`, `Department of Veterans Affairs`, `Allianz QLD CTP`, `Morgan & Agnew Lawyers`, `Bureau of Meteorology`, `Forest Coach Lines Pty / Ltd`, `Shaw's Darwin Transport Pty Ltd`, `Dynamic Diesel Mechanical Services Pty Ltd`, `Hall & Wilcox Lawyers`, `Moran Aged Care`, `DJarvis@shepelectrical.com.au`, `Gallagher Bassett Self Insurance NSW`, `EML as agent for icare Workers Insurance NSW`, `Minter Ellison Lawyers`, `Lee Legal Group`, `Child and Adolescent Health Service (CAHS)`, `Holman Webb Lawyers`, `Dept of Home Affairs`, `QSuper`, `TIO Motor Accidents Compensation `, `Allianz Australia Workers' Compensation (Victoria) Limited`, `Perpetual 
Limited`, `Barwang Pty Ltd`, `CTP QLD Claims Division`, `InvoCare`, `Australian Border Force`, `I MED Radiology Network`, `Ensure Pty Ltd`, `CITY OF PALMERSTON`, `AKUBRA HATS PTY LTD`, `Secom Australia`, `GIO Workers Compensation NT`, `Pialligo Estate`, `Berry Buddle Wilkins`, `Department of Infrastructure, Transport, Regional Development and Communications`, `Aussie Skip Bins Services P/L`, `BGIS Pty Ltd`, `NSW Police Force`, `GIO Workers Compensation TAS`, `Eighteen33 Pty Ltd`, `Crown Law`, `Paramatta Council`, `Northern Territory Government`, `Australian Electoral Commission`, `Department of Health`, `Hunt & Hunt Lawyers`, `Batemans Bay Soldiers Club`, `Allianz Workers Compensation Tasmania`, `SMK Lawyers`, `Envirolab Group`, `WorkSafe Victoria`, `Allianz Australia Insurance Limited, c/- Moray & Agnew`, `Allianz Australia Insurance Limited ABN 15 000 122 850, c/- Moray & Agnew`, `City of Parramatta`, `UES International Pty Ltd`, `Westpac Group`, `Logistics & Stores (Mailroom, Stores & Transport) Services CHW`, `Device Technologies Australia Pty Ltd`, `Willis Towers Watson`, `Hsswa Pty Ltd & HSS Resources Pty Ltd & Other`, `Kingspan Water & Energy Pty Limited`, `SAPOL`, `Guild Insurance`, `Westpac Banking Group`, `St Hilarion Aged Care`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer ABN 83 564 379 108`, `Roshana Pty Ltd`, `QBE Insurance (Australia) Limited (ABN 78003191035)`, `Service Australia`, `BOC Limited `, `HWLE Lawyers`, `NRMA CTP NSW`, `RACQ Insurance Limited ABN 50009704152/ C- Cooper Grace Ward`, `CALVARY ADMINISTRATION PTY LTD`, `Cappello Group`, `Wesfarmers Limited`, `GIO NSW CTP `, `FK Gardner Services (Qld) Pty Ltd`, `Challenge Implements Holdings`, `Bartier Perry Pty Limited`, `Chubb Insurance Australia Limited`, `EMP Michael Lawyers`, `I-MED RADIOLOGY NETWORK LIMITED`, `Gilchrist Connell Legal`, `Premier Office Relocations`, `Nominal Defendant c/- Jensen McConaghy Lawyers`, `Detmold Mental Health Training`, 
`EML`, `Premise`, `Balance Rehab`, `Xchanging Workers Compensation - NSW`, `Coogee Chemicals Pty Ltd`, `Safe Work Australia`, `Jensen McConaghy Lawyers`, `Hawkesbury City Council`, `Toll Global Express`, `The Corporation of the Synod of the Diocese of Brisbane`, `NRMA CTP SA`, `Ambulance Victoria`, `APSystems`, `Austbrokers (Finsura)`, `SCENTRE GROUP`, `Ikea Australia`, `Department of Treasury and Finance`, `Gallagher Bassett Services Workers Compensation NSW`, `NONI B HOLDINGS PTY LIMITED`, `QBE Workers Compensation SA`, `The Star Entertainment Group Self Insurance Unit`, `Catholic Care Diocese of Bathurst`, `GAIR LEGAL PTY LIMITED`, `QBE CTP SA`, `Wesfarmers Group`, `Rod Pilon Transport`, `TG Legal`, `Department of the Prime Minister and Cabinet`, `UNSW`, `RACQ Group`, `REMONDIS Australia Pty Ltd`, `Australian Federal Police`, `Marshall & Brougham Constructions `, `Chandler Macleod Group`, `University of Tasmania`, `Goodman Fielder Pty Limited`, `SONIC HEALTHCARE GROUP`, `Hastings Medical Centre`, `Hospitality Employers Mutual`, `HCF`, `Colin Biggers Paisley Lawyers`, `Department Veterans Affairs`, `Maddocks Lawyers`, `SRG Group`, `Australian Personnel Solutions (APS Group)`, `EY Business Solutions Pty Ltd`, `National Indigenous Australians Agency`, `St Catherine's School, Berwick`, `Transport for NSW`, `South Australian Native Titles Services` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 32.28 |
| `CATS_MICRO_P` | 71.89 |
| `CATS_MICRO_R` | 23.49 |
| `CATS_MICRO_F` | 35.41 |
| `CATS_MACRO_P` | 7.06 |
| `CATS_MACRO_R` | 3.40 |
| `CATS_MACRO_F` | 4.32 |
| `CATS_MACRO_AUC` | 32.28 |
| `TEXTCAT_MULTILABEL_LOSS` | 7.88 |
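For reference, the micro F1 above is just the harmonic mean of the micro precision and micro recall rows; a quick sanity check in Python:

```python
def micro_f1(precision, recall):
    # Micro F1 is the harmonic mean of micro-averaged precision and recall.
    return 2 * precision * recall / (precision + recall)
```

Plugging in the table's values (P=71.89, R=23.49) reproduces the reported 35.41.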
|
hemlataC/llama-2-7b-hindie2-v4
|
hemlataC
| 2023-08-25T02:04:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T02:02:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
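The `nf4` quant type above snaps each absmax-normalized weight in a block to the nearest of 16 NormalFloat levels. A toy sketch of that round-trip — the codebook values below are approximations taken from the QLoRA paper, not the exact bitsandbytes constants:

```python
# Approximate NF4 codebook (16 levels); illustrative, not bitsandbytes' kernels.
NF4_LEVELS = [-1.0, -0.6962, -0.5251, -0.3949, -0.2844, -0.1848, -0.0911, 0.0,
              0.0796, 0.1609, 0.2461, 0.3379, 0.4407, 0.5626, 0.7230, 1.0]

def quantize_nf4(x, absmax):
    # Normalize by the block's absmax, then pick the nearest codebook index.
    normalized = x / absmax
    return min(range(16), key=lambda i: abs(NF4_LEVELS[i] - normalized))

def dequantize_nf4(index, absmax):
    # Recover an approximation of the original value from its 4-bit index.
    return NF4_LEVELS[index] * absmax
```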
### Framework versions
- PEFT 0.6.0.dev0
|
Bugsys0302/CharactersLoRA
|
Bugsys0302
| 2023-08-25T01:54:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-25T01:54:36Z |
---
license: creativeml-openrail-m
---
|
Vasanth/idefics-mscoco-captioner
|
Vasanth
| 2023-08-25T01:49:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T01:49:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: ['lm_head', 'embed_tokens']
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
jimmyofdoom/rl_course_vizdoom_health_gathering_supreme
|
jimmyofdoom
| 2023-08-25T01:47:33Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T01:47:25Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.17 +/- 5.17
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jimmyofdoom/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
AdanLee/q-Taxi-v3
|
AdanLee
| 2023-08-25T01:41:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T01:30:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```
import pickle

import gymnasium as gym  # the course notebooks use gymnasium
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """
    Download a model from Hugging Face Hub.

    :param repo_id: id of the model repository from the Hugging Face Hub
    :param filename: name of the model pickle file from the repository
    """
    # Get the model from the Hub, download and cache the model on your local disk
    pickle_model = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_model, "rb") as f:
        downloaded_model_file = pickle.load(f)
    return downloaded_model_file


model = load_from_hub(repo_id="AdanLee/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])

# evaluate_agent is defined in the Deep RL course notebook
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
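At evaluation time the agent simply acts greedily with respect to the downloaded Q-table. A minimal sketch of that policy step (the `greedy_action` helper and the tiny table are illustrative, not part of the saved model):

```python
import numpy as np

def greedy_action(qtable, state):
    """Pick the highest-value action for a given state from a Q-table."""
    return int(np.argmax(qtable[state]))

# Tiny 3-state, 2-action Q-table for illustration
qtable = np.array([[0.1, 0.9],
                   [0.7, 0.2],
                   [0.0, 0.0]])
print(greedy_action(qtable, 0))  # 1
```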
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_train_halfcheetah_high-2508_0016-66
|
dt-and-vanilla-ardt
| 2023-08-25T01:27:59Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T23:17:55Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_train_halfcheetah_high-2508_0016-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_train_halfcheetah_high-2508_0016-66
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
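The scheduler settings above (linear decay with 1,000 warmup steps over 10,000 training steps) correspond to the following learning-rate multiplier. This is a sketch of the schedule's shape, not the Trainer's internal code:

```python
def linear_warmup_decay(step, warmup_steps=1000, total_steps=10000):
    """LR multiplier: ramp from 0 to 1 over warmup, then decay linearly to 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

peak_lr = 1e-4  # learning_rate above
print(peak_lr * linear_warmup_decay(1000))  # peak LR reached right after warmup
```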
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Unmand/procare_referrer_organisation
|
Unmand
| 2023-08-25T00:49:54Z | 2 | 1 |
spacy
|
[
"spacy",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-07-11T00:57:04Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_procare_referrer_organisation
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_procare_referrer_organisation` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.4,<3.6.0` |
| **Default Pipeline** | `textcat_multilabel` |
| **Components** | `textcat_multilabel` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
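Since the pipeline is a `textcat_multilabel` component, it fills `doc.cats` with one independent score per label. A usage sketch (the `spacy.load` call assumes the packaged model has been installed locally; the `top_labels` helper is illustrative):

```python
def top_labels(cats, threshold=0.5):
    """Return labels whose score clears the threshold, best first."""
    return sorted((label for label, score in cats.items() if score >= threshold),
                  key=cats.get, reverse=True)

# With the model installed:
#   import spacy
#   nlp = spacy.load("en_procare_referrer_organisation")
#   doc = nlp("Referred by Allianz Australia Insurance Limited")
#   print(top_labels(doc.cats))
print(top_labels({"EML": 0.91, "Suncorp": 0.12, "QBE": 0.64}))  # ['EML', 'QBE']
```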
### Label Scheme
<details>
<summary>View label scheme (726 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat_multilabel`** | `H D PROJECTS PTY LTD`, `McCabe Curwood`, `Dept of Education, Skills & Employment`, `StateCover Mutual Limited`, `Perth Orthopaedic & Sports Medicine`, `Queensland Child Care Service Pty Ltd Ttee`, `Allianz Australia Insurance Limited c/- Jensen McConaghy Lawyers`, `Catholic Care Diocese of Broken Bay`, `Helping Hand New Aged Care`, `Suncorp Life`, `Qantas Airways Limited`, `Department of Defence`, `Master Builders Association of SA`, `HWL Ebsworth Lawyers`, `Alexander Watson`, `Zoetis`, `RSL Care`, `P&N Bank`, `University of NSW`, `Uber Technologies, Inc.`, `Finlay Plumbing Services Pty Ltd`, `Hays Specialist Recruitment`, `KENNARDS HIRE PTY LIMITED`, `Carer Solutions Australia`, `Unitingcare`, `No. 1 Riverside Quay Proprietary Limited`, `Gallagher Basset`, `Department of the Chief MInister and Cabinet`, `CHEP Australia`, `Minda Incorporated`, `The Star`, `Tas Water`, `Feros Care`, `Roshana Group`, `Atradius Crédito y Caución S.A de Seguros y Reaseguros`, `Services Australia`, `RT Consulting`, `The Australian Electoral Commission`, `Federal Court of Australia`, `NRMA INSURANCE`, `Catholic Education Office`, `Svitzer Australia Pty Ltd`, `QBE acting as the agent of NSW Self Insurance Corporation`, `LAWRENCE & HANSON`, `UnitingCare Queensland`, `LibertyGFG`, `Australian Tax Office`, `Alvaro Transport Pty Ltd`, `GIO Workers Compensation ACT`, `Cso Diocese Of Broken Bay`, `Glencore`, `EASTERN HOSPITAL`, `BOC Limited, a member of the Linde Group`, `INVOCARE AUSTRALIA PTY LIMITED`, `UNITRANS ASIA PACIFIC PTY LTD`, `Services Australia (Dept of Human Services)`, `VEOLIA ENVIRONMENTAL SERVICES (AUSTRALIA) PTY LTD `, `Vickilynn Pty Ltd`, `Coles Team Cover`, `MLC Life Insurance`, `Sparke Helmore Lawyers`, `RSL Lifecare Limited`, `QBE Workers Compensation TAS`, `Kimberley Clark Australia`, `The Personnel Group Ltd`, `Insurance Australia Group`, `Canberra Sand & Gravel`, `Viva Energy Australia Pty Ltd`, `Moran Aged Care Engadine`, `Australian 
Taxation Office`, `Youis Group Pty Ltd`, `Cleanaway`, `Mosaic Brands (Rockmans)`, `Children Hospital Foundation`, `Civil Aviation Safety Authority`, `QBE Workers Compensation WA`, `United Protestant Association`, `PSC Capital Insurance Brokers`, `Woolworths Group Limited`, `Kilcoy Global Foods`, `American Express Australia Limited`, `Palios Meegan Nicholson`, `Uniting`, `Coles Group Supply Chain Pty Ltd`, `QBE`, `OBE Organic`, `Cyprium Metals Limited`, `Kincare Health Services Pty Ltd`, `StateCover Mutual Ltd`, `FIRE RESCUE VICTORIA`, `N2N Claims Solutions`, `WesFarmers – Group TeamCover`, `NDIS Quality and Safeguards Commission`, `HD Projects Pty Ltd`, `St Finn Barr's Catholic Primary School - Lanceston`, `Power and Water Corporation`, `EML VIC Pty Ltd`, `Wanton Kearney`, `Kmart Australia Ltd`, `Territory Families – Housing & Communities`, `Calvary Community Care`, `Sedgwick`, `Leonora Contracting P/L`, `NSW Health Pathology`, `Kilcoy Pastoral Company Ltd`, `GIO CTP ACT`, `DXC Claims Management Services - VIC`, `Schindler Lifts Australia Pty Ltd`, `Meridian Lawyers`, `GIO Workers Compensation WA`, `AUB Group Limited`, `Coateshire`, `Aurizon`, `JWLand`, `Trusted Support Coordination`, `Gosford Quarries Pty Ltd`, `GIO NSW Workers Compensation`, `DESE`, `Busways Group`, `Gallagher Bassett Workers Compensation NSW`, `Allianz Australia Insurance Limited C/- McInnes Wilson Lawyers`, `oOh!Media`, `West Gate Tunnel Project`, `KOMATSU MARKETING SUPPORT AUST`, `Mills Oakley Lawyers`, `Hall & Wilcox`, `Skybridge Group Pty Limited`, `Retirement Living Business & Financial Services`, `Allianz Workers Compensation NT`, `Environmental Industries Pty Ltd`, `EML Workers Insurance NSW`, `Department of Agriculture, Water and the Environment`, `MS Australia`, `CSIRO`, `Orange Health Service`, `AHI Insurance`, `Bupa`, `Allianz Australia Workers Compensation (Victoria) Ltd`, `Cappello Civil Contracting Services Pty Ltd`, `LAF Group`, `RTozerconsulting`, `St Michaels College`, 
`Gallagher Bassett for Opal Healthcare`, `Department of Families, Fairness and Housing`, `WESTHAVEN LIMITED`, `Integrity Care`, `GPC Asia Pacific`, `Department of Primary Industries`, `Mosaic Brands Limited`, `QBE Workers Compensation NT`, `Coredev`, `South Western Sydney Local Health District`, `CGU Workers Compensation ACT`, `Tas Prison Service`, `Sonic Healthcare`, `Workcover C/BT Lawyers`, `PSC WCS`, `CPB Contractors Pty Ltd`, `Cookie Steelfixing and Construction`, `Warner Bros`, `CGU Workers Compensation NT`, `CMET`, `AnglicareSA`, `St Vincent’s Care Services Carseldine`, `Tasmanian Catholic Education Office`, `Allianz Australia Insurance Ltd`, `Roussos Legal Advisory`, `BGIS Technical Services`, `AAMI NSW CTP`, `Wotton Kearney`, `Galllgher Bassett Workers Compensation VIC`, `Brisbane Fire Pty Ltd`, `QBE Workers Compensation NSW`, `Sunshine Coast Hospital and Health Service`, `Oaks Hotels & Resorts Limited - 9004`, `Ausgrid`, `Boral Limited`, `Aerison Pty Ltd`, `Cooper Grace Ward Lawyers`, `Hsswa Pty Ltd`, `Weir Minerals Australia Ltd`, `Labour Force Pty Ltd`, `Barry Nilsson Lawyers`, `Liberty Oil Australia Pty Ltd`, `ABPhillips`, `Austral Risk`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer`, `OCEAN GARDENS INC`, `Roshana Group Pty Ltd`, `GIO CTP NSW`, `Lachlan Shire Council`, `Allianz Workers Compensation WA`, `United Equipment Pty Ltd`, `PFD FOOD SERVICES PTY LTD`, `Phoenix Insurance Brokers`, `Blumers`, `Department of Home Affairs`, `Anglo Coal (Grosvenor Management) Pty Ltd c/- Ashurst Australia`, `Anglicare Southern QLD`, `Lifetime Support`, `The Trustee for The Roshana Family Trust`, `Zurich Australian Insurance Ltd`, `Dept of Education & Training - School Cleaners`, `DXC Claims Management Services`, `The Medical Clinic Millicent`, `Melbourne Water`, `COMPASS GROUP AUSTRALIA PTY LTD`, `Andreasens Green NSW Andreasens Green QLD`, `Astridge and Murray`, `EML Plus`, `Philips Electronics P/L`, `ISS Facility Services 
Australia Ltd`, `Busy Bees Early Learning Australia Pty Ltd`, `Coates Hire`, `Sydney Trains`, `Catholic Schools Parramatta Diocese Limited`, `CGU Workers Compensation TAS`, `Mercer`, `COFFS HARBOUR SUPPORT SERVICES LTD`, `I-MED GROUP`, `One Path`, `Transport Accident Commission`, `Department of Corporate and Digital Development Northern Territory Government`, `Boral Insurance Pty Limited`, `Department of Justice`, `AB Phillips Pty Ltd`, `Irwin & Hartshorn`, `Pacific Labour Facility`, `Suncorp Staff Pty Ltd`, `Vilis Bakery`, `NRMA`, `The Hospitals Contribution Fund Of Australia Ltd`, `SCE Group`, `Our Lady of Mercy College Parramatta`, `DOSER Freight Forwarding`, `Employers Mutual NSW Limited`, `Cappello Hydraulics & Civil Pty Ltd`, `Buderim Kindergarten`, `ACT Recycling Pty Ltd`, `Bupa Medical Visa Services`, `Allianz CTP SA`, `Auspost`, `Liverpool Plains Shire Council`, `Corporate Services Network Pty Ltd`, `DP World Australia Pty Ltd`, `Complete Personnel Recruitment`, `DXC Integrated Services`, `QBE Workers Compensation - ACT`, `BINGO PTY LTD`, `The Arnott’s Group`, `EML Agent for icare Workers Insurance`, `IHG Irwin Hartshorn Group`, `Civilmart`, `ORAMS Agencies`, `Liberty GFG`, `QBE NSW Treasury Managed Fund`, `EML (NSW Treasury Managed Fund)`, `Hays Recruitment`, `Mosaic Group Ltd Pty`, `BlueCare`, `Gallagher Bassett Services`, `Ernst & Young (EY)`, `Cootharinga North Queensland`, `BUPA AGED CARE AUSTRALIA P/L`, `Toll Self Insurance`, `Corporate Services Network`, `ACT GOV`, `SA Health Northern Adelaide Local Health Network`, `Inghams Enterprises Pty Ltd`, `Centrewest Insurance Brokers`, `Department of Foreign Affairs and Trade (DFAT)`, `RSL Life Care`, `Star of the Sea School`, `Chubb`, `Suncorp CTP QLD`, `JACANA ENERGY`, `Toll Group`, `Corporeal Health`, `Mosaic Brands (Noni B Limited)`, `QBE CTP Insurance`, `Q Super`, `Bartier Perry Lawyers`, `Queensland Government`, `Department of Health and Human Services Tasmania`, `Hall and Wilcox Lawyers`, `Griffin 
Coal`, `Cappello Commercial Hydraulics and Civil Pty Ltd`, `Bolton Clarke`, `Australian Unity`, `Gallagher Bassett Services Pty Ltd`, `St John Ambulance Western Australia Ltd`, `Geocon Group Pty Ltd`, `Allianz Australia Insurance Limited c/ Jensen McConaghy Lawyers`, `UAA Pty Ltd`, `Tamex Transport Services Pty Ltd`, `WFI Insurance Limited`, `Programmed Skilled Workforce Limited`, `Bartier Perry`, `Australian Competition & Consumer Commission`, `Queensland Health`, `Holcim (Australia) Pty Ltd`, `Southern NSW Local Health District`, `Blue Care`, `Gallagher Bassett Workers Compensation VIC`, `Point Insurance`, `Workers Compensation & Risk Specialists (WCRS) services render for Philips electronics P/L`, `Country Wide Insurance Brokers (CWIB)`, `Allianz Australia Insurance Ltd C/ - Moray and Agnew Lawyers`, `CHUBB AUSTRALASIA`, `Sirius Support & Industrious People`, `BORG MANUFACTURING P/L`, `Department of Climate Change, Energy, the Environment and Water`, `Hireup Pty. Ltd.`, `Workcover QLD`, `Greenham Tasmania `, `Fantastic Furniture Ltd`, `CGU Workers Compensation VIC`, `Lawson Risk Management Services Pty Ltd`, `SGP Civil`, `Moray & Agnew`, `Edwards Michael Lawyers`, `Jensen McConarchy`, `Cyprium Metals`, `Hunter New England Local Health District`, `EML TMF, Insurance for NSW`, `RACQ Insurance`, `Blue Care ATF The Uniting Church in Aust. 
Property Trust (Q)`, `ENERGYAUSTRALIA SERVICES P/L`, `AAMI CTP`, `Bupa Asia Pacific`, `The Good Shepherd Home`, `Department of Corporate and Digital Development`, `Allianz CTP Claims NSW`, `Sedgwick Australia`, `Racing NSW`, `GCI Group`, `Australia Post`, `Coles Group Limited`, `Minter Ellison`, `MCCOLL'S OPERATIONS P/L`, `Apprenticeship Support Australia`, `AIA Australia Limited`, `Ernst & Young Services Pty Limited`, `North Metropolitan Health Service`, `St Vincent de Paul Society Canberra/Goulburn (Inc)`, `DP WORLD AUSTRALIA FREMANTLE TERMINAL`, `Moray and Agnew`, `Mosaic Group`, `Ovato`, `ACT Formwork Pty Ltd`, `DORMAKABA AUSTRALIA PTY LTD`, `Jones Harley Toole`, `QBE Accident and Health`, `Crawford Legal`, `REA Group Ltd`, `Amadeus IT Pacific Pty Ltd`, `DXC Integrated Services Victoria Pty Ltd`, `Vellex Pty Ltd`, `3M Australia`, `RTC Consulting`, `Somerset College Ltd`, `Bupa Care Services`, `IKEA North Lakes`, `Australian Criminal Intelligence Commission`, `McInnes Wilson Lawyers`, `UnitingCare Queensland `, `Anglican Community Care Incorporated (trading as ac.care)`, `Electrolux Home Products Pty Ltd`, `Gen Leads`, `FUSE RECRUITMENT MELBOURNE P/L`, `Zurich Financial Services Australia Limited`, `Wesfarmers Group TeamCover`, `Connect Infrastructure`, `Oji Fibre Solutions (Aus) Pty Ltd`, `Quality Bakers Australia Pty Limited`, `Workers Compensation & Risk Specialists`, `Civil Aviation Safety Authority (CASA)`, `Endeavour Foundation`, `The Territory Boundless Possible`, `Territory Families – Housing & Communities`, `Ampol Australia Petroleum Pty Ltd`, `Seven Network (Operations) Ltd`, `HopgoodGanim Lawyers`, `Coal Mines Insurance`, `QBE Insurance Australia`, `UGL Limited`, `QBE Accident and Health `, `C.INC`, `Ikea Logan`, `VERO`, `Geodis Australia`, `McCabes Lawyers`, `Programmed`, `UNSW Canberra`, `EML, Agent for ReturnToWorkSA`, `TEST ORG 2. 
EML Workers Insurance NSW`, `Kings Group`, `Maney Transport`, `South Western Sydney Lhd`, `Force Fire and Safety Pty Ltd`, `Astridge & Murray Solicitors `, `Rankin Ellison Lawyers`, `EML Insurance`, `ACCC/AER`, `Facilities First`, `Turks Legal`, `Jenson McConaghy Lawyers`, `CGU Insurance`, `AAI Limited trading as GIO`, `BP Australia Limited C/ Collin Biggers & Paisley Lawyers`, `O’Neill & Brown Electrical Services Pty Ltd`, `St Kilda PCYC`, `Justice Services Pty Ltd`, `American Express International Inc`, `Gillis Delaney Lawyers`, `Cabra Dominican College Ltd.`, `Trident Services Cleaning Pty Ltd`, `Hicksons Lawyers`, `Healthscope Operations Pty Ltd`, `GSK CX Healthcare Pty Ltd`, `ACT Government`, `AJ Bush & Sons Pty Ltd`, `OMB Solicitors`, `EML Self Insurance`, `Cooper Grace Ward`, `GC Legal`, `Centacare Catholic Family Services`, `Etex Australia Pty Ltd`, `Allianz Australia Ltd`, `Envirolab Service`, `Ikea `, `Allianz Australia Insurance Limited`, `WorkCover Queensland`, `Allianz Workers Compensation ACT`, `GIO Workers Compensation NSW`, `GenesisCare`, `Rocklea Pressed Metal Pty Ltd `, `Australian Digital Health Agency`, `HWL Ebsworth`, `Museum and Art Gallery Northern Territory (MAGNT)`, `CSR`, `Connell`, `4cRisk`, `HBA Legal`, `Coles Supermarkets Australia Pty Ltd`, `The University of Queensland`, `VENTIA SERVICES GROUP P/L,VENT`, `Point Underwriting Agency Pty Ltd`, `Youi CTP SA`, `Allianz Workers Compensation NSW`, `Detmold Packaging Pty Ltd`, `KENNARDS HIRE PTY LTD`, `QBE CTP QLD`, `Insurance House Group`, `Kilcoy Pastoral Company Limited`, `SRG Global Mining (Australia) Pty Ltd`, `Hunter Imaging Group`, `Park Hyatt Melbourne`, `Enviro Lab`, `QBE Australia Insurance Limited`, `EML c/o Moray`, `Catholic Church Insurance Limited`, `NV EMPLOYMENT PTY LTD`, `IP Australia`, `Qantas`, `Wesfarmer Limited`, `Melton City Council`, `Workcover Employer For Special Policies`, `Allianz Australia Workers Compensation (NSW) Ltd.`, `Uniting Care Health`, `Staff Australia 
Payroll Services Pty Ltd`, `WN Group`, `Infrabuild`, `Western NSW Local Health District`, `APS Group`, `DXC Claims Management Services - VIC`, `GIO`, `Northern Adelaide Local Health Network `, `Austbrokers Canberra`, `Department of Treasury and Finance Northern Territory Government`, `PSC Workers Compensation & Consulting`, `Alinta Energy`, `Sunline ACT Pty Ltd`, `Allianz Australia Workers' Compensation (Victoria)`, `Suncorp`, `JW Land Construction`, `Comcare - VIC`, `IKEA Pty Limited`, `KENNARDS HIRE`, `IRI Worldwide`, `RFI Technology Solutions`, `Engage TSS Internal Resources`, `St Vincent’s Care Services Mitchelton`, `Cappello Concreting Services Pty Ltd`, `Correct Care Australasia P/L`, `Coal Services`, `VELLA TRANSPORT ADMINISTRATION PTY LTD`, `CGU Workers Compensation WA`, `CORPORATE SERVICE NETWORK`, `BGIS`, `SCENTRE LIMITED`, `Employers Mutual Limited`, `RAPE & DOMESTIC VIOLENCE SERVICES AUSTRALIA`, `PSC Insurance`, `Allianz Australia Insurance Ltd ACT`, `Big W`, `Coverforce Pty Ltd`, `AAMI SA CTP Claims`, `EML Workers Insurance`, `Emjay Insurance Brokers`, `EML Victoria`, `WorkSafe Claims and Recovery Support team`, `Adcor`, `Territory Families, Housing and Communities (TFHC)`, `Nazareth Catholic Community`, `Gallagher Bassett Workers Compensation SA`, `INVOCARE AUSTRALIA P/L`, `Hardman Risk Management`, `The Sydney Childrens Hospital Network`, `The Junction Works Limited`, `PEM DEMO`, `Queensland Ambulance Service`, `Fel Child Care Centres 1 Pty Ltd`, `Allianz CTP QLD`, `Moray & Agnew Lawyers`, `Programmed Maintenance Services Ltd (Self Insured)`, `iag`, `Barnardos`, `eReports `, `Youi Pty Ltd`, `HM Focus Pty Ltd`, `Allianz Workers Compensation VIC`, `iCare Workers Insurance`, `Procare Group`, `Kemp & Co Lawyers`, `AAMI Insurance`, `Combined Insurance`, `STAWELL GOLD MINES P/L`, `QBE CTP NSW`, `SA Health`, `Gilshenan & Luton Legal Practice`, `Genesis Care`, `SOUTH AUSTRALIA POLICE`, `Wollongong City Council`, `TUTT BRYANT GROUP LTD`, `Endeavour Energy`, 
`Tasmanian Health Service`, `IC Formwork Services Pty Ltd`, `Humdrum`, `Comcare`, `The Gowrie (Qld) Inc`, `Australian Government Department of Education, Skills and Employment`, `Gair Legal`, `Dept of Territory Families, Housing and Communities`, `McArthur River Mining PTY Ltd`, `Kincare Management Pty Ltd`, `CFA`, `Department of Territory Families, Housing and Communities Division Library & Archives NT`, `Department for Education and Child Development`, `Core Building Group Pty Ltd`, `ACH Group`, `Busy Bees Australia Operations Pty Ltd.`, `Wesfarmers Ltd`, `JBC Corporate`, `NULL`, `No Employer - ADL`, `BT Lawyers`, `InfraBuild Steel Centre`, `Kimberly-Clark`, `Tas TAFE`, `EML National Self Insurance`, `National Disability Insurance Agency`, `Colin Biggers & Paisley Pty`, `DP World Brisbane Pty Ltd`, `Australian Trade and Investment Commission (Austrade)`, `Allianz Australia Limited c/- McInnes Wilson Lawyers`, `Community Solutions`, `RFI`, `RACQ Insurance Limited ABN 50 009 704 152`, `AAI Limited trading as GIO`, `Gallagher Bassett Services Workers Compensation Vic Pty Ltd`, `Department of Infrastructure, Transport and Regional Development`, `PSC Insurance Group`, `Allianz CTP NSW`, `CSR Limited`, `Kimberly-Clark Australia P/L`, `Hall and Willcox Lawyers`, `Page Seager Lawyers`, `Iconic Hotels Management`, `St John Medical Centre`, `Department of Veterans Affairs`, `Allianz QLD CTP`, `Morgan & Agnew Lawyers`, `Bureau of Meteorology`, `Forest Coach Lines Pty / Ltd`, `Shaw's Darwin Transport Pty Ltd`, `Dynamic Diesel Mechanical Services Pty Ltd`, `Hall & Wilcox Lawyers`, `Moran Aged Care`, `DJarvis@shepelectrical.com.au`, `Gallagher Bassett Self Insurance NSW`, `EML as agent for icare Workers Insurance NSW`, `Minter Ellison Lawyers`, `Lee Legal Group`, `Child and Adolescent Health Service (CAHS)`, `Holman Webb Lawyers`, `Dept of Home Affairs`, `QSuper`, `TIO Motor Accidents Compensation `, `Allianz Australia Workers' Compensation (Victoria) Limited`, `Perpetual 
Limited`, `Barwang Pty Ltd`, `CTP QLD Claims Division`, `InvoCare`, `Australian Border Force`, `I MED Radiology Network`, `Ensure Pty Ltd`, `CITY OF PALMERSTON`, `AKUBRA HATS PTY LTD`, `Secom Australia`, `GIO Workers Compensation NT`, `Pialligo Estate`, `Berry Buddle Wilkins`, `Department of Infrastructure, Transport, Regional Development and Communications`, `Aussie Skip Bins Services P/L`, `BGIS Pty Ltd`, `NSW Police Force`, `GIO Workers Compensation TAS`, `Eighteen33 Pty Ltd`, `Crown Law`, `Paramatta Council`, `Northern Territory Government`, `Australian Electoral Commission`, `Department of Health`, `Hunt & Hunt Lawyers`, `Batemans Bay Soldiers Club`, `Allianz Workers Compensation Tasmania`, `SMK Lawyers`, `Envirolab Group`, `WorkSafe Victoria`, `Allianz Australia Insurance Limited, c/- Moray & Agnew`, `Allianz Australia Insurance Limited ABN 15 000 122 850, c/- Moray & Agnew`, `City of Parramatta`, `UES International Pty Ltd`, `Westpac Group`, `Logistics & Stores (Mailroom, Stores & Transport) Services CHW`, `Device Technologies Australia Pty Ltd`, `Willis Towers Watson`, `Hsswa Pty Ltd & HSS Resources Pty Ltd & Other`, `Kingspan Water & Energy Pty Limited`, `SAPOL`, `Guild Insurance`, `Westpac Banking Group`, `St Hilarion Aged Care`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer ABN 83 564 379 108`, `Roshana Pty Ltd`, `QBE Insurance (Australia) Limited (ABN 78003191035)`, `Service Australia`, `BOC Limited `, `HWLE Lawyers`, `NRMA CTP NSW`, `RACQ Insurance Limited ABN 50009704152/ C- Cooper Grace Ward`, `CALVARY ADMINISTRATION PTY LTD`, `Cappello Group`, `Wesfarmers Limited`, `GIO NSW CTP `, `FK Gardner Services (Qld) Pty Ltd`, `Challenge Implements Holdings`, `Bartier Perry Pty Limited`, `Chubb Insurance Australia Limited`, `EMP Michael Lawyers`, `I-MED RADIOLOGY NETWORK LIMITED`, `Gilchrist Connell Legal`, `Premier Office Relocations`, `Nominal Defendant c/- Jensen McConaghy Lawyers`, `Detmold Mental Health Training`, 
`EML`, `Premise`, `Balance Rehab`, `Xchanging Workers Compensation - NSW`, `Coogee Chemicals Pty Ltd`, `Safe Work Australia`, `Jensen McConaghy Lawyers`, `Hawkesbury City Council`, `Toll Global Express`, `The Corporation of the Synod of the Diocese of Brisbane`, `NRMA CTP SA`, `Ambulance Victoria`, `APSystems`, `Austbrokers (Finsura)`, `SCENTRE GROUP`, `Ikea Australia`, `Department of Treasury and Finance`, `Gallagher Bassett Services Workers Compensation NSW`, `NONI B HOLDINGS PTY LIMITED`, `QBE Workers Compensation SA`, `The Star Entertainment Group Self Insurance Unit`, `Catholic Care Diocese of Bathurst`, `GAIR LEGAL PTY LIMITED`, `QBE CTP SA`, `Wesfarmers Group`, `Rod Pilon Transport`, `TG Legal`, `Department of the Prime Minister and Cabinet`, `UNSW`, `RACQ Group`, `REMONDIS Australia Pty Ltd`, `Australian Federal Police`, `Marshall & Brougham Constructions `, `Chandler Macleod Group`, `University of Tasmania`, `Goodman Fielder Pty Limited`, `SONIC HEALTHCARE GROUP`, `Hastings Medical Centre`, `Hospitality Employers Mutual`, `HCF`, `Colin Biggers Paisley Lawyers`, `Department Veterans Affairs`, `Maddocks Lawyers`, `SRG Group`, `Australian Personnel Solutions (APS Group)`, `EY Business Solutions Pty Ltd`, `National Indigenous Australians Agency`, `St Catherine's School, Berwick`, `Transport for NSW`, `South Australian Native Titles Services` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 32.28 |
| `CATS_MICRO_P` | 71.89 |
| `CATS_MICRO_R` | 23.49 |
| `CATS_MICRO_F` | 35.41 |
| `CATS_MACRO_P` | 7.06 |
| `CATS_MACRO_R` | 3.40 |
| `CATS_MACRO_F` | 4.32 |
| `CATS_MACRO_AUC` | 32.28 |
| `TEXTCAT_MULTILABEL_LOSS` | 7.88 |
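As a quick sanity check, the micro-averaged F-score follows directly from the reported micro precision and recall:

```python
p, r = 71.89, 23.49  # CATS_MICRO_P, CATS_MICRO_R
f = 2 * p * r / (p + r)
print(round(f, 2))  # 35.41, matching CATS_MICRO_F
```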
|
dkqjrm/20230825071702
|
dkqjrm
| 2023-08-25T00:32:53Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T22:17:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230825071702'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230825071702
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2804
- Accuracy: 0.7617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6793 | 0.5307 |
| No log | 2.0 | 312 | 0.9039 | 0.4765 |
| No log | 3.0 | 468 | 0.7107 | 0.4729 |
| 0.8982 | 4.0 | 624 | 0.6969 | 0.5199 |
| 0.8982 | 5.0 | 780 | 0.5729 | 0.5560 |
| 0.8982 | 6.0 | 936 | 0.6447 | 0.5596 |
| 0.8495 | 7.0 | 1092 | 0.6093 | 0.5921 |
| 0.8495 | 8.0 | 1248 | 0.4289 | 0.6679 |
| 0.8495 | 9.0 | 1404 | 0.4954 | 0.6282 |
| 0.751 | 10.0 | 1560 | 0.3952 | 0.6715 |
| 0.751 | 11.0 | 1716 | 0.6147 | 0.6462 |
| 0.751 | 12.0 | 1872 | 0.4183 | 0.7004 |
| 0.6407 | 13.0 | 2028 | 0.3743 | 0.6968 |
| 0.6407 | 14.0 | 2184 | 0.3907 | 0.7292 |
| 0.6407 | 15.0 | 2340 | 0.3409 | 0.7148 |
| 0.6407 | 16.0 | 2496 | 0.5288 | 0.6426 |
| 0.6476 | 17.0 | 2652 | 0.4492 | 0.7220 |
| 0.6476 | 18.0 | 2808 | 0.3312 | 0.7220 |
| 0.6476 | 19.0 | 2964 | 0.4062 | 0.6606 |
| 0.6425 | 20.0 | 3120 | 0.3715 | 0.6859 |
| 0.6425 | 21.0 | 3276 | 0.3305 | 0.7256 |
| 0.6425 | 22.0 | 3432 | 0.6557 | 0.6245 |
| 0.5658 | 23.0 | 3588 | 0.3943 | 0.6859 |
| 0.5658 | 24.0 | 3744 | 0.3394 | 0.7040 |
| 0.5658 | 25.0 | 3900 | 0.4640 | 0.6823 |
| 0.5333 | 26.0 | 4056 | 0.3419 | 0.7220 |
| 0.5333 | 27.0 | 4212 | 0.3646 | 0.7112 |
| 0.5333 | 28.0 | 4368 | 0.3626 | 0.7184 |
| 0.5164 | 29.0 | 4524 | 0.3215 | 0.7473 |
| 0.5164 | 30.0 | 4680 | 0.2941 | 0.7581 |
| 0.5164 | 31.0 | 4836 | 0.4957 | 0.6173 |
| 0.5164 | 32.0 | 4992 | 0.3362 | 0.7329 |
| 0.4676 | 33.0 | 5148 | 0.3116 | 0.7437 |
| 0.4676 | 34.0 | 5304 | 0.3344 | 0.7401 |
| 0.4676 | 35.0 | 5460 | 0.4769 | 0.7220 |
| 0.4443 | 36.0 | 5616 | 0.2822 | 0.7509 |
| 0.4443 | 37.0 | 5772 | 0.3748 | 0.6859 |
| 0.4443 | 38.0 | 5928 | 0.2989 | 0.7509 |
| 0.4179 | 39.0 | 6084 | 0.3193 | 0.7292 |
| 0.4179 | 40.0 | 6240 | 0.3725 | 0.6715 |
| 0.4179 | 41.0 | 6396 | 0.3336 | 0.7509 |
| 0.3974 | 42.0 | 6552 | 0.2967 | 0.7365 |
| 0.3974 | 43.0 | 6708 | 0.2908 | 0.7545 |
| 0.3974 | 44.0 | 6864 | 0.2887 | 0.7473 |
| 0.3774 | 45.0 | 7020 | 0.3012 | 0.7401 |
| 0.3774 | 46.0 | 7176 | 0.3437 | 0.7509 |
| 0.3774 | 47.0 | 7332 | 0.3390 | 0.7292 |
| 0.3774 | 48.0 | 7488 | 0.2952 | 0.7473 |
| 0.3419 | 49.0 | 7644 | 0.3116 | 0.7401 |
| 0.3419 | 50.0 | 7800 | 0.2856 | 0.7473 |
| 0.3419 | 51.0 | 7956 | 0.3227 | 0.7256 |
| 0.3275 | 52.0 | 8112 | 0.2861 | 0.7509 |
| 0.3275 | 53.0 | 8268 | 0.3534 | 0.7401 |
| 0.3275 | 54.0 | 8424 | 0.3395 | 0.7256 |
| 0.3225 | 55.0 | 8580 | 0.3113 | 0.7401 |
| 0.3225 | 56.0 | 8736 | 0.2932 | 0.7473 |
| 0.3225 | 57.0 | 8892 | 0.4312 | 0.7112 |
| 0.3104 | 58.0 | 9048 | 0.3085 | 0.7509 |
| 0.3104 | 59.0 | 9204 | 0.3164 | 0.7545 |
| 0.3104 | 60.0 | 9360 | 0.2758 | 0.7473 |
| 0.3164 | 61.0 | 9516 | 0.3183 | 0.7220 |
| 0.3164 | 62.0 | 9672 | 0.3571 | 0.7220 |
| 0.3164 | 63.0 | 9828 | 0.3156 | 0.7365 |
| 0.3164 | 64.0 | 9984 | 0.2756 | 0.7653 |
| 0.2939 | 65.0 | 10140 | 0.2859 | 0.7437 |
| 0.2939 | 66.0 | 10296 | 0.2934 | 0.7545 |
| 0.2939 | 67.0 | 10452 | 0.2977 | 0.7690 |
| 0.2826 | 68.0 | 10608 | 0.2871 | 0.7653 |
| 0.2826 | 69.0 | 10764 | 0.2903 | 0.7653 |
| 0.2826 | 70.0 | 10920 | 0.2974 | 0.7581 |
| 0.2663 | 71.0 | 11076 | 0.2778 | 0.7509 |
| 0.2663 | 72.0 | 11232 | 0.2849 | 0.7365 |
| 0.2663 | 73.0 | 11388 | 0.2970 | 0.7653 |
| 0.2637 | 74.0 | 11544 | 0.3025 | 0.7545 |
| 0.2637 | 75.0 | 11700 | 0.2793 | 0.7617 |
| 0.2637 | 76.0 | 11856 | 0.2778 | 0.7545 |
| 0.2699 | 77.0 | 12012 | 0.2861 | 0.7617 |
| 0.2699 | 78.0 | 12168 | 0.2857 | 0.7690 |
| 0.2699 | 79.0 | 12324 | 0.2774 | 0.7617 |
| 0.2699 | 80.0 | 12480 | 0.2804 | 0.7617 |
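The step counts in the table are internally consistent: 156 optimizer steps per epoch over 80 epochs gives the final step shown in the last row:

```python
steps_per_epoch = 156  # steps logged at epoch 1.0
epochs = 80            # num_epochs above
print(steps_per_epoch * epochs)  # 12480, the final step in the table
```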
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230825070638
|
dkqjrm
| 2023-08-25T00:19:17Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T22:06:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230825070638'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230825070638
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3456
- Accuracy: 0.7329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.7894 | 0.5271 |
| No log | 2.0 | 312 | 0.6658 | 0.5379 |
| No log | 3.0 | 468 | 0.6408 | 0.5054 |
| 0.886 | 4.0 | 624 | 0.7134 | 0.4729 |
| 0.886 | 5.0 | 780 | 0.6234 | 0.5560 |
| 0.886 | 6.0 | 936 | 0.4782 | 0.6318 |
| 0.7765 | 7.0 | 1092 | 1.1394 | 0.5776 |
| 0.7765 | 8.0 | 1248 | 0.5214 | 0.6534 |
| 0.7765 | 9.0 | 1404 | 0.4206 | 0.6570 |
| 0.7206 | 10.0 | 1560 | 0.5019 | 0.6643 |
| 0.7206 | 11.0 | 1716 | 0.7680 | 0.5343 |
| 0.7206 | 12.0 | 1872 | 0.3433 | 0.7220 |
| 0.6543 | 13.0 | 2028 | 0.3834 | 0.7292 |
| 0.6543 | 14.0 | 2184 | 0.4588 | 0.6751 |
| 0.6543 | 15.0 | 2340 | 0.3413 | 0.7040 |
| 0.6543 | 16.0 | 2496 | 0.4874 | 0.6426 |
| 0.5973 | 17.0 | 2652 | 0.3283 | 0.7256 |
| 0.5973 | 18.0 | 2808 | 0.3605 | 0.7329 |
| 0.5973 | 19.0 | 2964 | 0.3314 | 0.7256 |
| 0.5433 | 20.0 | 3120 | 0.5998 | 0.6606 |
| 0.5433 | 21.0 | 3276 | 0.3489 | 0.6931 |
| 0.5433 | 22.0 | 3432 | 0.4316 | 0.6715 |
| 0.5373 | 23.0 | 3588 | 0.3328 | 0.7076 |
| 0.5373 | 24.0 | 3744 | 0.3379 | 0.7220 |
| 0.5373 | 25.0 | 3900 | 0.3580 | 0.7148 |
| 0.4923 | 26.0 | 4056 | 0.3141 | 0.7329 |
| 0.4923 | 27.0 | 4212 | 0.4341 | 0.7365 |
| 0.4923 | 28.0 | 4368 | 0.3386 | 0.7220 |
| 0.4513 | 29.0 | 4524 | 0.3038 | 0.7220 |
| 0.4513 | 30.0 | 4680 | 0.3775 | 0.7220 |
| 0.4513 | 31.0 | 4836 | 0.4197 | 0.7076 |
| 0.4513 | 32.0 | 4992 | 0.4666 | 0.7220 |
| 0.4041 | 33.0 | 5148 | 0.3355 | 0.7365 |
| 0.4041 | 34.0 | 5304 | 0.3147 | 0.7329 |
| 0.4041 | 35.0 | 5460 | 0.3810 | 0.7184 |
| 0.3705 | 36.0 | 5616 | 0.3184 | 0.7256 |
| 0.3705 | 37.0 | 5772 | 0.3668 | 0.7076 |
| 0.3705 | 38.0 | 5928 | 0.3859 | 0.7220 |
| 0.3556 | 39.0 | 6084 | 0.3010 | 0.7329 |
| 0.3556 | 40.0 | 6240 | 0.3201 | 0.7220 |
| 0.3556 | 41.0 | 6396 | 0.3304 | 0.7329 |
| 0.3089 | 42.0 | 6552 | 0.3634 | 0.7365 |
| 0.3089 | 43.0 | 6708 | 0.3844 | 0.7184 |
| 0.3089 | 44.0 | 6864 | 0.3320 | 0.7220 |
| 0.3015 | 45.0 | 7020 | 0.3696 | 0.7220 |
| 0.3015 | 46.0 | 7176 | 0.3665 | 0.7220 |
| 0.3015 | 47.0 | 7332 | 0.3355 | 0.7256 |
| 0.3015 | 48.0 | 7488 | 0.3568 | 0.7292 |
| 0.2709 | 49.0 | 7644 | 0.3450 | 0.7329 |
| 0.2709 | 50.0 | 7800 | 0.3790 | 0.7148 |
| 0.2709 | 51.0 | 7956 | 0.3516 | 0.7112 |
| 0.2681 | 52.0 | 8112 | 0.3741 | 0.7329 |
| 0.2681 | 53.0 | 8268 | 0.3615 | 0.7220 |
| 0.2681 | 54.0 | 8424 | 0.3479 | 0.7292 |
| 0.2477 | 55.0 | 8580 | 0.3401 | 0.7184 |
| 0.2477 | 56.0 | 8736 | 0.3766 | 0.7329 |
| 0.2477 | 57.0 | 8892 | 0.3562 | 0.7148 |
| 0.2344 | 58.0 | 9048 | 0.3412 | 0.7220 |
| 0.2344 | 59.0 | 9204 | 0.3782 | 0.7437 |
| 0.2344 | 60.0 | 9360 | 0.3723 | 0.7040 |
| 0.2126 | 61.0 | 9516 | 0.3852 | 0.7292 |
| 0.2126 | 62.0 | 9672 | 0.3901 | 0.7256 |
| 0.2126 | 63.0 | 9828 | 0.3698 | 0.7112 |
| 0.2126 | 64.0 | 9984 | 0.3249 | 0.7220 |
| 0.2127 | 65.0 | 10140 | 0.3979 | 0.7004 |
| 0.2127 | 66.0 | 10296 | 0.3705 | 0.7365 |
| 0.2127 | 67.0 | 10452 | 0.3317 | 0.7220 |
| 0.199 | 68.0 | 10608 | 0.3322 | 0.7329 |
| 0.199 | 69.0 | 10764 | 0.3706 | 0.7220 |
| 0.199 | 70.0 | 10920 | 0.3628 | 0.7148 |
| 0.1959 | 71.0 | 11076 | 0.3600 | 0.7437 |
| 0.1959 | 72.0 | 11232 | 0.3349 | 0.7437 |
| 0.1959 | 73.0 | 11388 | 0.3650 | 0.7184 |
| 0.184 | 74.0 | 11544 | 0.3337 | 0.7365 |
| 0.184 | 75.0 | 11700 | 0.3309 | 0.7329 |
| 0.184 | 76.0 | 11856 | 0.3237 | 0.7365 |
| 0.183 | 77.0 | 12012 | 0.3430 | 0.7256 |
| 0.183 | 78.0 | 12168 | 0.3567 | 0.7329 |
| 0.183 | 79.0 | 12324 | 0.3541 | 0.7329 |
| 0.183 | 80.0 | 12480 | 0.3456 | 0.7329 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
IALABS/Arturosfastfood
|
IALABS
| 2023-08-25T00:14:58Z | 0 | 1 | null |
[
"conversational",
"es",
"license:other",
"region:us"
] |
text-generation
| 2023-08-24T23:32:33Z |
---
license: other
language:
- es
pipeline_tag: conversational
---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
|
stevhliu/my_awesome_model
|
stevhliu
| 2023-08-25T00:04:52Z | 16,177 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-28T18:41:57Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: stevhliu/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stevhliu/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0632
- Validation Loss: 0.2355
- Train Accuracy: 0.9295
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
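With `power=1.0` and `cycle=False`, the Keras `PolynomialDecay` above reduces to a straight linear ramp from 2e-05 to 0.0 over 7,810 steps, held at the end value afterwards. A rough sketch of that rule (a hand-rolled illustration, not the Keras class itself):

```python
def polynomial_decay(step: int, initial_lr: float = 2e-5, decay_steps: int = 7810,
                     end_lr: float = 0.0, power: float = 1.0) -> float:
    """Keras-style PolynomialDecay with cycle=False: clamp the step, then interpolate."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1.0 - step / decay_steps) ** power + end_lr
```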
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2518 | 0.1859 | 0.9261 | 0 |
| 0.1319 | 0.1822 | 0.9318 | 1 |
| 0.0632 | 0.2355 | 0.9295 | 2 |
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
PivotOrDie/vit-base-patch16-224-finetuned-flower
|
PivotOrDie
| 2023-08-25T00:02:48Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-24T23:46:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
abdiharyadi/IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-10
|
abdiharyadi
| 2023-08-24T23:50:07Z | 291 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Wikidepia/IndoT5-base",
"base_model:finetune:Wikidepia/IndoT5-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-24T23:19:50Z |
---
base_model: Wikidepia/IndoT5-base
tags:
- generated_from_trainer
model-index:
- name: IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-10
This model is a fine-tuned version of [Wikidepia/IndoT5-base](https://huggingface.co/Wikidepia/IndoT5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 331 | 0.6415 |
| 0.4503 | 2.0 | 662 | 0.8102 |
| 0.4503 | 3.0 | 993 | 0.8944 |
| 0.0619 | 4.0 | 1324 | 0.9228 |
| 0.0262 | 5.0 | 1655 | 1.0949 |
| 0.0262 | 6.0 | 1986 | 1.1223 |
| 0.0158 | 7.0 | 2317 | 1.1668 |
| 0.0103 | 8.0 | 2648 | 1.1655 |
| 0.0103 | 9.0 | 2979 | 1.1861 |
| 0.008 | 10.0 | 3310 | 1.1789 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
slmnpl/stable-diffusion-webui-master
|
slmnpl
| 2023-08-24T23:28:46Z | 0 | 0 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2023-08-24T23:18:23Z |
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: a generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_train_halfcheetah_high-2408_2205-33
|
dt-and-vanilla-ardt
| 2023-08-24T23:16:05Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T21:06:53Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_train_halfcheetah_high-2408_2205-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_train_halfcheetah_high-2408_2205-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
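The schedule above ramps the learning rate linearly from 0 to 1e-4 over the first 1,000 warmup steps, then decays it linearly back to 0 at step 10,000. A minimal sketch of that shape (an illustration, not the `transformers` scheduler):

```python
def lr_at(step: int, base_lr: float = 1e-4, warmup: int = 1000, total: int = 10000) -> float:
    """Linear warmup to base_lr over `warmup` steps, then linear decay to 0 at `total`."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0.0, (total - step) / (total - warmup))
```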
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jlpan/starcoder-cpp2py-newsnippet1
|
jlpan
| 2023-08-24T23:02:57Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:bigcode/starcoder",
"base_model:finetune:bigcode/starcoder",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-23T00:24:41Z |
---
license: bigcode-openrail-m
base_model: bigcode/starcoder
tags:
- generated_from_trainer
model-index:
- name: starcoder-cpp2py-newsnippet1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoder-cpp2py-newsnippet1
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 150
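The `total_train_batch_size` above is derived, not set directly: with gradient accumulation, each optimizer step aggregates gradients from several forward passes, so the effective batch is the per-device batch times the accumulation steps (times the device count, assumed 1 here): 32 × 8 = 256. As a one-line sketch:

```python
def effective_batch_size(per_device: int, accum_steps: int, num_devices: int = 1) -> int:
    """Examples seen per optimizer update when gradients are accumulated before stepping."""
    return per_device * accum_steps * num_devices
```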
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3875 | 0.17 | 25 | 0.4694 |
| 0.2947 | 0.33 | 50 | 0.2126 |
| 0.2152 | 0.5 | 75 | 0.2016 |
| 0.2054 | 0.67 | 100 | 0.1974 |
| 0.2004 | 0.83 | 125 | 0.1966 |
| 0.1883 | 1.05 | 150 | 0.1964 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Akhilsplendid/T5-model
|
Akhilsplendid
| 2023-08-24T23:00:48Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:philschmid/flan-t5-base-samsum",
"base_model:finetune:philschmid/flan-t5-base-samsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-24T22:59:06Z |
---
license: apache-2.0
base_model: philschmid/flan-t5-base-samsum
tags:
- generated_from_trainer
model-index:
- name: T5-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-model
This model is a fine-tuned version of [philschmid/flan-t5-base-samsum](https://huggingface.co/philschmid/flan-t5-base-samsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9849 | 0.8 | 10 | 0.8062 |
| 0.9748 | 1.61 | 20 | 0.8026 |
| 0.9772 | 2.41 | 30 | 0.7968 |
| 0.979 | 3.22 | 40 | 0.7889 |
| 0.9729 | 4.02 | 50 | 0.7793 |
| 0.9479 | 4.82 | 60 | 0.7687 |
| 0.9111 | 5.63 | 70 | 0.7577 |
| 0.8956 | 6.43 | 80 | 0.7460 |
| 0.8768 | 7.24 | 90 | 0.7338 |
| 0.8566 | 8.04 | 100 | 0.7224 |
| 0.8342 | 8.84 | 110 | 0.7120 |
| 0.8273 | 9.65 | 120 | 0.7013 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ufal/byt5-small-multilexnorm2021-trde
|
ufal
| 2023-08-24T21:40:24Z | 115 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"lexical normalization",
"tr",
"de",
"multilingual",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- tr
- de
- multilingual
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Turkish-German version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
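Because ByT5 is byte-level, its "vocabulary" is essentially the 256 UTF-8 byte values plus a handful of special ids: in the Hugging Face tokenizer, byte `b` maps to token id `b + 3`, with 0/1/2 reserved for pad/eos/unk. A rough sketch of that mapping (an illustration of the convention; use `AutoTokenizer` in practice):

```python
def byt5_encode(text: str, offset: int = 3) -> list[int]:
    """Map each UTF-8 byte of `text` to a ByT5-style token id (byte value + offset)."""
    return [b + offset for b in text.encode("utf-8")]

def byt5_decode(ids: list[int], offset: int = 3) -> str:
    """Inverse: strip the special-id offset and decode the byte string."""
    return bytes(i - offset for i in ids).decode("utf-8")

# byt5_encode("hi") -> [107, 108]
```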
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
tifa-benchmark/llama2_tifa_question_generation
|
tifa-benchmark
| 2023-08-24T21:28:03Z | 418 | 10 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"llama2",
"text-to-image",
"en",
"dataset:TIFA",
"arxiv:2303.11897",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-16T00:41:50Z |
---
license: apache-2.0
inference: true
widget:
- text: "<s>[INST] <<SYS>>\nGiven an image description, generate one or two multiple-choice questions that verifies if the image description is correct.\nClassify each concept into a type (object, human, animal, food, activity, attribute, counting, color, material, spatial, location, shape, other), and then generate a question for each type.\n\n<</SYS>>\n\nDescription: a blue rabbit and a red plane [/INST] Entities:"
pipeline_tag: text-generation
tags:
- text-generation-inference
- llama2
- text-to-image
datasets:
- TIFA
language:
- en
---
Project page: <https://tifa-benchmark.github.io/>
This is the text parsing and question generation model for the ICCV 2023 paper [TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering](https://arxiv.org/abs/2303.11897)
We introduce TIFA (Text-to-Image Faithfulness evaluation with question Answering), an automatic evaluation metric that measures the faithfulness of a generated image to its text input via visual question answering (VQA). Specifically, given a text input, we automatically generate several question-answer pairs using a language model. We calculate image faithfulness by checking whether existing VQA models can answer these questions using the generated image.
Specifically, this fine-tuned LLaMA 2 model is the substitute for the GPT-3 model in the paper. It can parse an arbitrary prompt into visual entities, attributes, relations, etc. and generate question-answer tuples for each of them. See examples below.
# QuickStart
All codes are from <https://github.com/Yushi-Hu/tifa>. Clone this repo to easily use this model together with other modules (e.g. VQA) provided in TIFA.
Please follow the prompt format, which will give the best performance.
```python
import torch
import transformers
# prepare the LLaMA 2 model
model_name = "tifa-benchmark/llama2_tifa_question_generation"
pipeline = transformers.pipeline(
"text-generation",
model=model_name,
torch_dtype=torch.float16,
device_map="auto",
)
# formating prompt following LLaMA 2 style
def create_qg_prompt(caption):
INTRO_BLURB = "Given an image description, generate one or two multiple-choice questions that verifies if the image description is correct.\nClassify each concept into a type (object, human, animal, food, activity, attribute, counting, color, material, spatial, location, shape, other), and then generate a question for each type.\n"
formated_prompt = f"<s>[INST] <<SYS>>\n{INTRO_BLURB}\n<</SYS>>\n\n"
formated_prompt += f"Description: {caption} [/INST] Entities:"
return formated_prompt
test_caption = "a blue rabbit and a red plane"
# create prompt
prompt = create_qg_prompt(test_caption)
# text completion
sequences = pipeline(
prompt, do_sample=False, num_beams=5, num_return_sequences=1, max_length=512)
output = sequences[0]['generated_text'][len(prompt):]
output = output.split('\n\n')[0]
# output
print(output)
#### Expected output ###
# rabbit, plane
# Activites:
# Colors: blue, red
# Counting:
# Other attributes:
# About rabbit (animal):
# Q: is this a rabbit?
# Choices: yes, no
# A: yes
# About rabbit (animal):
# Q: what animal is in the picture?
# Choices: rabbit, dog, cat, fish
# A: rabbit
# About plane (object):
# Q: is this a plane?
# Choices: yes, no
# A: yes
# About plane (object):
# Q: what type of vehicle is this?
# Choices: plane, car, motorcycle, bus
# A: plane
# About blue (color):
# Q: is the rabbit blue?
# Choices: yes, no
# A: yes
# About blue (color):
# Q: what color is the rabbit?
# Choices: blue, red, yellow, green
# A: blue
# About red (color):
# Q: is the plane red?
# Choices: yes, no
# A: yes
# About red (color):
# Q: what color is the plane?
# Choices: red, blue, yellow, green
# A: red
```
# Use this LM under tifascore package
tifascore provides extra helper functions, e.g. for parsing this output. First install tifascore according to <https://github.com/Yushi-Hu/tifa>. Example usage:
```python
from tifascore import get_llama2_pipeline, get_llama2_question_and_answers
pipeline = get_llama2_pipeline("tifa-benchmark/llama2_tifa_question_generation")
print(get_llama2_question_and_answers(pipeline, "a blue rabbit and a red plane"))
#### Expected output ###
# [{'caption': 'a blue rabbit and a red plane', 'element': 'rabbit', 'question': 'what animal is in the picture?', 'choices': ['rabbit', 'dog', 'cat', 'fish'], 'answer': 'rabbit', 'element_type': 'animal/human'}, {'caption': 'a blue rabbit and a red plane', 'element': 'plane', 'question': 'is this a plane?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'object'}, {'caption': 'a blue rabbit and a red plane', 'element': 'plane', 'question': 'what type of vehicle is this?', 'choices': ['plane', 'car', 'motorcycle', 'bus'], 'answer': 'plane', 'element_type': 'object'}, {'caption': 'a blue rabbit and a red plane', 'element': 'blue', 'question': 'is the rabbit blue?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'blue', 'question': 'what color is the rabbit?', 'choices': ['blue', 'red', 'yellow', 'green'], 'answer': 'blue', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'red', 'question': 'is the plane red?', 'choices': ['yes', 'no'], 'answer': 'yes', 'element_type': 'color'}, {'caption': 'a blue rabbit and a red plane', 'element': 'red', 'question': 'what color is the plane?', 'choices': ['red', 'blue', 'yellow', 'green'], 'answer': 'red', 'element_type': 'color'}]
```
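Given question–answer tuples like the ones above, the TIFA score for an image is simply the fraction of questions a VQA model answers correctly. A minimal sketch of that scoring step — the `vqa_answer` callable below is a hypothetical stand-in for a real VQA model such as those bundled with tifascore:

```python
def tifa_score(qa_pairs, image, vqa_answer):
    """Fraction of generated questions the VQA model answers correctly.

    qa_pairs: list of dicts with 'question', 'choices', 'answer' keys,
              as produced by get_llama2_question_and_answers.
    vqa_answer: callable (image, question, choices) -> chosen answer string.
    """
    if not qa_pairs:
        return 0.0
    correct = sum(
        1 for qa in qa_pairs
        if vqa_answer(image, qa["question"], qa["choices"]) == qa["answer"]
    )
    return correct / len(qa_pairs)

# Example with a stub VQA model that always picks the first choice:
qa_pairs = [
    {"question": "is this a rabbit?", "choices": ["yes", "no"], "answer": "yes"},
    {"question": "what color is the rabbit?", "choices": ["blue", "red"], "answer": "blue"},
    {"question": "is the plane red?", "choices": ["no", "yes"], "answer": "yes"},
]
stub_vqa = lambda image, question, choices: choices[0]
print(tifa_score(qa_pairs, None, stub_vqa))  # 2 of 3 correct -> 0.666...
```

With a real VQA model, `image` would be the generated image and `vqa_answer` would run multiple-choice VQA over it.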
## Bibtex
```
@article{hu2023tifa,
title={Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering},
author={Hu, Yushi and Liu, Benlin and Kasai, Jungo and Wang, Yizhong and Ostendorf, Mari and Krishna, Ranjay and Smith, Noah A},
journal={arXiv preprint arXiv:2303.11897},
year={2023}
}
```
|
magooie/Reinforce-Cartpole-v1
|
magooie
| 2023-08-24T21:09:31Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T21:09:22Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
anth0nyhak1m/SS_model
|
anth0nyhak1m
| 2023-08-24T20:52:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T20:51:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SS_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SS_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3980
- Accuracy: 0.9587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.153 | 1.0 | 4301 | 0.1472 | 0.9526 |
| 0.1165 | 2.0 | 8602 | 0.1376 | 0.9562 |
| 0.0951 | 3.0 | 12903 | 0.1462 | 0.9596 |
| 0.0851 | 4.0 | 17204 | 0.1550 | 0.9602 |
| 0.0709 | 5.0 | 21505 | 0.1848 | 0.9596 |
| 0.069 | 6.0 | 25806 | 0.2027 | 0.9586 |
| 0.0591 | 7.0 | 30107 | 0.2266 | 0.9582 |
| 0.047 | 8.0 | 34408 | 0.2110 | 0.9573 |
| 0.0391 | 9.0 | 38709 | 0.2405 | 0.9577 |
| 0.0333 | 10.0 | 43010 | 0.2865 | 0.9566 |
| 0.0336 | 11.0 | 47311 | 0.2671 | 0.9588 |
| 0.0226 | 12.0 | 51612 | 0.2743 | 0.9567 |
| 0.0266 | 13.0 | 55913 | 0.3281 | 0.9577 |
| 0.0191 | 14.0 | 60214 | 0.3062 | 0.9572 |
| 0.0232 | 15.0 | 64515 | 0.3479 | 0.9585 |
| 0.0149 | 16.0 | 68816 | 0.3542 | 0.9587 |
| 0.0099 | 17.0 | 73117 | 0.3646 | 0.9587 |
| 0.0123 | 18.0 | 77418 | 0.3721 | 0.9584 |
| 0.0091 | 19.0 | 81719 | 0.3896 | 0.9590 |
| 0.0086 | 20.0 | 86020 | 0.3980 | 0.9587 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
kejolong/racequeen
|
kejolong
| 2023-08-24T20:44:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T20:43:34Z |
---
license: creativeml-openrail-m
---
|
zehralx/distilbert-base-uncased-finetuned-emotion
|
zehralx
| 2023-08-24T20:44:54Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-20T13:38:45Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9234578926922112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.9235
- F1: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8077 | 1.0 | 250 | 0.3196 | 0.9065 | 0.9049 |
| 0.2488 | 2.0 | 500 | 0.2208 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.3
- Tokenizers 0.13.3
|
dimitarrskv/ppo-SnowballTarget
|
dimitarrskv
| 2023-08-24T20:40:06Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-24T20:40:04Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: dimitarrskv/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
camenduru/StableSR
|
camenduru
| 2023-08-24T20:36:48Z | 0 | 3 | null |
[
"image-to-image",
"arxiv:2305.07015",
"license:other",
"region:us"
] |
image-to-image
| 2023-08-24T20:29:35Z |
---
license: other
pipeline_tag: image-to-image
---
# StableSR Model Card
This model card focuses on the models associated with the StableSR, available [here](https://github.com/IceClear/StableSR).
## Model Details
- **Developed by:** Jianyi Wang
- **Model type:** Diffusion-based image super-resolution model
- **License:** [S-Lab License 1.0](https://github.com/IceClear/StableSR/blob/main/LICENSE.txt)
- **Model Description:** This is the model used in [Paper](https://arxiv.org/abs/2305.07015).
- **Resources for more information:** [GitHub Repository](https://github.com/IceClear/StableSR).
- **Cite as:**
@InProceedings{wang2023exploiting,
author = {Wang, Jianyi and Yue, Zongsheng and Zhou, Shangchen and Chan, Kelvin CK and Loy, Chen Change},
title = {Exploiting Diffusion Prior for Real-World Image Super-Resolution},
booktitle = {arXiv preprint arXiv:2305.07015},
year = {2023},
}
# Uses
Please refer to [S-Lab License 1.0](https://github.com/IceClear/StableSR/blob/main/LICENSE.txt)
## Limitations and Bias
### Limitations
- StableSR still requires multiple steps for generating an image, which is much slower than GAN-based approaches, especially for large images beyond 512 or 768.
- StableSR sometimes cannot keep 100% fidelity due to its generative nature.
- StableSR sometimes cannot generate perfect details under complex real-world scenarios.
### Bias
While our model is based on a pre-trained Stable Diffusion model, currently we do not observe obvious bias in generated results.
We conjecture the main reason is that our model does not rely on text prompts but on low-resolution images.
Such strong conditions make our model less likely to be affected.
## Training
**Training Data**
The model developer used the following dataset for training the model:
- Our diffusion model is finetuned on DF2K (DIV2K and Flickr2K) + OST datasets, available [here](https://github.com/xinntao/Real-ESRGAN/blob/master/docs/Training.md).
- We further generate 100k synthetic LR-HR pairs on DF2K_OST using the finetuned diffusion model for training the CFW module.
**Training Procedure**
StableSR is an image super-resolution model finetuned on [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), further equipped with a time-aware encoder and a controllable feature wrapping (CFW) module.
- Following Stable Diffusion, images are encoded through the fixed autoencoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4.
- The latent representations are fed to the time-aware encoder as guidance.
- The loss is the same as Stable Diffusion.
- After finetuning the diffusion model, we further train the CFW module using the data generated by the finetuned diffusion model.
- The autoencoder model is fixed and only CFW is trainable.
- The loss is similar to training an autoencoder, except that we use a fixed adversarial loss weight of 0.025 rather than a self-adjustable one.
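The downsampling factor above can be illustrated with a quick shape computation (a sketch; `f = 8` and 4 latent channels as stated):

```python
def latent_shape(h, w, f=8, c=4):
    """Shape of the autoencoder latent for an h x w x 3 image,
    with downsampling factor f and c latent channels."""
    assert h % f == 0 and w % f == 0, "H and W should be divisible by f"
    return (h // f, w // f, c)

print(latent_shape(512, 512))  # (64, 64, 4)
print(latent_shape(768, 768))  # (96, 96, 4)
```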
We currently provide the following checkpoints:
- [stablesr_000117.ckpt](https://huggingface.co/Iceclear/StableSR/resolve/main/stablesr_000117.ckpt): Diffusion model finetuned on [SD2.1-512base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) with DF2K_OST dataset for 117 epochs.
- [vqgan_cfw_00011.ckpt](https://huggingface.co/Iceclear/StableSR/resolve/main/vqgan_cfw_00011.ckpt): CFW module with fixed autoencoder trained on synthetic paired data for 11 epochs.
- [stablesr_768v_000139.ckpt](https://huggingface.co/Iceclear/StableSR/blob/main/stablesr_768v_000139.ckpt): Diffusion model finetuned on [SD2.1-768v](https://huggingface.co/stabilityai/stable-diffusion-2-1) with DF2K_OST dataset for 139 epochs.
## Evaluation Results
See [Paper](https://arxiv.org/abs/2305.07015) for details.
|
AmelieSchreiber/esm2_t6_8M_UR50D_cafa5_lora
|
AmelieSchreiber
| 2023-08-24T20:35:14Z | 5 | 1 |
peft
|
[
"peft",
"pytorch",
"esm",
"esm2",
"ESM-2",
"protein language model",
"LoRA",
"Low Rank Adaptation",
"biology",
"CAFA-5",
"protein function prediction",
"en",
"dataset:AmelieSchreiber/cafa_5",
"license:mit",
"region:us"
] | null | 2023-08-22T05:29:29Z |
---
library_name: peft
tags:
- esm
- esm2
- ESM-2
- protein language model
- LoRA
- Low Rank Adaptation
- biology
- CAFA-5
- protein function prediction
datasets:
- AmelieSchreiber/cafa_5
license: mit
language:
- en
---
# ESM-2 LoRA for CAFA-5 Protein Function Prediction
This is a Low Rank Adaptation (LoRA) of [cafa_5_protein_function_prediction](https://huggingface.co/AmelieSchreiber/cafa_5_protein_function_prediction),
which is a fine-tuned (without LoRA) version of `facebook/esm2_t6_8M_UR50D` for the same task. For more information
on training a sequence-classifier language model with LoRA, [see here](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/LoRA.ipynb).
Note that that example is for natural language processing and must be adapted to our use case using a protein language model like ESM-2.
## Training procedure
Using Hugging Face's Parameter Efficient Fine-Tuning (PEFT) library, a Low Rank Adaptation was trained for
3 epochs on the CAFA-5 protein sequences dataset at an 80/20 train/test split. The dataset can be
[found here](https://huggingface.co/datasets/AmelieSchreiber/cafa_5). Somewhat naively, the model was trained on
the `train_sequences.fasta` file of protein sequences, with the `train_terms.tsv` file serving as the labels.
The Gene Ontology is a hierarchy, so labels lower in the hierarchy should be weighted more heavily, or the
graph structure should be taken into account. The model achieved the following metrics:
```
Epoch: 3,
Validation Loss: 0.0031,
Validation Micro F1: 0.3752,
Validation Macro F1: 0.9968,
Validation Micro Precision: 0.5287,
Validation Macro Precision: 0.9992,
Validation Micro Recall: 0.2911,
Validation Macro Recall: 0.9968
```
Future iterations of this model will likely need to take into account class weighting.
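One hedged sketch of such class weighting: compute a per-term `pos_weight` from label frequencies, which could then be passed to `torch.nn.BCEWithLogitsLoss(pos_weight=...)`. The term counts below are illustrative, not real CAFA-5 statistics:

```python
def compute_pos_weight(label_matrix):
    """Per-class pos_weight = (#negatives / #positives) for multi-label BCE.

    label_matrix: list of rows, each a list of 0/1 labels, one column per GO term.
    The resulting weights can be passed to torch.nn.BCEWithLogitsLoss(pos_weight=...).
    """
    num_examples = len(label_matrix)
    num_terms = len(label_matrix[0])
    weights = []
    for j in range(num_terms):
        positives = max(sum(row[j] for row in label_matrix), 1)  # avoid div by zero
        weights.append((num_examples - positives) / positives)
    return weights

# Illustrative labels for 4 examples x 3 GO terms (not real CAFA-5 counts):
labels = [[1, 0, 1],
          [1, 0, 0],
          [1, 1, 0],
          [1, 0, 0]]
print(compute_pos_weight(labels))  # [0.0, 3.0, 3.0] -> rarer terms weighted more
```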
### Framework versions
- PEFT 0.4.0
## Using the Model
To use the model, try downloading the data [from here](https://huggingface.co/datasets/AmelieSchreiber/cafa_5),
adjust the paths to the files in the code below to their local paths on your machine, and try running:
```python
import os
import numpy as np
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification, AdamW
from torch.nn.functional import binary_cross_entropy_with_logits
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, precision_score, recall_score
from accelerate import Accelerator
from Bio import SeqIO
# Step 1: Data Preprocessing
fasta_file = "data/Train/train_sequences.fasta"
tsv_file = "data/Train/train_terms.tsv"
fasta_data = {}
tsv_data = {}
for record in SeqIO.parse(fasta_file, "fasta"):
fasta_data[record.id] = str(record.seq)
with open(tsv_file, 'r') as f:
for line in f:
parts = line.strip().split("\t")
tsv_data[parts[0]] = parts[1:]
unique_terms = list(set(term for terms in tsv_data.values() for term in terms))
def parse_fasta(file_path):
"""
Parses a FASTA file and returns a list of sequences.
"""
with open(file_path, 'r') as f:
content = f.readlines()
sequences = []
current_sequence = ""
for line in content:
if line.startswith(">"):
if current_sequence:
sequences.append(current_sequence)
current_sequence = ""
else:
current_sequence += line.strip()
if current_sequence:
sequences.append(current_sequence)
return sequences
# Parse the provided FASTA file
fasta_file_path = "data/Test/testsuperset.fasta"
protein_sequences = parse_fasta(fasta_file_path)
# protein_sequences[:3] # Displaying the first 3 sequences for verification
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification
from sklearn.metrics import precision_recall_fscore_support
# 1. Parsing the go-basic.obo file (Assuming this is still needed)
def parse_obo_file(file_path):
with open(file_path, 'r') as f:
data = f.read().split("[Term]")
terms = []
for entry in data[1:]:
lines = entry.strip().split("\n")
term = {}
for line in lines:
if line.startswith("id:"):
term["id"] = line.split("id:")[1].strip()
elif line.startswith("name:"):
term["name"] = line.split("name:")[1].strip()
elif line.startswith("namespace:"):
term["namespace"] = line.split("namespace:")[1].strip()
elif line.startswith("def:"):
term["definition"] = line.split("def:")[1].split('"')[1]
terms.append(term)
return terms
# Let's assume the path to go-basic.obo is as follows (please modify if different)
obo_file_path = "data/Train/go-basic.obo"
parsed_terms = parse_obo_file(obo_file_path)  # Replace with your path
# 2. Load the saved model and tokenizer
# Assuming the model path provided is correct
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel, PeftConfig
# Load the tokenizer and model
model_id = "AmelieSchreiber/esm2_t6_8M_UR50D_cafa5_lora" # Replace with your Hugging Face hub model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
# First, we load the underlying base model
base_model = AutoModelForSequenceClassification.from_pretrained(model_id)
# Then, we load the model with PEFT
model = PeftModel.from_pretrained(base_model, model_id)
loaded_model = model
loaded_tokenizer = AutoTokenizer.from_pretrained(model_id)
# 3. The predict_protein_function function
def predict_protein_function(sequence, model, tokenizer, go_terms):
inputs = tokenizer(sequence, return_tensors="pt", padding=True, truncation=True, max_length=1022)
model.eval()
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.sigmoid(outputs.logits)
predicted_indices = torch.where(predictions > 0.05)[1].tolist()
functions = []
for idx in predicted_indices:
term_id = unique_terms[idx] # Use the unique_terms list from your training script
for term in go_terms:
if term["id"] == term_id:
functions.append(term["name"])
break
return functions
# 4. Predicting protein function for the sequences in the FASTA file
protein_functions = {}
for seq in protein_sequences[:20]:  # Using only the first 20 sequences for demonstration
predicted_functions = predict_protein_function(seq, loaded_model, loaded_tokenizer, parsed_terms)
protein_functions[seq[:20] + "..."] = predicted_functions # Using first 20 characters as key
protein_functions
```
|
sidroy/bloom-560m
|
sidroy
| 2023-08-24T20:33:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T20:25:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
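The same settings can be expressed as a `transformers.BitsAndBytesConfig` when reloading the base model for this adapter. This is a sketch, assuming a recent `transformers`/`bitsandbytes` install; the base model id is an assumption inferred from the repo name:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror of the quantization config above: 8-bit loading, default int8 threshold.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",          # assumed base model for this adapter
    quantization_config=bnb_config,
    device_map="auto",
)
```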
### Framework versions
- PEFT 0.6.0.dev0
|