| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-07 18:30:29 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 544 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-07 18:30:28 |
| card | string | length 11 to 1.01M |
ParallelnoMinded/distilbert-base-uncased-finetuned-squad
ParallelnoMinded
2023-07-17T15:36:00Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-16T14:22:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1562 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2273 | 1.0 | 5533 | 1.1657 | | 0.9589 | 2.0 | 11066 | 1.1226 | | 0.7485 | 3.0 | 16599 | 1.1562 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1+cu116 - Datasets 2.13.1 - Tokenizers 0.13.3
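The card above stops at the training summary; a minimal usage sketch (not part of the original card) for this extractive question-answering checkpoint could look like the following, where the question and context strings are placeholders:

```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned checkpoint with the
# question-answering pipeline and run it on a placeholder example.
qa = pipeline(
    "question-answering",
    model="ParallelnoMinded/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```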
roa7n/gpt2-human_nontata_promoters-last_layer_1
roa7n
2023-07-17T15:35:28Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T15:35:26Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
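The PEFT cards in this listing only record the framework version; a generic loading sketch (an assumption, since the card names neither the base model nor the task head) might read the base checkpoint from the adapter config:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM

adapter_id = "roa7n/gpt2-human_nontata_promoters-last_layer_1"

# Read the adapter config to find the base checkpoint it was trained against.
config = PeftConfig.from_pretrained(adapter_id)

# The causal-LM head is an assumption; the card does not state the task type.
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```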
wanderer2k1/T5-LawsQA
wanderer2k1
2023-07-17T15:35:23Z
103
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-30T14:49:29Z
--- widget: - text: "Trả lời câu hỏi: tại trụ sở tổ chức trợ giúp pháp lý thì có cần niêm yết lịch và nội quy không? Trong ngữ cảnh: 11/2017/qh14 điều 28. địa điểm tiếp người được trợ giúp pháp lý. 1. tổ chức thực hiện trợ giúp pháp lý bố trí nơi tiếp người được trợ giúp pháp lý tại trụ sở của tổ chức thực hiện trợ giúp pháp lý hoặc tại địa điểm khác ngoài trụ sở của tổ chức bảo đảm điều kiện để việc trình bày yêu cầu được dễ dàng, thuận lợi. 2. tại trụ sở của tổ chức thực hiện trợ giúp pháp lý phải niêm yết lịch tiếp, nội quy tiếp người được trợ giúp pháp lý. " example_title: "Example #1" inference: parameters: temperature: 0.0, min_length: 32, max_length: 256 - text: "Trả lời câu hỏi: chuyến bay công vụ được định nghĩa như thế nào? Trong ngữ cảnh: 194/2016/tt-btc điều 2. giải thích từ ngữ. trong thông tư này, các từ ngữ dưới đây được hiểu như sau: 1. chuyến bay công vụ: là chuyến bay của tàu bay quân sự, tàu bay chuyên dụng của lực lượng hải quan, công an và chuyến bay của tàu bay dân dụng sử dụng hoàn toàn cho mục đích công vụ nhà nước. 2. chuyến bay chuyên cơ: là chuyến bay được sử dụng hoàn toàn riêng biệt hoặc kết hợp vận chuyển thương mại và được cơ quan nhà nước có thẩm quyền xác nhận hoặc thông báo theo quy định tại nghị định số 03/2009/nđ-cp ngày 09 tháng 01 năm 2009 của chính phủ về công tác đảm bảo an toàn cho chuyến bay chuyên cơ. " example_title: "Example #2" inference: parameters: temperature: 0.0, min_length: 32, max_length: 256 - text: "Trả lời câu hỏi: có được cho thuê ô tô đang bị thế chấp cho ngân hàng không? Trong ngữ cảnh: 91/2015/qh13 điều 321. quyền của bên thế chấp. 1. khai thác công dụng, hưởng hoa lợi, lợi tức từ tài sản thế chấp, trừ trường hợp hoa lợi, lợi tức cũng là tài sản thế chấp theo thỏa thuận. 2. đầu tư để làm tăng giá trị của tài sản thế chấp. 3. nhận lại tài sản thế chấp do người thứ ba giữ và giấy tờ liên quan đến tài sản thế chấp do bên nhận thế chấp giữ khi nghĩa vụ được bảo đảm bằng thế chấp chấm dứt hoặc được thay thế bằng biện pháp bảo đảm khác. 4. được bán, thay thế, trao đổi tài sản thế chấp, nếu tài sản đó là hàng hóa luân chuyển trong quá trình sản xuất, kinh doanh. trong trường hợp này, quyền yêu cầu bên mua thanh toán tiền, số tiền thu được, tài sản hình thành từ số tiền thu được, tài sản được thay thế hoặc được trao đổi trở thành tài sản thế chấp. trường hợp tài sản thế chấp là kho hàng thì bên thế chấp được quyền thay thế hàng hóa trong kho, nhưng phải bảo đảm giá trị của hàng hóa trong kho đúng như thỏa thuận. 5. được bán, trao đổi, tặng cho tài sản thế chấp không phải là hàng hóa luân chuyển trong quá trình sản xuất, kinh doanh, nếu được bên nhận thế chấp đồng ý hoặc theo quy định của luật. 6. được cho thuê, cho mượn tài sản thế chấp nhưng phải thông báo cho bên thuê, bên mượn biết về việc tài sản cho thuê, cho mượn đang được dùng để thế chấp và phải thông báo cho bên nhận thế chấp biết. " example_title: "Example #3" inference: parameters: temperature: 0.0, min_length: 32, max_length: 256 ---
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e2_s108_v3
KingKazma
2023-07-17T15:29:17Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-17T00:55:15Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s108_v3
KingKazma
2023-07-17T15:22:19Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T00:47:41Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
dereklvlv/ILM_400
dereklvlv
2023-07-17T15:20:40Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T15:13:54Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
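The quantization settings listed in the card above map directly onto a `BitsAndBytesConfig`; a sketch of the equivalent configuration (the base model itself is not named in the card) is:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings reported in the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```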
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e0_s108_v3
KingKazma
2023-07-17T15:15:20Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-17T00:40:07Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
boostcamp-5th-nlp07/polyglot-ko-5.8b-finetuning_0717
boostcamp-5th-nlp07
2023-07-17T15:04:52Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-17T15:04:48Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e-1_s6789_v3
KingKazma
2023-07-17T14:58:40Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T01:37:50Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v4
hafidikhsan
2023-07-17T14:56:20Z
101
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-17T14:53:50Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v4 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3181 - Accuracy: 0.79 - F1: 0.7920 - Precision: 0.7954 - Recall: 0.79 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.14 | 1.0 | 2000 | 0.9878 | 0.604 | 0.5956 | 0.6041 | 0.604 | | 1.3551 | 2.0 | 4000 | 1.0238 | 0.636 | 0.6261 | 0.6489 | 0.636 | | 0.7984 | 3.0 | 6000 | 1.0629 | 0.748 | 0.7475 | 0.7494 | 0.748 | | 0.6879 | 4.0 | 8000 | 1.2007 | 0.772 | 0.7733 | 0.7750 | 0.772 | | 0.0593 | 5.0 | 10000 | 1.2298 | 0.796 | 0.7979 | 0.8011 | 0.796 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
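For reference, a minimal inference sketch for this audio-classification checkpoint (the audio path is a placeholder, not from the card) might be:

```python
from transformers import pipeline

# Hypothetical usage: classify the pronunciation quality of a local recording.
classifier = pipeline(
    "audio-classification",
    model="hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v4",
)
print(classifier("sample.wav"))  # "sample.wav" is a placeholder file path
```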
peterdamn/whisper-tiny-en
peterdamn
2023-07-17T14:46:28Z
78
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-17T14:22:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-en results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train[450:] args: en-US metrics: - name: Wer type: wer value: 0.34415584415584416 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-en This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6314 - Wer Ortho: 0.3473 - Wer: 0.3442 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.001 | 17.86 | 500 | 0.6314 | 0.3473 | 0.3442 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
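A minimal transcription sketch for this Whisper fine-tune (the audio path is a placeholder, not from the card) could be:

```python
from transformers import pipeline

# Hypothetical usage: transcribe a short English recording.
asr = pipeline("automatic-speech-recognition", model="peterdamn/whisper-tiny-en")
print(asr("example.wav")["text"])  # "example.wav" is a placeholder file path
```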
magnustragardh/speecht5_finetuned_voxpopuli_fi
magnustragardh
2023-07-17T14:36:59Z
81
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "fi", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-07-08T12:45:10Z
--- language: - fi license: mit tags: - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_fi results: [] pipeline_tag: text-to-speech --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_fi This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 8000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5006 | 12.05 | 1000 | 0.4627 | | 0.4822 | 24.1 | 2000 | 0.4498 | | 0.4725 | 36.14 | 3000 | 0.4452 | | 0.4653 | 48.19 | 4000 | 0.4427 | | 0.4652 | 60.24 | 5000 | 0.4411 | | 0.4635 | 72.29 | 6000 | 0.4404 | | 0.4583 | 84.34 | 7000 | 0.4403 | | 0.4558 | 96.39 | 8000 | 0.4403 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
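Synthesis with this SpeechT5 fine-tune needs a vocoder and a 512-dimensional speaker x-vector; the sketch below assumes the processor files ship with the repository and uses a random embedding purely as a placeholder, so the output will not sound like any real speaker:

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "magnustragardh/speecht5_finetuned_voxpopuli_fi"
processor = SpeechT5Processor.from_pretrained(repo)   # assumes processor files are in the repo
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hyvää huomenta!", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder; use a real x-vector in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)
```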
michaelee0407/path-to-save-model
michaelee0407
2023-07-17T14:36:19Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-17T14:07:28Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - michaelee0407/path-to-save-model These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
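A usage sketch for these LoRA weights with `diffusers` (recent releases expose `load_lora_weights`; the prompt mirrors the instance prompt from the card, and the output filename is arbitrary) might look like:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights on top of the base pipeline.
pipe.load_lora_weights("michaelee0407/path-to-save-model")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```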
nored355/finetuning-sentiment-model-6000-samples
nored355
2023-07-17T14:12:17Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-17T14:02:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-6000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9066666666666666 - name: F1 type: f1 value: 0.9060402684563759 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-6000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.5302 - Accuracy: 0.9067 - F1: 0.9060 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
nored355/finetuning-sentiment-model-3000-samples
nored355
2023-07-17T13:55:08Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-17T13:52:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.8766233766233766 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3163 - Accuracy: 0.8733 - F1: 0.8766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
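A minimal inference sketch for this sentiment classifier (the input sentence is a placeholder) could be:

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="nored355/finetuning-sentiment-model-3000-samples",
)
print(sentiment("This movie was a pleasant surprise."))
```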
GreenBitAI/LLaMA-7B-2bit-alpaca
GreenBitAI
2023-07-17T13:54:51Z
0
2
null
[ "license:apache-2.0", "region:us" ]
null
2023-07-17T13:51:47Z
--- license: apache-2.0 --- # GreenBit LLaMA This is GreenBitAI's instruction-tuned LoRA parameters for our [*2-bit 7B LLaMA model*](https://huggingface.co/GreenBitAI/LLaMA-7B-2bit) trained on the Alpaca-clean 50k dataset. Please refer to our [Github page](https://github.com/GreenBitAI/low_bit_llama) for the code to run the model and more information.
NasimB/guten-rarity-all-end-2p5k-ctx-256
NasimB
2023-07-17T13:46:57Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-17T11:43:04Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: guten-rarity-all-end-2p5k-ctx-256 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # guten-rarity-all-end-2p5k-ctx-256 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.2359 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.8066 | 0.24 | 200 | 6.2314 | | 5.9001 | 0.47 | 400 | 5.6837 | | 5.4844 | 0.71 | 600 | 5.3762 | | 5.2082 | 0.94 | 800 | 5.1424 | | 4.9601 | 1.18 | 1000 | 5.0000 | | 4.793 | 1.41 | 1200 | 4.8477 | | 4.6671 | 1.65 | 1400 | 4.7224 | | 4.5538 | 1.88 | 1600 | 4.6129 | | 4.3657 | 2.12 | 1800 | 4.5395 | | 4.2426 | 2.36 | 2000 | 4.4747 | | 4.2096 | 2.59 | 2200 | 4.4096 | | 4.1617 | 2.83 | 2400 | 4.3599 | | 4.0429 | 3.06 | 2600 | 4.3204 | | 3.8875 | 3.3 | 2800 | 4.2940 | | 3.8782 | 3.53 | 3000 | 4.2656 | | 3.864 | 3.77 | 3200 | 4.2348 | | 3.8267 | 4.0 | 3400 | 4.2081 | | 3.6034 | 4.24 | 3600 | 4.2149 | | 3.5941 | 4.48 | 3800 | 4.1924 | | 3.5872 | 4.71 | 4000 | 4.1779 | | 3.577 | 4.95 | 4200 | 4.1648 | | 3.4386 | 5.18 | 4400 | 4.1722 | | 3.3996 | 5.42 | 4600 | 4.1702 | | 3.3987 | 5.65 | 4800 | 4.1679 | | 3.3866 | 5.89 | 5000 | 4.1670 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
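A short generation sketch for this GPT-2 fine-tune (the prompt and generation length are arbitrary placeholders) might be:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/guten-rarity-all-end-2p5k-ctx-256")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```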
Jonathaniu/alpaca-breast-cancer-13b-mix_data_3
Jonathaniu
2023-07-17T13:42:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T13:42:22Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False ### Framework versions - PEFT 0.4.0.dev0
NasimB/all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k
NasimB
2023-07-17T13:31:00Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-17T11:32:22Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.7559 | 0.31 | 500 | 5.6511 | | 5.4062 | 0.63 | 1000 | 5.2172 | | 5.0687 | 0.94 | 1500 | 4.9678 | | 4.7662 | 1.25 | 2000 | 4.8187 | | 4.628 | 1.57 | 2500 | 4.6878 | | 4.5225 | 1.88 | 3000 | 4.5768 | | 4.3098 | 2.19 | 3500 | 4.5210 | | 4.2125 | 2.51 | 4000 | 4.4508 | | 4.1764 | 2.82 | 4500 | 4.3910 | | 4.0275 | 3.13 | 5000 | 4.3703 | | 3.8912 | 3.45 | 5500 | 4.3383 | | 3.8735 | 3.76 | 6000 | 4.3003 | | 3.7925 | 4.07 | 6500 | 4.2941 | | 3.5917 | 4.39 | 7000 | 4.2879 | | 3.5908 | 4.7 | 7500 | 4.2713 | | 3.577 | 5.01 | 8000 | 4.2617 | | 3.4004 | 5.33 | 8500 | 4.2710 | | 3.3993 | 5.64 | 9000 | 4.2699 | | 3.3898 | 5.95 | 9500 | 4.2692 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
Camih/distilbert-base-uncased-finetuned-cola
Camih
2023-07-17T13:27:44Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-17T11:57:30Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Camih/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Camih/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1923 - Validation Loss: 0.5619 - Train Matthews Correlation: 0.5219 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5102 | 0.4731 | 0.4311 | 0 | | 0.3212 | 0.5034 | 0.5079 | 1 | | 0.1923 | 0.5619 | 0.5219 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
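Since this checkpoint was trained with Keras, a TensorFlow loading sketch (the example sentence is a placeholder; the label mapping is assumed to follow the CoLA convention) could be:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "Camih/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The book was read by the whole class.", return_tensors="tf")
logits = model(**inputs).logits
print(tf.argmax(logits, axis=-1).numpy())  # assumed mapping: 1 = acceptable, 0 = unacceptable
```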
llm-toys/falcon-7b-paraphrase-tone-dialogue-summary-topic
llm-toys
2023-07-17T13:26:47Z
15
5
peft
[ "peft", "text-generation", "en", "license:wtfpl", "region:us" ]
text-generation
2023-07-17T09:29:41Z
--- library_name: peft license: wtfpl language: - en pipeline_tag: text-generation --- ## Model description The tiiuae/falcon-7b model finetuned for Paraphrasing, Changing the Tone of the input sentence(to casual/professional/witty), Summary and Topic generation from a dialogue. Data for Paraphrasing and Changing the Tone was generated using gpt-35-turbo and a sample of roughly 1000 data points from the [Dialogsum](https://github.com/cylnlp/dialogsum) dataset was used for Summary and Topic generation. Look at the repo [llm-toys](https://github.com/kuutsav/llm-toys) for usage and other details. Try in colab (you might need the pro version): <a target="_blank" href="https://colab.research.google.com/drive/1hhANNzQkxhrPIIrxtvf0WT_Ste8KrFjh#scrollTo=d6-OJJq_q5Qr"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ## Installation ```bash pip install llm-toys ``` ```python from llm_toys.tasks import GeneralTaskAssitant from llm_toys.config import TaskType gta = GeneralTaskAssitant() gta.complete(TaskType.PARAPHRASE_TONE, "Hey, can yuo hepl me cancel my last order?") # "Could you assist me in canceling my previous order?" gta.complete(TaskType.PARAPHRASE_TONE, "Hey, can yuo hepl me cancel my last order?", tone="casual") # "Hey, can you help me cancel my last order?" gta.complete(TaskType.PARAPHRASE_TONE, "Hey, can yuo hepl me cancel my last order?", tone="professional") # "I would appreciate if you could assist me in canceling my previous order." gta.complete(TaskType.PARAPHRASE_TONE, "Hey, can yuo hepl me cancel my last order?", tone="witty") # "Oops! Looks like I got a little carried away with my shopping spree. Can you help me cancel my last order?" chat = """ #Person1#: I'm so excited for the premiere of the latest Studio Ghibli movie! #Person2#: What's got you so hyped? #Person1#: Studio Ghibli movies are pure magic! The animation, storytelling, everything is incredible. #Person2#: Which movie is it? #Person1#: It's called "Whisper of the Wind." It's about a girl on a magical journey to save her village. #Person2#: Sounds amazing! I'm in for the premiere. #Person1#: Great! We're in for a visual masterpiece and a heartfelt story. #Person2#: Can't wait to be transported to their world. #Person1#: It'll be an unforgettable experience, for sure! """.strip() gta.complete(TaskType.DIALOGUE_SUMMARY_TOPIC, chat) # {"summary": "#Person1# tells #Person2# about the upcoming Studio Ghibli movie. # #Person1# thinks it's magical and #Person2#'s excited to watch it.", # "topic": "Movie premiere"} ``` ## Sample training data ```json [ { "original": "If you have any further questions, feel free to ask.", "casual": "Got more questions? Feel free to ask away. I'm here to help!", "professional": "Should you have any additional inquiries, please don't hesitate to ask.", "witty": "Curiosity is always in style! If you have more mysteries to solve, I'm all ears!", "paraphrase": "Don't hesitate to ask if you have any more questions." }, { "fname": "dev_473", "dialogue": "#Person1#: Did you enjoy your weekend at the highland hotel? I heard it's and excellent place to stay and has good facilities.\n#Person2#: I had a wonderful time. The rooms are not very big, but they are well furnished. The restaurant is excellent and reasonably priced. There's a sauna and a Jacuzzi.\n#Person1#: Do they have a swimming pool?\n#Person2#: No, they don't. they have a beauty parlor, but I didn't go there.\n#Person1#: What's the service like?\n#Person2#: It's very good. 
Check in and check out at the reception only took a few minutes. The wait staff is very good. A waiter recommended their baked fish, which tasted wonderful. The hotel was quite full, so I'd suggest making a reservation if you intend to go there. The hotel offers a discount at the weekends.\n#Person1#: It sounds perfect. Did you have any complaints at all?\n#Person2#: There was a problem with the internet access, so I couldn't check my email, but I didn't complain about it to the management.\n#Person1#: I suppose you were happy to forget about the outside world.\n#Person2#: Yes, I was. Here's their business card.\n#Person1#: Thanks. Was there a mina bar in the room?\n#Person2#: No, there wasn't. There is a bar on the ground floor and of course you can buy drinks in the restaurant to go with your meal.\n#Person1#: One of the things I dislike about hotels is that everyone expects tips.\n#Person2#: I know. At the inland hotel, they have an interesting policy. When you check out, you put some money in a special box at reception. Each evening, the money in the box is shared equally by the hotel staff.", "summary": "#Person2# enjoys #Person2#'s weekend at the highland hotel because of the hotel's excellent and reasonably priced restaurant and good service. #Person2# introduces the hotel's facilities, weekend discount, and its interesting tip policy and suggests #Person1# make a reservation in advance.", "topic": "Experience in hotel" } ] ``` ## Training params ```json { "batch_size": 1, "eval_ratio": 0.05, "eval_steps": 100, "gradient_accumulation_steps": 4, "learning_rate": 0.0001, "logging_steps": 100, "lora_alpha": 32, "lora_dropout": 0.05, "lora_r": 16, "max_length": 1024, "model_name": "tiiuae/falcon-7b", "num_train_epochs": 3, "seed": 10, "task_type": "paraphrase_tone,dialogue_summary_topic", "use_aim": True } ``` ## Training curve ![train_eval_loss](falcon-7b-paraphrase-tone-dialogue-summary-topic.jpeg) ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
llm-toys/RedPajama-INCITE-Base-3B-v1-dialogue-summary-topic
llm-toys
2023-07-17T13:25:51Z
37
3
peft
[ "peft", "text-generation", "en", "license:wtfpl", "region:us" ]
text-generation
2023-07-16T10:46:56Z
--- library_name: peft license: wtfpl language: - en pipeline_tag: text-generation --- ## Model description The togethercomputer/RedPajama-INCITE-Base-3B-v1 model finetuned for `Summary` and `Topic` generation from a dailogue. We use a sample of roughly 1000 data points from the [Dialogsum](https://github.com/cylnlp/dialogsum) dataset for fine-tuning. Look at the repo [llm-toys](https://github.com/kuutsav/llm-toys) for usage and other details. Try in colab: <a target="_blank" href="https://colab.research.google.com/drive/1MSl8IDLjs3rgEv8cPHbJLR8GHh2ucT3_"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ## Installation ```bash pip install llm-toys ``` ```python from llm_toys.tasks import SummaryAndTopicGenerator summary_topic_generator = SummaryAndTopicGenerator() summary_topic_generator.generate_summary_and_topic( """ #Person1#: I'm so excited for the premiere of the latest Studio Ghibli movie! #Person2#: What's got you so hyped? #Person1#: Studio Ghibli movies are pure magic! The animation, storytelling, everything is incredible. #Person2#: Which movie is it? #Person1#: It's called "Whisper of the Wind." It's about a girl on a magical journey to save her village. #Person2#: Sounds amazing! I'm in for the premiere. #Person1#: Great! We're in for a visual masterpiece and a heartfelt story. #Person2#: Can't wait to be transported to their world. #Person1#: It'll be an unforgettable experience, for sure! """.strip() ) # {"summary": "#Person1# is excited for the premiere of the latest Studio Ghibli movie. # #Person1# thinks the animation, storytelling, and heartfelt story will be unforgettable. # #Person2# is also excited for the premiere.", # "topic": "Studio ghibli movie"} ``` ## Sample training data ```json { "fname": "train_664", "dialogue": "#Person1#: Hello, Happy Time Catering Services, Vitoria speaking. How can I help you?\n#Person2#: Hello, Victoria. This is Joe Smith from country holidays. I wondered if you could do some catering for us next week, we are having a small reception. It's to launch our summer holiday advertising campaign. Will you be free?\n#Person1#: When exactly is it? Mr. Smith?\n#Person2#: April 21st, that's Thursday. Oh, sorry, no. It should be Friday.\n#Person1#: Oh, yes I can do that where will you be holding it?\n#Person2#: We thought we'd have that at head office and use the conference room, because there is enough room for everyone there.\n#Person1#: Ok. What sort of things would you like?\n#Person2#: Just a light lunch I think, so that people can eat while they move around and talk to each other. You did some thing similar for us last year. We'd be happy to have the same menu again.\n#Person1#: Right. I'll look at my diary and see what you had last time. Oh, I nearly forgot to ask you how many should I cater for?\n#Person2#: Well, I think most people will be able to come, perhaps around 30. No, let's say 35, to be sure.\n#Person1#: Right, thank you for getting in touch, Mr. Smith. I'll send you confirmation of the arrangements by the end of this week.\n#Person2#: Ok.", "summary": "Joe Smith calls Happy Time Catering Service and wants some catering for next week. 
Victoria asks his requirements and will send him confirmation of the arrangements by the end of this week.", "topic": "Catering service" } ``` ## Training params ```json { "batch_size": 1, "eval_ratio": 0.05, "eval_steps": 100, "gradient_accumulation_steps": 4, "learning_rate": 0.0001, "logging_steps": 100, "lora_alpha": 32, "lora_dropout": 0.05, "lora_r": 16, "max_length": 1024, "model_name": "togethercomputer/RedPajama-INCITE-Base-3B-v1", "num_train_epochs": 2, "seed": 0, "task_type": "dialogue_summary_topic", "use_aim": True } ``` ## Training curve ![train_eval_loss](RedPajama-INCITE-Base-3B-v1-dialogue-summary-topic.jpeg) ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
Serjssv/whisper-tiny-v1
Serjssv
2023-07-17T13:24:04Z
79
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-17T12:59:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-v1 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.32762691853600945 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-v1 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6409 - Wer Ortho: 33.1277 - Wer: 0.3276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0009 | 17.86 | 500 | 0.6409 | 33.1277 | 0.3276 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
B0b91/AILearnsToMultiply2
B0b91
2023-07-17T13:24:04Z
0
0
mlconsole
[ "mlconsole", "tabular-regression", "dataset:house_price_prediction", "license:unknown", "model-index", "region:us" ]
tabular-regression
2023-07-17T13:23:58Z
--- license: unknown inference: false tags: - mlconsole - tabular-regression library_name: mlconsole metrics: - mae - loss datasets: - house_price_prediction model-index: - name: AILearnsToMultiply2 results: - task: type: tabular-regression name: tabular-regression dataset: type: house_price_prediction name: house_price_prediction metrics: - type: mae name: Mean absolute error value: 4.996237277984619 - type: loss name: Model loss value: 45.071861267089844 --- # regression model trained on "house_price_prediction" 🤖 [Load and use this model](https://mlconsole.com/model/hf/B0b91/AILearnsToMultiply2) in one click. 🧑‍💻 [Train your own model](https://mlconsole.com) on ML Console.
Oslaw/ppo-Huggy
Oslaw
2023-07-17T13:23:22Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-17T13:23:16Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Oslaw/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
bharadwajkg/finetune-stable-diffusion-v1-4-planogram-lora-data3
bharadwajkg
2023-07-17T13:17:21Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-17T11:47:22Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - bharadwajkg/finetune-stable-diffusion-v1-4-planogram-lora-data3 These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the bharadwajkg/planogram-sample-sd-data3 dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
HamzaFarhan/InvoiceOrNot
HamzaFarhan
2023-07-17T13:06:36Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-17T07:13:40Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HamzaFarhan/InvoiceOrNot This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HamzaFarhan/InvoiceOrNot") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
mnaylor/bigbird-base-mimic-mortality
mnaylor
2023-07-17T13:03:15Z
242
1
transformers
[ "transformers", "pytorch", "safetensors", "big_bird", "text-classification", "license:bigscience-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: bigscience-openrail-m --- # BigBird for Mortality Prediction Starting with Google's base BigBird model, we fine-tuned on binary mortality prediction in MIMIC admission notes. This model seeks to predict whether a certain patient will expire within a given ICU stay, based on the text available upon admission. Data prepared for this task as described in [this project](https://github.com/bvanaken/clinical-outcome-prediction), using the simulated admission notes (taken from discharge summaries). This model will be used in an upcoming submission for IMLH at ICML 2021. ### References * Van Aken, et al., 2021: [Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75/) * Zaheer, et al., 2020: [Big Bird: Transformers for Longer Sequences](https://papers.nips.cc/paper/2020/hash/c8512d142a2d849725f31a9a7a361ab9-Abstract.html)
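The card gives no usage snippet; a minimal classification sketch (the admission-note text is an invented placeholder, and the label names may be generic LABEL_0/LABEL_1 unless the config names them) might be:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="mnaylor/bigbird-base-mimic-mortality")
note = "Patient admitted with shortness of breath and worsening chest pain."  # placeholder text
print(clf(note))
```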
huarddk/finetuning-sentiment-model-350-samples
huarddk
2023-07-17T13:00:02Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-12T14:50:18Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-350-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-350-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1608 - Accuracy: 0.9619 - F1: 0.9806 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
pygospa/distilbert-base-uncased-finetuned-squad
pygospa
2023-07-17T12:59:16Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-17T09:40:00Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: pygospa/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # pygospa/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9783 - Train End Logits Accuracy: 0.7290 - Train Start Logits Accuracy: 0.6897 - Validation Loss: 1.1334 - Validation End Logits Accuracy: 0.6997 - Validation Start Logits Accuracy: 0.6622 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5304 | 0.6023 | 0.5658 | 1.1695 | 0.6831 | 0.6468 | 0 | | 0.9783 | 0.7290 | 0.6897 | 1.1334 | 0.6997 | 0.6622 | 1 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
google/flan-t5-large
google
2023-07-17T12:49:05Z
2,292,533
680
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "t5", "text2text-generation", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "arxiv:2210.11416", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-21T10:07:08Z
--- language: - en - fr - ro - de - multilingual widget: - text: "Translate to German: My name is Arthur" example_title: "Translation" - text: "Please answer to the following question. Who is going to be the next Ballon d'or?" example_title: "Question Answering" - text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering." example_title: "Logical reasoning" - text: "Please answer the following question. What is the boiling point of Nitrogen?" example_title: "Scientific knowledge" - text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?" example_title: "Yes/no question" - text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?" example_title: "Reasoning task" - text: "Q: ( False or not False or False ) is? A: Let's think step by step" example_title: "Boolean Expressions" - text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?" example_title: "Math reasoning" - text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?" example_title: "Premise and hypothesis" tags: - text2text-generation datasets: - svakulenk0/qrecc - taskmaster2 - djaym7/wiki_dialog - deepmind/code_contests - lambada - gsm8k - aqua_rat - esnli - quasc - qed license: apache-2.0 --- # Model Card for FLAN-T5 large <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg" alt="drawing" width="600"/> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Uses](#uses) 4. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 5. [Training Details](#training-details) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Citation](#citation) 9. [Model Card Authors](#model-card-authors) # TL;DR If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering also more languages. As mentioned in the first few lines of the abstract : > Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large). 
# Model Details ## Model Description - **Model type:** Language model - **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian - **License:** Apache 2.0 - **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5) - **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) - **Resources for more information:** - [Research paper](https://arxiv.org/pdf/2210.11416.pdf) - [GitHub Repo](https://github.com/google-research/t5x) - [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5) # Usage Find below some example scripts on how to use the model in `transformers`: ## Using the Pytorch model ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large") input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto") input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using different precisions #### FP16 <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> #### INT8 <details> <summary> Click to expand </summary> ```python # pip install bitsandbytes accelerate from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", load_in_8bit=True) input_text = "translate English to German: How old are you?" 
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> # Uses ## Direct Use and Downstream Use The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that: > The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details. ## Out-of-Scope Use More information needed. # Bias, Risks, and Limitations The information below in this section are copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. ## Ethical considerations and risks > Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. ## Known Limitations > Flan-T5 has not been tested in real world applications. ## Sensitive Use: > Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech. # Training Details ## Training Data The model was trained on a mixture of tasks, that includes the tasks described in the table below (from the original paper, figure 2): ![table.png](https://s3.amazonaws.com/moonup/production/uploads/1666363265279-62441d1d9fdefb55a0b7d12c.png) ## Training Procedure According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf): > These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size. The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax). # Evaluation ## Testing Data, Factors & Metrics The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation: ![image.png](https://s3.amazonaws.com/moonup/production/uploads/1668072995230-62441d1d9fdefb55a0b7d12c.png) For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf). ## Results For full results for FLAN-T5-Large, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4. 
- **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @misc{https://doi.org/10.48550/arxiv.2210.11416, doi = {10.48550/ARXIV.2210.11416}, url = {https://arxiv.org/abs/2210.11416}, author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Scaling Instruction-Finetuned Language Models}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
naimul011/fine_tuned_llama-7b-100-hf
naimul011
2023-07-17T12:48:40Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-16T10:47:50Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
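A minimal loading sketch, assuming the base model recorded in the adapter config is the LLaMA-7B checkpoint the repository name suggests, and that `bitsandbytes` is installed for the 8-bit quantization listed above:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "naimul011/fine_tuned_llama-7b-100-hf"

# Read the adapter config to find the base model it was trained on.
config = PeftConfig.from_pretrained(adapter_id)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_8bit=True,   # matches the load_in_8bit: True setting above
    device_map="auto",
)

# Attach the fine-tuned PEFT adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```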
nbaker/whisper-small-atc-2.0
nbaker
2023-07-17T12:33:20Z
125
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-12T13:48:28Z
--- tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-atc-2.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-atc-2.0 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0606 - Wer: 5.0739 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0394 | 1.96 | 1000 | 0.0582 | 3.0730 | | 0.0056 | 3.93 | 2000 | 0.0586 | 4.8881 | | 0.0018 | 5.89 | 3000 | 0.0586 | 4.0355 | | 0.0001 | 7.86 | 4000 | 0.0606 | 5.0739 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.12.1.dev0 - Tokenizers 0.13.3
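A minimal inference sketch, assuming a local recording at `atc_clip.wav` (the path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline and transcribe one clip.
asr = pipeline("automatic-speech-recognition", model="nbaker/whisper-small-atc-2.0")
print(asr("atc_clip.wav")["text"])
```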
KoRiF/whisper-tiny-en
KoRiF
2023-07-17T12:26:37Z
76
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-17T11:52:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-en results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train[450:] args: en-US metrics: - name: Wer type: wer value: 0.3252656434474616 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-en This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.8008 - Wer Ortho: 0.3523 - Wer: 0.3253 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 1.593 | 1.79 | 50 | 1.0054 | 0.5003 | 0.4185 | | 0.3982 | 3.57 | 100 | 0.7250 | 0.4121 | 0.3554 | | 0.2075 | 5.36 | 150 | 0.6898 | 0.4226 | 0.3518 | | 0.0957 | 7.14 | 200 | 0.6909 | 0.4028 | 0.3371 | | 0.0412 | 8.93 | 250 | 0.7296 | 0.3695 | 0.3300 | | 0.0186 | 10.71 | 300 | 0.7522 | 0.3627 | 0.3270 | | 0.008 | 12.5 | 350 | 0.7703 | 0.3584 | 0.3288 | | 0.0049 | 14.29 | 400 | 0.7756 | 0.3553 | 0.3294 | | 0.0032 | 16.07 | 450 | 0.7889 | 0.3516 | 0.3235 | | 0.0023 | 17.86 | 500 | 0.8008 | 0.3523 | 0.3253 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
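The WER figures above can be reproduced with the `evaluate` library; a minimal sketch with placeholder prediction/reference strings:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder strings; in practice these come from the model's transcriptions
# and the reference transcripts of the evaluation split.
predictions = ["i would like to check my balance"]
references = ["i would like to check my balance please"]

print(wer_metric.compute(predictions=predictions, references=references))
```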
RiversHaveWings/open_llama_7b_safetensors
RiversHaveWings
2023-07-17T12:20:41Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:togethercomputer/RedPajama-Data-1T", "arxiv:2302.13971", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-17T11:48:29Z
--- license: apache-2.0 datasets: - togethercomputer/RedPajama-Data-1T --- # OpenLLaMA: An Open Reproduction of LLaMA In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details. ## Weights Release, License and Usage We release the weights in two formats: an EasyLM format to be use with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license. ### Loading the Weights with Hugging Face Transformers Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage. ```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM model_path = 'openlm-research/open_llama_3b' # model_path = 'openlm-research/open_llama_7b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) prompt = 'Q: What is the largest animal?\nA:' input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=32 ) print(tokenizer.decode(generation_output[0])) ``` For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama). ### Evaluating with LM-Eval-Harness The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below: ```python tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained( pretrained if tokenizer is None else tokenizer, revision=revision + ("/" + subfolder if subfolder is not None else ""), use_fast=False ) ``` ### Loading the Weights with EasyLM For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights. 
Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token when running few-shot evaluation to get the best performance.

## Dataset and Training

We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.

We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.

## Evaluation

We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443).

Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/). The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT | | ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- | | anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 | | anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 | | anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 | | arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 | | arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 | | arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 | | arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 | | ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 | | hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 | | hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 | | openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 | | openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 | | piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 | | piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 | | record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 | | record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 | | rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 | | truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 | | truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 | | wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 | | winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 | | Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 | We removed the task CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set. ## Contact We would love to get feedback from the community. If you have any questions, please open an issue or contact us. OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution ## Acknowledgment We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback. The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support. 
## Reference If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX: ``` @software{openlm2023openllama, author = {Geng, Xinyang and Liu, Hao}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @software{together2023redpajama, author = {Together Computer}, title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset}, month = April, year = 2023, url = {https://github.com/togethercomputer/RedPajama-Data} } ``` ``` @article{touvron2023llama, title={Llama: Open and efficient foundation language models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
yacine-djm/fg-bert-sustainability-15-1.5e-05-0.02-64
yacine-djm
2023-07-17T12:05:07Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-17T11:16:50Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fg-bert-sustainability-15-1.5e-05-0.02-64
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# fg-bert-sustainability-15-1.5e-05-0.02-64

This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0711
- F1: 0.9215
- Roc Auc: 0.9565
- Accuracy: 0.8846

On the validation dataset:
- Accuracy with Hamming loss: 0.7800788954635107
- Accuracy as a metric: 0.8326530612244898
- Global precision: 0.8695652173913043
- Global recall: 0.8536585365853658
- Global F1-score: 0.8615384615384616

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log        | 1.0   | 55   | 0.3273          | 0.0    | 0.5     | 0.0956   |
| No log        | 2.0   | 110  | 0.2344          | 0.3710 | 0.6182  | 0.2328   |
| No log        | 3.0   | 165  | 0.1464          | 0.8973 | 0.9300  | 0.8441   |
| No log        | 4.0   | 220  | 0.1143          | 0.9066 | 0.9405  | 0.8617   |
| No log        | 5.0   | 275  | 0.0998          | 0.9091 | 0.9455  | 0.8659   |
| No log        | 6.0   | 330  | 0.0901          | 0.9142 | 0.9490  | 0.8732   |
| No log        | 7.0   | 385  | 0.0854          | 0.9121 | 0.9534  | 0.8721   |
| No log        | 8.0   | 440  | 0.0778          | 0.9185 | 0.9538  | 0.8825   |
| No log        | 9.0   | 495  | 0.0775          | 0.9119 | 0.9473  | 0.8763   |
| 0.1683        | 10.0  | 550  | 0.0742          | 0.9200 | 0.9535  | 0.8815   |
| 0.1683        | 11.0  | 605  | 0.0730          | 0.9196 | 0.9544  | 0.8805   |
| 0.1683        | 12.0  | 660  | 0.0716          | 0.9213 | 0.9556  | 0.8825   |
| 0.1683        | 13.0  | 715  | 0.0722          | 0.9218 | 0.9585  | 0.8836   |
| 0.1683        | 14.0  | 770  | 0.0712          | 0.9222 | 0.9580  | 0.8836   |
| 0.1683        | 15.0  | 825  | 0.0711          | 0.9215 | 0.9565  | 0.8846   |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
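A minimal multi-label inference sketch, assuming the usual sigmoid + 0.5 threshold setup behind the Hamming-loss-based accuracy above (the example sentence is a placeholder):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "yacine-djm/fg-bert-sustainability-15-1.5e-05-0.02-64"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The hotel runs entirely on renewable electricity.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: independent sigmoid per label, keep those above 0.5.
probs = torch.sigmoid(logits)[0]
predicted_labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted_labels)
```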
moritzwilke/distilbert-base-uncased-finetuned-squad
moritzwilke
2023-07-17T11:50:41Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-17T09:13:23Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: moritzwilke/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # moritzwilke/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6756 - Train End Logits Accuracy: 0.5691 - Train Start Logits Accuracy: 0.5327 - Validation Loss: 1.2714 - Validation End Logits Accuracy: 0.6582 - Validation Start Logits Accuracy: 0.6184 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2766, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.6756 | 0.5691 | 0.5327 | 1.2714 | 0.6582 | 0.6184 | 0 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
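A minimal extractive-QA sketch with the TensorFlow weights (the question/context pair is a placeholder):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="moritzwilke/distilbert-base-uncased-finetuned-squad",
    framework="tf",  # the repository ships TensorFlow weights
)

print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is located in Paris."))
```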
weekcircle/wav2vec2-large-mms-1b-korean-colab_v3
weekcircle
2023-07-17T11:49:30Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:weekcircle/wav2vec2-large-mms-1b-korean-colab_v2", "base_model:finetune:weekcircle/wav2vec2-large-mms-1b-korean-colab_v2", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-17T09:08:44Z
--- license: cc-by-nc-4.0 base_model: weekcircle/wav2vec2-large-mms-1b-korean-colab_v2 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-mms-1b-korean-colab_v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-mms-1b-korean-colab_v3 This model is a fine-tuned version of [weekcircle/wav2vec2-large-mms-1b-korean-colab_v2](https://huggingface.co/weekcircle/wav2vec2-large-mms-1b-korean-colab_v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1476 - Wer: 0.3443 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2374 | 0.18 | 100 | 0.1654 | 0.3761 | | 0.2231 | 0.36 | 200 | 0.1648 | 0.3752 | | 0.2263 | 0.53 | 300 | 0.1647 | 0.3859 | | 0.2197 | 0.71 | 400 | 0.1618 | 0.3628 | | 0.223 | 0.89 | 500 | 0.1642 | 0.3792 | | 0.2143 | 1.07 | 600 | 0.1585 | 0.3684 | | 0.2082 | 1.24 | 700 | 0.1589 | 0.3711 | | 0.2166 | 1.42 | 800 | 0.1567 | 0.3647 | | 0.2087 | 1.6 | 900 | 0.1561 | 0.3567 | | 0.2109 | 1.78 | 1000 | 0.1551 | 0.3570 | | 0.2036 | 1.95 | 1100 | 0.1553 | 0.3644 | | 0.1926 | 2.13 | 1200 | 0.1545 | 0.3579 | | 0.1972 | 2.31 | 1300 | 0.1539 | 0.3508 | | 0.2086 | 2.49 | 1400 | 0.1526 | 0.3523 | | 0.2179 | 2.66 | 1500 | 0.1524 | 0.3502 | | 0.2036 | 2.84 | 1600 | 0.1515 | 0.3502 | | 0.2196 | 3.02 | 1700 | 0.1510 | 0.3459 | | 0.2149 | 3.2 | 1800 | 0.1498 | 0.3462 | | 0.2111 | 3.37 | 1900 | 0.1485 | 0.3477 | | 0.2043 | 3.55 | 2000 | 0.1481 | 0.3443 | | 0.2043 | 3.73 | 2100 | 0.1475 | 0.3480 | | 0.2018 | 3.91 | 2200 | 0.1476 | 0.3443 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
DaniloGMatto/distilbert-base-uncased-finetuned-cola
DaniloGMatto
2023-07-17T11:43:06Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-17T11:32:33Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: DaniloGMatto/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # DaniloGMatto/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3235 - Validation Loss: 0.4519 - Train Matthews Correlation: 0.5089 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5136 | 0.4726 | 0.4337 | 0 | | 0.3235 | 0.4519 | 0.5089 | 1 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
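The Matthews correlation reported above can be computed with the `evaluate` library; a minimal sketch with placeholder labels:

```python
import evaluate

matthews = evaluate.load("matthews_correlation")

# Placeholder label lists; in practice these are the model's predictions and
# the gold acceptability labels of the validation split.
print(matthews.compute(predictions=[1, 0, 1, 1], references=[1, 0, 0, 1]))
```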
ShekDass/donut-base-sroie
ShekDass
2023-07-17T11:36:10Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-16T17:10:51Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
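A minimal document-parsing sketch, assuming a local receipt image at `receipt.png`; the task prompt token is a placeholder and may need to match the one used during fine-tuning:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "ShekDass/donut-base-sroie"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Placeholder task prompt; SROIE fine-tunes typically define their own start token.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs)[0])
```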
Arindamdas70/llora7B-finetuned
Arindamdas70
2023-07-17T11:36:03Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T11:35:59Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
Wyzard1004/TaxiV3
Wyzard1004
2023-07-17T11:35:23Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-17T11:35:21Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: TaxiV3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.72
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # the Deep RL course notebooks use gymnasium; plain `gym` also works

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="Wyzard1004/TaxiV3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
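A follow-up evaluation sketch, continuing from the snippet above; the `"qtable"` and `"max_steps"` keys are assumptions about the pickled dictionary, following the Deep RL course format:

```python
# Assumes `model` and `env` are already defined by the usage snippet above.
import numpy as np

state, info = env.reset()
episode_return = 0
for _ in range(model.get("max_steps", 99)):
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    if terminated or truncated:
        break
print("Episode return:", episode_return)
```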
planetk/distilbert-base-uncased-finetuned-squad
planetk
2023-07-17T11:24:35Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-17T09:16:54Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: planetk/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # planetk/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9803 - Train End Logits Accuracy: 0.7295 - Train Start Logits Accuracy: 0.6894 - Validation Loss: 1.0988 - Validation End Logits Accuracy: 0.7002 - Validation Start Logits Accuracy: 0.6626 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5242 | 0.6031 | 0.5649 | 1.1395 | 0.6898 | 0.6537 | 0 | | 0.9803 | 0.7295 | 0.6894 | 1.0988 | 0.7002 | 0.6626 | 1 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.13.0 - Datasets 2.13.1 - Tokenizers 0.13.3
NasimB/cbt-rarity-all-end-p8k-guten-rarity-all-mixed
NasimB
2023-07-17T11:13:04Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-17T09:15:48Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: cbt-rarity-all-end-p8k-guten-rarity-all-mixed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cbt-rarity-all-end-p8k-guten-rarity-all-mixed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6958 | 0.29 | 500 | 5.6331 | | 5.3364 | 0.58 | 1000 | 5.2041 | | 4.9968 | 0.88 | 1500 | 4.9505 | | 4.7186 | 1.17 | 2000 | 4.8044 | | 4.5561 | 1.46 | 2500 | 4.6841 | | 4.4622 | 1.75 | 3000 | 4.5747 | | 4.3263 | 2.04 | 3500 | 4.4949 | | 4.1311 | 2.33 | 4000 | 4.4481 | | 4.101 | 2.63 | 4500 | 4.3896 | | 4.0645 | 2.92 | 5000 | 4.3353 | | 3.871 | 3.21 | 5500 | 4.3306 | | 3.8006 | 3.5 | 6000 | 4.3048 | | 3.7879 | 3.79 | 6500 | 4.2723 | | 3.6977 | 4.08 | 7000 | 4.2640 | | 3.5167 | 4.38 | 7500 | 4.2617 | | 3.5203 | 4.67 | 8000 | 4.2466 | | 3.5051 | 4.96 | 8500 | 4.2353 | | 3.3506 | 5.25 | 9000 | 4.2461 | | 3.3237 | 5.54 | 9500 | 4.2458 | | 3.3231 | 5.83 | 10000 | 4.2450 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
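A minimal generation sketch (the prompt is a placeholder):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/cbt-rarity-all-end-p8k-guten-rarity-all-mixed",
)
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```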
briziel/distilbert-base-uncased-finetuned-squad
briziel
2023-07-17T11:11:03Z
62
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-17T09:13:32Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: briziel/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # briziel/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9786 - Train End Logits Accuracy: 0.7287 - Train Start Logits Accuracy: 0.6898 - Validation Loss: 1.1064 - Validation End Logits Accuracy: 0.6984 - Validation Start Logits Accuracy: 0.6615 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5081 | 0.6050 | 0.5681 | 1.1607 | 0.6881 | 0.6499 | 0 | | 0.9786 | 0.7287 | 0.6898 | 1.1064 | 0.6984 | 0.6615 | 1 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.13.0 - Datasets 2.13.1 - Tokenizers 0.13.3
u2003158/saved_model
u2003158
2023-07-17T11:10:43Z
15
0
keras
[ "keras", "tf-keras", "resnet", "code", "image-classification", "arxiv:1910.09700", "region:us" ]
image-classification
2023-07-17T09:48:04Z
--- metrics: - accuracy library_name: keras pipeline_tag: image-classification tags: - code --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** .pb - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** BugSenseAI - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ZLOW/ZL_XLSR_MODEL_KATANA
ZLOW
2023-07-17T10:45:42Z
159
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:minds14", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-09T12:02:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - minds14 metrics: - accuracy model-index: - name: ZL_XLSR_MODEL_KATANA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZL_XLSR_MODEL_KATANA This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset. It achieves the following results on the evaluation set: - Loss: 2.6487 - Accuracy: 0.0619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 2.6498 | 0.0619 | | No log | 2.0 | 4 | 2.6447 | 0.1062 | | No log | 3.0 | 6 | 2.6453 | 0.0442 | | No log | 4.0 | 8 | 2.6435 | 0.0973 | | 2.6352 | 5.0 | 10 | 2.6480 | 0.0708 | | 2.6352 | 6.0 | 12 | 2.6500 | 0.0354 | | 2.6352 | 7.0 | 14 | 2.6493 | 0.0885 | | 2.6352 | 8.0 | 16 | 2.6486 | 0.0708 | | 2.6352 | 9.0 | 18 | 2.6489 | 0.0708 | | 2.623 | 10.0 | 20 | 2.6487 | 0.0619 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
FrancescoBonzi/whisper-small-finetuned-gtzan
FrancescoBonzi
2023-07-17T10:38:04Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-17T09:47:49Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: whisper-small-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-finetuned-gtzan This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.4130 - Accuracy: 0.92 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 10 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3174 | 1.0 | 45 | 1.1768 | 0.61 | | 0.687 | 2.0 | 90 | 0.7042 | 0.8 | | 0.4524 | 3.0 | 135 | 0.4748 | 0.85 | | 0.197 | 4.0 | 180 | 0.4230 | 0.89 | | 0.2199 | 5.0 | 225 | 0.4980 | 0.88 | | 0.113 | 6.0 | 270 | 0.3381 | 0.91 | | 0.0054 | 7.0 | 315 | 0.3697 | 0.92 | | 0.004 | 8.0 | 360 | 0.2930 | 0.94 | | 0.0632 | 9.0 | 405 | 0.4574 | 0.92 | | 0.0029 | 10.0 | 450 | 0.4130 | 0.92 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
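A minimal genre-classification sketch, assuming a local clip at `song.wav` (the path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="FrancescoBonzi/whisper-small-finetuned-gtzan",
)
print(classifier("song.wav"))
```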
avichr/hebEMO_anger
avichr
2023-07-17T10:12:24Z
255
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# HebEMO - Emotion Recognition Model for Modern Hebrew

<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">

HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.

HebEMO achieved a high weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.

## Emotion UGC Data Description

Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.

~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below.

| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |

## Performance

### Emotion Recognition

| emotion | f1-score | precision | recall |
|--------------|----------|-----------|--------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
| anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |

*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*

### Sentiment (Polarity) Analysis

| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |

*The sentiment (polarity) analysis model is also available on AWS!
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br> [Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
avichr/hebEMO_surprise
avichr
2023-07-17T10:12:14Z
191
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# HebEMO - Emotion Recognition Model for Modern Hebrew

<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">

HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.

HebEMO achieved a high weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.

## Emotion UGC Data Description

Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.

~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below.

| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |

## Performance

### Emotion Recognition

| emotion | f1-score | precision | recall |
|--------------|----------|-----------|--------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
| anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |

*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*

### Sentiment (Polarity) Analysis

| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |

*The sentiment (polarity) analysis model is also available on AWS!
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br> [Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
avichr/hebEMO_fear
avichr
2023-07-17T10:12:02Z
260
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# HebEMO - Emotion Recognition Model for Modern Hebrew

<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">

HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.

HebEMO achieved a high weighted average F1-score of 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.

## Emotion UGC Data Description

Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.

~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below.

| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |

## Performance

### Emotion Recognition

| emotion | f1-score | precision | recall |
|--------------|----------|-----------|--------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
| anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |

*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*

### Sentiment (Polarity) Analysis

| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |

*The sentiment (polarity) analysis model is also available on AWS!
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br> [Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
avichr/hebEMO_trust
avichr
2023-07-17T10:11:17Z
189
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br> [Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
roa7n/gpt2-human_nontata_promoters
roa7n
2023-07-17T10:01:35Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T10:01:33Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
srirammadduri-ts/autotrain-pocnl2keywords-75118139836
srirammadduri-ts
2023-07-17T10:01:23Z
106
1
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain", "translation", "unk", "dataset:srirammadduri-ts/autotrain-data-pocnl2keywords", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2023-07-17T09:58:50Z
--- tags: - autotrain - translation language: - unk - unk datasets: - srirammadduri-ts/autotrain-data-pocnl2keywords co2_eq_emissions: emissions: 1.0731254107530315 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 75118139836 - CO2 Emissions (in grams): 1.0731 ## Validation Metrics - Loss: 0.213 - SacreBLEU: 91.556 - Gen len: 10.739
geolearner/fill-mask-camembert-base
geolearner
2023-07-17T09:53:32Z
101
0
transformers
[ "transformers", "pytorch", "camembert", "fill-mask", "en", "dataset:SetFit/mrpc", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-17T02:45:50Z
--- license: mit datasets: - SetFit/mrpc language: - en metrics: - f1 pipeline_tag: fill-mask --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
msrtoto/Coral_TB_2
msrtoto
2023-07-17T09:50:12Z
237
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-17T09:50:06Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: Coral_TB_2 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9777777791023254 --- # Coral_TB_2 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bear ![bear](images/bear.jpg) #### beaver ![beaver](images/beaver.jpg) #### bird ![bird](images/bird.jpg) #### cat ![cat](images/cat.jpg) #### dog ![dog](images/dog.jpg) #### human ![human](images/human.jpg) #### lynx ![lynx](images/lynx.jpg) #### wolf ![wolf](images/wolf.jpg)
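The card above lists example classes but no inference snippet. A minimal sketch using the standard `transformers` image-classification pipeline (assuming the repo loads as an ordinary ViT classifier, as its tags suggest; the image filename is just a placeholder):
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the checkpoint exposes a standard ViT image classifier
classifier = pipeline("image-classification", model="msrtoto/Coral_TB_2")

# Replace with a path or URL to your own image
predictions = classifier("wildlife_photo.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```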
bagassword21/mywa
bagassword21
2023-07-17T09:49:27Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-17T09:48:53Z
--- license: creativeml-openrail-m ---
chunwoolee0/bert-finetuned-mrpc
chunwoolee0
2023-07-17T09:39:20Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-17T06:05:11Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: bert-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8578431372549019 - name: F1 type: f1 value: 0.8989547038327526 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6577 - Accuracy: 0.8578 - F1: 0.8990 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 459 | 0.3876 | 0.8358 | 0.8878 | | 0.5305 | 2.0 | 918 | 0.5764 | 0.8260 | 0.8838 | | 0.3245 | 3.0 | 1377 | 0.6577 | 0.8578 | 0.8990 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
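Since MRPC is a sentence-pair (paraphrase) task, inference needs both sentences at once. A minimal sketch, assuming the checkpoint works with the standard text-classification pipeline (label names may show up as LABEL_0 / LABEL_1 unless `id2label` is set in the config):
```python
from transformers import pipeline

# Sketch only: pass the two sentences as a text / text_pair dictionary
classifier = pipeline("text-classification", model="chunwoolee0/bert-finetuned-mrpc")

result = classifier({"text": "The company posted record profits.",
                     "text_pair": "Profits at the firm hit an all-time high."})
print(result)
```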
TheUpperCaseGuy/Guy-Urdu-TTS
TheUpperCaseGuy
2023-07-17T09:34:18Z
203
0
transformers
[ "transformers", "pytorch", "speecht5", "text-to-audio", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-17T09:23:10Z
--- license: mit tags: - generated_from_trainer model-index: - name: Guy-Urdu-TTS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Guy-Urdu-TTS This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
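The card does not show how to synthesize speech. A minimal sketch, assuming the repo ships the standard SpeechT5 processor files and that, like the base model, it expects a 512-dimensional speaker embedding (the zero vector below is only a placeholder; a real x-vector gives much better results):
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Sketch under assumptions: standard SpeechT5 TTS interface plus the usual HiFi-GAN vocoder
processor = SpeechT5Processor.from_pretrained("TheUpperCaseGuy/Guy-Urdu-TTS")
model = SpeechT5ForTextToSpeech.from_pretrained("TheUpperCaseGuy/Guy-Urdu-TTS")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="...your Urdu text here...", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder speaker embedding

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```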
Aditya78b/my-awesome-model-new
Aditya78b
2023-07-17T09:28:38Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-17T09:27:56Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
SotirisLegkas/Socratic-GODEL
SotirisLegkas
2023-07-17T09:22:48Z
96
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-14T15:48:21Z
Instruction: given a context, respond using Socratic dialogue principles by asking questions, considering various viewpoints, and promoting critical thinking.
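The card gives only the instruction text. A minimal inference sketch, assuming a GODEL-style "Instruction: ... [CONTEXT] ..." prompt layout — the exact format expected by this checkpoint is not documented here, so treat it as an assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative sketch only; prompt layout below is an assumption
tokenizer = AutoTokenizer.from_pretrained("SotirisLegkas/Socratic-GODEL")
model = AutoModelForSeq2SeqLM.from_pretrained("SotirisLegkas/Socratic-GODEL")

instruction = ("given a context, respond using Socratic dialogue principles by asking questions, "
               "considering various viewpoints, and promoting critical thinking.")
context = "I think memorising facts is the best way to learn."
prompt = f"Instruction: {instruction} [CONTEXT] {context}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```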
Uminosachi/MobileSAM
Uminosachi
2023-07-17T09:20:51Z
0
2
null
[ "arxiv:2306.14289", "license:apache-2.0", "region:us" ]
null
2023-07-17T09:01:14Z
--- license: apache-2.0 --- TinyViT-based Segment Anything Model from [MobileSAM](https://github.com/ChaoningZhang/MobileSAM). **Reference** Zhang, C., Han, D., Qiao, Y., Kim, J. U., Bae, S-H., Lee, S., & Hong, C. S. (2023). [Faster Segment Anything: Towards Lightweight SAM for Mobile Applications](https://arxiv.org/abs/2306.14289). arXiv preprint arXiv:2306.14289.
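A minimal usage sketch following the predictor interface documented in the MobileSAM repository; the local checkpoint filename and the prompt point are assumptions (download the weight file from this repo first):
```python
import cv2
import numpy as np
from mobile_sam import sam_model_registry, SamPredictor  # from the MobileSAM repository

# Sketch only: "vit_t" is the TinyViT variant; adjust the checkpoint path to where you saved it
sam = sam_model_registry["vit_t"](checkpoint="mobile_sam.pt")
sam.eval()

predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground point (x, y)
masks, scores, logits = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)
```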
akdeniz27/q-FrozenLake-v1-4x4-noSlippery
akdeniz27
2023-07-17T09:20:29Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-17T09:20:25Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="akdeniz27/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
avnishkr/falcon-1
avnishkr
2023-07-17T09:17:37Z
5
0
peft
[ "peft", "region:us" ]
null
2023-07-17T09:08:18Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
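This repo holds a PEFT adapter rather than full model weights. A minimal loading sketch: the base model is whatever `base_model_name_or_path` in the adapter config points to, and the 4-bit settings below simply mirror the quantization config listed above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

# Sketch only: attach this adapter to its base model
adapter_id = "avnishkr/falcon-1"
peft_config = PeftConfig.from_pretrained(adapter_id)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # needed if the base model ships custom code (e.g. Falcon)
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```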
ykirpichev/speecht5_finetuned_voxpopuli_fr
ykirpichev
2023-07-17T09:02:15Z
84
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "text-to-speech", "generated_from_trainer", "dataset:facebook/voxpopuli-fr", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-07-17T07:04:40Z
--- license: mit tags: - text-to-speech - generated_from_trainer datasets: - facebook/voxpopuli-fr model-index: - name: speecht5_finetuned_voxpopuli_fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_fr This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli-fr dataset. It achieves the following results on the evaluation set: - Loss: 0.4623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5294 | 2.99 | 1000 | 0.4842 | | 0.5094 | 5.98 | 2000 | 0.4688 | | 0.5032 | 8.97 | 3000 | 0.4636 | | 0.4981 | 11.96 | 4000 | 0.4623 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
MatthisHoules/t5-large-finetuned-break-qdmr-decomposition
MatthisHoules
2023-07-17T08:56:04Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:break_data", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-02T17:43:28Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - break_data metrics: - bleu model-index: - name: t5-large-finetuned-break-qdmr-decomposition results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: break_data type: break_data config: QDMR split: validation args: QDMR metrics: - name: Bleu type: bleu value: 0.22169382457557757 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-large-finetuned-break-qdmr-decomposition This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the break_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1729 - Bleu: 0.2217 - Brevity Penalty: 0.2926 - Length Ratio: 0.4487 - Translation Length: 108954 - Reference Length: 242845 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Brevity Penalty | Length Ratio | Translation Length | Reference Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------------:|:------------:|:------------------:|:----------------:| | No log | 1.0 | 346 | 0.2217 | 0.2190 | 0.2973 | 0.4519 | 109738 | 242845 | | 0.3597 | 2.0 | 692 | 0.1898 | 0.2213 | 0.2944 | 0.4499 | 109245 | 242845 | | 0.1943 | 3.0 | 1038 | 0.1780 | 0.2213 | 0.2936 | 0.4494 | 109125 | 242845 | | 0.1943 | 4.0 | 1385 | 0.1722 | 0.2209 | 0.2926 | 0.4486 | 108943 | 242845 | | 0.1588 | 5.0 | 1731 | 0.1708 | 0.2221 | 0.2938 | 0.4495 | 109159 | 242845 | | 0.1395 | 6.0 | 2077 | 0.1699 | 0.2209 | 0.2907 | 0.4473 | 108635 | 242845 | | 0.1395 | 7.0 | 2423 | 0.1699 | 0.2219 | 0.2927 | 0.4487 | 108964 | 242845 | | 0.1245 | 8.0 | 2770 | 0.1717 | 0.2215 | 0.2924 | 0.4485 | 108909 | 242845 | | 0.1152 | 9.0 | 3116 | 0.1724 | 0.2215 | 0.2924 | 0.4485 | 108914 | 242845 | | 0.1152 | 9.99 | 3460 | 0.1729 | 0.2217 | 0.2926 | 0.4487 | 108954 | 242845 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
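The card reports metrics but no usage example. A minimal sketch for generating a QDMR-style decomposition of a complex question; the input is fed as-is, since the card does not document any special prefix (an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "MatthisHoules/t5-large-finetuned-break-qdmr-decomposition"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "What is the population of the country that hosted the 2016 Summer Olympics?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```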
cgr28/CartPole-v1
cgr28
2023-07-17T08:44:20Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-17T08:44:08Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
fadliaulawi/distilbert-base-uncased-finetuned-imdb
fadliaulawi
2023-07-17T08:42:33Z
120
0
transformers
[ "transformers", "pytorch", "tf", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-17T07:19:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7087 | 1.0 | 157 | 2.4899 | | 2.5798 | 2.0 | 314 | 2.4231 | | 2.5271 | 3.0 | 471 | 2.4356 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
uzenhuang/distilgpt2-finetuned-wikitext2
uzenhuang
2023-07-17T08:40:04Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T08:42:11Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7578 | 1.0 | 2334 | 3.6665 | | 3.6405 | 2.0 | 4668 | 3.6480 | | 3.5943 | 3.0 | 7002 | 3.6424 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
ITG/wav2vec2-large-xlsr-gl
ITG
2023-07-17T08:35:55Z
78
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ITG", "PyTorch", "Transformers", "gl", "dataset:openslr", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-17T08:15:40Z
--- license: cc-by-nc-nd-4.0 datasets: - openslr language: - gl pipeline_tag: automatic-speech-recognition tags: - ITG - PyTorch - Transformers - wav2vec2 --- # Wav2Vec2 Large XLSR Galician ## Description This is a fine-tuned version of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model for ASR in Galician. --- ## Dataset The dataset used for fine-tuning this model was the [OpenSLR galician](https://huggingface.co/datasets/openslr/viewer/SLR77) dataset, available in the openslr repository. --- ## Example inference script ### Check this example script to run our model in inference mode ```python import torch import librosa from transformers import AutoProcessor, AutoModelForCTC filename = "demo.wav" # change this line to the name of your audio file sample_rate = 16_000 processor = AutoProcessor.from_pretrained('ITG/wav2vec2-large-xlsr-gl') model = AutoModelForCTC.from_pretrained('ITG/wav2vec2-large-xlsr-gl') device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.to(device) speech_array, _ = librosa.load(filename, sr=sample_rate) inputs = processor(speech_array, sampling_rate=sample_rate, return_tensors="pt", padding=True).to(device) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask.to(device)).logits decode_output = processor.batch_decode(torch.argmax(logits, dim=-1))[0] print(f"ASR Galician wav2vec2-large-xlsr output: {decode_output}") ``` --- ## Fine-tuning hyper-parameters | **Hyper-parameter** | **Value** | |:----------------------------------------:|:---------------------------:| | Training batch size | 16 | | Evaluation batch size | 8 | | Learning rate | 3e-4 | | Gradient accumulation steps | 2 | | Group by length | true | | Evaluation strategy | steps | | Max training epochs | 50 | | Max steps | 4000 | | Generate max length | 225 | | FP16 | true | | Metric for best model | wer | | Greater is better | false | ## Fine-tuning in a different dataset or style If you're interested in fine-tuning your own wav2vec2 model, we suggest starting with the [facebook/wav2vec2-large-xlsr-53 model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). Additionally, you may find this [fine-tuning on galician notebook by Diego Fustes](https://github.com/diego-fustes/xlsr-fine-tuning-gl/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Galician.ipynb) to be a valuable resource. This guide served as a helpful reference during the training process of this Galician wav2vec2-large-xlsr model!
nolanaatama/mnnrl
nolanaatama
2023-07-17T08:25:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-15T00:50:33Z
--- license: creativeml-openrail-m ---
peterdamn/distilhubert-finetuned-gtzan-finetuned-gtzan
peterdamn
2023-07-17T08:23:56Z
162
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-16T09:00:19Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan-finetuned-gtzan This model is a fine-tuned version of [NemesisAlm/distilhubert-finetuned-gtzan](https://huggingface.co/NemesisAlm/distilhubert-finetuned-gtzan) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 1.1748 - Accuracy: 0.81 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0023 | 1.0 | 899 | 1.1748 | 0.81 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
MelindaStudy/sd-class-butterflies-32
MelindaStudy
2023-07-17T08:16:47Z
30
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-07-17T08:16:17Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('MelindaStudy/sd-class-butterflies-32') image = pipeline().images[0] image ```
ykirpichev/speecht5_finetuned_voxpopuli_nl
ykirpichev
2023-07-17T08:13:17Z
83
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-07-17T05:53:12Z
--- license: mit tags: - generated_from_trainer - text-to-speech datasets: - facebook/voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5242 | 4.3 | 1000 | 0.4753 | | 0.5023 | 8.61 | 2000 | 0.4625 | | 0.4941 | 12.91 | 3000 | 0.4577 | | 0.4903 | 17.21 | 4000 | 0.4569 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
thoshan/zeroStores
thoshan
2023-07-17T08:11:01Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-17T08:11:01Z
--- license: creativeml-openrail-m ---
abhinavkashyap92/whisper-tiny-asr-english
abhinavkashyap92
2023-07-17T07:57:56Z
91
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-17T04:15:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-asr-english results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.31582054309327035 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-asr-english This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Wer Ortho: 0.3196 - Wer: 0.3158 - Loss: 0.5223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Wer Ortho | Wer | Validation Loss | |:-------------:|:-----:|:----:|:---------:|:------:|:---------------:| | 0.4862 | 0.89 | 100 | 0.3917 | 0.3719 | 0.5372 | | 0.3213 | 1.79 | 200 | 0.3769 | 0.3571 | 0.4777 | | 0.1822 | 2.68 | 300 | 0.3726 | 0.3589 | 0.4746 | | 0.068 | 3.57 | 400 | 0.3276 | 0.3146 | 0.4819 | | 0.0333 | 4.46 | 500 | 0.3196 | 0.3158 | 0.5223 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
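A minimal transcription sketch using the standard ASR pipeline; the audio filename is only an example:
```python
from transformers import pipeline

# Sketch: the fine-tuned Whisper checkpoint can be used directly through the ASR pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="abhinavkashyap92/whisper-tiny-asr-english",
)

print(asr("sample_call.wav")["text"])
```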
StarRing2022/Dlip-RWKV
StarRing2022
2023-07-17T07:56:21Z
120
0
transformers
[ "transformers", "pytorch", "rwkv", "license:lgpl-3.0", "endpoints_compatible", "region:us" ]
null
2023-07-17T07:32:43Z
--- license: lgpl-3.0 --- A CLIP-inspired scheme for image-text alignment training with a frozen LLM in the generic HF format, using RWKV-4-World-0.4B as the example model and CIFAR-10 as the dataset. Joint work: inspired by the visualrwkv frozen-LLM approach (https://github.com/howard-hou/VisualRWKV). The RWKV-4-World-0.4B model and the checkpoint file after 30 training epochs: GitHub repository: https://github.com/StarRing2022/Dlip-RWKV/
guilleguells/cypher-7b-apoc2
guilleguells
2023-07-17T07:45:38Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-15T10:44:20Z
--- library_name: peft --- ***Settings*** training_args = transformers.TrainingArguments( auto_find_batch_size=True, gradient_accumulation_steps=4, num_train_epochs=1, learning_rate=2e-4, fp16=True, save_total_limit=3, logging_steps=1, max_steps=80, output_dir="/home/gguells/finetuning/apoc/", save_strategy='epoch', optim="paged_adamw_8bit", lr_scheduler_type = 'cosine', warmup_ratio = 0.05, )
Sukmin/a2c-PandaReachDense-v2
Sukmin
2023-07-17T07:43:56Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-17T07:42:00Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.18 +/- 0.37 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
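The usage block above is only a placeholder. A minimal loading sketch; the checkpoint filename inside the repo is an assumption based on the usual "<algo>-<env>.zip" naming used by the SB3 Hub integration:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Sketch only: download the zipped policy and load it with SB3
checkpoint = load_from_hub(
    repo_id="Sukmin/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)

# Evaluating the policy requires the panda_gym package, which registers
# PandaReachDense-v2; the reset/step API differs between gym and gymnasium
# versions, so only the model loading is shown here.
```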
ethan1278/WizardLM-Uncensored-Falcon-7b-sharded-bf16
ethan1278
2023-07-17T07:37:34Z
12
0
transformers
[ "transformers", "pytorch", "RefinedWebModel", "text-generation", "custom_code", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-17T06:01:19Z
Copy of [Wizard-Uncensored-Falcon-7b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b) but sharded. Please refer to the original repo for details about license/dataset/etc.
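A minimal loading sketch, assuming the sharded bf16 checkpoint behaves like the original Falcon-7B variant; `trust_remote_code=True` is needed because the architecture ships custom modelling code, and the recommended prompt format should be taken from the original repo:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ethan1278/WizardLM-Uncensored-Falcon-7b-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

prompt = "Explain the difference between supervised and unsupervised learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```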
OysterQAQ/DanbooruCLIP
OysterQAQ
2023-07-17T07:22:55Z
127
9
transformers
[ "transformers", "pytorch", "clip", "zero-shot-image-classification", "vision", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2023-05-18T14:06:00Z
--- tags: - vision widget: - src: https://huggingface.co/OysterQAQ/DanbooruCLIP/resolve/main/example.jpg candidate_labels: Azur Lane, 3 girl with sword, cat ear, a dog example_title: Azur Lane - src: https://huggingface.co/OysterQAQ/DanbooruCLIP/resolve/main/example2.jpg candidate_labels: 1 girl with black hair, rabbit ear, big breasts, minato aqua, fate/extra, k-on!, daiyousei, cirno example_title: cirno & daiyousei --- ### 介绍 2023_07_17更新:增加了pixiv数据集进行训练 使用danburoo2021数据集对clip(ViT-L/14)模型进行微调。 0-3 epoch学习率为4e-6,权重衰减为1e-3 4-8 epoch学习率为1e-6,权重衰减为1e-3 标签预处理过程: ```python for i in range(length): # 加载并且缩放图片 if not is_image(data_from_db.path[i]): continue try: img = self.preprocess( Image.open(data_from_db.path[i].replace("./", "/mnt/lvm/danbooru2021/danbooru2021/"))) except Exception as e: #print(e) continue # 处理标签 tags = json.loads(data_from_db.tags[i]) # 优先选择人物和作品标签 category_group = {} for tag in tags: category_group.setdefault(tag["category"], []).append(tag) # category_group=groupby(tags, key=lambda x: (x["category"])) character_list = category_group[4] if 4 in category_group else [] # 作品需要过滤以bad开头的 work_list = list(filter( lambda e: e["name"] != "original" , category_group[3])) if 3 in category_group else [] # work_list= category_group[5] if 5 in category_group else [] general_list = category_group[0] if 0 in category_group else [] caption = "" caption_2 = None for character in character_list: if len(work_list) != 0: # 去除括号内作品内容 character["name"] = re.sub(u"\\(.*?\\)", "", character["name"]) caption += character["name"].replace("_", " ") caption += "," caption = caption[:-1] caption += " " if len(work_list) != 0: caption += "from " for work in work_list: caption += work["name"].replace("_", " ") caption += " " # 普通标签 if len(general_list) != 0: caption += "with " if len(general_list) > 20: general_list_1 = general_list[:int(len(general_list) / 2)] general_list_2 = general_list[int(len(general_list) / 2):] caption_2 = caption for general in general_list_1: if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len( re.findall(is_contain, general["name"])) != 0: caption_2 += general["name"].replace("_", " ") caption_2 += "," caption_2 = caption_2[:-1] for general in general_list_2: if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len( re.findall(is_contain, general["name"])) != 0: caption += general["name"].replace("_", " ") caption += "," caption = caption[:-1] else: for general in general_list: # 如果标签数据目大于20 则拆分成两个caption if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len( re.findall(is_contain, general["name"])) != 0: caption += general["name"].replace("_", " ") caption += "," caption = caption[:-1] # 标签汇总成语句 # tokenize语句 # 返回 # 过长截断 不行的话用huggingface的 text_1 = clip.tokenize(texts=caption, truncate=True) text_2= None if caption_2 is not None: text_2 = clip.tokenize(texts=caption_2, truncate=True) # 处理逻辑 # print(img) yield img, text_1[0] if text_2 is not None: yield img, text_2[0] ``` ### 使用 ```python from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("OysterQAQ/DanbooruCLIP") processor = CLIPProcessor.from_pretrained("OysterQAQ/DanbooruCLIP") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = 
outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` ## Feedback ### Where to send questions or comments about the model Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
lchen7/FB_week2
lchen7
2023-07-17T07:15:50Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T07:15:45Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
hafidikhsan
2023-07-17T07:14:50Z
103
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-17T07:12:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8697 - Accuracy: 0.78 - F1: 0.7738 - Precision: 0.7735 - Recall: 0.78 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.0774 | 1.0 | 500 | 0.9199 | 0.57 | 0.5728 | 0.6154 | 0.57 | | 0.6526 | 2.0 | 1000 | 0.6857 | 0.7 | 0.6925 | 0.7167 | 0.7 | | 0.3767 | 3.0 | 1500 | 0.5830 | 0.79 | 0.7887 | 0.7884 | 0.79 | | 0.242 | 4.0 | 2000 | 0.7786 | 0.82 | 0.8160 | 0.8163 | 0.82 | | 0.2691 | 5.0 | 2500 | 0.8399 | 0.814 | 0.8113 | 0.8109 | 0.814 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
StarRing2022/RWKV-430M-Pile-Alpaca
StarRing2022
2023-07-17T07:11:34Z
149
0
transformers
[ "transformers", "pytorch", "rwkv", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-22T07:58:07Z
--- license: apache-2.0 --- Full-parameter fine-tuning and service deployment of RWKV on Alpaca-format datasets, done conveniently through the HF interface. Base model: RWKV-430M-pile (sgugger/rwkv-430M-pile). Dataset: test.json, for testing. Hardware: a single RTX 4090, 64 GB RAM. Training epochs: 100. Training time: about 5 minutes. HF Space: https://huggingface.co/spaces/StarRing2022/Rwkv-430M-pile-Alpaca-Run GitHub repository: https://github.com/StarRing2022/HF-For-RWKVRaven-Alpaca/
StarRing2022/RWKV-4-World-1.5B-Alpaca
StarRing2022
2023-07-17T07:11:11Z
12
0
transformers
[ "transformers", "pytorch", "rwkv", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-17T02:07:03Z
--- license: apache-2.0 --- Full-parameter fine-tuning and service deployment of RWKV on Alpaca-format datasets, done conveniently through the HF interface. Base model: RWKV-4-World-1.5B (StarRing2022/RWKV-4-World-1.5B). Dataset: test.json, for testing. Hardware: a single RTX 4090, 64 GB RAM. Training epochs: 1. Training time: about 70 seconds. GitHub repository: https://github.com/StarRing2022/HF-For-RWKVWorld-LoraAlpaca/
PaulineJamin/ppo-Pyramids
PaulineJamin
2023-07-17T07:03:47Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-17T07:01:55Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: PaulineJamin/ppo-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Sukmin/a2c-AntBulletEnv-v0
Sukmin
2023-07-17T06:59:49Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T16:13:24Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1407.26 +/- 164.32 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ailabturkiye/shaco
ailabturkiye
2023-07-17T06:35:20Z
0
0
null
[ "music", "tr", "license:openrail", "region:us" ]
null
2023-07-17T06:30:09Z
--- license: openrail language: - tr tags: - music --- Created from roughly 5 minutes of audio of Shaco, a champion from the game League of Legends, trained for 250 epochs. A pitch (transpose) of -3 or -5 is recommended. If you share a cover made with this model on any platform, please credit our Discord link. discord.gg/ailab
StarRing2022/RWKV-4-World-7B
StarRing2022
2023-07-17T06:33:26Z
11
7
transformers
[ "transformers", "pytorch", "rwkv", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-07-17T01:08:57Z
--- license: apache-2.0 --- RWKV-4-World in Hugging Face format. Because the new World tokenizer differs considerably from the earlier Raven/Pile versions, a new HF adaptation is required. ringrwkv is compatible with both the native rwkv library and the transformers rwkv library, adds the configuration and code for the World versions (covering the full 1.5B, 3B and 7B series), and fixes a subtle issue of the original HF RWKV when forwarding RWKVOutput, mainly by introducing and clarifying last_hidden_state. Below is a lightweight usage example, which is quite convenient:<br> RingRWKV GitHub repository: https://github.com/StarRing2022/RingRWKV <br> import torch<br> from ringrwkv.configuration_rwkv_world import RwkvConfig<br> from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER<br> from ringrwkv.modehf_world import RwkvForCausalLM<br> model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-7B") # or download this model to a local folder<br> tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')<br> text = "你叫什么名字?"<br> question = f'Question: {text.strip()}\n\nAnswer:'<br> input_ids = tokenizer.encode(question)<br> input_ids = torch.tensor(input_ids).unsqueeze(0)<br> out = model.generate(input_ids,max_new_tokens=40)<br><br> outlist = out[0].tolist()<br> for i in outlist:<br> &nbsp;&nbsp;&nbsp;&nbsp;if i==0:&nbsp;# remove elements whose token id is 0 <br> &nbsp;&nbsp;&nbsp;&nbsp;outlist.remove(i)<br> answer = tokenizer.decode(outlist)<br> print(answer)<br>
StarRing2022/RWKV-4-World-3B
StarRing2022
2023-07-17T06:31:33Z
119
0
transformers
[ "transformers", "pytorch", "rwkv", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-07-17T00:40:44Z
--- license: apache-2.0 --- RWKV-4-World in Hugging Face format. Because the new World tokenizer differs considerably from the earlier Raven/Pile versions, a new HF adaptation is required. ringrwkv is compatible with both the native rwkv library and the transformers rwkv library, adds the configuration and code for the World versions (covering the full 1.5B, 3B and 7B series), and fixes a subtle issue of the original HF RWKV when forwarding RWKVOutput, mainly by introducing and clarifying last_hidden_state. Below is a lightweight usage example, which is quite convenient:<br> RingRWKV GitHub repository: https://github.com/StarRing2022/RingRWKV <br> import torch<br> from ringrwkv.configuration_rwkv_world import RwkvConfig<br> from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER<br> from ringrwkv.modehf_world import RwkvForCausalLM<br> model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-3B") # or download this model to a local folder<br> tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')<br> text = "你叫什么名字?"<br> question = f'Question: {text.strip()}\n\nAnswer:'<br> input_ids = tokenizer.encode(question)<br> input_ids = torch.tensor(input_ids).unsqueeze(0)<br> out = model.generate(input_ids,max_new_tokens=40)<br><br> outlist = out[0].tolist()<br> for i in outlist:<br> &nbsp;&nbsp;&nbsp;&nbsp;if i==0:&nbsp;# remove elements whose token id is 0 <br> &nbsp;&nbsp;&nbsp;&nbsp;outlist.remove(i)<br> answer = tokenizer.decode(outlist)<br> print(answer)<br>
charlieoneill/falcon-abstracts
charlieoneill
2023-07-17T06:29:06Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-17T00:55:24Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: falcon-abstracts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-abstracts This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 2500 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
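The hyperparameter list above maps fairly directly onto `transformers.TrainingArguments`. The following is a minimal sketch of that mapping; the output directory and any option not listed in the card are assumptions:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in the card; values not listed there are assumptions.
training_args = TrainingArguments(
    output_dir="falcon-abstracts",      # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # gives the reported total train batch size of 16
    max_steps=2500,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    optim="adamw_torch",                # Adam with betas=(0.9, 0.999) and epsilon=1e-08
    seed=42,
)
```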
prognosis/alpaca-cardio-qa
prognosis
2023-07-17T06:27:20Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-17T06:24:24Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
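The card only records the PEFT version, so the following loading sketch assumes this is a causal-LM adapter (the "alpaca" naming suggests instruction-tuned text generation); `PeftConfig` reads the base model name from the adapter config stored in the repository:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "prognosis/alpaca-cardio-qa"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model recorded in the adapter config, then attach the adapter weights.
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_id)
```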
ailabturkiye/rtkamil
ailabturkiye
2023-07-17T06:25:41Z
0
0
null
[ "music", "tr", "license:openrail", "region:us" ]
null
2023-07-17T06:21:55Z
--- license: openrail language: - tr tags: - music --- Created by training for 1000 epochs on roughly 3 minutes of dataset audio of Kamil, a beloved character from the cartoon Rafadan Tayfa. If you share a cover made with this model on any platform, please credit our Discord link. discord.gg/ailab
ailabturkiye/2xciv
ailabturkiye
2023-07-17T06:22:21Z
0
0
null
[ "music", "tr", "license:openrail", "region:us" ]
null
2023-07-17T06:16:23Z
--- license: openrail language: - tr tags: - music --- Created by training for 250 epochs on roughly 5 minutes of dataset audio of 2xCIV, a VALORANT YouTuber. If you share a cover made with this model on any platform, please credit our Discord link. discord.gg/ailab
Althhecow/CattleMix
Althhecow
2023-07-17T06:00:04Z
0
0
null
[ "region:us" ]
null
2023-07-16T21:23:09Z
Model based on Anything v3 and a few older models that I've since lost track of. This model was originally mixed over 6 months ago, but it has remained useful for cartoonish / anthropomorphic subjects despite the newer models released since then.
ailabturkiye/Ceza
ailabturkiye
2023-07-17T05:56:41Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-07-16T15:30:12Z
--- license: openrail --- [![Discord Sunucumuz](https://img.shields.io/badge/Discord.gg%2F-AiLab-ailab )](discord.gg/ailab) ![Static Badge](https://img.shields.io/badge/AI%20LAB%20Hugging%20Face%20Organization-sa?style=plastic&labelColor=blue&color=blue) ![Static Badge](https://img.shields.io/badge/Yap%C4%B1mc%C4%B1%20Bilgisi%20Verilmeden%20Payla%C5%9F%C4%B1lmas%C4%B1%20Yasakt%C4%B1r!-s?style=plastic&labelColor=orange&color=red) # Ceza - RVC V2 500 Epoch **Voice model of the rapper Ceza, trained with RVC V2 for 500 epochs.** _The dataset and training were done by me._ __Sharing the model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__ ## Credits **If you share a cover made with this model on any platform, please give credits.** - Discord: barisdark0 - YouTube: Barış (https://www.youtube.com/@barisdark) ![Static Badge](https://img.shields.io/badge/Yap%C4%B1mc%C4%B1%20Bilgisi%20Verilmeden%20Payla%C5%9F%C4%B1lmas%C4%B1%20Yasakt%C4%B1r!-s?style=plastic&labelColor=orange&color=red) [![Discord Sunucumuz](https://img.shields.io/badge/Discord.gg%2F-AiLab-ailab )](discord.gg/ailab) ![Static Badge](https://img.shields.io/badge/AI%20LAB%20Hugging%20Face%20Organization-sa?style=plastic&labelColor=blue&color=blue)
nolanaatama/krtcbnfrmnrvnrvcv2150pchsclbbdsm
nolanaatama
2023-07-17T05:51:48Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-17T05:46:59Z
--- license: creativeml-openrail-m ---
hyeongjin99/vit-base-aihub_model-v2
hyeongjin99
2023-07-17T05:36:33Z
221
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-17T05:21:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - precision - recall - f1 model-index: - name: vit-base-aihub_model-v2 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.963855421686747 - name: Precision type: precision value: 0.9609609235289817 - name: Recall type: recall value: 0.9613676432460462 - name: F1 type: f1 value: 0.9604284776111401 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-aihub_model-v2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3076 - Accuracy: 0.9639 - Precision: 0.9610 - Recall: 0.9614 - F1: 0.9604 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 3 | 1.2753 | 0.8373 | 0.8563 | 0.7993 | 0.8022 | | No log | 2.0 | 6 | 1.1252 | 0.8675 | 0.8895 | 0.8300 | 0.8333 | | No log | 3.0 | 9 | 0.9427 | 0.8976 | 0.9185 | 0.8696 | 0.8760 | | 1.1721 | 4.0 | 12 | 0.7995 | 0.9398 | 0.9474 | 0.9195 | 0.9246 | | 1.1721 | 5.0 | 15 | 0.6820 | 0.9699 | 0.9704 | 0.9613 | 0.9642 | | 1.1721 | 6.0 | 18 | 0.5927 | 0.9639 | 0.9603 | 0.9583 | 0.9587 | | 0.7084 | 7.0 | 21 | 0.5239 | 0.9759 | 0.9725 | 0.9729 | 0.9725 | | 0.7084 | 8.0 | 24 | 0.4743 | 0.9699 | 0.9665 | 0.9671 | 0.9665 | | 0.7084 | 9.0 | 27 | 0.4436 | 0.9578 | 0.9558 | 0.9556 | 0.9544 | | 0.4668 | 10.0 | 30 | 0.4070 | 0.9639 | 0.9610 | 0.9614 | 0.9604 | | 0.4668 | 11.0 | 33 | 0.3817 | 0.9699 | 0.9665 | 0.9671 | 0.9665 | | 0.4668 | 12.0 | 36 | 0.3625 | 0.9699 | 0.9665 | 0.9671 | 0.9665 | | 0.4668 | 13.0 | 39 | 0.3536 | 0.9578 | 0.9558 | 0.9556 | 0.9544 | | 0.3611 | 14.0 | 42 | 0.3384 | 0.9578 | 0.9558 | 0.9556 | 0.9544 | | 0.3611 | 15.0 | 45 | 0.3249 | 0.9699 | 0.9665 | 0.9671 | 0.9665 | | 0.3611 | 16.0 | 48 | 0.3164 | 0.9699 | 0.9665 | 0.9671 | 0.9665 | | 0.3063 | 17.0 | 51 | 0.3142 | 0.9639 | 0.9610 | 0.9614 | 0.9604 | | 0.3063 | 18.0 | 54 | 0.3122 | 0.9639 | 0.9610 | 0.9614 | 0.9604 | | 0.3063 | 19.0 | 57 | 0.3093 | 0.9639 | 0.9610 | 0.9614 | 0.9604 | | 0.294 | 20.0 | 60 | 0.3076 | 0.9639 | 0.9610 | 0.9614 | 0.9604 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
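A minimal inference sketch for the fine-tuned classifier above; the image path is a placeholder, and the returned labels come from whatever id2label mapping was saved with the checkpoint:
```python
from transformers import pipeline
from PIL import Image

# The checkpoint is a fine-tuned google/vit-base-patch16-224-in21k image classifier.
classifier = pipeline("image-classification", model="hyeongjin99/vit-base-aihub_model-v2")

image = Image.open("example.jpg")  # placeholder path
print(classifier(image))
```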
kayteekay/jordan-generator
kayteekay
2023-07-17T05:28:35Z
3
0
diffusers
[ "diffusers", "art", "lora", "text-to-image", "en", "dataset:kayteekay/jordan-generator-dataset", "license:openrail", "region:us" ]
text-to-image
2023-07-17T04:46:12Z
--- license: openrail datasets: - kayteekay/jordan-generator-dataset language: - en library_name: diffusers pipeline_tag: text-to-image tags: - art - lora ---
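The card carries only metadata (diffusers, text-to-image, LoRA), so the following is a sketch under the assumption that the repository holds LoRA weights for a Stable Diffusion base model; the base checkpoint and the prompt are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base checkpoint — replace with whatever base this LoRA was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("kayteekay/jordan-generator")  # assumes standard diffusers-compatible LoRA weights
pipe = pipe.to("cuda")

image = pipe("a pair of jordan sneakers, product photo").images[0]
image.save("jordan.png")
```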
zwangab91/q-FrozenLake-v1-4x4-noSlippery
zwangab91
2023-07-17T05:19:06Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-17T05:19:04Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="zwangab91/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
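For completeness, a self-contained sketch of the snippet above. The `load_from_hub` helper mirrors the Deep RL Course notebook, and the `"qtable"` key (plus the use of gymnasium) are assumptions about how this particular pickle was saved:
```python
import pickle
import numpy as np
import gymnasium as gym  # assumption: with classic gym (<0.26), reset()/step() return fewer values
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled dict pushed by the course's push_to_hub helper and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="zwangab91/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # add is_slippery=False etc. here if the saved env_id needs it

# Greedy rollout with the learned Q-table (assumed to be stored under "qtable").
qtable = model["qtable"]
state, info = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```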