| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| modelId | string | length 5 | length 139 |
| author | string | length 2 | length 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-02 12:32:32 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (categorical, 534 distinct values) | | |
| tags | list | length 1 | length 4.05k |
| pipeline_tag | string (categorical, 55 distinct values) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-02 12:31:20 |
| card | string | length 11 | length 1.01M |
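Each record below repeats these ten fields in order (modelId, author, last_modified, downloads, likes, library_name, tags, pipeline_tag, createdAt, card). For orientation, here is a minimal sketch of pulling the same fields through the `huggingface_hub` client; attribute names follow recent `huggingface_hub` releases and may differ slightly across versions:

```python
# Sketch: fetch per-model fields equivalent to this dump's columns.
# Assumes a recent huggingface_hub release.
from huggingface_hub import HfApi, ModelCard

api = HfApi()
for model in api.list_models(limit=5, full=True):
    print(model.id, model.author, model.downloads, model.likes)
    print(model.library_name, model.pipeline_tag, model.tags)
    # The `card` column corresponds to the repo README.
    try:
        card = ModelCard.load(model.id)
        print(card.content[:200])
    except Exception:
        pass  # some repos have no model card
```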
kyleeasterly/openllama-7b_purple-aerospace-v2-200-4
kyleeasterly
2023-08-09T07:47:44Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:43:42Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
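The PEFT adapter cards in this dump record only the `bitsandbytes` quantization settings used during training. As a rough illustration, those settings map onto `transformers`' `BitsAndBytesConfig` as sketched below; the base model name is an assumption, since the adapter card does not state it:

```python
# Sketch: the card's bitsandbytes settings expressed as a BitsAndBytesConfig,
# then the adapter loaded on top of a quantized base model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_7b",  # hypothetical base; not stated in the card
    quantization_config=bnb_config,
)
model = PeftModel.from_pretrained(
    base, "kyleeasterly/openllama-7b_purple-aerospace-v2-200-4"
)
```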
kyleeasterly/openllama-7b_purple-aerospace-v2-200-2
kyleeasterly
2023-08-09T07:47:13Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:43:37Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-200-0
kyleeasterly
2023-08-09T07:46:00Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:43:24Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
annaovesnaatatt/ppo-lunarlander-v2
annaovesnaatatt
2023-08-09T07:43:54Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-09T07:43:33Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.39 +/- 19.43 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
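The card's usage block is left as a TODO stub. A hedged completion under common conventions (SB3 Hub uploads typically store the checkpoint as `<algo>-<env>.zip`; the exact filename is an assumption):

```python
# Sketch: load the PPO checkpoint from the Hub and restore the agent.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="annaovesnaatatt/ppo-lunarlander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```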
amu-cai/polemma-base
amu-cai
2023-08-09T07:43:36Z
119
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "T5", "lemmatization", "pl", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-03-15T13:54:33Z
--- language: pl tags: - T5 - lemmatization license: apache-2.0 --- # PoLemma Base PoLemma models are intended for lemmatization of named entities and multi-word expressions in the Polish language. They were fine-tuned from the allegro/plT5 models, e.g.: [allegro/plt5-base](https://huggingface.co/allegro/plt5-base). ## Usage Sample usage: ```python from transformers import pipeline pipe = pipeline(task="text2text-generation", model="amu-cai/polemma-base", tokenizer="amu-cai/polemma-base") hyp = [res['generated_text'] for res in pipe(["federalnego urzędu statystycznego"], clean_up_tokenization_spaces=True, num_beams=5)][0] ``` ## Evaluation results Lemmatization Exact Match was computed on the SlavNER 2021 test set. | Model | Exact Match | | :------ | ------: | | [polemma-large](https://huggingface.co/amu-cai/polemma-large) | 92.61 | | [polemma-base](https://huggingface.co/amu-cai/polemma-base) | 91.34 | | [polemma-small](https://huggingface.co/amu-cai/polemma-small) | 88.46 | ## Citation If you use the model, please cite the following paper: ```bibtex @inproceedings{palka-nowakowski-2023-exploring, title = "Exploring the Use of Foundation Models for Named Entity Recognition and Lemmatization Tasks in {S}lavic Languages", author = "Pa{\l}ka, Gabriela and Nowakowski, Artur", booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)", month = may, year = "2023", address = "Dubrovnik, Croatia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bsnlp-1.19", pages = "165--171", abstract = "This paper describes Adam Mickiewicz University{'}s (AMU) solution for the 4th Shared Task on SlavNER. The task involves the identification, categorization, and lemmatization of named entities in Slavic languages. Our approach involved exploring the use of foundation models for these tasks. In particular, we used models based on the popular BERT and T5 model architectures. Additionally, we used external datasets to further improve the quality of our models. Our solution obtained promising results, achieving high metrics scores in both tasks. We describe our approach and the results of our experiments in detail, showing that the method is effective for NER and lemmatization in Slavic languages. Additionally, our models for lemmatization will be available at: https://huggingface.co/amu-cai.", } ``` ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
kyleeasterly/openllama-7b_purple-aerospace-v2-300-96
kyleeasterly
2023-08-09T07:40:18Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:34:13Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-88
kyleeasterly
2023-08-09T07:39:38Z
5
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:34:09Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-80
kyleeasterly
2023-08-09T07:39:29Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:33:53Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-48
kyleeasterly
2023-08-09T07:38:13Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:33:33Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-32
kyleeasterly
2023-08-09T07:37:30Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:33:28Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-30
kyleeasterly
2023-08-09T07:37:20Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:33:24Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-24
kyleeasterly
2023-08-09T07:36:19Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:32:01Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-22
kyleeasterly
2023-08-09T07:35:29Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:31:57Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-18
kyleeasterly
2023-08-09T07:33:13Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:31:49Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-16
kyleeasterly
2023-08-09T07:32:48Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:27:28Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-14
kyleeasterly
2023-08-09T07:32:07Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:27:24Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
Hekenye/lora-trained-xl-with-prior-loss-other
Hekenye
2023-08-09T07:30:02Z
13
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-09T06:07:24Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A flower in melting golden 3d rendering style tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - Hekenye/lora-trained-xl-with-prior-loss-other These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "A flower in melting golden 3d rendering style" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
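The card above ships LoRA weights without usage code. A minimal sketch, assuming the standard `diffusers` LoRA-loading API:

```python
# Sketch: load the SDXL base pipeline, attach the LoRA weights, and sample.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Hekenye/lora-trained-xl-with-prior-loss-other")
image = pipe("A flower in melting golden 3d rendering style").images[0]
```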
kyleeasterly/openllama-7b_purple-aerospace-v2-300-6
kyleeasterly
2023-08-09T07:29:59Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:26:46Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
muhtasham/bert-small-finetuned-glue-rte
muhtasham
2023-08-09T07:29:50Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-27T15:51:58Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-small-finetuned-glue-rte results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: rte split: train args: rte metrics: - name: Accuracy type: accuracy value: 0.631768953068592 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small-finetuned-glue-rte This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 2.8715 - Accuracy: 0.6318 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 2.62 | 50 | 1.8285 | 0.6318 | | No log | 5.26 | 100 | 2.0806 | 0.6462 | | No log | 7.87 | 150 | 2.1598 | 0.6282 | | No log | 10.51 | 200 | 2.2774 | 0.6318 | | No log | 13.15 | 250 | 2.3676 | 0.6245 | | No log | 15.77 | 300 | 2.4581 | 0.6462 | | No log | 18.41 | 350 | 2.6175 | 0.6354 | | No log | 21.05 | 400 | 2.6697 | 0.6354 | | No log | 23.67 | 450 | 2.7717 | 0.6354 | | 0.0101 | 26.31 | 500 | 2.7975 | 0.6462 | | 0.0101 | 28.92 | 550 | 2.8532 | 0.6390 | | 0.0101 | 31.56 | 600 | 2.9054 | 0.6209 | | 0.0101 | 34.21 | 650 | 2.8715 | 0.6318 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
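The card gives training details but no inference snippet. A hedged sketch for RTE-style sentence-pair classification (label names depend on the exported config):

```python
# Sketch: RTE is a sentence-pair task, so inputs go in as text / text_pair.
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="muhtasham/bert-small-finetuned-glue-rte"
)
print(classifier({"text": "A man is playing a guitar.",
                  "text_pair": "A man is playing an instrument."}))
```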
alexiskirke/distilbert-base-uncased-finetuned-customer-reviews
alexiskirke
2023-08-09T07:29:17Z
104
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-24T14:02:28Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-customer-reviews results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-customer-reviews This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2025 - Accuracy: 0.9282 - F1: 0.8858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6594 | 1.0 | 17 | 0.6247 | 0.6667 | 0.0 | | 0.6259 | 2.0 | 34 | 0.6059 | 0.6667 | 0.0 | | 0.5948 | 3.0 | 51 | 0.5691 | 0.6925 | 0.2074 | | 0.5516 | 4.0 | 68 | 0.5241 | 0.7701 | 0.5238 | | 0.5022 | 5.0 | 85 | 0.4813 | 0.7989 | 0.6111 | | 0.4698 | 6.0 | 102 | 0.4496 | 0.8276 | 0.6629 | | 0.4179 | 7.0 | 119 | 0.4137 | 0.8563 | 0.7423 | | 0.3801 | 8.0 | 136 | 0.3833 | 0.8764 | 0.7817 | | 0.3396 | 9.0 | 153 | 0.3537 | 0.9023 | 0.8317 | | 0.3042 | 10.0 | 170 | 0.3256 | 0.9109 | 0.8517 | | 0.2743 | 11.0 | 187 | 0.3058 | 0.9195 | 0.8692 | | 0.2472 | 12.0 | 204 | 0.2905 | 0.9195 | 0.8692 | | 0.2251 | 13.0 | 221 | 0.2780 | 0.9167 | 0.8638 | | 0.1979 | 14.0 | 238 | 0.2659 | 0.9224 | 0.8744 | | 0.1864 | 15.0 | 255 | 0.2573 | 0.9224 | 0.8756 | | 0.158 | 16.0 | 272 | 0.2470 | 0.9253 | 0.8807 | | 0.1493 | 17.0 | 289 | 0.2357 | 0.9282 | 0.8869 | | 0.1404 | 18.0 | 306 | 0.2352 | 0.9253 | 0.8807 | | 0.1364 | 19.0 | 323 | 0.2294 | 0.9282 | 0.8858 | | 0.1256 | 20.0 | 340 | 0.2225 | 0.9253 | 0.8807 | | 0.12 | 21.0 | 357 | 0.2160 | 0.9253 | 0.8807 | | 0.1177 | 22.0 | 374 | 0.2136 | 0.9253 | 0.8807 | | 0.1115 | 23.0 | 391 | 0.2119 | 0.9253 | 0.8807 | | 0.1084 | 24.0 | 408 | 0.2090 | 0.9310 | 0.8899 | | 0.1062 | 25.0 | 425 | 0.2097 | 0.9253 | 0.8807 | | 0.1078 | 26.0 | 442 | 0.2037 | 0.9253 | 0.8807 | | 0.102 | 27.0 | 459 | 0.2047 | 0.9253 | 0.8807 | | 0.1016 | 28.0 | 476 | 0.2025 | 0.9224 | 0.8756 | | 0.0971 | 29.0 | 493 | 0.2033 | 0.9253 | 0.8807 | | 0.0985 | 30.0 | 510 | 0.2025 | 0.9282 | 0.8858 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
kyleeasterly/openllama-7b_purple-aerospace-v2-300-2
kyleeasterly
2023-08-09T07:28:23Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:26:16Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-1
kyleeasterly
2023-08-09T07:26:25Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:25:44Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
kyleeasterly/openllama-7b_purple-aerospace-v2-300-0
kyleeasterly
2023-08-09T07:25:48Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T07:24:55Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
peterandrew987/results
peterandrew987
2023-08-09T07:25:13Z
104
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:squad", "base_model:indobenchmark/indobart-v2", "base_model:finetune:indobenchmark/indobart-v2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-08T11:45:45Z
--- license: mit base_model: indobenchmark/indobart-v2 tags: - generated_from_trainer datasets: - squad metrics: - rouge model-index: - name: results results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: squad type: squad config: plain_text split: train[:1000] args: plain_text metrics: - name: Rouge1 type: rouge value: 16.2693 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.5998 - Rouge1: 16.2693 - Rouge2: 14.9952 - Rougel: 16.233 - Rougelsum: 16.2741 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:| | 1.4819 | 1.0 | 200 | 1.5998 | 16.2693 | 14.9952 | 16.233 | 16.2741 | 20.0 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.2 - Tokenizers 0.13.3
jakezou/dqn-SpaceInvadersNoFrameskip-v4
jakezou
2023-08-09T07:11:26Z
9
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-09T07:10:48Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 690.50 +/- 356.79 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jakezou -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jakezou -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jakezou ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Chattiori/SonicFurryMix
Chattiori
2023-08-09T06:58:53Z
0
2
null
[ "doi:10.57967/hf/0617", "license:creativeml-openrail-m", "region:us" ]
null
2023-04-01T01:38:21Z
--- license: creativeml-openrail-m ---
NEO946B/ppo-SnowballTarget
NEO946B
2023-08-09T06:56:18Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-08-09T06:55:57Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: NEO946B/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Anonymou3/sd-class-butterflies-32
Anonymou3
2023-08-09T06:54:46Z
31
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-08-09T06:54:17Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Anonymou3/sd-class-butterflies-32') image = pipeline().images[0] image ```
joycejiang/llama2-13B-qlora-codex-100-rs0
joycejiang
2023-08-09T06:48:52Z
4
0
peft
[ "peft", "region:us" ]
null
2023-08-09T06:48:50Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
whywynn/Reinforce-CartPole-v1
whywynn
2023-08-09T06:46:02Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-09T06:45:51Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
arminmrm93/Reinforce-Pixelcopter-PLE-v0-v2
arminmrm93
2023-08-09T06:45:01Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-09T06:44:54Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 75.20 +/- 59.72 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
redstonehero/mixprov4_v4
redstonehero
2023-08-09T06:41:40Z
21
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-09T03:57:23Z
--- license: creativeml-openrail-m library_name: diffusers ---
redstonehero/facebombmix_v1
redstonehero
2023-08-09T06:41:36Z
21
1
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-09T03:57:17Z
--- license: creativeml-openrail-m library_name: diffusers ---
redstonehero/pfg_111
redstonehero
2023-08-09T06:41:31Z
21
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-09T03:57:10Z
--- license: creativeml-openrail-m library_name: diffusers ---
redstonehero/sunlightmix
redstonehero
2023-08-09T06:41:15Z
21
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-09T03:49:43Z
--- license: creativeml-openrail-m library_name: diffusers ---
redstonehero/yesmix_v20
redstonehero
2023-08-09T06:40:04Z
22
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-09T03:49:32Z
--- license: creativeml-openrail-m library_name: diffusers ---
redstonehero/henmixreal_v40
redstonehero
2023-08-09T06:39:58Z
21
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-09T03:49:07Z
--- license: creativeml-openrail-m library_name: diffusers ---
ariobsessedwithai/axel
ariobsessedwithai
2023-08-09T06:38:14Z
0
0
null
[ "arxiv:1910.09700", "license:unknown", "region:us" ]
null
2023-08-09T06:30:13Z
--- license: unknown --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
peterandrew987/modified-qa
peterandrew987
2023-08-09T06:34:19Z
107
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:squad", "base_model:indobenchmark/indobart-v2", "base_model:finetune:indobenchmark/indobart-v2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-09T05:55:30Z
--- license: mit base_model: indobenchmark/indobart-v2 tags: - generated_from_trainer datasets: - squad metrics: - rouge model-index: - name: modified-qa results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: squad type: squad config: plain_text split: train[:1000] args: plain_text metrics: - name: Rouge1 type: rouge value: 13.4458 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modified-qa This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 3.9723 - Rouge1: 13.4458 - Rouge2: 6.819 - Rougel: 11.2064 - Rougelsum: 12.5476 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 4.436 | 1.0 | 200 | 3.9723 | 13.4458 | 6.819 | 11.2064 | 12.5476 | 20.0 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.2 - Tokenizers 0.13.3
maikaarda/gte-small-ggml
maikaarda
2023-08-09T06:30:29Z
0
1
null
[ "license:mit", "region:us" ]
null
2023-08-09T05:27:17Z
--- license: mit --- ggml files of [thenlper/gte-small](https://huggingface.co/thenlper/gte-small) You can use this ggml for https://github.com/skeskinen/bert.cpp ### gte-small | Data Type | STSBenchmark | eval time | EmotionClassification | eval time | |-----------|-----------|------------|-----------|------------| | f32 | 0.8554 | 12.40 | 0.4808 | 26.39 | | f16 | 0.8555 | 11.29 | 0.4808 | 18.48 | | q4_0 | 0.8537 | 9.22 | 0.4860 | 43.92 | | q4_1 | 0.8543 | 10.01 | 0.4832 | 38.33 | ### all-MiniLM-L12-v2 | Data Type | STSBenchmark | eval time | EmotionClassification | eval time | |-----------|-----------|------------|-----------|------------| | f32 | 0.8306 | 13.36 | 0.4117 | 21.23 | | f16 | 0.8306 | 11.51 | 0.4119 | 20.08 | | q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 | | q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 | ### all-MiniLM-L6-v2 | Data Type | STSBenchmark | eval time | EmotionClassification | eval time | |-----------|-----------|------------|-----------|------------| | f32 | 0.8201 | 6.83 | 0.4082 | 11.34 | | f16 | 0.8201 | 6.17 | 0.4085 | 10.28 | | q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 | | q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 | ### bert-base-uncased | Data Type | STSBenchmark | eval time | EmotionClassification | eval time | |-----------|-----------|------------|-----------|------------| | f32 | 0.4738 | 52.38 | 0.3361 | 88.56 | | f16 | 0.4739 | 33.24 | 0.3361 | 55.86 | | q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 | | q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
luistakahashi/my-awesome-setfit-2
luistakahashi
2023-08-09T06:30:11Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-09T06:29:58Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # luistakahashi/my-awesome-setfit-2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
joycejiang/llama2-13B-qlora-codex-1k-rs0
joycejiang
2023-08-09T06:28:54Z
4
0
peft
[ "peft", "region:us" ]
null
2023-08-09T06:28:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
luistakahashi/my-awesome-setfit-pear-2
luistakahashi
2023-08-09T06:26:11Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-09T06:15:17Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # luistakahashi/my-awesome-setfit-pear-2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-pear-2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
SaurabhArora/vehicle_defects
SaurabhArora
2023-08-09T06:16:28Z
219
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-08-09T05:54:37Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: vehicle_defects results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.7857142686843872 --- # vehicle_defects Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### broken headlight ![broken headlight](images/broken_headlight.jpg) #### damaged windscreen ![damaged windscreen](images/damaged_windscreen.jpg) #### deflated tire ![deflated tire](images/deflated_tire.jpg) #### vehicle oil leak ![vehicle oil leak](images/vehicle_oil_leak.jpg) #### worn out tire ![worn out tire](images/worn_out_tire.jpg)
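The HuggingPics card lists example classes but no inference code. A minimal sketch using the standard image-classification pipeline (the image path is a placeholder):

```python
# Sketch: classify a vehicle photo with the fine-tuned ViT model.
from transformers import pipeline

classifier = pipeline("image-classification", model="SaurabhArora/vehicle_defects")
print(classifier("car_photo.jpg"))  # placeholder image path
```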
Hekenye/lora-trained-xl-with-prior-loss
Hekenye
2023-08-09T06:12:35Z
4
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-09T04:29:59Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A flower in melting golden 3d rendering style tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - Hekenye/lora-trained-xl-with-prior-loss These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "A flower in melting golden 3d rendering style" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
stanfordnlp/stanza-swl
stanfordnlp
2023-08-09T06:11:18Z
2
0
stanza
[ "stanza", "token-classification", "swl", "license:apache-2.0", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - stanza - token-classification library_name: stanza language: swl license: apache-2.0 --- # Stanza model for Swedish_Sign_Language (swl) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo Last updated 2023-08-09 06:11:15.351
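The Stanza card omits usage code. A hedged sketch of loading the model through the `stanza` API; whether `swl` resolves through the default resources file depends on the installed `stanza` version, and the input is a placeholder gloss string:

```python
# Sketch: download and run the Swedish Sign Language (swl) pipeline.
import stanza

stanza.download("swl")  # may require a stanza version that ships swl resources
nlp = stanza.Pipeline("swl")
doc = nlp("PRO1 IDAG ARBETA")  # placeholder glossed sign-language tokens
print(doc)
```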
luistakahashi/my-awesome-setfit-pear-4
luistakahashi
2023-08-09T06:08:59Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-09T05:57:25Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # luistakahashi/my-awesome-setfit-pear-4 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-pear-4") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Shadman-Rohan/llama2-qlora-finetunined-french
Shadman-Rohan
2023-08-09T06:08:56Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T06:08:37Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
CyberHarem/mobius_honkai3
CyberHarem
2023-08-09T06:06:44Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/mobius_honkai3", "license:mit", "region:us" ]
text-to-image
2023-08-09T06:02:52Z
--- license: mit datasets: - CyberHarem/mobius_honkai3 pipeline_tag: text-to-image tags: - art --- # Lora of mobius_honkai3 This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/mobius_honkai3.pt` as the embedding and `1500/mobius_honkai3.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `mobius_honkai3`.** These are available steps: | Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download | |--------:|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------| | 1500 | ![pattern_1-1500](1500/previews/pattern_1.png) | ![pattern_2-1500](1500/previews/pattern_2.png) | ![pattern_3-1500](1500/previews/pattern_3.png) | ![bikini-1500](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/mobius_honkai3.zip) | | 1400 | ![pattern_1-1400](1400/previews/pattern_1.png) | ![pattern_2-1400](1400/previews/pattern_2.png) | ![pattern_3-1400](1400/previews/pattern_3.png) | ![bikini-1400](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/mobius_honkai3.zip) | | 1300 | ![pattern_1-1300](1300/previews/pattern_1.png) | ![pattern_2-1300](1300/previews/pattern_2.png) | ![pattern_3-1300](1300/previews/pattern_3.png) | ![bikini-1300](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/mobius_honkai3.zip) | | 1200 | ![pattern_1-1200](1200/previews/pattern_1.png) | ![pattern_2-1200](1200/previews/pattern_2.png) | ![pattern_3-1200](1200/previews/pattern_3.png) | ![bikini-1200](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/mobius_honkai3.zip) | | 1100 | ![pattern_1-1100](1100/previews/pattern_1.png) | ![pattern_2-1100](1100/previews/pattern_2.png) | ![pattern_3-1100](1100/previews/pattern_3.png) | ![bikini-1100](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/mobius_honkai3.zip) | | 1000 | ![pattern_1-1000](1000/previews/pattern_1.png) | ![pattern_2-1000](1000/previews/pattern_2.png) | ![pattern_3-1000](1000/previews/pattern_3.png) | ![bikini-1000](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/mobius_honkai3.zip) | | 900 | ![pattern_1-900](900/previews/pattern_1.png) | ![pattern_2-900](900/previews/pattern_2.png) | ![pattern_3-900](900/previews/pattern_3.png) | ![bikini-900](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/mobius_honkai3.zip) | | 800 | ![pattern_1-800](800/previews/pattern_1.png) | ![pattern_2-800](800/previews/pattern_2.png) | ![pattern_3-800](800/previews/pattern_3.png) | ![bikini-800](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/mobius_honkai3.zip) | | 700 | ![pattern_1-700](700/previews/pattern_1.png) | ![pattern_2-700](700/previews/pattern_2.png) | ![pattern_3-700](700/previews/pattern_3.png) | ![bikini-700](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/mobius_honkai3.zip) | | 600 | ![pattern_1-600](600/previews/pattern_1.png) | ![pattern_2-600](600/previews/pattern_2.png) | ![pattern_3-600](600/previews/pattern_3.png) | ![bikini-600](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/mobius_honkai3.zip) | | 500 | ![pattern_1-500](500/previews/pattern_1.png) | ![pattern_2-500](500/previews/pattern_2.png) | ![pattern_3-500](500/previews/pattern_3.png) | ![bikini-500](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/mobius_honkai3.zip) | | 400 | ![pattern_1-400](400/previews/pattern_1.png) | ![pattern_2-400](400/previews/pattern_2.png) | ![pattern_3-400](400/previews/pattern_3.png) | ![bikini-400](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/mobius_honkai3.zip) | | 300 | ![pattern_1-300](300/previews/pattern_1.png) | ![pattern_2-300](300/previews/pattern_2.png) | ![pattern_3-300](300/previews/pattern_3.png) | ![bikini-300](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/mobius_honkai3.zip) | | 200 | ![pattern_1-200](200/previews/pattern_1.png) | ![pattern_2-200](200/previews/pattern_2.png) | ![pattern_3-200](200/previews/pattern_3.png) | ![bikini-200](200/previews/bikini.png) | ![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/mobius_honkai3.zip) | | 100 | ![pattern_1-100](100/previews/pattern_1.png) | ![pattern_2-100](100/previews/pattern_2.png) | ![pattern_3-100](100/previews/pattern_3.png) | ![bikini-100](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/mobius_honkai3.zip) |
divyeshrajpura/speecht5-finetuned-voxpopuli-sl
divyeshrajpura
2023-08-09T05:46:03Z
83
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "sl", "dataset:facebook/voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-08-09T04:29:07Z
--- language: - sl license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer - text-to-speech datasets: - facebook/voxpopuli model-index: - name: speecht5-finetuned-voxpopuli-sl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5-finetuned-voxpopuli-sl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 125 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6473 | 3.39 | 100 | 0.5703 | | 0.5709 | 6.78 | 200 | 0.4998 | | 0.5339 | 10.17 | 300 | 0.4802 | | 0.5158 | 13.56 | 400 | 0.4733 | | 0.5275 | 16.95 | 500 | 0.4691 | | 0.4983 | 20.34 | 600 | 0.4671 | | 0.499 | 23.73 | 700 | 0.4638 | | 0.5003 | 27.12 | 800 | 0.4610 | | 0.496 | 30.51 | 900 | 0.4610 | | 0.4935 | 33.9 | 1000 | 0.4598 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
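The card leaves usage blank; below is a minimal inference sketch, not part of the original card. It assumes the processor was pushed with the checkpoint (otherwise load it from `microsoft/speecht5_tts`), borrows a CMU ARCTIC x-vector as the speaker embedding, and uses a placeholder Slovenian sentence:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("divyeshrajpura/speecht5-finetuned-voxpopuli-sl")
model = SpeechT5ForTextToSpeech.from_pretrained("divyeshrajpura/speecht5-finetuned-voxpopuli-sl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions on a speaker x-vector; CMU ARCTIC x-vectors are a common stand-in
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Dober dan, dobrodošli.", return_tensors="pt")  # placeholder text
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```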
weav-geng/llama2-qlora-finetuned-midjourney-new-v5
weav-geng
2023-08-09T05:45:31Z
3
0
peft
[ "peft", "region:us" ]
null
2023-08-01T04:50:20Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
revivoskintagremoval/revivoskintagremoval
revivoskintagremoval
2023-08-09T05:34:32Z
0
0
diffusers
[ "diffusers", "Revivo Skin Tag Remover", "en", "license:bsd", "region:us" ]
null
2023-08-09T05:33:51Z
--- license: bsd language: - en library_name: diffusers tags: - Revivo Skin Tag Remover --- [Revivo Skin Tag Remover](https://atozsupplement.com/revivo-skin-tag-remover/) Clinical professionals can remove skin tags by cutting them off with sterile scissors or a surgical blade. Before attempting this method, it is important to consult a healthcare professional to ensure safe and sanitary conditions. VISIT HERE FOR OFFICIAL WEBSITE:- https://atozsupplement.com/revivo-skin-tag-remover/
polejowska/detr-r50-cd45rb-8ah-12l-corrected
polejowska
2023-08-09T05:33:13Z
161
0
transformers
[ "transformers", "pytorch", "detr", "object-detection", "generated_from_trainer", "dataset:cd45rb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2023-08-08T17:18:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cd45rb model-index: - name: detr-r50-cd45rb-8ah-12l-corrected results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-r50-cd45rb-8ah-12l-corrected This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset. It achieves the following results on the evaluation set: - Loss: 1.7298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.1161 | 1.0 | 4606 | 2.2386 | | 2.7777 | 2.0 | 9212 | 2.0665 | | 2.6042 | 3.0 | 13818 | 1.9954 | | 2.5082 | 4.0 | 18424 | 1.8991 | | 2.4529 | 5.0 | 23030 | 1.9228 | | 2.3944 | 6.0 | 27636 | 1.8829 | | 2.3405 | 7.0 | 32242 | 1.8134 | | 2.3082 | 8.0 | 36848 | 1.7851 | | 2.2684 | 9.0 | 41454 | 1.7471 | | 2.2422 | 10.0 | 46060 | 1.7298 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
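Since the card gives no usage snippet, here is a hedged inference sketch; the input image path and the 0.5 confidence threshold are placeholder assumptions:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("polejowska/detr-r50-cd45rb-8ah-12l-corrected")
model = DetrForObjectDetection.from_pretrained("polejowska/detr-r50-cd45rb-8ah-12l-corrected")

image = Image.open("cell_image.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections above the threshold, rescaled to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```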
nathanhunt/test_fast_whisper_hi
nathanhunt
2023-08-09T05:00:07Z
4
0
peft
[ "peft", "region:us" ]
null
2023-08-09T05:00:03Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
openerotica/Qwen-7B-Chat-GPTQ
openerotica
2023-08-09T04:41:52Z
13
4
transformers
[ "transformers", "pytorch", "qwen", "text-generation", "custom_code", "zh", "en", "arxiv:2305.08322", "arxiv:2009.03300", "arxiv:2305.05280", "arxiv:2210.03629", "autotrain_compatible", "region:us" ]
text-generation
2023-08-09T04:06:36Z
--- language: - zh - en tags: - qwen pipeline_tag: text-generation inference: false --- # Qwen-7B-Chat <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo.jpg" width="400"/> </p> <br> <p align="center"> Qwen-7B <a href="https://modelscope.cn/models/qwen/Qwen-7B/summary">🤖 </a> | <a href="https://huggingface.co/Qwen/Qwen-7B">🤗</a>&nbsp | Qwen-7B-Chat <a href="https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary">🤖 </a>| <a href="https://huggingface.co/Qwen/Qwen-7B-Chat">🤗</a>&nbsp | &nbsp<a href="https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary">Demo</a>&nbsp | &nbsp<a href="https://github.com/QwenLM/Qwen-7B/blob/main/tech_memo.md">Report</a> </p> <br> ## 介绍(Introduction) **通义千问-7B(Qwen-7B)**是阿里云研发的通义千问大模型系列的70亿参数规模的模型。Qwen-7B是基于Transformer的大语言模型, 在超大规模的预训练数据上进行训练得到。预训练数据类型多样,覆盖广泛,包括大量网络文本、专业书籍、代码等。同时,在Qwen-7B的基础上,我们使用对齐机制打造了基于大语言模型的AI助手Qwen-7B-Chat。本仓库为Qwen-7B-Chat的仓库。 如果您想了解更多关于通义千问-7B开源模型的细节,我们建议您参阅[Github代码库](https://github.com/QwenLM/Qwen-7B)。 **Qwen-7B** is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. This repository is the one for Qwen-7B-Chat. For more details about the open-source model of Qwen-7B, please refer to the [Github](https://github.com/QwenLM/Qwen-7B) code repository. ## 要求(Requirements) * python 3.8及以上版本 * pytorch 1.12及以上版本,推荐2.0及以上版本 * 建议使用CUDA 11.4及以上(GPU用户、flash-attention用户等需考虑此选项) * python 3.8 and above * pytorch 1.12 and above, 2.0 and above are recommended * CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.) ## 依赖项(Dependency) 运行Qwen-7B-Chat,请确保满足上述要求,再执行以下pip命令安装依赖库 To run Qwen-7B-Chat, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries. ```bash pip install transformers==4.31.0 accelerate tiktoken einops ``` 另外,推荐安装`flash-attention`库,以实现更高的效率和更低的显存占用。 In addition, it is recommended to install the `flash-attention` library for higher efficiency and lower memory usage. ```bash git clone -b v1.0.8 https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # 下方安装可选,安装可能比较缓慢。 # Below are optional. Installing them might be slow. pip install csrc/layer_norm pip install csrc/rotary ``` ## 快速使用(Quickstart) 下面我们展示了一个使用Qwen-7B-Chat模型,进行多轮对话交互的样例: We show an example of multi-turn interaction with Qwen-7B-Chat in the following code: ```python from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation import GenerationConfig # Note: The default behavior now has injection attack prevention off. tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True) # use bf16 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval() # use fp16 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval() # use cpu only # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="cpu", trust_remote_code=True).eval() # use auto mode, automatically select precision based on the device. 
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True).eval() # Specify hyperparameters for generation model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参 # 第一轮对话 1st dialogue turn response, history = model.chat(tokenizer, "你好", history=None) print(response) # 你好!很高兴为你提供帮助。 # 第二轮对话 2nd dialogue turn response, history = model.chat(tokenizer, "给我讲一个年轻人奋斗创业最终取得成功的故事。", history=history) print(response) # 这是一个关于一个年轻人奋斗创业最终取得成功的故事。 # 故事的主人公叫李明,他来自一个普通的家庭,父母都是普通的工人。从小,李明就立下了一个目标:要成为一名成功的企业家。 # 为了实现这个目标,李明勤奋学习,考上了大学。在大学期间,他积极参加各种创业比赛,获得了不少奖项。他还利用课余时间去实习,积累了宝贵的经验。 # 毕业后,李明决定开始自己的创业之路。他开始寻找投资机会,但多次都被拒绝了。然而,他并没有放弃。他继续努力,不断改进自己的创业计划,并寻找新的投资机会。 # 最终,李明成功地获得了一笔投资,开始了自己的创业之路。他成立了一家科技公司,专注于开发新型软件。在他的领导下,公司迅速发展起来,成为了一家成功的科技企业。 # 李明的成功并不是偶然的。他勤奋、坚韧、勇于冒险,不断学习和改进自己。他的成功也证明了,只要努力奋斗,任何人都有可能取得成功。 # 第三轮对话 3rd dialogue turn response, history = model.chat(tokenizer, "给这个故事起一个标题", history=history) print(response) # 《奋斗创业:一个年轻人的成功之路》 ``` 关于更多的使用说明,请参考我们的[Github repo](https://github.com/QwenLM/Qwen-7B)获取更多信息。 For more information on usage, please refer to our [Github repo](https://github.com/QwenLM/Qwen-7B). ## Tokenizer > 注:作为术语的“tokenization”在中文中尚无共识的概念对应,本文档采用英文表达以利说明。 基于tiktoken的分词器有别于其他分词器,比如sentencepiece分词器。尤其在微调阶段,需要特别注意特殊token的使用。关于tokenizer的更多信息,以及微调时涉及的相关使用,请参阅[文档](https://github.com/QwenLM/Qwen-7B/blob/main/tokenization_note_zh.md)。 Our tokenizer based on tiktoken is different from other tokenizers, e.g., sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and related use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen-7B/blob/main/tokenization_note.md). ## 模型细节(Model) 与Qwen-7B预训练模型相同,Qwen-7B-Chat模型规模基本情况如下所示 The details of the model architecture of Qwen-7B-Chat are listed as follows | Hyperparameter | Value | |:------|:------| | n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 151851 | | sequence length | 2048 | 在位置编码、FFN激活函数和normalization的实现方式上,我们也采用了目前最流行的做法, 即RoPE相对位置编码、SwiGLU激活函数、RMSNorm(可选安装flash-attention加速)。 在分词器方面,相比目前主流开源模型以中英词表为主,Qwen-7B-Chat使用了约15万token大小的词表。 该词表在GPT-4使用的BPE词表`cl100k_base`基础上,对中文、多语言进行了优化,在对中、英、代码数据的高效编解码的基础上,对部分多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强。 词表对数字按单个数字位切分。调用较为高效的[tiktoken分词库](https://github.com/openai/tiktoken)进行分词。 For position encoding, FFN activation function, and normalization calculation methods, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU for activation function, and RMSNorm for normalization (optional installation of flash-attention for acceleration). For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-7B-Chat uses a vocabulary of over 150K tokens. It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to multiple languages, enabling users to directly enhance the capability of some languages without expanding the vocabulary. It segments numbers by single digit, and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization. 
## 评测效果(Evaluation) 对于Qwen-7B-Chat模型,我们同样评测了常规的中文理解(C-Eval)、英文理解(MMLU)、代码(HumanEval)和数学(GSM8K)等权威任务,同时包含了长序列任务的评测结果。由于Qwen-7B-Chat模型经过对齐后,激发了较强的外部系统调用能力,我们还进行了工具使用能力方面的评测。 提示:由于硬件和框架造成的舍入误差,复现结果如有波动属于正常现象。 For Qwen-7B-Chat, we also evaluate the model on C-Eval, MMLU, HumanEval, GSM8K, etc., as well as the benchmark evaluation for long-context understanding, and tool usage. Note: Due to rounding errors caused by hardware and framework, differences in reproduced results are possible. ### 中文评测(Chinese Evaluation) #### C-Eval 在[C-Eval](https://arxiv.org/abs/2305.08322)验证集上,我们评价了Qwen-7B-Chat模型的zero-shot准确率 We demonstrate the zero-shot accuracy of Qwen-7B-Chat on C-Eval validation set | Model | Avg. Acc. | |:--------------|:------:| | LLaMA2-7B-Chat | 31.9 | | LLaMA2-13B-Chat | 40.6 | | Chinese-Alpaca-2-7B | 41.3 | | Chinese-Alpaca-Plus-13B | 43.3 | | Baichuan-13B-Chat | 50.4 | | ChatGLM2-6B-Chat | 50.7 | | InternLM-7B-Chat | 53.2 | | **Qwen-7B-Chat** | **54.2** | C-Eval测试集上,Qwen-7B-Chat模型的zero-shot准确率结果如下: The zero-shot accuracy of Qwen-7B-Chat on C-Eval testing set is provided below: | Model | Avg. | STEM | Social Sciences | Humanities | Others | |:--------------|:------:|:------:|:------:|:------:|:------:| | Chinese-Alpaca-Plus-13B | 41.5 | 36.6 | 49.7 | 43.1 | 41.2 | | Chinese-Alpaca-2-7B | 40.3 | - | - | - | - | | ChatGLM2-6B-Chat | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 | | Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 | | **Qwen-7B-Chat** | **54.6** | 47.8 | 67.6 | 59.3 | 50.6 | 在7B规模模型上,经过人类指令对齐的Qwen-7B-Chat模型,准确率在同类相近规模模型中仍然处于前列。 Compared with other pretrained models with comparable model size, the human-aligned Qwen-7B-Chat performs well in C-Eval accuracy. ### 英文评测(English Evaluation) #### MMLU [MMLU](https://arxiv.org/abs/2009.03300)评测集上,Qwen-7B-Chat模型的zero-shot准确率如下,效果在同类对齐模型中同样表现较优。 The zero-shot accuracy of Qwen-7B-Chat on MMLU is provided below. The performance of Qwen-7B-Chat is still among the top human-aligned models of comparable size. | Model | Avg. Acc. | |:--------------|:------:| | ChatGLM2-6B-Chat | 45.5 | | LLaMA2-7B-Chat | 47.0 | | InternLM-7B-Chat | 50.8 | | Baichuan-13B-Chat | 52.1 | | ChatGLM2-12B-Chat | 52.1 | | **Qwen-7B-Chat** | **53.9** | ### 代码评测(Coding Evaluation) Qwen-7B-Chat在[HumanEval](https://github.com/openai/human-eval)的zero-shot Pass@1效果如下 The zero-shot Pass@1 of Qwen-7B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below | Model | Pass@1 | |:--------------|:------:| | LLaMA2-7B-Chat | 12.2 | | InternLM-7B-Chat | 14.0 | | Baichuan-13B-Chat | 16.5 | | LLaMA2-13B-Chat | 18.9 | | **Qwen-7B-Chat** | **24.4** | ### 数学评测(Mathematics Evaluation) 在评测数学能力的[GSM8K](https://github.com/openai/grade-school-math)上,Qwen-7B-Chat的准确率结果如下 The accuracy of Qwen-7B-Chat on GSM8K is shown below | Model | Zero-shot Acc. | 4-shot Acc. | |:--------------|:------:|:------:| | ChatGLM2-6B-Chat | - | 28.0 | | LLaMA2-7B-Chat | 20.4 | 28.2 | | LLaMA2-13B-Chat | 29.4 | 36.7 | | InternLM-7B-Chat | 32.6 | 34.5 | | Baichuan-13B-Chat | - | 36.3 | | ChatGLM2-12B-Chat | - | 38.1 | | **Qwen-7B-Chat** | **41.1** | **43.5** | ### 长序列评测(Long-Context Understanding) 通过NTK插值,LogN注意力缩放可以扩展Qwen-7B-Chat的上下文长度。在长文本摘要数据集[VCSUM](https://arxiv.org/abs/2305.05280)上(文本平均长度在15K左右),Qwen-7B-Chat的Rouge-L结果如下: **(若要启用这些技巧,请将config.json里的`use_dynamic_ntk`和`use_logn_attn`设置为true)** We introduce NTK-aware interpolation and LogN attention scaling to extend the context length of Qwen-7B-Chat. 
The Rouge-L results of Qwen-7B-Chat on long-text summarization dataset [VCSUM](https://arxiv.org/abs/2305.05280) (The average length of this dataset is around 15K) are shown below: **(To use these tricks, please set `use_dynamic_ntk` and `use_logn_attn` to true in config.json.)** | Model | VCSUM (zh) | |:----------------|:-------:| | GPT-3.5-Turbo-16k | 16.0 | | LLaMA2-7B-Chat | 0.2 | | InternLM-7B-Chat | 13.0 | | ChatGLM2-6B-Chat | 16.3 | | **Qwen-7B-Chat** | **16.6** | ### 工具使用能力的评测(Tool Usage) #### ReAct Prompting 千问支持通过 [ReAct Prompting](https://arxiv.org/abs/2210.03629) 调用插件/工具/API。ReAct 也是 [LangChain](https://python.langchain.com/) 框架采用的主要方式之一。在我们开源的、用于评估工具使用能力的评测基准上,千问的表现如下: Qwen-7B-Chat supports calling plugins/tools/APIs through [ReAct Prompting](https://arxiv.org/abs/2210.03629). ReAct is also one of the main approaches used by the [LangChain](https://python.langchain.com/) framework. In our evaluation benchmark for assessing tool usage capabilities, Qwen-7B-Chat's performance is as follows: | Model | Tool Selection (Acc.↑) | Tool Input (Rouge-L↑) | False Positive Error↓ | |:-----------------|:----------------------:|:---------------------:|:---------------------:| | GPT-4 | 95% | **0.90** | 15% | | GPT-3.5 | 85% | 0.88 | 75% | | **Qwen-7B-Chat** | **99%** | 0.89 | **9.7%** | > 评测基准中出现的插件均没有出现在千问的训练集中。该基准评估了模型在多个候选插件中选择正确插件的准确率、传入插件的参数的合理性、以及假阳率。假阳率(False Positive)定义:在处理不该调用插件的请求时,错误地调用了插件。 > The plugins that appear in the evaluation set do not appear in the training set of Qwen-7B-Chat. This benchmark evaluates the accuracy of the model in selecting the correct plugin from multiple candidate plugins, the rationality of the parameters passed into the plugin, and the false positive rate. False Positive: Incorrectly invoking a plugin when it should not have been called when responding to a query. 关于 ReAct Prompting 的 prompt 怎么写、怎么使用,请参考 [ReAct 样例说明](examples/react_prompt.md)。使用工具能使模型更好地完成任务。基于千问的工具使用能力,我们能实现下图所展示的效果: For how to write and use prompts for ReAct Prompting, please refer to [the ReAct examples](examples/react_prompt.md). The use of tools can enable the model to better perform tasks, as shown in the following figures: ![](assets/react_showcase_001.png) ![](assets/react_showcase_002.png) #### Huggingface Agent 千问还具备作为 [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents) 的能力。它在 Huggingface 提供的run模式评测基准上的表现如下: Qwen-7B-Chat also has the capability to be used as a [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents). Its performance on the run-mode benchmark provided by HuggingFace is as follows: | Model | Tool Selection↑ | Tool Used↑ | Code↑ | |:-|:-:|:-:|:-:| |GPT-4 | **100** | **100** | **97.41** | |GPT-3.5 | 95.37 | 96.30 | 87.04 | |StarCoder-15.5B | 87.04 | 87.96 | 68.89 | | **Qwen-7B** | 90.74 | 92.59 | 74.07 | ## 量化(Quantization) 如希望使用更低精度的量化模型,如4比特和8比特的模型,我们提供了简单的示例来说明如何快速使用量化模型。在开始前,确保你已经安装了`bitsandbytes`。请注意,`bitsandbytes`的安装要求是: We provide examples to show how to load models in `NF4` and `Int8`. For starters, make sure you have installed `bitsandbytes`. Note that the requirements for `bitsandbytes` are: ``` **Requirements** Python >=3.8. Linux distribution (Ubuntu, MacOS, etc.) + CUDA > 10.0. ``` Windows用户需安装特定版本的`bitsandbytes`,可选项包括[bitsandbytes-windows-webui](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels)。 Windows users should find another option, which might be [bitsandbytes-windows-webui](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels). 
你只需要在`AutoModelForCausalLM.from_pretrained`中添加你的量化配置,即可使用量化模型。如下所示: Then you only need to add your quantization configuration to `AutoModelForCausalLM.from_pretrained`. See the example below: ```python import torch from transformers import AutoModelForCausalLM, BitsAndBytesConfig # quantization configuration for NF4 (4 bits) quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type='nf4', bnb_4bit_compute_dtype=torch.bfloat16 ) # quantization configuration for Int8 (8 bits); keep only one of the two configs quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen-7B-Chat", device_map="cuda:0", quantization_config=quantization_config, trust_remote_code=True, ).eval() ``` 上述方法可以让我们将模型量化成`NF4`和`Int8`精度的模型进行读取,帮助我们节省显存开销。我们也提供了相关性能数据。我们发现尽管模型在效果上存在损失,但模型的显存开销大幅降低。 With this method, you can load Qwen-7B-Chat in `NF4` and `Int8`, which saves memory usage. We provide related statistics of model performance below. We find that the quantization downgrades the effectiveness slightly but significantly increases inference efficiency and reduces memory costs. | Precision | MMLU | Memory | | :---------| :-------: | :-----: | | BF16 | 56.7 | 16.2G | | Int8 | 52.8 | 10.1G | | NF4 | 48.9 | 7.4G | ## 使用协议(License Agreement) 我们的代码和模型权重对学术研究完全开放,并支持商用。请查看LICENSE了解具体的开源协议细节。 Our code and checkpoints are open for research purposes, and they are also allowed for commercial use. Check [LICENSE](LICENSE) for more details about the license. ## 联系我们(Contact Us) 如果你想给我们的研发团队和产品团队留言,请通过邮件(qianwen_opensource@alibabacloud.com)联系我们。 If you would like to leave a message for our research or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.
cuixing/textual_inversion_object_style
cuixing
2023-08-09T04:14:01Z
21
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-09T03:48:05Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - cuixing/textual_inversion_object_style These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
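A minimal loading sketch, not part of the original card: `load_textual_inversion` registers the learned embedding with the base pipeline. The `<object-style>` placeholder token is an assumption — check the repo's `learned_embeds.bin` for the actual token string:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# pull the learned embedding from this repo; the token below is hypothetical
pipe.load_textual_inversion("cuixing/textual_inversion_object_style")

image = pipe("a photo in <object-style> style", num_inference_steps=50).images[0]
image.save("example.png")
```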
carolinacalce/MiModeloCatsDogs
carolinacalce
2023-08-09T04:12:08Z
220
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-08-08T02:06:33Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer model-index: - name: MiModeloCatsDogs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiModeloCatsDogs This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
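A minimal inference sketch (not from the original card); the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="carolinacalce/MiModeloCatsDogs")
print(classifier("my_pet.jpg"))  # hypothetical image; returns label/score pairs
```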
reginaboateng/Compacter_SciBert_adapter_ner_pico_for_classification_task
reginaboateng
2023-08-09T04:02:28Z
0
0
adapter-transformers
[ "adapter-transformers", "adapterhub:pico_ner", "bert", "dataset:reginaboateng/cleaned_ebmnlp_pico", "region:us" ]
null
2023-08-09T04:02:26Z
--- tags: - adapterhub:pico_ner - adapter-transformers - bert datasets: - reginaboateng/cleaned_ebmnlp_pico --- # Adapter `reginaboateng/Compacter_SciBert_adapter_ner_pico_for_classification_task` for allenai/scibert_scivocab_uncased An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased") adapter_name = model.load_adapter("reginaboateng/Compacter_SciBert_adapter_ner_pico_for_classification_task", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
reginaboateng/Compacter_PubmedBert_adapter_ner_pico_for_classification_task
reginaboateng
2023-08-09T04:02:05Z
1
0
adapter-transformers
[ "adapter-transformers", "bert", "adapterhub:pico_ner", "dataset:reginaboateng/cleaned_ebmnlp_pico", "region:us" ]
null
2023-08-09T04:02:02Z
--- tags: - bert - adapter-transformers - adapterhub:pico_ner datasets: - reginaboateng/cleaned_ebmnlp_pico --- # Adapter `reginaboateng/Compacter_PubmedBert_adapter_ner_pico_for_classification_task` for microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext An [adapter](https://adapterhub.ml) for the `microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext") adapter_name = model.load_adapter("reginaboateng/Compacter_PubmedBert_adapter_ner_pico_for_classification_task", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
kimnt93/en-seed-task-cls
kimnt93
2023-08-09T04:01:07Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-09T01:28:34Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # kimnt93/en_seed_task_cls This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("kimnt93/en_seed_task_cls") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
houdi/my_awesome_model_classification_w_adapter
houdi
2023-08-09T04:00:20Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-08-09T03:41:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_model_classification_w_adapter results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model_classification_w_adapter This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
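Since the card lists the pipeline as question answering on SQuAD, a minimal extractive-QA sketch (not from the original card; question and context are placeholders):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="houdi/my_awesome_model_classification_w_adapter")
result = qa(
    question="What dataset was the model trained on?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```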
Hekenye/lora-trained-xl
Hekenye
2023-08-09T03:53:59Z
3
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-09T02:52:33Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A flower in melting golden 3d rendering style tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - Hekenye/lora-trained-xl These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on A flower in melting golden 3d rendering style using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
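A minimal loading sketch, not part of the original card, reusing the instance prompt from training; sampler settings are assumptions:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Hekenye/lora-trained-xl")

# the instance prompt from training is a reasonable starting point
image = pipe("A flower in melting golden 3d rendering style", num_inference_steps=30).images[0]
image.save("flower.png")
```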
salohnana2018/OTE-NoDapt-ABSA-bert-base-qarib-OrginalHP-FineTune
salohnana2018
2023-08-09T03:45:03Z
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "base_model:ahmedabdelali/bert-base-qarib", "base_model:finetune:ahmedabdelali/bert-base-qarib", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-09T03:39:33Z
--- base_model: qarib/bert-base-qarib tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: OTE-NoDapt-ABSA-bert-base-qarib-OrginalHP-FineTune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OTE-NoDapt-ABSA-bert-base-qarib-OrginalHP-FineTune This model is a fine-tuned version of [qarib/bert-base-qarib](https://huggingface.co/qarib/bert-base-qarib) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1348 - Precision: 0.7488 - Recall: 0.7723 - F1: 0.7604 - Accuracy: 0.9532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 25 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1656 | 1.0 | 61 | 0.1196 | 0.7299 | 0.7932 | 0.7603 | 0.9528 | | 0.08 | 2.0 | 122 | 0.1176 | 0.7561 | 0.7678 | 0.7619 | 0.9543 | | 0.0501 | 3.0 | 183 | 0.1348 | 0.7488 | 0.7723 | 0.7604 | 0.9532 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
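A minimal inference sketch for this opinion-target-extraction tagger (not from the original card; the Arabic review sentence is a placeholder):

```python
from transformers import pipeline

ote = pipeline(
    "token-classification",
    model="salohnana2018/OTE-NoDapt-ABSA-bert-base-qarib-OrginalHP-FineTune",
    aggregation_strategy="simple",  # merge sub-word pieces into whole spans
)
print(ote("كان الطعام لذيذا لكن الخدمة بطيئة"))  # hypothetical review text
```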
nayanika/test_model
nayanika
2023-08-09T03:37:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-09T03:37:41Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
phucleh2/controlnet_mask2cloth
phucleh2
2023-08-09T03:09:01Z
5
13
diffusers
[ "diffusers", "controlnet", "stable-diffusion", "image-to-image", "mask-to-cloth", "en", "arxiv:2302.05543", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "region:us" ]
image-to-image
2023-08-04T09:07:10Z
--- base_model: runwayml/stable-diffusion-v1-5 tags: - controlnet - stable-diffusion - image-to-image - mask-to-cloth language: - en library_name: diffusers widget: - src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask1.jpg prompt: a red top with a lace - trimmed neckline - src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask2.jpg prompt: a long sleeved top - black - src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask3.jpg prompt: a blue blouse with a tie neck and a floral prints - src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask4.jpg prompt: a light pink blouse with rose patterns - src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask5.jpg prompt: t-shirt - black --- # Mask + Prompt → Cloth - This model is still in development to improve the realism of its outputs. Stay tuned 🤗 - This model is a fine-tuned version of ControlNet, tailored to utilize a black-and-white outline of an upper garment (mask) and a descriptive prompt to generate a garment that aligns with the mask's outline. - It's important to note that this model exclusively operates with upper garments. Please refrain from inputting masks for pants or jeans, as this could yield unexpected outcomes. - Input mask size: 384 x 512 - Each time the model is executed, **even with the same mask and prompt**, it will generate a distinct output. - This model is a fine-tuned version of [ControlNet](https://arxiv.org/abs/2302.05543) trained on the [VITON-HD](https://github.com/shadow2496/VITON-HD) dataset
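A minimal inference sketch, not part of the original card, pairing the checkpoint with its stated base model; the mask file mirrors the widget examples, and the step count is an assumption:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("phucleh2/controlnet_mask2cloth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# black-and-white outline of an upper garment, at the card's stated 384 x 512 input size
mask = Image.open("mask1.jpg").resize((384, 512))
image = pipe("t-shirt - black", image=mask, num_inference_steps=30).images[0]
image.save("cloth.png")
```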
cabranch/distilgpt2-finetuned-wikitext2
cabranch
2023-08-09T02:41:17Z
61
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-08-08T14:37:24Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_keras_callback model-index: - name: cabranch/distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # cabranch/distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8577 - Validation Loss: 3.6756 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.8577 | 3.6756 | 0 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.13.0 - Datasets 2.14.3 - Tokenizers 0.13.3
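Since the checkpoint ships TensorFlow weights, a minimal generation sketch (not from the original card; prompt and sampling settings are placeholders):

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cabranch/distilgpt2-finetuned-wikitext2")
model = TFAutoModelForCausalLM.from_pretrained("cabranch/distilgpt2-finetuned-wikitext2")

inputs = tokenizer("The history of natural language processing", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```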
Evan-Lin/Bart-abs-yelp-entailment-50
Evan-Lin
2023-08-09T02:35:26Z
51
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-08-08T16:42:14Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin/Bart-abs-yelp-entailment-50") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-abs-yelp-entailment-50") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-abs-yelp-entailment-50") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
asenella/incomplete_mhd_MVTCAE_beta_5_scale_False_seed_0
asenella
2023-08-09T02:31:08Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-08-09T02:30:59Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
patonw/ppo-Pyramids
patonw
2023-08-09T02:23:53Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-08-09T01:20:49Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: patonw/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
saurabh2086/Reinforce-CartPole-8
saurabh2086
2023-08-09T02:12:21Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-09T02:12:11Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-8 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ittailup/distilgender-es-2M
ittailup
2023-08-09T01:58:07Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "es", "dataset:ittailup/issste", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-09T01:36:11Z
--- license: apache-2.0 datasets: - ittailup/issste language: - es metrics: - accuracy: 0.9951 widget: - text: AGATA - text: GABRIEL --- ## Model Card ### Overview This model card provides details about a trained model, its training process, and evaluation metrics. This information ensures transparency and assists users in understanding the model's performance and behavior. ### Training Details - **Training Epochs**: The model was trained for 2 epochs. - **Training Steps**: The model underwent 1856 training steps. - **Training Runtime**: The model's training runtime was approximately 2680.184 seconds. - **Training Speed**: The model trained at a rate of 0.692 steps per second and processed approximately 1417.813 samples per second. - **Learning Rate**: The learning rate during training was approximately 0.0000095905. - **Training Loss**: The average training loss recorded was approximately 0.0184, with a specific loss value of 0.023423514232553285. ### Evaluation Details - **Evaluation Loss**: The model achieved an evaluation loss of 0.017659155651926994. - **Evaluation Runtime**: The evaluation process took approximately 23.8414 seconds. - **Evaluation Speed**: The model was evaluated at a rate of 2.055 steps per second, processing approximately 4194.378 samples per second. ### Performance Metrics - **Accuracy**: The model achieved an accuracy of 0.9951 during evaluation. - **Precision**: The precision of the model is approximately 0.9957234121187588. - **Recall**: The model's recall is approximately 0.9956533216014078. - **F1-Score**: The F1-Score for the model is approximately 0.995688365626595.
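A minimal inference sketch (not from the original card), reusing the widget examples from the card as inputs:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ittailup/distilgender-es-2M")
print(classifier(["AGATA", "GABRIEL"]))  # the widget examples from this card
```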
Aztects222002/Ehsush
Aztects222002
2023-08-09T01:54:45Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-08-09T01:54:45Z
--- license: bigscience-openrail-m ---
alvin-wen/distilbert-base-uncased-finetuned-wos
alvin-wen
2023-08-09T01:49:39Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:web_of_science", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-09T01:36:30Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - web_of_science model-index: - name: distilbert-base-uncased-finetuned-wos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-wos This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the web_of_science dataset. It achieves the following results on the evaluation set: - Loss: 2.2025 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5312 | 1.0 | 357 | 2.2975 | | 2.3847 | 2.0 | 714 | 2.2568 | | 2.3388 | 3.0 | 1071 | 2.2108 | | 2.3076 | 4.0 | 1428 | 2.2158 | | 2.2887 | 5.0 | 1785 | 2.2154 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
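A minimal fill-mask sketch (not from the original card; the sentence is a placeholder in the spirit of the Web of Science domain):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="alvin-wen/distilbert-base-uncased-finetuned-wos")
for pred in unmasker("Deep learning is widely used in [MASK] research."):
    print(pred["token_str"], round(pred["score"], 4))
```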
Anjusree/videomae-base-finetuned-ucf101-subset
Anjusree
2023-08-09T01:43:22Z
60
0
transformers
[ "transformers", "pytorch", "tensorboard", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-08-02T00:24:29Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 624 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
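A minimal inference sketch, not part of the original card: random frames stand in for a decoded clip, and the 16-frame, 224×224 input shape is assumed from the VideoMAE base configuration:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

processor = VideoMAEImageProcessor.from_pretrained("Anjusree/videomae-base-finetuned-ucf101-subset")
model = VideoMAEForVideoClassification.from_pretrained("Anjusree/videomae-base-finetuned-ucf101-subset")

# 16 random frames stand in for a real clip; replace with decoded video frames
video = list(np.random.randn(16, 3, 224, 224))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```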
henilp105/wav2vec2-large-xls-r-300m-telugu-asr
henilp105
2023-08-09T01:35:24Z
20
1
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-17T14:03:53Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-xls-r-300m-telugu-asr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-telugu-asr This model is a fine-tuned version of [henilp105/wav2vec2-large-xls-r-300m-telugu-asr](https://huggingface.co/henilp105/wav2vec2-large-xls-r-300m-telugu-asr) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1050 - Wer: 0.6656 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.0506 | 2.3 | 200 | 0.8841 | 0.7564 | | 0.6354 | 4.59 | 400 | 0.7448 | 0.6912 | | 0.3934 | 6.89 | 600 | 0.8321 | 0.6929 | | 0.2652 | 9.19 | 800 | 0.9529 | 0.6984 | | 0.2022 | 11.49 | 1000 | 0.9490 | 0.6979 | | 0.1514 | 13.79 | 1200 | 1.0025 | 0.6869 | | 0.124 | 16.09 | 1400 | 1.0367 | 0.6799 | | 0.1007 | 18.39 | 1600 | 1.0658 | 0.6734 | | 0.0875 | 20.69 | 1800 | 1.0758 | 0.6779 | | 0.0838 | 22.98 | 2000 | 1.0999 | 0.6701 | | 0.0745 | 25.29 | 2200 | 1.1020 | 0.6708 | | 0.0641 | 27.58 | 2400 | 1.1140 | 0.6683 | | 0.0607 | 29.88 | 2600 | 1.1050 | 0.6656 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.13.2
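A minimal CTC inference sketch (not from the original card); the audio file is a placeholder and must be resampled to 16 kHz:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("henilp105/wav2vec2-large-xls-r-300m-telugu-asr")
model = Wav2Vec2ForCTC.from_pretrained("henilp105/wav2vec2-large-xls-r-300m-telugu-asr")

speech, _ = librosa.load("telugu_sample.wav", sr=16000)  # hypothetical 16 kHz recording
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```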
roca2357/output
roca2357
2023-08-09T01:24:18Z
21
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-09T01:19:38Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - roca2357/output This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
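A minimal inference sketch, not part of the original card, reusing the training instance prompt; the appended scene text and sampler settings are illustrative assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("roca2357/output", torch_dtype=torch.float16).to("cuda")
# "a photo of sks dog" is the instance prompt; the setting is an illustrative variation
image = pipe("a photo of sks dog on the beach", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```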
Jenniferkmc/controlnet-model2
Jenniferkmc
2023-08-09T00:52:49Z
2
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-08T15:34:54Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet inference: true --- # controlnet-Jenniferkmc/controlnet-model2 These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with new type of conditioning. You can find some example images below. prompt: High-quality close-up dslr photo of man wearing a hat with trees in the background ![images_0)](./images_0.png) prompt: Girl smiling, professional dslr photograph, dark background, studio lights, high quality ![images_1)](./images_1.png) prompt: Portrait of a clown face, oil on canvas, bittersweet expression ![images_2)](./images_2.png)
asenella/MMVAEPlus_beta_5_scale_False_seed_0
asenella
2023-08-09T00:19:54Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-27T16:50:57Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
wellecks/llmstep-mathlib4-pythia2.8b
wellecks
2023-08-09T00:16:28Z
412
6
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "arxiv:2102.06203", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-08T22:42:44Z
--- license: mit --- ### llmstep: [L]LM proofstep suggestions in Lean https://github.com/wellecks/llmstep This model is a Pythia-2.8b-deduped language model fine-tuned on [LeanDojo Benchmark 4](https://zenodo.org/record/8040110). The model is fine-tuned on sequences of the form: ```bash [GOAL]tactic-state[PROOFSTEP]next-tactic<|endoftext|> ``` This format corresponds to the proofstep objective from [Han et al ICLR 2022](https://arxiv.org/abs/2102.06203).\ The [python/train](python/train) directory in the repository shows how the model was fine-tuned. Please see the repository for more details. ``` @misc{llmstep, author = {Sean Welleck}, title = {llmstep: LLM proofstep suggestions in Lean}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/wellecks/llmstep}}, } ```
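A minimal generation sketch, not part of the original card, wrapping a toy Lean tactic state in the proofstep format described above (the `llmstep` repo's server is the intended interface; this only illustrates the raw prompt):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("wellecks/llmstep-mathlib4-pythia2.8b")
model = AutoModelForCausalLM.from_pretrained("wellecks/llmstep-mathlib4-pythia2.8b")

# a hypothetical Lean 4 tactic state, wrapped in the fine-tuning format
prompt = "[GOAL]n : ℕ\n⊢ n + 0 = n[PROOFSTEP]"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# print only the generated tactic, dropping the prompt tokens
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```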
RomyMy/dqn-SpaceInvadersNoFrameskip-v4
RomyMy
2023-08-09T00:15:47Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-09T00:15:08Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 657.50 +/- 342.41 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RomyMy -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RomyMy -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RomyMy ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
thisiskeithkwan/cantomed7
thisiskeithkwan
2023-08-09T00:02:28Z
76
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "yue", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-08T17:57:46Z
--- language: - yue license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper medium 1/10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper medium 1/10 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 3000 ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
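Pending a fuller card, a minimal inference sketch (assumed usage, not from the original card) with the `transformers` ASR pipeline might look like the following; the audio file name is illustrative.

```python
# Minimal sketch: transcribe a Cantonese recording with this checkpoint.
# The audio file name is illustrative.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thisiskeithkwan/cantomed7")
print(asr("cantonese_sample.wav")["text"])
```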
C-Lo/masked-dataset
C-Lo
2023-08-08T23:45:45Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-08T23:41:15Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: masked-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # masked-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
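As a usage note not present in the original card, a minimal sketch (assumed usage) of running this classifier with the text-classification pipeline:

```python
# Minimal sketch: score a review with this fine-tuned IMDB classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="C-Lo/masked-dataset")
print(clf("This movie was surprisingly good."))  # e.g. [{'label': ..., 'score': ...}]
```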
C-Lo/unfiltered-dataset
C-Lo
2023-08-08T23:39:36Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-08T23:36:28Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: unfiltered-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unfiltered-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
C-Lo/neutral-dataset
C-Lo
2023-08-08T23:31:33Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-08T23:27:21Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: neutral-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # neutral-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
C-Lo/gendered-dataset
C-Lo
2023-08-08T23:26:05Z
122
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-08T23:22:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: gendered-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gendered-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
mgmeskill/a2c-AntBulletEnv-v0
mgmeskill
2023-08-08T22:36:09Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-08T22:34:58Z
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 1231.03 +/- 310.83
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `{algo}-{env}.zip` upload convention):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(
    repo_id="mgmeskill/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```
iamnambiar/Reinforce-CartPole-v1
iamnambiar
2023-08-08T22:21:22Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-08T22:21:11Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Abhi5ingh/model_out
Abhi5ingh
2023-08-08T22:12:19Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-04T21:56:07Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---

# controlnet-Abhi5ingh/model_out

These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.

prompt: white t-shirt with logo
![images_0](./images_0.png)
prompt: purple sports athleisure t-shirt high quality
![images_1](./images_1.png)
prompt: women's sports top
![images_2](./images_2.png)
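A minimal loading sketch (assumed usage) with `diffusers` is shown below; the conditioning image and file names are illustrative, since the card does not specify the conditioning type.

```python
# Minimal sketch: pair these ControlNet weights with the SD 1.5 base model.
# The conditioning image is illustrative; the card does not specify its type.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("Abhi5ingh/model_out", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("conditioning.png")  # illustrative conditioning input
image = pipe("white t-shirt with logo", image=cond, num_inference_steps=30).images[0]
image.save("output.png")
```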
danorel/dqn-SpaceInvadersNoFrameskip-v4
danorel
2023-08-08T22:06:39Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-08T22:06:01Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 620.50 +/- 135.54 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga danorel -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga danorel -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga danorel ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
divyeshrajpura/speecht5-finetuned-voxpopuli-nl
divyeshrajpura
2023-08-08T21:53:36Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:facebook/voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-08-08T18:46:09Z
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: speecht5-finetuned-voxpopuli-nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5-finetuned-voxpopuli-nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5157 | 4.3 | 1000 | 0.4752 | | 0.4994 | 8.6 | 2000 | 0.4619 | | 0.5002 | 12.9 | 3000 | 0.4578 | | 0.4968 | 17.2 | 4000 | 0.4556 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
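As a usage note not present in the original card, a minimal synthesis sketch follows; it mirrors the standard SpeechT5 usage pattern, and the x-vector speaker-embedding source is an assumption (the card does not name one).

```python
# Minimal sketch: synthesize Dutch speech with this checkpoint. The speaker
# embedding source (CMU Arctic x-vectors) is an assumption, not from the card.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "divyeshrajpura/speecht5-finetuned-voxpopuli-nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```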
reginaboateng/Compacter_clinical_bert_adapter_ner_pico_for_classification_task
reginaboateng
2023-08-08T21:50:13Z
0
0
adapter-transformers
[ "adapter-transformers", "adapterhub:pico_ner", "bert", "dataset:reginaboateng/cleaned_ebmnlp_pico", "region:us" ]
null
2023-08-08T21:50:11Z
--- tags: - adapterhub:pico_ner - adapter-transformers - bert datasets: - reginaboateng/cleaned_ebmnlp_pico --- # Adapter `reginaboateng/Compacter_clinical_bert_adapter_ner_pico_for_classification_task` for emilyalsentzer/Bio_ClinicalBERT An [adapter](https://adapterhub.ml) for the `emilyalsentzer/Bio_ClinicalBERT` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT") adapter_name = model.load_adapter("reginaboateng/Compacter_clinical_bert_adapter_ner_pico_for_classification_task", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
reginaboateng/Compacter_BioBERT_adapter_ner_pico_for_classification_task
reginaboateng
2023-08-08T21:49:48Z
0
0
adapter-transformers
[ "adapter-transformers", "bert", "adapterhub:pico_ner", "dataset:reginaboateng/cleaned_ebmnlp_pico", "region:us" ]
null
2023-08-08T21:49:46Z
--- tags: - bert - adapter-transformers - adapterhub:pico_ner datasets: - reginaboateng/cleaned_ebmnlp_pico --- # Adapter `reginaboateng/Compacter_BioBERT_adapter_ner_pico_for_classification_task` for dmis-lab/biobert-v1.1 An [adapter](https://adapterhub.ml) for the `dmis-lab/biobert-v1.1` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("dmis-lab/biobert-v1.1") adapter_name = model.load_adapter("reginaboateng/Compacter_BioBERT_adapter_ner_pico_for_classification_task", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
shtif/whisper-tiny-en
shtif
2023-08-08T21:46:36Z
84
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-08T20:13:59Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: Whisper Tiny - shtif results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train[450:] args: en-US metrics: - name: Wer type: wer value: 0.33412042502951594 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny - shtif This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6315 - Wer Ortho: 0.3368 - Wer: 0.3341 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0004 | 17.86 | 500 | 0.6315 | 0.3368 | 0.3341 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
alphacep/vosk-model-small-ru
alphacep
2023-08-08T21:37:22Z
0
9
null
[ "onnx", "audio", "automatic-speech-recognition", "hf-asr-leaderboard", "ru", "speech", "license:apache-2.0", "model-index", "region:us" ]
automatic-speech-recognition
2023-08-08T21:34:06Z
---
license: apache-2.0
language:
- ru
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- ru
- speech
model-index:
- name: Vosk Small Russian Model
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ru
      type: common_voice
      args: ru
    metrics:
    - name: Test WER
      type: wer
      value: 9.8
---

A small Zipformer2 model trained with k2-fsa/icefall on Russian data.

Links:

<https://alphacephei.com/vosk>

<https://github.com/k2-fsa/icefall>
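As a usage note not present in the original card, a minimal sketch (assumed usage) of offline recognition with the `vosk` Python package; the model directory and audio paths are illustrative, and the audio should be 16 kHz mono PCM WAV.

```python
# Minimal sketch: offline Russian speech recognition with vosk.
import json
import wave
from vosk import Model, KaldiRecognizer

model = Model("vosk-model-small-ru")   # path to the unpacked model directory
wf = wave.open("russian_sample.wav", "rb")
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if not data:
        break
    rec.AcceptWaveform(data)

print(json.loads(rec.FinalResult())["text"])
```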
openerotica/falcon-7b-GPTQ
openerotica
2023-08-08T21:10:13Z
17
1
transformers
[ "transformers", "pytorch", "RefinedWebModel", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2101.00027", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-08T18:01:28Z
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---

# 🚀 Falcon-7B

**Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**

*Paper coming soon* 😊.

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!

## Why use Falcon-7B?

* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.

⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).

🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).

You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.

# Model Card for Falcon-7B

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0.

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-7B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).

| **Data source**    | **Fraction** | **Tokens** | **Sources**                       |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books              | 7%           | 110B       |                                   |
| Conversations      | 6%           | 85B        | Reddit, StackOverflow, HackerNews |
| Code               | 3%           | 45B        |                                   |
| RefinedWeb-French  | 3%           | 45B        | massive web crawl                 |
| Technical          | 2%           | 30B        | arXiv, PubMed, USPTO, etc.        |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.

### Training Procedure

Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.

#### Training Hyperparameters

| **Hyperparameter** | **Value**  | **Comment**                               |
|--------------------|------------|-------------------------------------------|
| Precision          | `bfloat16` |                                           |
| Optimizer          | AdamW      |                                           |
| Learning rate      | 6e-4       | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay       | 1e-1       |                                           |
| Z-loss             | 1e-4       |                                           |
| Batch size         | 2304       | 30B tokens ramp-up                        |

#### Speeds, Sizes, Times

Training happened in early March 2023 and took about two weeks.

## Evaluation

*Paper coming soon*.

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

## Technical Specifications

### Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.

| **Hyperparameter** | **Value** | **Comment**                            |
|--------------------|-----------|----------------------------------------|
| Layers             | 32        |                                        |
| `d_model`          | 4544      | Increased to compensate for multiquery |
| `head_dim`         | 64        | Reduced to optimise for FlashAttention |
| Vocabulary         | 65024     |                                        |
| Sequence length    | 2048      |                                        |

### Compute Infrastructure

#### Hardware

Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.

#### Software

Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

## Citation

*Paper coming soon* 😊. In the meantime, you can use the following information to cite:

```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype = {arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

## License

Falcon-7B is made available under the Apache 2.0 license.

## Contact

falconllm@tii.ae
johnpaulbin/lora-trained-xl-colab
johnpaulbin
2023-08-08T21:00:05Z
8
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-08T20:06:41Z
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - johnpaulbin/lora-trained-xl-colab

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the prompt "a photo of sks" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
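A minimal loading sketch (assumed usage, not from the original card) that applies these LoRA weights on top of the SDXL base model listed above:

```python
# Minimal sketch: load the SDXL base model and apply these LoRA weights.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("johnpaulbin/lora-trained-xl-colab")

image = pipe("a photo of sks", num_inference_steps=30).images[0]  # instance prompt from the card
image.save("sks.png")
```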
adon81/bert-finetuned-ner
adon81
2023-08-08T20:55:28Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-08T20:43:35Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9311540366518078 - name: Recall type: recall value: 0.9491753618310333 - name: F1 type: f1 value: 0.9400783398616552 - name: Accuracy type: accuracy value: 0.9862541943839407 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0598 - Precision: 0.9312 - Recall: 0.9492 - F1: 0.9401 - Accuracy: 0.9863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0777 | 1.0 | 1756 | 0.0859 | 0.9028 | 0.9313 | 0.9168 | 0.9786 | | 0.0406 | 2.0 | 3512 | 0.0578 | 0.9249 | 0.9477 | 0.9362 | 0.9857 | | 0.0281 | 3.0 | 5268 | 0.0598 | 0.9312 | 0.9492 | 0.9401 | 0.9863 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
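As a usage note not present in the original card, a minimal sketch (assumed usage) of running this NER model with the token-classification pipeline, grouping subword predictions into entity spans:

```python
# Minimal sketch: named entity recognition with this fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="adon81/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```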
Jean135Paul/Pilot
Jean135Paul
2023-08-08T20:51:28Z
0
0
adapter-transformers
[ "adapter-transformers", "biology", "summarization", "ab", "dataset:fka/awesome-chatgpt-prompts", "license:bigscience-openrail-m", "region:us" ]
summarization
2023-08-08T20:50:07Z
--- license: bigscience-openrail-m datasets: - fka/awesome-chatgpt-prompts language: - ab metrics: - bertscore library_name: adapter-transformers pipeline_tag: summarization tags: - biology ---
gang21/llama2-icd-10
gang21
2023-08-08T20:43:53Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-08T20:43:45Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
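A minimal loading sketch (assumed usage) for this PEFT adapter; the base model id is an assumption, since the card does not name one, and 8-bit loading matches the quantization config above.

```python
# Minimal sketch: load this adapter on top of an assumed Llama-2 base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model, not stated in the card
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "gang21/llama2-icd-10")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```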
rajcas2421/katrina
rajcas2421
2023-08-08T20:39:44Z
6
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-08T20:34:37Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### katrina Dreambooth model trained by rajcas2421 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: