| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-04 06:26:56 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 538 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-04 06:26:41 |
| card | string | lengths 11 – 1.01M |
w601sxs/b1ade-1b-wizard-chkpt
w601sxs
2023-08-01T13:13:14Z
2
0
peft
[ "peft", "pytorch", "gpt_neox", "region:us" ]
null
2023-08-01T13:12:35Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
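The card above gives only the PEFT version, so a brief loading sketch may be useful. This is a minimal sketch, assuming the repository follows the standard PEFT adapter layout (`adapter_config.json` plus adapter weights) and that the base GPT-NeoX checkpoint recorded in that config is accessible; nothing below is taken from the card itself.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "w601sxs/b1ade-1b-wizard-chkpt"

# The adapter config records which base checkpoint the adapter was trained on.
config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the PEFT adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```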
jondurbin/airoboros-65b-gpt4-2.0-peft
jondurbin
2023-08-01T13:09:03Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2023-07-30T09:49:53Z
--- license: cc-by-nc-4.0 --- Adapter model for https://hf.co/jondurbin/airoboros-65b-gpt4-2.0
IHaveNoClueAndIMustPost/llama2-22b-wizard_vicuna-ggml
IHaveNoClueAndIMustPost
2023-08-01T13:05:15Z
0
1
null
[ "llama", "llama-2", "license:other", "region:us" ]
null
2023-08-01T12:01:35Z
--- license: other tags: - llama - llama-2 --- A 22B model merge by [grimpep](https://huggingface.co/grimpep) mixing [13Bv2-llama-modelmerge](https://huggingface.co/grimpep/13Bv2-llama-modelmerge) with [Wizard-Vicuna-30B-Superhot-8K](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16). <br>Please see the [original repo](https://huggingface.co/grimpep/llama2-22b-wizard_vicuna) for further information.<br><br> From my brief testing this model works great for chat or roleplaying using Llama2 syntax along with [SimpleProxy](https://github.com/anon998/simple-proxy-for-tavern) or SimpleProxy style prompt instruction.
efederici/it5-efficient-small-fanpage
efederici
2023-08-01T13:00:48Z
116
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "summarization", "it", "dataset:ARTeLab/fanpage", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-05-03T11:49:15Z
--- license: apache-2.0 tags: - summarization language: - it datasets: - ARTeLab/fanpage --- # it5-efficient-small-fanpage It is a T5 efficient-small model ([IT5](https://huggingface.co/stefan-it/it5-efficient-small-el32)) trained on [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) for Italian news summarization. <p align="center"> <img src="https://compass-media.vogue.it/photos/61e574067f70d15c08312807/master/w_1600%2Cc_limit/DavideBalliano_UNTITLED_0215_%25206060_2021_1_Crop.jpeg" width="400"> <br/> Davide Balliano, Untitled </p> ## Usage and Performance ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("efederici/it5-efficient-small-fanpage") model = AutoModelForSeq2SeqLM.from_pretrained("efederici/it5-efficient-small-fanpage") ``` ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
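The usage block above only loads the model; as a hedged continuation, a summary could be generated roughly as follows (the article text, beam count, and length limits are illustrative placeholders, not values from the card).

```python
article = "Testo di un articolo di cronaca da riassumere..."  # placeholder Italian article

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=96)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```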
jmzk96/PCSciBERT_cased
jmzk96
2023-08-01T12:58:46Z
185
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "physics", "computer science", "en", "dataset:adsabs/WIESP2022-NER", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-22T14:26:24Z
--- datasets: - adsabs/WIESP2022-NER language: - en tags: - physics - computer science --- PCSciBERT_cased was initialized with the cased variant of SciBERT (https://huggingface.co/allenai/scibert_scivocab_cased) and further pre-trained on texts from 1,560,661 arXiv research articles in the physics and computer science domains. The tokenizer for PCSciBERT_cased uses the same vocabulary as allenai/scibert_scivocab_cased. The model was also evaluated on its downstream performance in named entity recognition using the adsabs/WIESP2022-NER and CS-NER (https://github.com/jd-coderepos/contributions-ner-cs/tree/main) datasets. Overall, PCSciBERT_cased achieved higher micro F1 scores than SciBERT (cased) on both WIESP (micro F1: 82.19%) and CS-NER (micro F1: 76.22%), improving on SciBERT (cased) by 1.49% on the WIESP test set and by 0.69% on the CS-NER test set.
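The card describes a fill-mask model but includes no usage snippet; a minimal sketch with the `transformers` pipeline, where the example sentence is made up:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jmzk96/PCSciBERT_cased")
# SciBERT-derived BERT models use the standard [MASK] token.
print(fill_mask("The [MASK] algorithm reduces the time complexity of the search."))
```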
jmzk96/PCSciBERT_uncased
jmzk96
2023-08-01T12:58:25Z
185
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "physics", "computer science", "en", "dataset:adsabs/WIESP2022-NER", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-03T08:25:20Z
--- datasets: - adsabs/WIESP2022-NER language: - en tags: - physics - computer science --- PCSciBERT_uncased was initialized with the uncased variant of SciBERT (https://huggingface.co/allenai/scibert_scivocab_uncased) and further pre-trained on texts from 1,560,661 arXiv research articles in the physics and computer science domains. The tokenizer for PCSciBERT_uncased uses the same vocabulary as allenai/scibert_scivocab_uncased. The model was also evaluated on its downstream performance in named entity recognition using the adsabs/WIESP2022-NER and CS-NER (https://github.com/jd-coderepos/contributions-ner-cs/tree/main) datasets. Overall, PCSciBERT_uncased achieved higher micro F1 scores than SciBERT (uncased) on both WIESP (micro F1: 81.54%) and CS-NER (micro F1: 75.67%), improving on SciBERT (uncased) by 0.8% on the WIESP test set and by 0.26% on the CS-NER test set.
NasimB/aochildes-gutenberg_fixed-not-mixed-log-rarity-seed
NasimB
2023-08-01T12:57:03Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-01T10:23:28Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: aochildes-gutenberg_fixed-not-mixed-log-rarity-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aochildes-gutenberg_fixed-not-mixed-log-rarity-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.364 | 0.29 | 500 | 5.3495 | | 5.0533 | 0.59 | 1000 | 4.9199 | | 4.7165 | 0.88 | 1500 | 4.6978 | | 4.4578 | 1.17 | 2000 | 4.5601 | | 4.3112 | 1.47 | 2500 | 4.4459 | | 4.2134 | 1.76 | 3000 | 4.3457 | | 4.0836 | 2.05 | 3500 | 4.2823 | | 3.9069 | 2.35 | 4000 | 4.2337 | | 3.8812 | 2.64 | 4500 | 4.1786 | | 3.8474 | 2.93 | 5000 | 4.1312 | | 3.6498 | 3.23 | 5500 | 4.1280 | | 3.598 | 3.52 | 6000 | 4.1022 | | 3.5875 | 3.81 | 6500 | 4.0714 | | 3.4704 | 4.11 | 7000 | 4.0787 | | 3.3361 | 4.4 | 7500 | 4.0751 | | 3.3262 | 4.69 | 8000 | 4.0609 | | 3.3118 | 4.99 | 8500 | 4.0530 | | 3.1553 | 5.28 | 9000 | 4.0710 | | 3.149 | 5.57 | 9500 | 4.0699 | | 3.1467 | 5.87 | 10000 | 4.0690 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
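For reference, the hyperparameters listed above map roughly onto `transformers.TrainingArguments` as sketched below; the output directory is a placeholder and the dataset/model wiring is omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",                 # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,                        # "Native AMP" mixed precision
)
```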
rahulhuddar/llama2-chat-hub-my-finetuned-model
rahulhuddar
2023-08-01T12:51:37Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-01T12:51:31Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
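The bullet list above spells out the `bitsandbytes` settings; as a sketch, they correspond to a `BitsAndBytesConfig` like the one below. The base model id is not stated in the card, so the identifier used here is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

# "base-model-id" is a placeholder; the card does not name the base checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
```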
NicolasDenier/distilhubert-finetuned-gtzan
NicolasDenier
2023-08-01T12:49:10Z
23
0
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-19T19:58:12Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.91 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.3539 - Accuracy: 0.91 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 18 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.2281 | 1.0 | 112 | 2.1128 | 0.26 | | 1.7082 | 2.0 | 225 | 1.6252 | 0.52 | | 1.267 | 3.0 | 337 | 1.3100 | 0.54 | | 1.1791 | 4.0 | 450 | 1.0496 | 0.71 | | 1.1765 | 5.0 | 562 | 0.8928 | 0.74 | | 0.5714 | 6.0 | 675 | 0.8298 | 0.77 | | 0.4869 | 7.0 | 787 | 0.7145 | 0.79 | | 0.4967 | 8.0 | 900 | 0.6990 | 0.82 | | 0.8314 | 9.0 | 1012 | 0.5657 | 0.83 | | 0.4633 | 10.0 | 1125 | 0.4589 | 0.89 | | 0.5547 | 11.0 | 1237 | 0.4919 | 0.86 | | 0.4827 | 12.0 | 1350 | 0.4069 | 0.92 | | 0.324 | 13.0 | 1462 | 0.4634 | 0.87 | | 0.5224 | 14.0 | 1575 | 0.4419 | 0.86 | | 0.1873 | 15.0 | 1687 | 0.3988 | 0.89 | | 0.2852 | 16.0 | 1800 | 0.3788 | 0.9 | | 0.3169 | 17.0 | 1912 | 0.3526 | 0.89 | | 0.4491 | 17.92 | 2016 | 0.3539 | 0.91 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
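The card reports accuracy but no inference example; a minimal sketch using the audio-classification pipeline, with a placeholder file path:

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="NicolasDenier/distilhubert-finetuned-gtzan")
# Returns the top predicted GTZAN genres for a local audio file.
print(classifier("path/to/track.wav"))
```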
AA2023/llama2-qlora-finetunined-french
AA2023
2023-08-01T12:45:02Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-01T12:44:44Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
AkitoP/dqn-SpaceInvaderNoFrameskip-v4
AkitoP
2023-08-01T12:33:57Z
3
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T12:33:17Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 678.00 +/- 280.94 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AkitoP -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AkitoP -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AkitoP ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
akashAlphastream/distilbert-base-uncased-finetuned-cola
akashAlphastream
2023-08-01T12:18:09Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-01T12:12:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7872 - Matthews Correlation: 0.5411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5204 | 1.0 | 535 | 0.4614 | 0.4869 | | 0.3459 | 2.0 | 1070 | 0.4912 | 0.5185 | | 0.2251 | 3.0 | 1605 | 0.6142 | 0.5150 | | 0.1747 | 4.0 | 2140 | 0.7872 | 0.5411 | | 0.1223 | 5.0 | 2675 | 0.8451 | 0.5309 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
alihcevik/pokemon-lora
alihcevik
2023-08-01T12:08:05Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-01T08:30:57Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - alihcevik/pokemon-lora These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
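The card shows sample images but no loading code; a hedged sketch of applying these LoRA weights to the stated base model with `diffusers`, assuming the weights are in the attention-processor format produced by the standard text-to-image LoRA training script (prompt and dtype are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA attention-processor weights from this repository.
pipe.unet.load_attn_procs("alihcevik/pokemon-lora")

image = pipe("a green pokemon with large eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```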
draziert/poca-SoccerTwos
draziert
2023-08-01T12:06:18Z
31
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-08-01T12:05:50Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: draziert/poca-SoccerTwos 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Ayushnangia/llama2-qlora-finetunined-french
Ayushnangia
2023-08-01T12:02:06Z
4
0
peft
[ "peft", "region:us" ]
null
2023-08-01T12:01:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
autosyrup/bert
autosyrup
2023-08-01T11:54:37Z
8
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-31T19:18:16Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3752 - Precision: 0.5495 - Recall: 0.5949 - F1: 0.5713 - Accuracy: 0.9455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 151 | 0.1826 | 0.4095 | 0.4084 | 0.4089 | 0.9362 | | No log | 2.0 | 302 | 0.1684 | 0.4941 | 0.5303 | 0.5116 | 0.9442 | | No log | 3.0 | 453 | 0.2528 | 0.5197 | 0.4477 | 0.4810 | 0.9398 | | 0.1001 | 4.0 | 604 | 0.2100 | 0.5182 | 0.5583 | 0.5375 | 0.9439 | | 0.1001 | 5.0 | 755 | 0.2556 | 0.5207 | 0.4783 | 0.4986 | 0.9419 | | 0.1001 | 6.0 | 906 | 0.2908 | 0.4132 | 0.4204 | 0.4168 | 0.9365 | | 0.0205 | 7.0 | 1057 | 0.3046 | 0.5 | 0.6236 | 0.5550 | 0.9435 | | 0.0205 | 8.0 | 1208 | 0.3057 | 0.5324 | 0.5750 | 0.5529 | 0.9458 | | 0.0205 | 9.0 | 1359 | 0.3122 | 0.5626 | 0.5776 | 0.5700 | 0.9469 | | 0.0082 | 10.0 | 1510 | 0.3673 | 0.5733 | 0.5263 | 0.5488 | 0.9441 | | 0.0082 | 11.0 | 1661 | 0.3432 | 0.5482 | 0.5270 | 0.5374 | 0.9455 | | 0.0082 | 12.0 | 1812 | 0.3305 | 0.5590 | 0.5716 | 0.5652 | 0.9445 | | 0.0082 | 13.0 | 1963 | 0.3293 | 0.5434 | 0.6009 | 0.5707 | 0.9431 | | 0.005 | 14.0 | 2114 | 0.4080 | 0.5627 | 0.5803 | 0.5713 | 0.9451 | | 0.005 | 15.0 | 2265 | 0.3752 | 0.5495 | 0.5949 | 0.5713 | 0.9455 | | 0.005 | 16.0 | 2416 | 0.4140 | 0.5823 | 0.5470 | 0.5641 | 0.9455 | | 0.002 | 17.0 | 2567 | 0.4308 | 0.5555 | 0.5670 | 0.5612 | 0.9438 | | 0.002 | 18.0 | 2718 | 0.4389 | 0.5594 | 0.5676 | 0.5635 | 0.9436 | | 0.002 | 19.0 | 2869 | 0.4463 | 0.5609 | 0.5676 | 0.5642 | 0.9444 | | 0.0007 | 20.0 | 3020 | 0.4512 | 0.5648 | 0.5636 | 0.5642 | 0.9448 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.2 - Tokenizers 0.13.3
Joelt0/bert-finetuned-ner
Joelt0
2023-08-01T11:40:31Z
62
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-01T11:28:38Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_keras_callback model-index: - name: Joelt0/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Joelt0/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0285 - Validation Loss: 0.0523 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1778 | 0.0635 | 0 | | 0.0477 | 0.0544 | 1 | | 0.0285 | 0.0523 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.2 - Tokenizers 0.13.3
MichaelaIvanova/ppo-LunarLander-v2
MichaelaIvanova
2023-08-01T11:23:27Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T11:23:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -136.33 +/- 31.75 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
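The usage section above is left as a TODO with a stub import block; below is a minimal sketch of what it might look like. The checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption, as is the rollout loop.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the common "<algo>-<env>.zip" naming convention.
checkpoint = load_from_hub(
    repo_id="MichaelaIvanova/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```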
wuxianchao/lazylora-7bhf
wuxianchao
2023-08-01T11:21:16Z
0
1
null
[ "arxiv:2305.14314", "arxiv:2106.09685", "arxiv:2110.07602", "arxiv:2104.08691", "arxiv:2303.16199", "license:llama2", "region:us" ]
null
2023-07-23T22:49:25Z
--- license: llama2 --- ## Lazy LoRA ### Benefits 0. using the updated [Meta's LLaMA-2 models](https://huggingface.co/meta-llama/Llama-2-7b-hf). 1. support [4-bit qlora](https://arxiv.org/abs/2305.14314), extreme GPU memory and inference time saving; 2. comparable MMLU evaluation dataset results: | | eval | test | comp-eval | comp-test | |---------------|--------|--------|-----------|-----------| |llama2-7b | 46.68% | 46.82% | | | |ckpt-200 | 44.28% | 46.03% | -2.40% | -0.79% | |ckpt-600 | 45.26% | 45.61% | -1.42% | -1.21% | llama2-7b: "4e4d531bcab430a66c4d562b7e89e21c0fa235ea" ### Introduction Determine the rank of LoRA layers by the singular values of pretrained weight matrices. Also, combines: 1. LoRA: [LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/abs/2106.09685) 2. Prefix Tuning: [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://aclanthology.org/2021.acl-long.3 53/), [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.or g/pdf/2110.07602.pdf) 3. Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691) 4. LLaMA adapter: [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention] (https://arxiv.org/abs/2303.16199) in one model. This allows you to perform LoRA (additional low rank adapters inserted to each linear layer), and prompt learning (additional virtual tokens attached to the input and to the attention layers acting as `past_key_values`) ## Usage: ```python import sys sys.path.insert(1, '/workspace/asr/peft/src') # TODO set this path to the lazy-lora source code path, # or you can install it from source code: # TODO, please install lazylora for usage: # git clone git@github.com:Xianchao-Wu/peft.git # cd peft # python setup.py install from transformers import (AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig) from peft import PeftModel, PeftConfig import os import torch #import ipdb; ipdb.set_trace() cache_dir="/workspace/asr/peft/qlora" # TODO set this cache_dir to the path where you # stored (or, want to store) llama2-7bhf model lazylora_dir=os.getcwd() # the path that contains 'adapter_config.json' # and 'adapter_model.bin' config = PeftConfig.from_pretrained(lazylora_dir) tokenizer = AutoTokenizer.from_pretrained( config.base_model_name_or_path, cache_dir=cache_dir, use_auth_token=True ) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type='nf4', bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained( config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto", cache_dir=cache_dir, use_auth_token=True ) #model.print_trainable_parameters() print(sum(p.numel() for p in model.parameters())) # 3,500,412,928 -> half-size of 7B due to 4-bit loading model = PeftModel.from_pretrained(model, lazylora_dir) print('after adding lazy lora parameters:') model.print_trainable_parameters() # trainable params: 0 || all params: 3,660,359,168 || trainable%: 0.0 ``` ## MMLU result: ### MMLU eval result: ```json {"mmlu_loss": 1.9065961667247102, "mmlu_eval_accuracy_professional_medicine": 0.3870967741935484, "mmlu_eval_accuracy_college_physics": 0.45454545454545453, "mmlu_eval_accuracy_conceptual_physics": 0.34615384615384615, "mmlu_eval_accuracy_econometrics": 0.3333333333333333, "mmlu_eval_accuracy_high_school_chemistry": 0.45454545454545453, "mmlu_eval_accuracy_nutrition": 0.5151515151515151, 
"mmlu_eval_accuracy_high_school_computer_science": 0.5555555555555556, "mmlu_eval_accuracy_security_studies": 0.4444444444444444, "mmlu_eval_accuracy_world_religions": 0.6842105263157895, "mmlu_eval_accuracy_anatomy": 0.5, "mmlu_eval_accuracy_prehistory": 0.42857142857142855, "mmlu_eval_accuracy_high_school_government_and_politics": 0.6666666666666666, "mmlu_eval_accuracy_professional_accounting": 0.3225806451612903, "mmlu_eval_accuracy_philosophy": 0.4411764705882353, "mmlu_eval_accuracy_astronomy": 0.3125, "mmlu_eval_accuracy_medical_genetics": 0.8181818181818182, "mmlu_eval_accuracy_jurisprudence": 0.5454545454545454, "mmlu_eval_accuracy_professional_law": 0.38235294117647056, "mmlu_eval_accuracy_college_chemistry": 0.125, "mmlu_eval_accuracy_moral_disputes": 0.4473684210526316, "mmlu_eval_accuracy_abstract_algebra": 0.36363636363636365, "mmlu_eval_accuracy_computer_security": 0.5454545454545454, "mmlu_eval_accuracy_business_ethics": 0.5454545454545454, "mmlu_eval_accuracy_virology": 0.5, "mmlu_eval_accuracy_electrical_engineering": 0.375, "mmlu_eval_accuracy_high_school_biology": 0.34375, "mmlu_eval_accuracy_public_relations": 0.3333333333333333, "mmlu_eval_accuracy_high_school_physics": 0.35294117647058826, "mmlu_eval_accuracy_high_school_psychology": 0.65, "mmlu_eval_accuracy_college_computer_science": 0.5454545454545454, "mmlu_eval_accuracy_high_school_european_history": 0.7222222222222222, "mmlu_eval_accuracy_international_law": 0.8461538461538461, "mmlu_eval_accuracy_high_school_microeconomics": 0.2692307692307692, "mmlu_eval_accuracy_college_biology": 0.25, "mmlu_eval_accuracy_formal_logic": 0.14285714285714285, "mmlu_eval_accuracy_machine_learning": 0.18181818181818182, "mmlu_eval_accuracy_human_aging": 0.6956521739130435, "mmlu_eval_accuracy_logical_fallacies": 0.5555555555555556, "mmlu_eval_accuracy_clinical_knowledge": 0.41379310344827586, "mmlu_eval_accuracy_high_school_macroeconomics": 0.3488372093023256, "mmlu_eval_accuracy_miscellaneous": 0.5930232558139535, "mmlu_eval_accuracy_sociology": 0.7272727272727273, "mmlu_eval_accuracy_high_school_us_history": 0.6363636363636364, "mmlu_eval_accuracy_college_medicine": 0.4090909090909091, "mmlu_eval_accuracy_high_school_world_history": 0.5, "mmlu_eval_accuracy_marketing": 0.8, "mmlu_eval_accuracy_human_sexuality": 0.4166666666666667, "mmlu_eval_accuracy_professional_psychology": 0.36231884057971014, "mmlu_eval_accuracy_moral_scenarios": 0.24, "mmlu_eval_accuracy_college_mathematics": 0.18181818181818182, "mmlu_eval_accuracy_us_foreign_policy": 0.6363636363636364, "mmlu_eval_accuracy_high_school_geography": 0.6818181818181818, "mmlu_eval_accuracy_high_school_statistics": 0.34782608695652173, "mmlu_eval_accuracy_high_school_mathematics": 0.2413793103448276, "mmlu_eval_accuracy_elementary_mathematics": 0.3170731707317073, "mmlu_eval_accuracy_management": 0.36363636363636365, "mmlu_eval_accuracy_global_facts": 0.2, "mmlu_eval_accuracy": 0.4526436056641111} ``` ### MMLU test result: ```json {"mmlu_loss": 1.925738222594615, "mmlu_test_accuracy_business_ethics": 0.53, "mmlu_test_accuracy_medical_genetics": 0.53, "mmlu_test_accuracy_international_law": 0.628099173553719, "mmlu_test_accuracy_professional_law": 0.3363754889178618, "mmlu_test_accuracy_econometrics": 0.32456140350877194, "mmlu_test_accuracy_high_school_biology": 0.4806451612903226, "mmlu_test_accuracy_computer_security": 0.57, "mmlu_test_accuracy_global_facts": 0.34, "mmlu_test_accuracy_clinical_knowledge": 0.46037735849056605, "mmlu_test_accuracy_miscellaneous": 
0.6347381864623244, "mmlu_test_accuracy_high_school_microeconomics": 0.39915966386554624, "mmlu_test_accuracy_public_relations": 0.5636363636363636, "mmlu_test_accuracy_high_school_computer_science": 0.45, "mmlu_test_accuracy_human_sexuality": 0.5572519083969466, "mmlu_test_accuracy_virology": 0.43373493975903615, "mmlu_test_accuracy_human_aging": 0.5695067264573991, "mmlu_test_accuracy_high_school_world_history": 0.6371308016877637, "mmlu_test_accuracy_college_medicine": 0.3699421965317919, "mmlu_test_accuracy_marketing": 0.6923076923076923, "mmlu_test_accuracy_world_religions": 0.6783625730994152, "mmlu_test_accuracy_college_physics": 0.23529411764705882, "mmlu_test_accuracy_high_school_chemistry": 0.33004926108374383, "mmlu_test_accuracy_elementary_mathematics": 0.2751322751322751, "mmlu_test_accuracy_high_school_psychology": 0.6018348623853211, "mmlu_test_accuracy_sociology": 0.5920398009950248, "mmlu_test_accuracy_astronomy": 0.4342105263157895, "mmlu_test_accuracy_high_school_mathematics": 0.27037037037037037, "mmlu_test_accuracy_high_school_us_history": 0.5343137254901961, "mmlu_test_accuracy_logical_fallacies": 0.49693251533742333, "mmlu_test_accuracy_high_school_statistics": 0.19907407407407407, "mmlu_test_accuracy_management": 0.5825242718446602, "mmlu_test_accuracy_moral_disputes": 0.5057803468208093, "mmlu_test_accuracy_formal_logic": 0.24603174603174602, "mmlu_test_accuracy_college_chemistry": 0.25, "mmlu_test_accuracy_college_mathematics": 0.3, "mmlu_test_accuracy_high_school_geography": 0.5050505050505051, "mmlu_test_accuracy_machine_learning": 0.35714285714285715, "mmlu_test_accuracy_philosophy": 0.5787781350482315, "mmlu_test_accuracy_college_computer_science": 0.32, "mmlu_test_accuracy_security_studies": 0.46938775510204084, "mmlu_test_accuracy_abstract_algebra": 0.27, "mmlu_test_accuracy_professional_psychology": 0.4526143790849673, "mmlu_test_accuracy_college_biology": 0.4444444444444444, "mmlu_test_accuracy_us_foreign_policy": 0.68, "mmlu_test_accuracy_professional_medicine": 0.4522058823529412, "mmlu_test_accuracy_prehistory": 0.48148148148148145, "mmlu_test_accuracy_anatomy": 0.45925925925925926, "mmlu_test_accuracy_moral_scenarios": 0.2346368715083799, "mmlu_test_accuracy_nutrition": 0.4738562091503268, "mmlu_test_accuracy_high_school_macroeconomics": 0.4461538461538462, "mmlu_test_accuracy_high_school_european_history": 0.6181818181818182, "mmlu_test_accuracy_jurisprudence": 0.5370370370370371, "mmlu_test_accuracy_professional_accounting": 0.35815602836879434, "mmlu_test_accuracy_high_school_government_and_politics": 0.6321243523316062, "mmlu_test_accuracy_high_school_physics": 0.32450331125827814, "mmlu_test_accuracy_electrical_engineering": 0.47586206896551725, "mmlu_test_accuracy_conceptual_physics": 0.3872340425531915, "mmlu_test_accuracy": 0.4560969792275357} ``` ## License and intended use This lazy-lora adapter is based on [Meta's LLaMA-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), and using the [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1), following [Guanaco](https://huggingface.co/timdettmers/guanaco-65b). lazy lora adapter weights are available under LLAMA-2 license. Note the use of the lazy lora adapter weights, requires access to the LLaMA model weighs. Lazy lora is based on LLaMA and therefore should be used according to the LLaMA license. ## Risks and Biases The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. 
The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
autosyrup/roberta
autosyrup
2023-08-01T11:18:42Z
28
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "base_model:Jean-Baptiste/roberta-large-ner-english", "base_model:finetune:Jean-Baptiste/roberta-large-ner-english", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-31T10:36:47Z
--- license: mit base_model: Jean-Baptiste/roberta-large-ner-english tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta This model is a fine-tuned version of [Jean-Baptiste/roberta-large-ner-english](https://huggingface.co/Jean-Baptiste/roberta-large-ner-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3908 - Precision: 0.5990 - Recall: 0.5581 - F1: 0.5778 - Accuracy: 0.9470 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 151 | 0.2078 | 0.1899 | 0.2388 | 0.2115 | 0.9246 | | No log | 2.0 | 302 | 0.1499 | 0.4322 | 0.5535 | 0.4854 | 0.9393 | | No log | 3.0 | 453 | 0.1916 | 0.5204 | 0.4946 | 0.5072 | 0.9418 | | 0.1542 | 4.0 | 604 | 0.1671 | 0.4615 | 0.5109 | 0.4849 | 0.9426 | | 0.1542 | 5.0 | 755 | 0.1940 | 0.4841 | 0.4829 | 0.4835 | 0.9439 | | 0.1542 | 6.0 | 906 | 0.2462 | 0.5066 | 0.5651 | 0.5343 | 0.9428 | | 0.0616 | 7.0 | 1057 | 0.2106 | 0.5041 | 0.5271 | 0.5153 | 0.9437 | | 0.0616 | 8.0 | 1208 | 0.2621 | 0.5620 | 0.5202 | 0.5403 | 0.9474 | | 0.0616 | 9.0 | 1359 | 0.2903 | 0.5242 | 0.5550 | 0.5392 | 0.9440 | | 0.0326 | 10.0 | 1510 | 0.3083 | 0.5883 | 0.5628 | 0.5753 | 0.9483 | | 0.0326 | 11.0 | 1661 | 0.3125 | 0.5451 | 0.5853 | 0.5645 | 0.9444 | | 0.0326 | 12.0 | 1812 | 0.3616 | 0.5503 | 0.5388 | 0.5445 | 0.9427 | | 0.0326 | 13.0 | 1963 | 0.3398 | 0.5978 | 0.5023 | 0.5459 | 0.9447 | | 0.0155 | 14.0 | 2114 | 0.2942 | 0.5701 | 0.5550 | 0.5625 | 0.9467 | | 0.0155 | 15.0 | 2265 | 0.3723 | 0.5771 | 0.5597 | 0.5683 | 0.9462 | | 0.0155 | 16.0 | 2416 | 0.3651 | 0.5751 | 0.5760 | 0.5755 | 0.9439 | | 0.0062 | 17.0 | 2567 | 0.3674 | 0.5667 | 0.5891 | 0.5777 | 0.9455 | | 0.0062 | 18.0 | 2718 | 0.3866 | 0.5897 | 0.5403 | 0.5639 | 0.9463 | | 0.0062 | 19.0 | 2869 | 0.3908 | 0.5990 | 0.5581 | 0.5778 | 0.9470 | | 0.0033 | 20.0 | 3020 | 0.4036 | 0.5914 | 0.5620 | 0.5763 | 0.9467 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.2 - Tokenizers 0.13.3
wilson-wei/whisper-tiny-finetuned-minds14
wilson-wei
2023-08-01T11:17:08Z
77
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-01T11:05:40Z
--- language: - en license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-finetuned-minds-14 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: MInDS-14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.3578403216542217 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned-minds-14 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the MInDS-14 dataset. It achieves the following results on the evaluation set: - Loss: 0.5412 - Wer Ortho: 0.3581 - Wer: 0.3578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0121 | 4.46 | 1000 | 0.5412 | 0.3581 | 0.3578 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1 - Datasets 2.14.0 - Tokenizers 0.13.3
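The card reports WER but no inference snippet; a minimal sketch with the automatic-speech-recognition pipeline, where the audio path is a placeholder:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="wilson-wei/whisper-tiny-finetuned-minds14")
print(asr("path/to/recording.wav")["text"])
```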
himanimaheshwari3/distilbert-base-uncased-finetuned-DIS-mlm5
himanimaheshwari3
2023-08-01T11:06:28Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-01T11:02:23Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-DIS-mlm5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-DIS-mlm5 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.4911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.036 | 1.0 | 2 | 4.3499 | | 0.1722 | 2.0 | 4 | 4.3545 | | 0.6087 | 3.0 | 6 | 5.8627 | | 0.2151 | 4.0 | 8 | 3.6960 | | 0.2115 | 5.0 | 10 | 3.2086 | | 0.3443 | 6.0 | 12 | 5.1042 | | 0.1082 | 7.0 | 14 | 4.0195 | | 0.5068 | 8.0 | 16 | 3.6664 | | 0.7362 | 9.0 | 18 | 4.3850 | | 0.4281 | 10.0 | 20 | 4.6974 | | 1.3107 | 11.0 | 22 | 4.3258 | | 1.4157 | 12.0 | 24 | 4.8907 | | 2.5918 | 13.0 | 26 | 4.6595 | | 2.577 | 14.0 | 28 | 4.3417 | | 1.6291 | 15.0 | 30 | 5.0013 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
embaas/sentence-transformers-gte-small
embaas
2023-08-01T11:04:18Z
40
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-01T11:04:14Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # embaas/sentence-transformers-gte-small This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('embaas/sentence-transformers-gte-small') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=embaas/sentence-transformers-gte-small) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Shishir1807/Indication_v3-1
Shishir1807
2023-08-01T11:03:29Z
141
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-01T11:02:51Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.29.2 pip install einops==0.6.1 pip install accelerate==0.19.0 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="Shishir1807/Indication_v3-1", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Shishir1807/Indication_v3-1", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "Shishir1807/Indication_v3-1", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Shishir1807/Indication_v3-1" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` GPTNeoXForCausalLM( (gpt_neox): GPTNeoXModel( (embed_in): Embedding(50304, 2560) (layers): ModuleList( (0-31): 32 x GPTNeoXLayer( (input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) (post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) (attention): GPTNeoXAttention( (rotary_emb): RotaryEmbedding() (query_key_value): Linear(in_features=2560, out_features=7680, bias=True) (dense): Linear(in_features=2560, out_features=2560, bias=True) ) (mlp): GPTNeoXMLP( (dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True) (dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True) (act): GELUActivation() ) ) ) (final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) ) (embed_out): Linear(in_features=2560, out_features=50304, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
embaas/sentence-transformers-gte-base
embaas
2023-08-01T11:02:10Z
67
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-01T11:02:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # embaas/sentence-transformers-gte-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('embaas/sentence-transformers-gte-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=embaas/sentence-transformers-gte-base) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
manojkumarvohra/llama2-7B-8bit-guanaco-pico-finetuned
manojkumarvohra
2023-08-01T10:59:42Z
75
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2023-08-01T10:51:08Z
This is a `llama-2-7b-hf` model fine-tuned using QLoRA (8-bit precision) on the [`manojkumarvohra/guanaco-pico-100-samples`] dataset.
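Since the card is a single sentence, a hedged loading sketch may help. It assumes the repository loads with the standard `transformers` API and that `bitsandbytes` is installed for the 8-bit path; the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "manojkumarvohra/llama2-7B-8bit-guanaco-pico-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# load_in_8bit requires the bitsandbytes package and a CUDA device.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)

inputs = tokenizer("What is QLoRA?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```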
heegyu/RedTulu-Uncensored-3B-0719
heegyu
2023-08-01T10:57:18Z
1,509
2
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-23T03:21:07Z
--- license: apache-2.0 language: - en --- Base Model: togethercomputer/RedPajama-INCITE-Base-3B-v1 Dataset: from https://github.com/allenai/open-instruct, uncensored using the code in ehartford/wizard_vicuna_70k_unfiltered Usage ``` ### Human: your instruction ### ASSISANT: the output is generated here and ends with <|endoftext|> ```
JinsooKim/CartPole
JinsooKim
2023-08-01T10:57:06Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T10:57:00Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 177.70 +/- 9.43 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
embaas/sentence-transformers-gte-large
embaas
2023-08-01T10:53:43Z
6
1
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-01T10:53:35Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # embaas/sentence-transformers-gte-large This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('embaas/sentence-transformers-gte-large') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=embaas/sentence-transformers-gte-large) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
wuxianchao/lazylora-7b-chathf
wuxianchao
2023-08-01T10:50:26Z
0
1
null
[ "arxiv:2305.14314", "arxiv:2106.09685", "arxiv:2110.07602", "arxiv:2104.08691", "arxiv:2303.16199", "license:llama2", "region:us" ]
null
2023-07-24T00:45:54Z
--- license: llama2 --- ## Lazy LoRA ### Benefits 0. using the updated [Meta's LLaMA-2 models](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). 1. support [4-bit qlora](https://arxiv.org/abs/2305.14314), extreme GPU memory and inference time saving; 2. comparable MMLU evaluation dataset results: | | eval | test | comp-eval | comp-test | |---------------|--------|--------|-----------|-----------| |llama2-7b-chat | 49.38% | 48.22% | | | |ckpt-1600 | 46.51% | 47.44% | -2.87% | -0.78% | |ckpt-1500 | 47.66% | 46.88% | -1.72% | -1.33% | llama2-7b-chat: '0ede8dd71e923db6258295621d817ca8714516d4' ### Introduction Determine the rank of LoRA layers by the singular values of pretrained weight matrices. Also, combines: 1. LoRA: [LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/abs/2106.09685) 2. Prefix Tuning: [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://aclanthology.org/2021.acl-long.3 53/), [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.or g/pdf/2110.07602.pdf) 3. Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691) 4. LLaMA adapter: [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention] (https://arxiv.org/abs/2303.16199) in one model. This allows you to perform LoRA (additional low rank adapters inserted to each linear layer), and prompt learning (additional virtual tokens attached to the input and to the attention layers acting as `past_key_values`) ## Usage: ```python import sys sys.path.insert(1, '/workspace/asr/peft/src') # TODO set this path to the lazy-lora source code path, # or you can install it from source code: # TODO, please install lazylora for usage: # git clone git@github.com:Xianchao-Wu/peft.git # cd peft # python setup.py install from transformers import (AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig) from peft import PeftModel, PeftConfig import os import torch #import ipdb; ipdb.set_trace() cache_dir="/workspace/asr/peft/qlora" # TODO set this cache_dir to the path where you # stored (or, want to store) llama2-7b-chat-hf model lazylora_dir=os.getcwd() # the path that contains 'adapter_config.json' # and 'adapter_model.bin' config = PeftConfig.from_pretrained(lazylora_dir) tokenizer = AutoTokenizer.from_pretrained( config.base_model_name_or_path, cache_dir=cache_dir, use_auth_token=True ) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type='nf4', bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained( config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto", cache_dir=cache_dir, use_auth_token=True ) #model.print_trainable_parameters() print(sum(p.numel() for p in model.parameters())) # 3,500,412,928 -> half-size of 7B due to 4-bit loading model = PeftModel.from_pretrained(model, lazylora_dir) print('after adding lazy lora parameters:') model.print_trainable_parameters() # trainable params: 0 || all params: 3,660,359,168 || trainable%: 0.0 ``` ## MMLU result: ### MMLU eval result: ```json {"mmlu_loss": 1.44828412985973, "mmlu_eval_accuracy_professional_accounting": 0.3548387096774194, "mmlu_eval_accuracy_high_school_physics": 0.35294117647058826, "mmlu_eval_accuracy_elementary_mathematics": 0.24390243902439024, "mmlu_eval_accuracy_high_school_statistics": 0.391304347826087, "mmlu_eval_accuracy_machine_learning": 0.36363636363636365, "mmlu_eval_accuracy_sociology": 
0.6363636363636364, "mmlu_eval_accuracy_human_sexuality": 0.4166666666666667, "mmlu_eval_accuracy_professional_medicine": 0.3870967741935484, "mmlu_eval_accuracy_computer_security": 0.2727272727272727, "mmlu_eval_accuracy_astronomy": 0.375, "mmlu_eval_accuracy_high_school_computer_science": 0.5555555555555556, "mmlu_eval_accuracy_high_school_world_history": 0.5384615384615384, "mmlu_eval_accuracy_high_school_psychology": 0.7166666666666667, "mmlu_eval_accuracy_professional_law": 0.3176470588235294, "mmlu_eval_accuracy_econometrics": 0.16666666666666666, "mmlu_eval_accuracy_high_school_chemistry": 0.2727272727272727, "mmlu_eval_accuracy_global_facts": 0.5, "mmlu_eval_accuracy_high_school_government_and_politics": 0.5238095238095238, "mmlu_eval_accuracy_electrical_engineering": 0.25, "mmlu_eval_accuracy_college_chemistry": 0.375, "mmlu_eval_accuracy_high_school_biology": 0.4375, "mmlu_eval_accuracy_high_school_microeconomics": 0.4230769230769231, "mmlu_eval_accuracy_public_relations": 0.5833333333333334, "mmlu_eval_accuracy_high_school_macroeconomics": 0.4883720930232558, "mmlu_eval_accuracy_world_religions": 0.7368421052631579, "mmlu_eval_accuracy_medical_genetics": 0.8181818181818182, "mmlu_eval_accuracy_college_biology": 0.375, "mmlu_eval_accuracy_jurisprudence": 0.36363636363636365, "mmlu_eval_accuracy_college_mathematics": 0.36363636363636365, "mmlu_eval_accuracy_miscellaneous": 0.627906976744186, "mmlu_eval_accuracy_high_school_us_history": 0.8181818181818182, "mmlu_eval_accuracy_virology": 0.4444444444444444, "mmlu_eval_accuracy_anatomy": 0.5, "mmlu_eval_accuracy_college_medicine": 0.3181818181818182, "mmlu_eval_accuracy_human_aging": 0.7391304347826086, "mmlu_eval_accuracy_college_physics": 0.45454545454545453, "mmlu_eval_accuracy_philosophy": 0.38235294117647056, "mmlu_eval_accuracy_clinical_knowledge": 0.4482758620689655, "mmlu_eval_accuracy_formal_logic": 0.21428571428571427, "mmlu_eval_accuracy_nutrition": 0.6363636363636364, "mmlu_eval_accuracy_high_school_mathematics": 0.20689655172413793, "mmlu_eval_accuracy_logical_fallacies": 0.6666666666666666, "mmlu_eval_accuracy_professional_psychology": 0.42028985507246375, "mmlu_eval_accuracy_prehistory": 0.5428571428571428, "mmlu_eval_accuracy_high_school_geography": 0.7272727272727273, "mmlu_eval_accuracy_management": 0.6363636363636364, "mmlu_eval_accuracy_marketing": 0.76, "mmlu_eval_accuracy_international_law": 0.9230769230769231, "mmlu_eval_accuracy_us_foreign_policy": 0.7272727272727273, "mmlu_eval_accuracy_moral_scenarios": 0.32, "mmlu_eval_accuracy_high_school_european_history": 0.5, "mmlu_eval_accuracy_business_ethics": 0.45454545454545453, "mmlu_eval_accuracy_moral_disputes": 0.39473684210526316, "mmlu_eval_accuracy_conceptual_physics": 0.38461538461538464, "mmlu_eval_accuracy_security_studies": 0.5925925925925926, "mmlu_eval_accuracy_abstract_algebra": 0.36363636363636365, "mmlu_eval_accuracy_college_computer_science": 0.36363636363636365, "mmlu_eval_accuracy": 0.4766441930115949} ``` ### MMLU test result: ```json {"mmlu_loss": 1.4452685356679218, "mmlu_test_accuracy_moral_scenarios": 0.23575418994413408, "mmlu_test_accuracy_security_studies": 0.5020408163265306, "mmlu_test_accuracy_astronomy": 0.4934210526315789, "mmlu_test_accuracy_medical_genetics": 0.52, "mmlu_test_accuracy_logical_fallacies": 0.5521472392638037, "mmlu_test_accuracy_professional_psychology": 0.4444444444444444, "mmlu_test_accuracy_high_school_psychology": 0.6110091743119266, "mmlu_test_accuracy_high_school_us_history": 0.6372549019607843, 
"mmlu_test_accuracy_high_school_physics": 0.33112582781456956, "mmlu_test_accuracy_prehistory": 0.5308641975308642, "mmlu_test_accuracy_human_sexuality": 0.549618320610687, "mmlu_test_accuracy_management": 0.6213592233009708, "mmlu_test_accuracy_international_law": 0.6363636363636364, "mmlu_test_accuracy_moral_disputes": 0.49421965317919075, "mmlu_test_accuracy_conceptual_physics": 0.4127659574468085, "mmlu_test_accuracy_econometrics": 0.3508771929824561, "mmlu_test_accuracy_college_medicine": 0.3815028901734104, "mmlu_test_accuracy_high_school_biology": 0.5064516129032258, "mmlu_test_accuracy_high_school_statistics": 0.27314814814814814, "mmlu_test_accuracy_high_school_macroeconomics": 0.43333333333333335, "mmlu_test_accuracy_college_mathematics": 0.28, "mmlu_test_accuracy_elementary_mathematics": 0.30687830687830686, "mmlu_test_accuracy_public_relations": 0.509090909090909, "mmlu_test_accuracy_high_school_european_history": 0.5515151515151515, "mmlu_test_accuracy_human_aging": 0.5381165919282511, "mmlu_test_accuracy_high_school_geography": 0.5555555555555556, "mmlu_test_accuracy_formal_logic": 0.25396825396825395, "mmlu_test_accuracy_miscellaneous": 0.665389527458493, "mmlu_test_accuracy_high_school_computer_science": 0.4, "mmlu_test_accuracy_global_facts": 0.33, "mmlu_test_accuracy_world_religions": 0.6666666666666666, "mmlu_test_accuracy_machine_learning": 0.33035714285714285, "mmlu_test_accuracy_sociology": 0.6169154228855721, "mmlu_test_accuracy_clinical_knowledge": 0.49433962264150944, "mmlu_test_accuracy_virology": 0.4397590361445783, "mmlu_test_accuracy_high_school_government_and_politics": 0.6839378238341969, "mmlu_test_accuracy_high_school_world_history": 0.6329113924050633, "mmlu_test_accuracy_college_biology": 0.5138888888888888, "mmlu_test_accuracy_philosophy": 0.5627009646302251, "mmlu_test_accuracy_college_physics": 0.2549019607843137, "mmlu_test_accuracy_college_computer_science": 0.34, "mmlu_test_accuracy_high_school_chemistry": 0.3793103448275862, "mmlu_test_accuracy_nutrition": 0.5163398692810458, "mmlu_test_accuracy_professional_accounting": 0.35106382978723405, "mmlu_test_accuracy_jurisprudence": 0.5925925925925926, "mmlu_test_accuracy_high_school_mathematics": 0.25925925925925924, "mmlu_test_accuracy_marketing": 0.6923076923076923, "mmlu_test_accuracy_business_ethics": 0.48, "mmlu_test_accuracy_high_school_microeconomics": 0.4495798319327731, "mmlu_test_accuracy_college_chemistry": 0.3, "mmlu_test_accuracy_us_foreign_policy": 0.72, "mmlu_test_accuracy_computer_security": 0.6, "mmlu_test_accuracy_anatomy": 0.4740740740740741, "mmlu_test_accuracy_professional_law": 0.3220338983050847, "mmlu_test_accuracy_abstract_algebra": 0.27, "mmlu_test_accuracy_electrical_engineering": 0.4827586206896552, "mmlu_test_accuracy_professional_medicine": 0.3897058823529412, "mmlu_test_accuracy": 0.46883545484585126} ``` ## License and intended use This lazy-lora adapter is based on [Meta's LLaMA-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), and using the [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1), following [Guanaco](https://huggingface.co/timdettmers/guanaco-65b). lazy lora adapter weights are available under LLAMA-2 license. Note the use of the lazy lora adapter weights, requires access to the LLaMA model weighs. Lazy lora is based on LLaMA and therefore should be used according to the LLaMA license. 
## Risks and Biases The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
ztrip/autotrain-testtranste-79085141139
ztrip
2023-08-01T10:45:28Z
106
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "autotrain", "translation", "zh", "en", "dataset:ztrip/autotrain-data-testtranste", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-08-01T10:45:13Z
--- tags: - autotrain - translation language: - zh - en datasets: - ztrip/autotrain-data-testtranste co2_eq_emissions: emissions: 0.0013030083852032 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 79085141139 - CO2 Emissions (in grams): 0.0013 ## Validation Metrics - Loss: 6.040 - SacreBLEU: 0.000 - Gen len: 3.000
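A minimal inference sketch, assuming the checkpoint works with the standard `transformers` translation pipeline; the translation direction (zh→en or en→zh) is not stated in the card, so the example input is only illustrative:

```python
# Hedged sketch: assumes a standard MarianMT checkpoint usable via the pipeline API.
from transformers import pipeline

translator = pipeline("translation", model="ztrip/autotrain-testtranste-79085141139")
print(translator("你好,世界"))  # translation direction is not documented in the card
```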
nokotin/taxi
nokotin
2023-08-01T10:44:58Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T10:14:26Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="nokotin/taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
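The `load_from_hub` helper above comes from the Deep RL Course notebooks and is not defined in this card; a minimal, hedged equivalent (assuming the repo stores the Q-table as a pickled dict with keys such as `"env_id"`, as the snippet above suggests) could look like this:

```python
# Hedged sketch: one way to load the pickled Q-table without the course helper.
import pickle
import gym  # or gymnasium, depending on your setup
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="nokotin/taxi", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed: a dict holding the Q-table plus metadata like "env_id"

env = gym.make(model["env_id"])
```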
giuseppemassafra/a2c-AntBulletEnv-v0
giuseppemassafra
2023-08-01T10:41:50Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T10:40:43Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1402.29 +/- 91.75 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
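A minimal loading sketch for the TODO above; the checkpoint filename inside the repository is an assumption and should be checked against the repo's file list:

```python
# Hedged sketch: the filename is assumed; verify it in the repository's "Files" tab.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="giuseppemassafra/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)
# Evaluating the agent additionally requires pybullet_envs to register AntBulletEnv-v0.
```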
seanghay/khmer-sentence-segmentation
seanghay
2023-08-01T10:39:14Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-01T10:11:04Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: khmer-sentence-segmentation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # khmer-sentence-segmentation This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1784 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.9266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | 0.1963 | 1.0 | 1390 | 0.1842 | 0.0 | 0.0 | 0.0 | 0.9222 | | 0.1749 | 2.0 | 2780 | 0.1816 | 0.0 | 0.0 | 0.0 | 0.9251 | | 0.1629 | 3.0 | 4170 | 0.1775 | 0.0 | 0.0 | 0.0 | 0.9264 | | 0.1521 | 4.0 | 5560 | 0.1784 | 0.0 | 0.0 | 0.0 | 0.9266 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
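A minimal inference sketch, assuming the fine-tuned checkpoint is used through the standard token-classification pipeline; the label names and the exact segmentation post-processing are not documented in the card, so inspect `model.config.id2label` before relying on the output:

```python
# Hedged sketch: aggregation strategy and example text are illustrative.
from transformers import pipeline

segmenter = pipeline(
    "token-classification",
    model="seanghay/khmer-sentence-segmentation",
    aggregation_strategy="simple",
)
print(segmenter("សួស្តីពិភពលោក នេះជាប្រយោគមួយទៀត"))
```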
TurkuNLP/megatron-bert-finnish-cased-1.3B
TurkuNLP
2023-08-01T10:27:54Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "megatron-bert", "fi", "arxiv:1909.08053", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-05-15T09:19:31Z
--- license: apache-2.0 language: - fi --- # Model Description 1.3 billion parameter [Megatron](https://aps.arxiv.org/pdf/1909.08053.pdf) version of [FinBERT](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1). # Evaluation results ## Document classification Results have been measured on 10% of the datasets ### [Ylilauta](https://github.com/spyysalo/ylilauta-corpus) | Model | Accuracy | |--------------------|----------| | **Megatron-finbert-1.3B-cased** | **82.64%** | | FinBERT-cased | 79.18% | ### [Yle](https://github.com/spyysalo/yle-corpus/tree/master) | Model | Accuracy | |--------------------|----------| | **Megatron-finbert-1.3B-cased** | **82.64%** | | FinBERT-cased | 79.18% | More documentation coming soon...
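Until fuller documentation arrives, a minimal usage sketch: it assumes the repository contains Hugging Face-format `megatron-bert` weights loadable with the Auto classes; whether a masked-LM head is included is not stated in the card, so the fill-mask variant below is an assumption:

```python
# Hedged sketch: assumes HF-format MegatronBERT weights; the MLM head is an assumption.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "TurkuNLP/megatron-bert-finnish-cased-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

inputs = tokenizer("Helsinki on Suomen [MASK].", return_tensors="pt")
logits = model(**inputs).logits
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode([logits[0, mask_pos].argmax().item()]))
```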
TheBloke/BigTranslate-13B-GGML
TheBloke
2023-08-01T10:26:23Z
0
27
null
[ "arxiv:2305.18098", "license:other", "region:us" ]
null
2023-06-18T15:03:18Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # James WYang's BigTrans GGML These files are GGML format model files for [James WYang's BigTrans](https://huggingface.co/James-WYang/BigTrans). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [KoboldCpp](https://github.com/LostRuins/koboldcpp) * [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui) * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) * [ctransformers](https://github.com/marella/ctransformers) ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/BigTranslate-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/BigTranslate-13B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/James-WYang/BigTrans) ## Prompt format: Alpaca ``` ### Instruction: please translate the following into French: Large language models are the future ### Response: ``` <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`. These are guaranteed to be compatbile with any UIs, tools and libraries released since late May. ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`. They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | bigtrans-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.64 GB | 8.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | bigtrans-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 7.07 GB | 9.57 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | bigtrans-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.45 GB | 8.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | bigtrans-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.80 GB | 8.30 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | bigtrans-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.45 GB | 9.95 GB | Original llama.cpp quant method, 4-bit. | | bigtrans-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.28 GB | 10.78 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | bigtrans-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 8.02 GB | 10.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | bigtrans-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.52 GB | 10.02 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | bigtrans-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 9.10 GB | 11.60 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | bigtrans-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.93 GB | 12.43 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | bigtrans-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | bigtrans-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 9.14 GB | 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | bigtrans-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.86 GB | 13.36 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | bigtrans-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 14.07 GB | 16.57 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 10 -ngl 32 -m bigtrans-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: please translate the following into French: llamas are cute and fluffy and make excellent guards.\n### Response:" ``` Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. Change `-ngl 32` to the number of layers to offload to GPU. 
Remove it if you don't have GPU acceleration. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: James WYang's BigTrans # BigTrans: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages Large language models (LLMs) demonstrate promising translation performance among various natural languages. However, many LLMs especially the open-sourced ones, such as BLOOM and LLaMA, are English-dominant and support only dozens of natural languages, making the potential of LLMs on language translation less explored. In this work, we present BigTrans which adapts LLaMA that covers only 20 languages and enhances it with multilingual translation capability on more than 100 languages. BigTrans is built upon LLaMA-13B and it is optimized in three steps. First, we continue training LLaMA with massive Chinese monolingual data. Second, we continue training the model with a large-scale parallel dataset that covers 102 natural languages. Third, we instruct-tune the foundation model with multilingual translation instructions, leading to our BigTrans model. The preliminary experiments on multilingual translation show that BigTrans performs comparably with ChatGPT and Google Translate in many languages and even outperforms ChatGPT in 8 language pairs. We release the BigTrans model and hope it can advance the research progress. **More Details can be found at https://github.com/ZNLP/BigTrans and https://arxiv.org/abs/2305.18098**
caseywilliams27/Ever-Growing-Demand-for-Pink-Diamonds
caseywilliams27
2023-08-01T10:24:58Z
0
0
null
[ "region:us" ]
null
2023-08-01T10:17:27Z
--- metrics: - accuracy --- --- license: openrail language: - ja - fr - it - de - en tags: - Pink Diamonds - <strong>The Ever-Growing Demand for Pink Diamonds: A Sparkling Investment Trend</strong> ---<p>Pink diamonds are a captivating gemstone rarity that has captured the attention of both investors and collectors. These magnificent colored diamonds have come to represent exclusivity, luxury, and style. Because of its distinct beauty and reputation as a wise investment choice, the demand for pink diamonds has increased significantly in recent years. The dynamics of the diamond market, the factors behind the rise in demand for pink diamonds, and the attractiveness of diamonds as alternative asset classes are all discussed in this article.</p> <p><strong>1.Rarity and restricted Supply:</strong> The rarity and restricted supply of pink diamonds are at the root of their attractiveness. Pink diamonds are extremely rare in nature and make up a very small portion of the world's total production of diamonds. The majority of these diamonds were formerly obtained from the famous Argyle Diamond Mine in Australia, which stopped operating in 2020. Since there is no new supply of these alluring stones, their rarity has boosted their attractiveness, driving up demand from investors and collectors looking for a one-of-a-kind, limited-edition gem.</p> <p><strong>2.Symbol of Elegance and Romance:</strong> Pink diamonds are a popular choice for engagement rings and other high-end jewelry because they represent refinement and romanticism. Because of their light and delicate colour, which evokes feelings of love and generosity, they are a sentimental and meaningful gift option for key events. As a result, buyers looking to commemorate life's milestones with a touch of elegance and emotion have seen a significant surge in demand for <a href="https://flawlessfinejewelry.com/lab-grown-pink-coloured-diamonds-search/"><strong>Lab-Grown Pink Diamond Engagement Rings.</strong></a></p> <p><strong>3.Celebrity Endorsement and Media Influence:</strong> Pink diamonds have seen a considerable increase in demand as a result of celebrity endorsements and notable media appearances. The perception of these superb stones is tainted by high-profile individuals wearing stunning pink diamond jewelry on red carpets and in publications, which inspires aspirational enthusiasm. Because of the increased perception in the media that pink diamonds are the height of exclusivity and elegance, buyers seeking to imitate their favorite celebrities are buying more of them.</p> <p><strong>4.Investment Potential and Inflation Protection:</strong> Pink diamonds have become a popular investment choice due to their potential price rise over time and as a hedge against inflation. Pink diamonds are the ideal alternative asset since they have a more steady value retention than traditional financial markets, which has attracted many investors. Since they are regarded as a tangible and transportable repository of wealth, their scarcity and limited supply add to their investment appeal. Pink diamonds may also act as a hedge against inflation, which adds to their allure for investors trying to diversify their holdings.</p> <p><strong>5.Rising Affluence in Emerging economies:</strong> The rise of affluence in emerging economies, particularly in China and India, has considerably contributed to the rising demand for luxury items such as pink diamonds. As these regions' middle and upper classes grow, so does their desire for unique and distinguished goods. 
Pink diamonds, with their mesmerizing beauty and financial possibilities, have piqued the interest of these new sectors of luxury consumers, fueling demand even further.</p> <strong>CONCLUSION:</strong> <p>Pink diamonds' increasing popularity reflects their timeless beauty and status as a valuable investment. Scarcity and rarity will continue to define these precious jewels, increasing their allure among collectors, investors, and luxury fans. However, like with any investment, due diligence and expert counsel are essential for successfully navigating the diamond market. Pink diamonds' fascination is unquestionable, whether for personal decoration or investment goals, making them a brilliant treasure in the world of precious stones.</p>
Anjoe/german-poetry-xlm-roberta
Anjoe
2023-08-01T10:24:05Z
19
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "xlm-roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-01T18:43:21Z
--- tags: - generated_from_trainer model-index: - name: german-poetry-xlm-roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german-poetry-xlm-roberta This model is a fine-tuned version of [xlm-roberta-large-finetuned-conll03-german](https://huggingface.co/xlm-roberta-large-finetuned-conll03-german) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.1984 | 1.0 | 2774 | 2.8507 | | 2.7279 | 2.0 | 5548 | 2.5140 | | 2.5236 | 3.0 | 8322 | 2.3583 | | 2.4115 | 4.0 | 11096 | 2.2371 | | 2.2953 | 5.0 | 13870 | 2.1773 | | 2.2179 | 6.0 | 16644 | 2.1217 | | 2.1652 | 7.0 | 19418 | 2.0483 | | 2.1023 | 8.0 | 22192 | 2.0003 | | 2.0532 | 9.0 | 24966 | 1.9961 | | 2.0153 | 10.0 | 27740 | 1.9626 | | 1.9657 | 11.0 | 30514 | 1.9310 | | 1.9456 | 12.0 | 33288 | 1.9065 | | 1.919 | 13.0 | 36062 | 1.8906 | | 1.9077 | 14.0 | 38836 | 1.8712 | | 1.9048 | 15.0 | 41610 | 1.8672 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Anjoe/german-poetry-gpt2
Anjoe
2023-08-01T10:23:56Z
167
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-28T21:11:02Z
--- license: mit tags: - generated_from_trainer model-index: - name: german-poetry-gpt2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german-poetry-gpt2 This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 3.8196 - eval_runtime: 43.8543 - eval_samples_per_second: 86.993 - eval_steps_per_second: 5.45 - epoch: 9.0 - step: 11520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 22 ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
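A minimal generation sketch using the standard text-generation pipeline; the German prompt and sampling settings are illustrative, not taken from the card:

```python
# Hedged sketch: prompt and sampling parameters are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="Anjoe/german-poetry-gpt2")
result = generator("Der Mond steigt über", max_new_tokens=60, do_sample=True, top_p=0.92)
print(result[0]["generated_text"])
```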
with-madrid/with-e5-small-v2
with-madrid
2023-08-01T10:06:39Z
7
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-24T13:01:11Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # with-madrid/with-e5-small-v2 This model is to be used for information retrieval for https://with-madrid.com/. This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('with-madrid/with-e5-small-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=with-madrid/with-e5-small-v2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 143 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 4.762918902353135e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 8, "weight_decay": 0.00936376631468652 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
nokotin/q-FrozenLake-v1-4x4-noSlippery
nokotin
2023-08-01T10:06:31Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T10:06:29Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="nokotin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
cmarkea/distilcamembert-base
cmarkea
2023-08-01T10:05:33Z
6,965
30
transformers
[ "transformers", "pytorch", "tf", "safetensors", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1910.01108", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: fr license: mit datasets: - oscar widget: - text: "J'aime lire les <mask> de SF." --- DistilCamemBERT =============== We present a distilled version of the well-known [CamemBERT](https://huggingface.co/camembert-base), a French RoBERTa model, called DistilCamemBERT. The aim of distillation is to drastically reduce the complexity of the model while preserving its performance. The proof of concept is shown in the [DistilBERT paper](https://arxiv.org/abs/1910.01108) and the training code is inspired by the code of [DistilBERT](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation). Loss function ------------- The training of the distilled model (student model) is designed to keep it as close as possible to the original model (teacher model). To achieve this, the loss function is composed of 3 parts: * DistilLoss: a distillation loss which measures the similarity between the probabilities at the outputs of the student and teacher models with a cross-entropy loss on the MLM task; * CosineLoss: a cosine embedding loss. This loss function is applied to the last hidden layers of the student and teacher models to guarantee collinearity between them; * MLMLoss: and finally a Masked Language Modeling (MLM) task loss to train the student model on the original task of the teacher model. The final loss function is a combination of these three loss functions. We use the following weighting: $$Loss = 0.5 \times DistilLoss + 0.3 \times CosineLoss + 0.2 \times MLMLoss$$ Dataset ------- To limit the bias between the student and teacher models, the dataset used for the DistilCamemBERT training is the same as the one used to train camembert-base: OSCAR. The French part of this dataset represents approximately 140 GB on a hard drive disk. Training -------- We pre-trained the model on an NVIDIA Titan RTX for 18 days. Evaluation results ------------------ | Dataset name | f1-score | | :----------: | :------: | | [FLUE](https://huggingface.co/datasets/flue) CLS | 83% | | [FLUE](https://huggingface.co/datasets/flue) PAWS-X | 77% | | [FLUE](https://huggingface.co/datasets/flue) XNLI | 77% | | [wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr) NER | 98% | How to use DistilCamemBERT -------------------------- Load DistilCamemBERT and its sub-word tokenizer: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("cmarkea/distilcamembert-base") model = AutoModel.from_pretrained("cmarkea/distilcamembert-base") model.eval() ...
``` Filling masks using pipeline : ```python from transformers import pipeline model_fill_mask = pipeline("fill-mask", model="cmarkea/distilcamembert-base", tokenizer="cmarkea/distilcamembert-base") results = model_fill_mask("Le camembert est <mask> :)") results [{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.3878222405910492, 'token': 7200}, {'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.06469205021858215, 'token': 2183}, {'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.04534877464175224, 'token': 1654}, {'sequence': '<s> Le camembert est succulent :)</s>', 'score': 0.04128391295671463, 'token': 26202}, {'sequence': '<s> Le camembert est magnifique :)</s>', 'score': 0.02425697259604931, 'token': 1509}] ``` Citation -------- ```bibtex @inproceedings{delestre:hal-03674695, TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}}, AUTHOR = {Delestre, Cyrile and Amar, Abibatou}, URL = {https://hal.archives-ouvertes.fr/hal-03674695}, BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}}, ADDRESS = {Vannes, France}, YEAR = {2022}, MONTH = Jul, KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation}, PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf}, HAL_ID = {hal-03674695}, HAL_VERSION = {v1}, } ```
ShahlaDnshi96/xlm-roberta-base-finetuned-digikala
ShahlaDnshi96
2023-08-01T09:51:50Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-01T09:46:10Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlm-roberta-base-finetuned-digikala results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-digikala This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.2 - Tokenizers 0.13.3
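A minimal inference sketch, assuming the checkpoint is used as a standard fill-mask model (XLM-RoBERTa uses the `<mask>` token); the Persian example sentence is only illustrative:

```python
# Hedged sketch: standard fill-mask usage; example text is illustrative.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ShahlaDnshi96/xlm-roberta-base-finetuned-digikala")
print(fill_mask("این گوشی <mask> بسیار خوبی دارد."))
```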
TencentARC/t2iadapter_sketch_sd15v2
TencentARC
2023-08-01T09:42:42Z
2,423
5
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
2023-07-14T19:01:49Z
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Sketch T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on sketches for the stable diffusion 1.5 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| 
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | An [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers controlnet_aux ``` 2. 
Run code: ```python import torch from PIL import Image from controlnet_aux import PidiNetDetector from diffusers import ( T2IAdapter, StableDiffusionAdapterPipeline ) image = Image.open('./images/sketch_in.png') processor = PidiNetDetector.from_pretrained('lllyasviel/Annotators') sketch_image = processor(image) sketch_image.save('./images/sketch.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_sketch_sd15v2", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" ) pipe.to('cuda') generator = torch.Generator().manual_seed(0) sketch_image_out = pipe(prompt="royal chamber with fancy bed", image=sketch_image, generator=generator).images[0] sketch_image_out.save('./images/sketch_image_out.png') ``` ![sketch_in](./images/sketch_in.png) ![sketch](./images/sketch.png) ![sketch_image_out](./images/sketch_image_out.png)
runningsnake/marian-finetuned-kde4-en-to-fr
runningsnake
2023-08-01T09:39:27Z
59
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-01T06:29:47Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_keras_callback model-index: - name: runningsnake/marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # runningsnake/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6852 - Validation Loss: 0.8040 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0614 | 0.8758 | 0 | | 0.7972 | 0.8199 | 1 | | 0.6852 | 0.8040 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.2 - Tokenizers 0.13.3
JamalHaneef/llama2-stride-demo
JamalHaneef
2023-08-01T09:39:26Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-01T09:39:18Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
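This card only documents the quantization settings; it does not show how to load the adapter. The snippet below is a rough sketch, not part of the original card: it assumes the adapter sits on a causal-LM base (as the repo name suggests), reads the base model id from the adapter's own `PeftConfig`, and mirrors the 4-bit NF4 settings listed above.

```python
# Hedged sketch (not from the card): load the LoRA adapter on top of its base model
# with a 4-bit NF4 config matching the values listed above. The base model id is
# read from the adapter's PeftConfig rather than assumed.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

adapter_id = "JamalHaneef/llama2-stride-demo"
peft_config = PeftConfig.from_pretrained(adapter_id)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,  # discovered from the adapter config
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)

model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```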
Rocinante/ppo-LunarLander-v2
Rocinante
2023-08-01T09:37:11Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T09:36:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.33 +/- 18.67 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
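The usage section above is still a placeholder. A minimal sketch of what it usually looks like with `huggingface_sb3` follows; the checkpoint filename is not stated in the card, so the one used here is an assumption and may need adjusting to the actual file in the repo.

```python
# Hedged sketch, not the author's code: download the PPO checkpoint from the Hub
# and evaluate it on LunarLander-v2. The filename "ppo-LunarLander-v2.zip" is assumed.
import gymnasium as gym  # requires gymnasium[box2d] for LunarLander-v2
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="Rocinante/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```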
michelecafagna26/clipcap-base-captioning-ft-hl-actions
michelecafagna26
2023-08-01T09:25:24Z
0
2
transformers
[ "transformers", "image-captioning", "en", "dataset:michelecafagna26/hl", "arxiv:2111.09734", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-07-24T14:10:38Z
--- license: apache-2.0 tags: - image-captioning languages: - en datasets: - michelecafagna26/hl language: - en metrics: - sacrebleu - rouge library_name: transformers --- ## ClipCap fine-tuned for Action Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level action descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pretrained on COCO - Trained for 10 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |---------|------------|--------| | 176.54 | 27.37 | 39.15 | ## Demo [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Rw9_oNNfP2QsIpekmJhRHAXv_6MX-0ur?usp=sharing) ## Installation ```bash pip install git+https://github.com/michelecafagna26/CLIPCap.git ``` ## Download the model ```bash git lfs install # if not installed git clone https://huggingface.co/michelecafagna26/clipcap-base-captioning-ft-hl-actions ``` ## Model in Action 🚀 ```python from clipcap import ClipCaptionModel from transformers import ( GPT2Tokenizer, GPT2LMHeadModel, ) import torch import clip import requests from PIL import Image model_path = "clipcap-base-captioning-ft-hl-actions/pytorch_model.pt" # change accordingly # load clip device = "cuda" if torch.cuda.is_available() else "cpu" clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") prefix_length = 10 # load ClipCap model = ClipCaptionModel(prefix_length, tokenizer=tokenizer) model.from_pretrained(model_path) model = model.eval() model = model.to(device) # load the image img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # extract the prefix image = preprocess(raw_image).unsqueeze(0).to(device) with torch.no_grad(): prefix = clip_model.encode_image(image).to( device, dtype=torch.float32 ) prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) # generate the caption model.generate_beam(embed=prefix_embed)[0] # >> "she is posing for a photo." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
michelecafagna26/blip-base-captioning-ft-hl-actions
michelecafagna26
2023-08-01T09:25:07Z
110
0
transformers
[ "transformers", "pytorch", "safetensors", "blip", "image-text-to-text", "image-captioning", "image-to-text", "en", "dataset:michelecafagna26/hl", "arxiv:2201.12086", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2023-07-22T09:40:35Z
--- license: apache-2.0 tags: - image-captioning languages: - en pipeline_tag: image-to-text datasets: - michelecafagna26/hl language: - en metrics: - sacrebleu - rouge library_name: transformers --- ## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Actions [BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **action generation of images** ## Model fine-tuning 🏋️‍ - Trained for 6 epochs - lr: 5e−5, - Adam optimizer, - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |--------|------------|--------| | 123.07 | 17.16 | 32.16 | ## Model in Action 🚀 ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-actions") model = BlipForConditionalGeneration.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-actions").to("cuda") img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda") pixel_values = inputs.pixel_values generated_ids = model.generate(pixel_values=pixel_values, max_length=50, do_sample=True, top_k=120, top_p=0.9, early_stopping=True, num_return_sequences=1) processor.batch_decode(generated_ids, skip_special_tokens=True) >>> "she is holding an umbrella." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
michelecafagna26/blip-base-captioning-ft-hl-narratives
michelecafagna26
2023-08-01T09:24:59Z
116
1
transformers
[ "transformers", "pytorch", "safetensors", "blip", "image-text-to-text", "image-captioning", "image-to-text", "en", "dataset:michelecafagna26/hl-narratives", "arxiv:2201.12086", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2023-07-24T06:41:58Z
--- license: apache-2.0 tags: - image-captioning languages: - en pipeline_tag: image-to-text datasets: - michelecafagna26/hl-narratives language: - en metrics: - sacrebleu - rouge library_name: transformers --- ## BLIP-base fine-tuned for Narrative Image Captioning [BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation** ## Model fine-tuning 🏋️‍ - Trained for 3 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |--------|------------|--------| | 79.39 | 11.70 | 26.17 | ## Model in Action 🚀 ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-narratives") model = BlipForConditionalGeneration.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-narratives").to("cuda") img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda") pixel_values = inputs.pixel_values generated_ids = model.generate(pixel_values=pixel_values, max_length=50, do_sample=True, top_k=120, top_p=0.9, early_stopping=True, num_return_sequences=1) processor.batch_decode(generated_ids, skip_special_tokens=True) >>> "she is holding an umbrella near a lake and is on vacation." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
michelecafagna26/clipcap-base-captioning-ft-hl-narratives
michelecafagna26
2023-08-01T09:24:52Z
0
1
transformers
[ "transformers", "image-captioning", "en", "dataset:michelecafagna26/hl-narratives", "arxiv:2111.09734", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-07-24T10:18:22Z
--- license: apache-2.0 tags: - image-captioning languages: - en datasets: - michelecafagna26/hl-narratives language: - en metrics: - sacrebleu - rouge library_name: transformers --- ## ClipCap fine-tuned for Narrative Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pretrained on COCO - Trained for 3 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |--------|------------|--------| | 63.91 | 8.15 | 24.53 | ## Demo [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1xcaJOxaAp8TRd8a6x1XnAptVjHQRv3Zj?usp=sharing) ## Installation ```bash pip install git+https://github.com/michelecafagna26/CLIPCap.git ``` ## Download the model ```bash git lfs install # if not installed git clone https://huggingface.co/michelecafagna26/clipcap-base-captioning-ft-hl-narratives ``` ## Model in Action 🚀 ```python from clipcap import ClipCaptionModel from transformers import ( GPT2Tokenizer, GPT2LMHeadModel, ) import torch import clip import requests from PIL import Image model_path = "clipcap-base-captioning-ft-hl-narratives/pytorch_model.pt" # change accordingly # load clip device = "cuda" if torch.cuda.is_available() else "cpu" clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") prefix_length = 10 # load ClipCap model = ClipCaptionModel(prefix_length, tokenizer=tokenizer) model.from_pretrained(model_path) model = model.eval() model = model.to(device) # load the image img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl-narratives/--/default/train/3/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # extract the prefix image = preprocess(raw_image).unsqueeze(0).to(device) with torch.no_grad(): prefix = clip_model.encode_image(image).to( device, dtype=torch.float32 ) prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) # generate the caption model.generate_beam(embed=prefix_embed)[0] # >> "He is riding a skateboard in a skate park, he wants to skate." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
michelecafagna26/clipcap-base-captioning-ft-hl-scenes
michelecafagna26
2023-08-01T09:24:41Z
0
1
transformers
[ "transformers", "image-captioning", "en", "dataset:michelecafagna26/hl", "arxiv:2111.09734", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-07-24T14:23:24Z
--- license: apache-2.0 tags: - image-captioning languages: - en datasets: - michelecafagna26/hl language: - en metrics: - sacrebleu - rouge library_name: transformers --- ## ClipCap fine-tuned for Scene Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level scene descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pretrained on COCO - Trained for 9 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |---------|------------|--------| | 145.93 | 36.73 | 42.83 | ## Demo [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1redkE4W2GusU_GVDuY_R2PXbh9ZDuTqf?usp=sharing) ## Installation ```bash pip install git+https://github.com/michelecafagna26/CLIPCap.git ``` ## Download the model ```bash git lfs install # if not installed git clone https://huggingface.co/michelecafagna26/clipcap-base-captioning-ft-hl-scenes ``` ## Model in Action 🚀 ```python from clipcap import ClipCaptionModel from transformers import ( GPT2Tokenizer, GPT2LMHeadModel, ) import torch import clip import requests from PIL import Image model_path = "clipcap-base-captioning-ft-hl-scenes/pytorch_model.pt" # change accordingly # load clip device = "cuda" if torch.cuda.is_available() else "cpu" clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") prefix_length = 10 # load ClipCap model = ClipCaptionModel(prefix_length, tokenizer=tokenizer) model.from_pretrained(model_path) model = model.eval() model = model.to(device) # load the image img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # extract the prefix image = preprocess(raw_image).unsqueeze(0).to(device) with torch.no_grad(): prefix = clip_model.encode_image(image).to( device, dtype=torch.float32 ) prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) # generate the caption model.generate_beam(embed=prefix_embed)[0] # >> "the picture is taken on the beach." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
michelecafagna26/git-base-captioning-ft-hl-narratives
michelecafagna26
2023-08-01T09:24:11Z
105
0
transformers
[ "transformers", "pytorch", "safetensors", "git", "image-text-to-text", "image-captioning", "image-to-text", "en", "dataset:michelecafagna26/hl-narratives", "arxiv:2205.14100", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2023-07-24T06:49:21Z
--- license: apache-2.0 tags: - image-captioning languages: - en pipeline_tag: image-to-text datasets: - michelecafagna26/hl-narratives language: - en metrics: - sacrebleu - rouge library_name: transformers --- ## GIT-base fine-tuned for Narrative Image Captioning [GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation** ## Model fine-tuning 🏋️‍ - Trained for 3 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |--------|------------|--------| | 75.78 | 11.11 | 27.61 | ## Model in Action 🚀 ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForCausalLM processor = AutoProcessor.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-narratives") model = AutoModelForCausalLM.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-narratives").to("cuda") img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda") pixel_values = inputs.pixel_values generated_ids = model.generate(pixel_values=pixel_values, max_length=50, do_sample=True, top_k=120, top_p=0.9, early_stopping=True, num_return_sequences=1) processor.batch_decode(generated_ids, skip_special_tokens=True) >>> "she is posing for a photo on the beach, she wants to post on her social media." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
ramgiitk/bert-finetuned-brand
ramgiitk
2023-08-01T09:23:30Z
75
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-01T06:52:52Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: bert-finetuned-brand results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-brand This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results ### Framework versions - Transformers 4.27.4 - TensorFlow 2.9.1 - Datasets 2.11.0 - Tokenizers 0.13.3
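The card gives no usage snippet. A hedged sketch for querying the model is shown below; since the repo ships TensorFlow weights (per the tags), the pipeline is pinned to the TF framework, and the example sentence is invented because the fine-tuning corpus is not described.

```python
# Illustrative sketch only (not from the card): masked-token prediction with the
# TensorFlow checkpoint. Requires TensorFlow to be installed.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="ramgiitk/bert-finetuned-brand",
    framework="tf",  # the repo only ships TF weights
)

# The prompt below is a made-up example; the fine-tuning data is not documented.
for prediction in fill_mask("I always buy [MASK] sneakers."):
    print(prediction["token_str"], round(prediction["score"], 3))
```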
usman0007/stablediffusion
usman0007
2023-08-01T09:22:17Z
3
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-01T09:17:26Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### StableDiffusion Dreambooth model trained by usman0007 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
KnutJaegersberg/claim_extraction_classifier
KnutJaegersberg
2023-08-01T09:15:22Z
138
0
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "deberta", "dataset:KnutJaegersberg/FEVER_claim_extraction", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-16T18:28:10Z
--- license: mit pipeline_tag: text-classification tags: - deberta datasets: - KnutJaegersberg/FEVER_claim_extraction --- deberta-v3-large trained for one epoch (probably underfitted) on a dataset that combines FEVER data with externally sourced non-claims. Label 0: no claim Label 1: claim Paper of the data: "Claim extraction from text using transfer learning" - by Acharya Ashish Prabhakar, Salar Mohtaj, Sebastian Möller https://aclanthology.org/2020.icon-main.39/ Performance on held-out data: Accuracy 0.8128525 F1_Score 0.854962 MCC 0.6173648 Despite the short training, the model is probably already usable.
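A short usage sketch, not part of the original card, using the standard `text-classification` pipeline; the example sentences are invented, and the labels are interpreted with the 0 = no claim / 1 = claim mapping stated above.

```python
# Hedged sketch: binary claim detection with the mapping given in the card
# (label 0 = no claim, label 1 = claim). Example inputs are invented.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="KnutJaegersberg/claim_extraction_classifier",
)

sentences = [
    "The Eiffel Tower was completed in 1889.",  # likely a claim
    "What a lovely morning!",                   # likely not a claim
]
for sentence, result in zip(sentences, classifier(sentences)):
    print(sentence, "->", result["label"], round(result["score"], 3))
```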
EranML/ppo-Huggy
EranML
2023-08-01T09:00:36Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-08-01T09:00:30Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: EranML/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ram25/flan-t5-xl
ram25
2023-08-01T08:56:53Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2023-08-01T08:56:53Z
--- license: bigcode-openrail-m ---
jkhan447/HateXplain-DS-labeled
jkhan447
2023-08-01T08:55:00Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-27T06:17:30Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: HateXplain-DS-labeled results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HateXplain-DS-labeled This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2386 - Accuracy: 0.6386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
vesteinn/vit-mae-cub
vesteinn
2023-08-01T08:28:42Z
203
0
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-24T13:12:26Z
Note that this model does not work directly with HF, a modification that does mean pooling before the layernorm and classification head is needed. ```python from transformers import ( ViTForImageClassification, pipeline, AutoImageProcessor, ViTConfig, ViTModel, ) from transformers.modeling_outputs import ( ImageClassifierOutput, BaseModelOutputWithPooling, ) from PIL import Image import torch from torch import nn from typing import Optional, Union, Tuple class CustomViTModel(ViTModel): def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPooling]: r""" bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*): Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). """ output_attentions = ( output_attentions if output_attentions is not None else self.config.output_attentions ) output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = ( return_dict if return_dict is not None else self.config.use_return_dict ) if pixel_values is None: raise ValueError("You have to specify pixel_values") # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) # TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?) 
expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype if pixel_values.dtype != expected_dtype: pixel_values = pixel_values.to(expected_dtype) embedding_output = self.embeddings( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding, ) encoder_outputs = self.encoder( embedding_output, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = encoder_outputs[0] sequence_output = sequence_output[:, 1:, :].mean(dim=1) sequence_output = self.layernorm(sequence_output) pooled_output = ( self.pooler(sequence_output) if self.pooler is not None else None ) if not return_dict: head_outputs = ( (sequence_output, pooled_output) if pooled_output is not None else (sequence_output,) ) return head_outputs + encoder_outputs[1:] return BaseModelOutputWithPooling( last_hidden_state=sequence_output, pooler_output=pooled_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) class CustomViTForImageClassification(ViTForImageClassification): def __init__(self, config: ViTConfig) -> None: super().__init__(config) self.num_labels = config.num_labels self.vit = CustomViTModel(config, add_pooling_layer=False) # Classifier head self.classifier = ( nn.Linear(config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() ) # Initialize weights and apply final processing self.post_init() def forward( self, pixel_values: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[tuple, ImageClassifierOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the image classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ return_dict = ( return_dict if return_dict is not None else self.config.use_return_dict ) outputs = self.vit( pixel_values, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, return_dict=return_dict, ) sequence_output = outputs[0] logits = self.classifier(sequence_output) loss = None return ImageClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) model = CustomViTForImageClassification.from_pretrained("vesteinn/vit-mae-cub") image_processor = AutoImageProcessor.from_pretrained("vesteinn/vit-mae-cub") classifier = pipeline( "image-classification", model=model, image_processor=image_processor ) ```
mundepankaj/bert-finetuned-ner
mundepankaj
2023-08-01T08:28:34Z
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-01T07:03:39Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9330467845924947 - name: Recall type: recall value: 0.9498485358465163 - name: F1 type: f1 value: 0.9413726961888084 - name: Accuracy type: accuracy value: 0.9865926885265203 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0567 - Precision: 0.9330 - Recall: 0.9498 - F1: 0.9414 - Accuracy: 0.9866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0786 | 1.0 | 1756 | 0.0779 | 0.9090 | 0.9334 | 0.9210 | 0.9798 | | 0.0408 | 2.0 | 3512 | 0.0584 | 0.9288 | 0.9467 | 0.9377 | 0.9855 | | 0.0258 | 3.0 | 5268 | 0.0567 | 0.9330 | 0.9498 | 0.9414 | 0.9866 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
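The card reports CoNLL-2003 metrics but no inference example. A minimal sketch (not from the card) with the token-classification pipeline could look like this; the input sentence is purely illustrative.

```python
# Hedged sketch: entity extraction with the CoNLL-2003 fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mundepankaj/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# Illustrative input, not taken from the training data.
for entity in ner("Angela Merkel visited the Siemens plant in Munich."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```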
hey123889/llama2-qlora-finetunined-french
hey123889
2023-08-01T08:26:46Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-01T08:26:28Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
adi0308/Colour-Extract
adi0308
2023-08-01T08:16:50Z
0
0
null
[ "license:unknown", "region:us" ]
null
2023-08-01T08:14:19Z
--- license: unknown language: en pipeline_tag: image-classification task: Image Color Extraction --- # Image Color Extraction Model This model performs color extraction from an image using KMeans clustering. Given an input image, the model identifies the dominant colors present in the image and returns their HEX codes, along with the color closest to white. Model Details: - Model Type: Color Extraction - Model Name: ColorExtractionPipeline - Architecture: KMeans Clustering - Compatible Transformers Library Version: >=4.11.0 Input Format: - The model takes an image in the form of a PIL (Python Imaging Library) image. Output Format: - A list of HEX codes representing the dominant colors in the image. - The HEX code of the color closest to white in the image. Usage: ```python from transformers import pipeline from PIL import Image # Load the color extraction pipeline color_extraction = pipeline("color-extraction") # Load an image using PIL (Python Imaging Library) image = Image.open("path/to/your/image.jpg") # Perform color extraction on the image colors, closest_to_white = color_extraction(image) # Print the results print("Dominant Colors:", colors) print("Color Closest to White:", closest_to_white) ```
Job6742/t5-small-finetuned-wikisql
Job6742
2023-08-01T08:13:32Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wikisql", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-01T06:06:15Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer datasets: - wikisql model-index: - name: t5-small-finetuned-wikisql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikisql This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset. It achieves the following results on the evaluation set: - Loss: 0.2547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.2875 | 1.0 | 3523 | 0.2645 | | 0.286 | 2.0 | 7046 | 0.2594 | | 0.2714 | 3.0 | 10569 | 0.2567 | | 0.2667 | 4.0 | 14092 | 0.2550 | | 0.268 | 5.0 | 17615 | 0.2547 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
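The card does not document the input format used during fine-tuning. The sketch below is therefore an assumption: the `translate English to SQL:` prefix is a common convention for WikiSQL T5 fine-tunes, not something stated in this card, and may need to be changed to match the actual training prompts.

```python
# Hedged sketch only (not from the card): text-to-SQL generation with the fine-tuned
# T5-small. The task prefix below is assumed, not documented by the author.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Job6742/t5-small-finetuned-wikisql",
)

question = "translate English to SQL: How many heads of state are older than 56?"
print(generator(question, max_length=64)[0]["generated_text"])
```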
rooftopcoder/mT5_base_English_Gujrati
rooftopcoder
2023-08-01T07:59:40Z
141
0
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "autotrain", "translation", "unk", "dataset:rooftopcoder/autotrain-data-en-gj", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-05-01T14:24:23Z
--- tags: - autotrain - translation language: - unk - unk datasets: - rooftopcoder/autotrain-data-en-gj co2_eq_emissions: emissions: 11.738270627825147 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 54465127487 - CO2 Emissions (in grams): 11.7383 ## Validation Metrics - Loss: 1.736 - SacreBLEU: 2.095 - Gen len: 18.757
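No usage example is given. A hedged sketch for English-to-Gujarati translation follows; whether the model expects a task prefix is not documented, so the raw sentence is passed as-is.

```python
# Hedged sketch (not from the card): English -> Gujarati translation with the
# AutoTrain-tuned mT5 checkpoint. No task prefix is assumed.
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="rooftopcoder/mT5_base_English_Gujrati",
)

print(translator("The weather is very pleasant today.", max_length=64)[0]["generated_text"])
```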
mjwong/contriever-mnli
mjwong
2023-08-01T07:57:29Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "zero-shot-classification", "en", "dataset:glue", "arxiv:2112.09118", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2023-05-18T17:15:57Z
--- datasets: - glue model-index: - name: contriever-mnli results: [] pipeline_tag: zero-shot-classification language: - en license: mit --- # contriever-mnli This model is a fine-tuned version of [facebook/contriever](https://huggingface.co/facebook/contriever) on the glue dataset. ## Model description [Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, Edouard Grave, arXiv 2021 ## How to use the model ### With the zero-shot classification pipeline The model can be loaded with the `zero-shot-classification` pipeline like so: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="mjwong/contriever-mnli") ``` You can then use this pipeline to classify sequences into any of the class names you specify. ```python sequence_to_classify = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing'] classifier(sequence_to_classify, candidate_labels) ``` If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently: ```python candidate_labels = ['travel', 'cooking', 'dancing', 'exploration'] classifier(sequence_to_classify, candidate_labels, multi_class=True) ``` ### With manual PyTorch The model can also be applied on NLI tasks like so: ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification # device = "cuda:0" or "cpu" device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "mjwong/contriever-mnli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) premise = "But I thought you'd sworn off coffee." hypothesis = "I thought that you vowed to drink more coffee." input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(input["input_ids"].to(device)) prediction = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] prediction = {name: round(float(pred) * 100, 2) for pred, name in zip(prediction, label_names)} print(prediction) ``` ### Eval results The model was evaluated using the dev sets for MultiNLI and test sets for ANLI. The metric used is accuracy. |Datasets|mnli_dev_m|mnli_dev_mm|anli_test_r1|anli_test_r2|anli_test_r3| | :---: | :---: | :---: | :---: | :---: | :---: | |[contriever-mnli](https://huggingface.co/mjwong/contriever-mnli)|0.821|0.822|0.247|0.281|0.312| |[contriever-msmarco-mnli](https://huggingface.co/mjwong/contriever-msmarco-mnli)|0.820|0.819|0.244|0.296|0.306| ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Framework versions - Transformers 4.28.1 - Pytorch 1.12.1+cu116 - Datasets 2.11.0 - Tokenizers 0.12.1
rooftopcoder/bart-base-coqa
rooftopcoder
2023-08-01T07:55:03Z
69
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-10T18:33:59Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-base-coqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-coqa This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.29.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
jscore2023/falcon-7b-instruct-ft-adapters
jscore2023
2023-08-01T07:48:50Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-31T18:30:47Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
jinxuewen/vicuna-7b
jinxuewen
2023-08-01T07:48:25Z
11
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-06T02:08:00Z
--- inference: false --- # Vicuna 7b Weights vicuna-weights Converted model from https://huggingface.co/lmsys/vicuna-7b-delta-v1.1 ## Install ```bash pip3 install fschat ``` ## Inference with Command Line Interface (Experimental Feature: You can specify `--style rich` to enable rich text output and better text streaming quality for some non-ASCII content. This may not work properly on certain terminals.) <a href="https://chat.lmsys.org"><img src="assets/screenshot_cli.png" width="70%"></a> When use huggingface, the </path/to/vicuna/weights> is "jinxuewen/vicuna-7b" #### Single GPU The command below requires around 28GB of GPU memory for Vicuna-13B and 14GB of GPU memory for Vicuna-7B. See the "No Enough Memory" section below if you do not have enough memory. ``` python3 -m fastchat.serve.cli --model-path /path/to/vicuna/weights ``` #### Multiple GPUs You can use model parallelism to aggregate GPU memory from multiple GPUs on the same machine. ``` python3 -m fastchat.serve.cli --model-path /path/to/vicuna/weights --num-gpus 2 ``` #### CPU Only This runs on the CPU only and does not require GPU. It requires around 60GB of CPU memory for Vicuna-13B and around 30GB of CPU memory for Vicuna-7B. ``` python3 -m fastchat.serve.cli --model-path /path/to/vicuna/weights --device cpu ``` #### Metal Backend (Mac Computers with Apple Silicon or AMD GPUs) Use `--device mps` to enable GPU acceleration on Mac computers (requires torch >= 2.0). Use `--load-8bit` to turn on 8-bit compression. ``` python3 -m fastchat.serve.cli --model-path /path/to/vicuna/weights --device mps --load-8bit ``` Vicuna-7B can run on a 32GB M1 Macbook with 1 - 2 words / second. #### No Enough Memory or Other Platforms If you do not have enough memory, you can enable 8-bit compression by adding `--load-8bit` to commands above. This can reduce memory usage by around half with slightly degraded model quality. It is compatible with the CPU, GPU, and Metal backend. Vicuna-13B with 8-bit compression can run on a single NVIDIA 3090/4080/V100(16GB) GPU. ``` python3 -m fastchat.serve.cli --model-path /path/to/vicuna/weights --load-8bit ``` Besides, we are actively exploring more methods to make the model easier to run on more platforms. Contributions and pull requests are welcome. ## Serving with Web GUI <a href="https://chat.lmsys.org"><img src="assets/screenshot_gui.png" width="70%"></a> To serve using the web UI, you need three main components: web servers that interface with users, model workers that host one or more models, and a controller to coordinate the webserver and model workers. Here are the commands to follow in your terminal: #### Launch the controller ```bash python3 -m fastchat.serve.controller ``` This controller manages the distributed workers. #### Launch the model worker ```bash python3 -m fastchat.serve.model_worker --model-path /path/to/vicuna/weights ``` Wait until the process finishes loading the model and you see "Uvicorn running on ...". You can launch multiple model workers to serve multiple models concurrently. The model worker will connect to the controller automatically. To ensure that your model worker is connected to your controller properly, send a test message using the following command: ```bash python3 -m fastchat.serve.test_message --model-name vicuna-13b ``` #### Launch the Gradio web server ```bash python3 -m fastchat.serve.gradio_web_server ``` This is the user interface that users will interact with. By following these steps, you will be able to serve your models using the web UI. 
You can open your browser and chat with a model now. ## API ### Huggingface Generation APIs See [fastchat/serve/huggingface_api.py](fastchat/serve/huggingface_api.py) ### OpenAI-compatible RESTful APIs & SDK (Experimental. We will keep improving the API and SDK.) #### Chat Completion Reference: https://platform.openai.com/docs/api-reference/chat/create Some features/compatibilities to be implemented: - [ ] streaming - [ ] support of some parameters like `top_p`, `presence_penalty` - [ ] proper error handling (e.g. model not found) - [ ] the return value in the client SDK could be used like a dict **RESTful API Server** First, launch the controller ```bash python3 -m fastchat.serve.controller ``` Then, launch the model worker(s) ```bash python3 -m fastchat.serve.model_worker --model-name 'vicuna-7b-v1.1' --model-path /path/to/vicuna/weights ``` Finally, launch the RESTful API server ```bash export FASTCHAT_CONTROLLER_URL=http://localhost:21001 python3 -m fastchat.serve.api --host localhost --port 8000 ``` Test the API server ```bash curl http://localhost:8000/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "vicuna-7b-v1.1", "messages": [{"role": "user", "content": "Hello!"}] }' ``` **Client SDK** Assuming environment variable `FASTCHAT_BASEURL` is set to the API server URL (e.g., `http://localhost:8000`), you can use the following code to send a request to the API server: ```python import os from fastchat import client client.set_baseurl(os.getenv("FASTCHAT_BASEURL")) completion = client.ChatCompletion.create( model="vicuna-7b-v1.1", messages=[ {"role": "user", "content": "Hello!"} ] ) print(completion.choices[0].message) ```
s3nh/gogpt2-13b-GGML
s3nh
2023-08-01T07:40:00Z
0
1
null
[ "text-generation-inference", "text-generation", "en", "license:cc-by-sa-4.0", "region:us" ]
text-generation
2023-08-01T07:22:51Z
--- license: cc-by-sa-4.0 language: - en tags: - text-generation-inference pipeline_tag: text-generation --- ## Original model card Buy me a coffee if you like this project ;) <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> #### Description GGML Format model files for [This project](https://huggingface.co/golaxy/gogpt2-13b/tree/main). ### inference ```python import ctransformers from ctransformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file, gpu_layers=32, model_type="llama") manual_input: str = "Tell me about your last dream, please." llm(manual_input, max_new_tokens=256, temperature=0.9, top_p= 0.7) ``` # Original model card ## 测试效果 - 怎么学习机器学习 ``` 好的,以下是一些学习机器学习的方法: 1. 上下线:通过实践项目来学习,例如使用开源机器学习框架(如TensorFlow、Keras、Scikit-Learn等)来实现各种机器学习任务。 2. 课程学习:选择一些在线或线下的机器学习课程,例如Coursera、Udemy、edX等,这些平台都提供了各种机器学习课程。 3. 阅读书籍:选择一些经典的机器学习书籍,例如Andrew Ng的《深度学习》、《机器学习》等,这些书籍可以帮助你深入地学习机器学习基础知识。 4. 加入社区:加入机器学习社区(如Kaggle、Github、Reddit等),与其他机器学习爱好者交流和分享经验,从而不断学习和改进。 5. 参加比赛:参加一些机器学习比赛(如Kaggle等),可以通过解决实际问题来学习并提高自己的技能。 总之,学习机器学习需要不断实践和探索,找到适合自己的学习方法。 ``` - 撰写一篇交响乐音乐会评论,讨论乐团的表演和观众的整体体验。 ``` 交响乐音乐会是音乐爱好者和古典音乐的爱好者的梦想。上周末,我参加了一场交响乐音乐会,我的经历真的很难忘。 首先,乐团的表演非常出色。他们都是专业音乐家,他们的演奏技巧非常娴熟。他们在演奏时,展现出了音乐的真谛,让人感受到音乐的力量和魅力。同时,他们还非常出色地演绎了交响乐中的不同元素,如高音、低音、节奏、调性和变化等。他们的表演非常自然,没有过分的紧张或放松,给人一种平和的、和谐的感觉。 其次,观众的整体体验也非常不错。音乐会现场非常宽敞明亮,座椅间隔非常宽敞。观众可以自由地欣赏音乐的每一个细节,同时还可以享受到音乐的魅力。同时,音乐会现场的氛围也非常友好,观众之间相互交流,享受音乐的共品。此外,音乐会现场还设有各种美食和饮料供观众品尝,增加了观众的舒适度。 总的来说,这次交响乐音乐会是一次非常愉快的音乐体验。乐团的表演非常出色,观众的整体体验也非常不错。我真的非常期待下一场音乐会,再次领略到音乐的魅力。 ``` - 为一部最近上映的科幻电影撰写一个引人入胜的影评,讨论它的情节,角色和特效。 ``` 《星球大战:新纪元》是一部令人兴奋的科幻电影,它延续了之前的星球大战系列的故事情节和角色,同时也带来了新的挑战和惊喜。 这部电影的情节非常引人入胜,它讲述了一群反抗军与帝国之间的斗争,反抗军试图推翻帝国的统治,而帝国则竭力维持其统治。故事情节的复杂程度和紧张程度让人惊叹,每个角色都有自己独特的动机和经历,这使得整个电影的故事更加生动。 这部电影的主要角色包括雷伊(Rey),一个年轻的女武士,以及她的导师凯诺(Kylo Ren),一名邪恶的武士。这两个角色的互动和冲突是电影中的一个重要焦点,他们的故事情节充满了神秘和悬念,使得观众期待着下一刻会发生什么。此外,电影中还有一些新角色的出现,如米斯·恩德(Maisy Endor),一个新的星球上反抗军的领导者,以及C-3PO和R2-D2等经典的机器人角色,这些角色的回归和表现也都令人兴奋。 这部电影的视觉效果也非常惊人,它使用了最新的技术和特效来营造出逼真的宇宙场景和战斗场面。电影中的星球、建筑物、机械设备和武器的视觉效果都非常逼真,而且动作场面也非常刺激。电影中的视觉效果让人感到惊奇,也使观众更加沉浸在电影中的世界。 总的来说,《星球大战:新纪元》是一部非常值得一看的科幻电影,它讲述了一个引人入胜的故事,有着精彩的角色和特效,使得整个电影令人兴奋和震撼。这部电影的成功,也预示着星球大战系列的未来。 ``` - 人类平均一生眨眼多少次?请尝试解释你的答案。你的解释应该逐步带领读者理解你的推理过程 ``` 人类平均一生眨眼几次是5600000次左右。这个数字是由许多不同的研究结果和估算得出的。 首先,眨眼频率随年龄的增加而变化。婴儿眨眼频率比成人高,随着年龄的增长,眨眼频率逐渐下降。根据这个规律,我们可以计算出成人眨眼频率约为10次/分钟。 其次,眨眼频率还受到多种因素的影响。例如,当人们感到紧张或疲劳时,眨眼频率会增加。同样地,当人们感到愉悦或放松时,眨眼频率会降低。此外,不同的身体部位的眨眼频率也不同。例如,眼睛内侧的眨眼频率比内侧的眨眼频率高。 最后,由于人们的眼睛健康状况也会影响眨眼频率,例如眼底病变或视力问题。这些因素都可以导致眨眼频率的改变。 综上所述,人类平均一生眨眼几次约为5600000次左右。这个数字是由多个因素影响的结果,包括年龄、身体状况、情感状态等。 ``` - 请用python编程语言实现快速排序算法的代码 以下是使用Python快速排序算法的示例代码: ```python def quick_sort(arr): if len(arr) <= 1: return arr else: pivot = arr[0] left = [] right = [] for i in range(1, len(arr)): if arr[i] < pivot: left.append(arr[i]) else: right.append(arr[i]) return quick_sort(left) + [pivot] + quick_sort(right) ``` 这个函数接受一个整数数组作为输入,并在最坏情况下的时间复杂度为O(n^2)的情况下,最佳情况下时间复杂度为O(nlogn)。 - 如何提高我的时间管理技能? ``` 以下是一些提高时间管理技能的建议: 1. 制定计划:每天早上或每天开始时,制定一个详细的计划,包括每天需要完成的任务和目标。 2. 优先级排序:将任务按照优先级排序,先完成重要任务。 3. 集中注意力:集中注意力,将注意力集中在当前的任务上,避免分心。 4. 避免拖延:尽可能避免拖延,尽快完成任务,避免任务过期。 5. 避免多任务:多任务会增加任务的时间,所以最好只专注一个任务,直到完成它。 6. 休息:休息是恢复和提高生产力的关键,因此定期休息,放松身心。 7. 调整时间:如果发现自己的时间管理有问题,可以调整时间,例如延长休息时间或缩短任务时间。 8. 学习新技能:学习新的技能可以帮助提高工作效率,例如使用工具或软件来管理任务。 9. 寻求帮助:如果你发现自己难以管理时间,可以寻求家人、朋友或同事的帮助。 希望这些建议可以帮助您提高时间管理技能。 ``` - Python和JavaScript编程语言的主要区别是什么? ``` Python和JavaScript是两个不同的编程语言,它们有以下主要区别: 1. 
数据类型:Python是一种动态类型的编程语言,它无法指定数据类型,而是根据变量中使用的值自动确定数据类型。JavaScript是一种动态类型编程语言,它使用字面量来指定变量的数据类型。 2. 语法:Python和JavaScript的语法有一些相似之处,但也有一些不同之处。例如,Python使用缩进来定义代码块,而JavaScript使用花括号。 3. 执行环境:Python通常运行在本地计算机的Python解释器中运行,而JavaScript通常运行在浏览器中运行。这意味着Python通常用于开发桌面应用程序和Web服务器端应用程序,而JavaScript主要用于开发Web应用程序。 4. 编程语言特性:Python具有更高的可读性和可维护性,而JavaScript具有更好的可扩展性和可维护性。 总之,Python和JavaScript是两种不同的编程语言,具有不同的特点和应用场景。选择哪种编程语言取决于项目的具体需求和开发者的技能水平。 ``` - 植物蛋白和动物蛋白的主要区别是什么? ``` 植物蛋白和动物蛋白的主要区别在于来源和氨基酸成分。 来源: 植物蛋白主要来自植物组织,如豆类、种子、果实、叶子等。而动物蛋白主要来自动物的身体部位,如肌肉、内脏、鱼类、蛋类等。 氨基酸成分: 植物蛋白中的主要氨基酸是植物性氨基酸,如氨基酸,植物性必需氨基酸(如硫胺素和植酸)等。而动物蛋白中的主要氨基酸是动物性氨基酸,如肌肉蛋白、乳糖蛋白等。 因此,植物蛋白和动物蛋白的营养价值是不同的。植物蛋白富含植物性必需氨基酸,如硫胺素、植酸等,而动物蛋白则含有更多的动物性必需氨基酸,如肌肉蛋白、乳糖蛋白等。植物蛋白通常比动物蛋白低热量,但也有些植物蛋白的蛋白质质量较低。动物蛋白通常比植物蛋白高蛋白质质量,但同样含有较多的饱和脂肪和胆固醇等有害物质。 ``` - 影响消费者行为的主要因素是什么? ``` 影响消费者行为的因素有很多,以下是其中几个主要的因素: 1. 消费需求:消费者的需求是影响消费者行为的主要因素之一。消费者的需求是根据个人利益和价值观而产生的。例如,如果一个消费者喜欢时尚的服装,那么他们可能会对购买这些服装的商场和品牌的选择更有吸引力。 2. 购买力:消费者购买力的限制是一个影响消费者行为的因素。消费者可能会因为预算限制而选择购买某些商品。 3. 价格:价格也是影响消费者行为的因素之一。消费者通常会比较不同商品的价格,并根据价格选择购买哪种商品。 4. 品牌知名度:品牌知名度是影响消费者行为的因素之一。消费者通常会选择有信誉的品牌,因为他们相信这些品牌能够提供高质量的产品和服务。 5. 社交媒体:社交媒体在影响消费者行为中扮演越来越重要的作用。许多消费者会在社交媒体上了解商品和品牌的信息,并选择购买这些商品。 6. 社交媒体:社交媒体在影响消费者行为中扮演越来越重要的角色。许多消费者会在社交媒体上了解商品和品牌的信息,并选择购买这些商品。 总之,影响消费者行为的主要因素有很多,这些因素往往是相互关联的。企业可以通过了解这些因素,来制定有效的营销策略,吸引和留住消费者。 ```
AkitoP/Taxi-V3
AkitoP
2023-08-01T07:35:42Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T07:35:40Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-V3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="AkitoP/Taxi-V3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AkitoP/q-FrozenLake-v1-4x4-noSlippery
AkitoP
2023-08-01T07:29:23Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T07:29:21Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="AkitoP/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
NiscR/Reinforce-3
NiscR
2023-08-01T06:56:16Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T06:56:14Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 21.60 +/- 29.07 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
efainman/a2c-AntBulletEnv-v0
efainman
2023-08-01T06:44:49Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T13:28:17Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1287.58 +/- 135.76 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
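As with the usage section above (still a TODO), the following is only a sketch: the checkpoint filename is assumed, `AntBulletEnv-v0` requires the `pybullet` / `pybullet_envs` packages, and depending on the installed gym/SB3 versions a compatibility wrapper may be needed.

```python
# Hedged sketch, not the author's code: download and evaluate the A2C checkpoint.
# The filename "a2c-AntBulletEnv-v0.zip" is assumed; AntBulletEnv-v0 is registered
# by pybullet_envs, which must be installed alongside gym.
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="efainman/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```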
NiscR/Reinforce-2
NiscR
2023-08-01T06:44:00Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T06:43:54Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 256.60 +/- 14.49 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
jakobkruse/bert-finetuned-squad
jakobkruse
2023-08-01T06:43:49Z
128
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-31T13:50:35Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
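As a hedged usage sketch (not part of the original card), the fine-tuned checkpoint should be loadable with the standard `question-answering` pipeline; the question and context below are invented for illustration.

```python
from transformers import pipeline

# Load the fine-tuned SQuAD model from the Hub
qa = pipeline("question-answering", model="jakobkruse/bert-finetuned-squad")

# Illustrative inputs (not from the card)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model jakobkruse/bert-finetuned-squad was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```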
DOFOFFICIAL/animeGender-dvgg-0.9
DOFOFFICIAL
2023-08-01T06:24:37Z
0
0
null
[ "image-classification", "en", "license:apache-2.0", "region:us" ]
image-classification
2023-07-14T05:47:10Z
--- license: apache-2.0 language: - en pipeline_tag: image-classification --- - Will be released in the future. - It is going to be a magnificently redesigned, brand-new deep convolutional model series with enormously enhanced versatility and functionality. - Projected release time: later in 2023.
jangmin/qlora-llama2-13b-chat-hf-food-order-understanding-30K
jangmin
2023-08-01T05:50:16Z
4
1
peft
[ "peft", "license:apache-2.0", "region:us" ]
null
2023-08-01T05:07:06Z
--- license: apache-2.0 library_name: peft --- # Food Order Understanding in Korean This is a LoRA adapter as a result of fine-tuning the pre-trained model `meta-llama/Llama-2-13b-chat-hf`. It is designed with the expectation of understanding Korean food ordering sentences, and analyzing food menus, option names, and quantities. ## Usage Here is an example of loading the model. Note the pretrained model is `meta-llama/Llama-2-13b-chat-hf`. ```python peft_model_id = "jangmin/qlora-llama2-13b-chat-hf-food-order-understanding-30K" config = PeftConfig.from_pretrained(peft_model_id) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, cache_dir=cache_dir, device_map={"":0}) model = PeftModel.from_pretrained(model, peft_model_id) tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, cache_dir=cache_dir) model.eval() ``` Inferece can be done as follows. ```python instruction_prompt_template = """ 다음은 매장에서 고객이 음식을 주문하는 주문 문장이다. 이를 분석하여 음식명, 옵션, 수량을 추출하여 고객의 의도를 이해하고자 한다. 분석 결과를 완성해주기 바란다. ### 주문 문장: {0} ### 분석 결과: """ def gen(x): q = instruction_prompt_template.format(x) gened = model.generate( **tokenizer( q, return_tensors='pt', return_token_type_ids=False ).to('cuda'), max_new_tokens=256, early_stopping=True, do_sample=True, eos_token_id=tokenizer.eos_token_id ) decoded_results = tokenizer.batch_decode(gened, skip_special_tokens=True) return decoded_results[0] ``` A generated sample is as follows. ```python print(gen("아이스아메리카노 톨사이즈 한잔 하고요. 딸기스무디 한잔 주세요. 또, 콜드브루라떼 하나요.")) ``` ``` 다음은 매장에서 고객이 음식을 주문하는 주문 문장이다. 이를 분석하여 음식명, 옵션명, 수량을 추출하여 고객의 의도를 이해하고자 한다. 분석 결과를 완성해주기 바란다. ### 명령: 아이스아메리카노 톨사이즈 한잔 하고요. 딸기스무디 한잔 주세요. 또, 콜드브루라떼 하나요. ### 응답: - 분석 결과 0: 음식명:아이스아메리카노, 옵션:톨사이즈, 수량:한잔 - 분석 결과 1: 음식명:딸기스무디, 수량:한잔 - 분석 결과 2: 음식명:콜드브루라떼, 수량:하나 `````` ## Training Fine-tuning was conducted using https://github.com/artidoro/qlora on an RTX-4090 machine, and took approximately 9 hours. The max_steps parameter was set to 5,000, which allowed nearly two complete scans of the entire dataset. Below is my training script. ```bash python qlora.py \ --cache_dir /Jupyter/huggingface/.cache \ --model_name_or_path meta-llama/Llama-2-13b-chat-hf \ --use_auth \ --output_dir ../output/llama2-gpt4-30k-food-order-understanding-13b \ --logging_steps 10 \ --save_strategy steps \ --data_seed 42 \ --save_steps 500 \ --save_total_limit 40 \ --evaluation_strategy steps \ --eval_dataset_size 1024 \ --max_eval_samples 1000 \ --per_device_eval_batch_size 12 \ --max_new_tokens 32 \ --dataloader_num_workers 1 \ --group_by_length \ --logging_strategy steps \ --remove_unused_columns False \ --do_train \ --do_eval \ --lora_r 64 \ --lora_alpha 16 \ --lora_modules all \ --double_quant \ --quant_type nf4 \ --bf16 \ --bits 4 \ --warmup_ratio 0.03 \ --lr_scheduler_type constant \ --gradient_checkpointing \ --dataset /Jupyter/dev_src/ASR-for-noisy-edge-devices/data/food-order-understanding-gpt4-30k.json \ --target_max_len 512 \ --per_device_train_batch_size 12 \ --gradient_accumulation_steps 1 \ --max_steps 5000 \ --eval_steps 500 \ --learning_rate 0.0002 \ --adam_beta2 0.999 \ --max_grad_norm 0.3 \ --lora_dropout 0.1 \ --weight_decay 0.0 \ --seed 0 \ --report_to tensorboard ``` ## Dataset The dataset was constructed using GPT-API with `gpt-4`. 
A prompt template is designed to generate examples of sentence pairs of a food order and its understanding. In total, 30k examples were generated. Note that it cost about $400 to generate 30K examples through 3,000 API calls. Some generated examples are as follows: ```json { 'input': '다음은 매장에서 고객이 음식을 주문하는 주문 문장이다. 이를 분석하여 음식명, 옵션명, 수량을 추출하여 고객의 의도를 이해하고자 한다.\n분석 결과를 완성해주기 바란다.\n\n### 명령: 제육볶음 한그릇하고요, 비빔밥 한그릇 추가해주세요. ### 응답:\n', 'output': '- 분석 결과 0: 음식명:제육볶음,수량:한그릇\n- 분석 결과 1: 음식명:비빔밥,수량:한그릇' }, { 'input': '다음은 매장에서 고객이 음식을 주문하는 주문 문장이다. 이를 분석하여 음식명, 옵션명, 수량을 추출하여 고객의 의도를 이해하고자 한다.\n분석 결과를 완성해주기 바란다.\n\n### 명령: 사천탕수육 곱배기 주문하고요, 샤워크림치킨도 하나 추가해주세요. ### 응답:\n', 'output': '- 분석 결과 0: 음식명:사천탕수육,옵션:곱배기\n- 분석 결과 1: 음식명:샤워크림치킨,수량:하나' } ``` ## Note I have another fine-tuned language model, `jangmin/qlora-polyglot-ko-12.8b-food-order-understanding-32K`, which is based on `EleutherAI/polyglot-ko-12.8b`. The dataset was generated using `gpt-3.5-turbo-16k`. I believe that the quality of a dataset generated by `GPT-4` would be superior to that generated by `GPT-3.5`.
834188divi/rare-puppers
834188divi
2023-08-01T05:42:25Z
226
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-08-01T05:42:19Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8656716346740723 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### corgi ![corgi](images/corgi.jpg) #### samoyed ![samoyed](images/samoyed.jpg) #### shiba inu ![shiba inu](images/shiba_inu.jpg)
shahrukhx01/schema-aware-distilbart-cnn-12-6-text2sql
shahrukhx01
2023-08-01T05:36:40Z
35
1
transformers
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "wikisql", "text2sql", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - wikisql - text2sql --- ```python from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig model = BartForConditionalGeneration.from_pretrained('shahrukhx01/schema-aware-distilbart-cnn-12-6-text2sql') tokenizer = BartTokenizer.from_pretrained('shahrukhx01/schema-aware-distilbart-cnn-12-6-text2sql') ## add NL query with table schema question = "What is terrence ross' nationality? </s> <col0> Player : text <col1> No. : text <col2> Nationality : text <col3> Position : text <col4> Years in Toronto : text <col5> School/Club Team : text" inputs = tokenizer([question], max_length=1024, return_tensors='pt') # Generate SQL text_query_ids = model.generate(inputs['input_ids'], num_beams=4, min_length=0, max_length=125, early_stopping=True) prediction = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in text_query_ids][0] print(prediction) ```
Charankumarpc/test-dialogue-summarization
Charankumarpc
2023-08-01T05:20:11Z
103
1
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-01T04:10:13Z
--- license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer metrics: - rouge model-index: - name: test-dialogue-summarization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-dialogue-summarization This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9653 - Rouge1: 61.2091 - Rouge2: 36.8979 - Rougel: 46.3962 - Rougelsum: 58.3082 - Gen Len: 135.6733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 94 | 1.3755 | 53.9112 | 25.5975 | 36.8507 | 50.0306 | 132.7733 | | No log | 2.0 | 188 | 1.2081 | 55.5956 | 27.4849 | 37.7785 | 51.7906 | 137.1267 | | No log | 3.0 | 282 | 1.1149 | 55.714 | 28.3629 | 39.0763 | 52.439 | 137.62 | | No log | 4.0 | 376 | 1.0564 | 56.6202 | 29.789 | 39.9223 | 53.3054 | 135.1733 | | No log | 5.0 | 470 | 1.0107 | 57.8272 | 31.5716 | 41.9775 | 54.5114 | 135.1733 | | 1.1609 | 6.0 | 564 | 0.9775 | 58.561 | 32.5462 | 42.9577 | 55.1653 | 133.5533 | | 1.1609 | 7.0 | 658 | 0.9683 | 59.0592 | 33.8153 | 43.918 | 56.0493 | 135.3267 | | 1.1609 | 8.0 | 752 | 0.9626 | 60.4587 | 35.8511 | 45.9511 | 57.3658 | 134.38 | | 1.1609 | 9.0 | 846 | 0.9623 | 60.3938 | 35.8996 | 45.7161 | 57.2104 | 135.2333 | | 1.1609 | 10.0 | 940 | 0.9653 | 61.2091 | 36.8979 | 46.3962 | 58.3082 | 135.6733 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
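A hedged usage sketch (not in the original card): since the base model is `facebook/bart-large-cnn`, the checkpoint should work with the standard `summarization` pipeline; the sample dialogue is invented for illustration.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Charankumarpc/test-dialogue-summarization")

# Invented sample dialogue; replace with your own transcript
dialogue = (
    "Alice: Are we still meeting at 3pm to review the report?\n"
    "Bob: Yes, but can we push it to 4pm? I'm stuck in another call.\n"
    "Alice: Sure, 4pm works. I'll update the invite."
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```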
Maxlumaga/osim
Maxlumaga
2023-08-01T05:09:28Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-28T20:27:27Z
--- license: creativeml-openrail-m ---
himanimaheshwari3/distilbert-base-uncased-finetuned-himani-m
himanimaheshwari3
2023-08-01T05:02:20Z
124
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-01T05:00:33Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-himani-m results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-himani-m This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.8945 | 1.0 | 8 | 5.4556 | | 4.3449 | 2.0 | 16 | 2.6799 | | 4.4967 | 3.0 | 24 | 3.1203 | | 4.0367 | 4.0 | 32 | 3.7410 | | 3.7329 | 5.0 | 40 | 3.9018 | | 4.3099 | 6.0 | 48 | 2.2667 | | 3.767 | 7.0 | 56 | 3.9794 | | 3.5045 | 8.0 | 64 | 2.1890 | | 3.576 | 9.0 | 72 | 5.1615 | | 3.2903 | 10.0 | 80 | 2.8625 | | 3.3835 | 11.0 | 88 | 5.7664 | | 3.219 | 12.0 | 96 | 2.5192 | | 3.2197 | 13.0 | 104 | 2.5271 | | 3.1208 | 14.0 | 112 | 2.7014 | | 3.3357 | 15.0 | 120 | 3.8341 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
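A hedged usage sketch (not part of the original card): the checkpoint is a masked-language model, so it can be queried with the `fill-mask` pipeline; the example sentence is illustrative only.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="himanimaheshwari3/distilbert-base-uncased-finetuned-himani-m")

# DistilBERT-uncased uses the [MASK] token
for prediction in unmasker("The weather today is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```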
irena/audioldm
irena
2023-08-01T04:25:48Z
2
0
diffusers
[ "diffusers", "license:cc-by-nc-nd-4.0", "region:us" ]
null
2023-08-01T04:19:23Z
--- license: cc-by-nc-nd-4.0 ---
usamakenway/llama2_7b_chat_uncensored-AutoGPTQ_Wizard_Vicuna
usamakenway
2023-08-01T04:24:09Z
9
2
transformers
[ "transformers", "llama", "text-generation", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-07-25T13:49:13Z
--- datasets: - ehartford/wizard_vicuna_70k_unfiltered inference: false license: other model_type: llama --- # George Sung's Llama2 7B Chat Uncensored GPTQ These files are GPTQ model files for [George Sung's Llama2 7B Chat Uncensored](https://huggingface.co/georgesung/llama2_7b_chat_uncensored). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML) * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/georgesung/llama2_7b_chat_uncensored) ## Prompt template: Human-Response ``` ### HUMAN: {prompt} ### RESPONSE: ``` ## Provided files Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description | | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- | | main | 4 | 128 | False | 3.90 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. | | gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. | | gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. | | gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. | ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/llama2_7b_chat_uncensored-GPTQ:gptq-4bit-32g-actorder_True` - With Git, you can clone a branch with: ``` git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GPTQ` ``` - In Python Transformers code, the branch is the `revision` parameter; see below. ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/llama2_7b_chat_uncensored-GPTQ`. - To download from a specific branch, enter for example `TheBloke/llama2_7b_chat_uncensored-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done" 5. 
In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `llama2_7b_chat_uncensored-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! ## How to use this GPTQ model from Python code First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed: `GITHUB_ACTIONS=true pip install auto-gptq` Then try the following example code: ```python from transformers import AutoTokenizer, pipeline, logging from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig model_name_or_path = "TheBloke/llama2_7b_chat_uncensored-GPTQ" model_basename = "gptq_model-4bit-128g" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, model_basename=model_basename, use_safetensors=True, trust_remote_code=True, device="cuda:0", use_triton=use_triton, quantize_config=None) """ To download from a specific branch, use the revision parameter, as in this example: model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, revision="gptq-4bit-32g-actorder_True", model_basename=model_basename, use_safetensors=True, trust_remote_code=True, device="cuda:0", quantize_config=None) """ prompt = "Tell me about AI" prompt_template=f'''### HUMAN: {prompt} ### RESPONSE: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline # Prevent printing spurious transformers error when using pipeline with AutoGPTQ logging.set_verbosity(logging.CRITICAL) print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Compatibility The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork. ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. <!-- footer start --> ## Discord ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. 
**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: George Sung's Llama2 7B Chat Uncensored # Overview Fine-tuned [Llama-2 7B](https://huggingface.co/TheBloke/Llama-2-7B-fp16) with an uncensored/unfiltered Wizard-Vicuna conversation dataset [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered). Used QLoRA for fine-tuning. Trained for one epoch on a 24GB GPU (NVIDIA A10G) instance, took ~19 hours to train. # Prompt style The model was trained with the following prompt style: ``` ### HUMAN: Hello ### RESPONSE: Hi, how are you? ### HUMAN: I'm fine. ### RESPONSE: How can I help you? ... ``` # Training code Code used to train the model is available [here](https://github.com/georgesung/llm_qlora). To reproduce the results: ``` git clone https://github.com/georgesung/llm_qlora cd llm_qlora pip install -r requirements.txt python train.py configs/llama2_7b_chat_uncensored.yaml ```
chriskim2273/IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.6_DistilBert_UNK_RETEST
chriskim2273
2023-08-01T04:11:05Z
103
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-08-01T04:00:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.6_DistilBert_UNK_RETEST results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.6_DistilBert_UNK_RETEST This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5994 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
Eggsbena/model_004
Eggsbena
2023-08-01T04:08:41Z
29
0
diffusers
[ "diffusers", "text-to-image", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-01T03:59:41Z
--- library_name: diffusers pipeline_tag: text-to-image ---
seungheondoh/lp-music-caps
seungheondoh
2023-08-01T04:06:07Z
0
19
null
[ "music", "music-captioning", "en", "dataset:seungheondoh/LP-MusicCaps-MSD", "dataset:seungheondoh/LP-MusicCaps-MC", "arxiv:2307.16372", "license:mit", "region:us" ]
null
2023-07-12T03:51:59Z
--- license: mit datasets: - seungheondoh/LP-MusicCaps-MSD - seungheondoh/LP-MusicCaps-MC language: - en metrics: - bleu - bertscore tags: - music - music-captioning --- - **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps) - **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372) # :sound: LP-MusicCaps: LLM-Based Pseudo Music Captioning [![Demo Video](https://i.imgur.com/cgi8NsD.jpg)](https://youtu.be/ezwYVaiC-AM) This is an implementation of [LP-MusicCaps: LLM-Based Pseudo Music Captioning](#). This project aims to generate captions for music. 1) Tag-to-Caption: Using existing tags, we leverage the power of OpenAI's GPT-3.5 Turbo API to generate high-quality and contextually relevant captions based on music tags. 2) Audio-to-Caption: Using music-audio and pseudo-caption pairs, we train a cross-modal encoder-decoder model for end-to-end music captioning. > [**LP-MusicCaps: LLM-Based Pseudo Music Captioning**](#) > SeungHeon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam > To appear ISMIR 2023 ## TL;DR <p align = "center"> <img src = "https://i.imgur.com/2LC0nT1.png"> </p> - **[1.Tag-to-Caption: LLM Captioning](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/llm_captioning)**: Generate captions from a given tag input. - **[2.Pretrain Music Captioning Model](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/music_captioning)**: Generate pseudo captions from given audio. - **[3.Transfer Music Captioning Model](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/music_captioning/transfer.py)**: Generate human-level captions from given audio. ## Open Source Material - [pre-trained models](https://huggingface.co/seungheondoh/lp-music-caps) - [music-pseudo caption dataset](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD) - [demo](https://huggingface.co/spaces/seungheondoh/LP-Music-Caps-demo) are available online for future research. An example of the dataset is shown in this [notebook](https://github.com/seungheondoh/lp-music-caps/blob/main/notebook/Dataset.ipynb)
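To make the tag-to-caption idea concrete, here is a minimal, hedged sketch of calling the OpenAI chat API (pre-1.0 `openai` client) with a list of music tags. The prompt wording is invented for illustration and is not the exact prompt template used in the paper.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you supply your own key

tags = ["female vocal", "acoustic guitar", "mellow", "folk"]
# Illustrative prompt; the paper's actual prompt template differs
prompt = f"Write a one-sentence description of a music track with these tags: {', '.join(tags)}."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```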
JordanWHLewis/k80_jordan_batch_size_32
JordanWHLewis
2023-08-01T04:03:21Z
105
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-31T22:58:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: k80_jordan_batch_size_32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # k80_jordan_batch_size_32 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 20.472 | 16.0 | 200 | 3.9251 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
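A hedged usage sketch (not in the original card): assuming the repository ships a matching processor and tokenizer, the checkpoint can be tried with the `automatic-speech-recognition` pipeline; given the high validation loss, transcriptions may be poor.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="JordanWHLewis/k80_jordan_batch_size_32")

# Path to a 16 kHz mono WAV file (illustrative)
print(asr("sample.wav")["text"])
```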
mylesmharrison/gpt2-moviedialog
mylesmharrison
2023-08-01T03:42:28Z
62
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "dataset:mylesmharrison/movie_scripts", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-08-01T01:36:02Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback datasets: - mylesmharrison/movie_scripts model-index: - name: mylesmharrison/gpt2-moviedialog results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mylesmharrison/gpt2-moviedialog This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the mylesmharrison/movie_scripts dataset. It achieves the following results on the evaluation set: - Train Loss: 2.3485 - Validation Loss: 2.3753 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': 1.0, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 4227, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.5074 | 2.4004 | 0 | | 2.3925 | 2.3816 | 1 | | 2.3485 | 2.3753 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.2 - Tokenizers 0.13.3
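A hedged usage sketch (not part of the original card): the repository appears to hold TensorFlow weights, so the pipeline is pointed at the TF framework; the prompt is illustrative.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mylesmharrison/gpt2-moviedialog",
    framework="tf",  # assumption: the repo contains TensorFlow weights only
)
print(generator("I can't believe you came back.", max_new_tokens=40)[0]["generated_text"])
```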
chandrasutrisnotjhong/Pyramids
chandrasutrisnotjhong
2023-08-01T03:37:11Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-08-01T03:34:53Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: chandrasutrisnotjhong/Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
platzi/platzi-vit-model-cesar-vega
platzi
2023-08-01T03:20:30Z
193
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-31T22:59:55Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer datasets: - beans metrics: - accuracy widget: - src: >- https://huggingface.co/platzi/platzi-vit-model-luis-reyes/resolve/main/healthy.jpeg example_title: Healthy - src: >- https://huggingface.co/platzi/platzi-vit-model-luis-reyes/resolve/main/bean_rust.jpeg example_title: Bean Rust model-index: - name: platzi-vit-model-cesar-vega results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9849624060150376 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-cesar-vega This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0489 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1466 | 3.85 | 500 | 0.0489 | 0.9850 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
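A hedged usage sketch (not in the original card): the fine-tuned ViT can be queried with the `image-classification` pipeline; the image path is illustrative.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="platzi/platzi-vit-model-cesar-vega")

# Replace with a real photo of a bean leaf
for prediction in classifier("bean_leaf.jpg"):
    print(prediction["label"], round(prediction["score"], 4))
```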
KatarLegacy/1-zhuaxiong
KatarLegacy
2023-08-01T03:07:41Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-01T03:07:09Z
--- license: creativeml-openrail-m ---
KatarLegacy/TWGirlPutBallsOnTableMix1
KatarLegacy
2023-08-01T03:06:24Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-01T02:49:19Z
--- license: creativeml-openrail-m ---
Ningxin/bloom_wikitext_collaboration_2
Ningxin
2023-08-01T02:47:32Z
0
0
peft
[ "peft", "pytorch", "bloom", "region:us" ]
null
2023-07-31T23:19:18Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
martinwunderlich/bert-base-uncased-issues-128
martinwunderlich
2023-08-01T02:44:57Z
109
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-31T14:37:27Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2394 | 1.0 | 291 | 0.2312 | | 0.1858 | 2.0 | 582 | 0.1863 | | 0.1687 | 3.0 | 873 | 0.1948 | | 0.1602 | 4.0 | 1164 | 0.1948 | | 0.1537 | 5.0 | 1455 | 0.1678 | | 0.1467 | 6.0 | 1746 | 0.1805 | | 0.1406 | 7.0 | 2037 | 0.1641 | | 0.1382 | 8.0 | 2328 | 0.1770 | | 0.1354 | 9.0 | 2619 | 0.1724 | | 0.1312 | 10.0 | 2910 | 0.1679 | | 0.1284 | 11.0 | 3201 | 0.1709 | | 0.1262 | 12.0 | 3492 | 0.1549 | | 0.1252 | 13.0 | 3783 | 0.1559 | | 0.1239 | 14.0 | 4074 | 0.1441 | | 0.1217 | 15.0 | 4365 | 0.1673 | | 0.1225 | 16.0 | 4656 | 0.1598 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
BlunderPanini/rl_course_vizdoom_health_gathering_supreme
BlunderPanini
2023-08-01T02:28:35Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T02:28:30Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.72 +/- 6.10 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r BlunderPanini/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
gabejohnson/ppo-LunarLander-v2
gabejohnson
2023-08-01T02:13:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T02:13:30Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 240.97 +/- 15.90 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
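Since the usage section above is a TODO, here is a minimal, hedged sketch of loading and evaluating the policy. The checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention, and the Gymnasium environment requires `stable-baselines3 >= 2.0` plus `gymnasium[box2d]`.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repository's file list
checkpoint = load_from_hub(repo_id="gabejohnson/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```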
jonahshader/ppo-LunarLander-v2
jonahshader
2023-08-01T02:10:54Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-01T02:10:34Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 262.11 +/- 19.60 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Uminosachi/realisticVisionV51_v51VAE-inpainting
Uminosachi
2023-08-01T02:05:37Z
5,199
12
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-01T01:53:24Z
--- license: creativeml-openrail-m --- This is an inpainting model, which has been converted from the [realisticVisionV51_v51VAE-inpainting](https://civitai.com/models/4201?modelVersionId=130090).
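A hedged usage sketch (not part of the original card): as a Diffusers-format inpainting checkpoint it should load with `StableDiffusionInpaintPipeline`; the prompt and image files are illustrative.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Uminosachi/realisticVisionV51_v51VAE-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Illustrative inputs: the mask is white where content should be regenerated
init_image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(prompt="RAW photo, a wooden bench in a park, natural lighting",
              image=init_image, mask_image=mask_image).images[0]
result.save("inpainted.png")
```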