Dataset schema (ranges show the minimum and maximum observed values):

| column | type | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-30 18:26:50 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (530 classes) | — | — |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | — | — |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-30 18:26:48 |
| card | string (length) | 11 | 1.01M |
botp/Realistic_Vision_V1.3
botp
2023-05-04T09:18:22Z
2
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-04T09:18:22Z
--- license: creativeml-openrail-m duplicated_from: SG161222/Realistic_Vision_V1.3 --- <b>Please read this!</b><br> My model has always been free and always will be free. There are no restrictions on the use of the model. The rights to this model still belong to me.<br> This model is available on <a href="https://www.mage.space/">Mage.Space</a> and <a href="https://sinkin.ai/">Sinkin.ai</a> <hr/> <b>I use this template to get good generation results: Prompt:</b> RAW photo, *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <b>Example:</b> RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <b>Negative Prompt:</b> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br> <b>OR</b><br> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation <b>Euler A or DPM++ 2M Karras with 25 steps<br> CFG Scale 3,5 - 7<br> Hires. fix with Latent upscaler<br> 0 Hires steps and Denoising strength 0.25-0.45<br> Upscale by 1.1-2.0</b>
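The card's prompt template is mechanical enough to script. A minimal sketch, assuming nothing beyond the template text quoted above (the constant and function names are illustrative, not part of the model):

```python
# Fill the *subject* slot of the card's recommended prompt template.
# Template text is copied from the card; helper names are illustrative.

PROMPT_TEMPLATE = (
    "RAW photo, {subject}, (high detailed skin:1.2), 8k uhd, dslr, "
    "soft lighting, high quality, film grain, Fujifilm XT3"
)

def build_prompt(subject: str) -> str:
    """Return the full positive prompt for a given subject."""
    return PROMPT_TEMPLATE.format(subject=subject)

print(build_prompt("a close up portrait photo of 26 y.o woman in wastelander clothes"))
```

The card's negative prompt can be kept as a constant alongside it and passed unchanged to whichever frontend or pipeline runs the model.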
Pietro97/ppo-Huggy
Pietro97
2023-05-04T09:15:03Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-05-04T09:14:55Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Find your model_id: Pietro97/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
botp/Realistic_Vision_V2.0
botp
2023-05-04T09:14:37Z
4
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-04T09:14:36Z
--- license: creativeml-openrail-m duplicated_from: SG161222/Realistic_Vision_V2.0 --- <b>Please read this!</b><br> For version 2.0 it is recommended to use with VAE (to improve generation quality and get rid of blue artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br> This model is available on <a href="https://www.mage.space/">Mage.Space</a>, <a href="https://sinkin.ai/">Sinkin.ai</a>, <a href="https://getimg.ai/">GetImg.ai</a> and (<a href="https://randomseed.co/">RandomSeed.co</a> - NSFW content) <hr/> <b>I use this template to get good generation results: Prompt:</b> RAW photo, *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <b>Example:</b> RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <b>Negative Prompt:</b> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br> <b>OR</b><br> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation <b>Euler A or DPM++ 2M Karras with 25 steps<br> CFG Scale 3,5 - 7<br> Hires. fix with Latent upscaler<br> 0 Hires steps and Denoising strength 0.25-0.45<br> Upscale by 1.1-2.0</b>
Theju/switch_low_b4_2
Theju
2023-05-04T09:13:46Z
106
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-04T09:11:07Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: switch_low_b4_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # switch_low_b4_2 This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
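Several of these cards report `lr_scheduler_type: linear` with `lr_scheduler_warmup_steps: 1000`. As a sketch of what that schedule does (mirroring the behavior of `get_linear_schedule_with_warmup` in Transformers; the function below is illustrative, not the Trainer's own code):

```python
# Linear LR schedule with warmup: ramp from 0 to base_lr over the
# warmup steps, then decay linearly back to 0 at total_steps.

def linear_warmup_lr(step: int, base_lr: float,
                     warmup_steps: int, total_steps: int) -> float:
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(
        0.0, (total_steps - step) / max(1, total_steps - warmup_steps)
    )
```

With the card's settings (base LR 1e-4, 1000 warmup steps), the LR peaks at step 1000 and then decays for the rest of training.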
truegpt/truegpt_small
truegpt
2023-05-04T09:13:36Z
3
0
transformers
[ "transformers", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-01T14:30:48Z
# TrueGPT Small: AI Model for Action and Empowerment TrueGPT Small is a lightweight version of the TrueGPT artificial intelligence model, designed for users who need the empowering and actionable features of TrueGPT with reduced computational requirements. By providing actionable solutions and eliminating uncertainty, TrueGPT Small retains the core features of the original TrueGPT while making it accessible to a wider range of devices and systems. With seamless integration to the Hugging Face ecosystem, users can easily utilize TrueGPT Small for various AI applications.
usix79/a2c-PandaReachDense-v2
usix79
2023-05-04T09:07:43Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T09:05:05Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.70 +/- 0.62 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
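The `mean_reward value: -1.70 +/- 0.62` metric is conventionally the mean and standard deviation of per-episode returns (the format `evaluate_policy` in Stable-Baselines3 reports). A sketch with made-up episode data:

```python
# Summarize per-episode returns as "mean +/- std", the format used in
# the card's mean_reward metric. Episode values here are illustrative.
import statistics

def reward_summary(episode_returns):
    mean = statistics.mean(episode_returns)
    std = statistics.pstdev(episode_returns)  # population std, like numpy's default
    return round(mean, 2), round(std, 2)

print(reward_summary([-1.2, -2.3, -1.6]))
```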
udon2301/gpt2-ft
udon2301
2023-05-04T08:56:41Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T11:05:45Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-ft This model is a fine-tuned version of [rinna/japanese-gpt-1b](https://huggingface.co/rinna/japanese-gpt-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
brathief/Alice_extend_brathief_e500
brathief
2023-05-04T08:43:17Z
7
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-04-22T13:39:09Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - brathief/Alice_extend_brathief_e500 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
pkufool/icefall_asr_aishell_pruned_transducer_stateless7_bbpe
pkufool
2023-05-04T08:39:07Z
0
0
null
[ "tensorboard", "license:apache-2.0", "region:us" ]
null
2023-05-04T07:25:32Z
--- license: apache-2.0 --- The results: |Vocab size | Greedy search(dev & test) | Modified beam search(dev & test) | Fast beam search (dev & test) | Fast beam search LG (dev & test) | comments| |-- | -- | -- | -- | -- | --| |500 | 4.31 & 4.59 | 4.25 & 4.54 | 4.27 & 4.55 | 4.07 & 4.38 | --epoch 48 --avg 29| The training command: ```bash export CUDA_VISIBLE_DEVICES="4,5,6,7" ./pruned_transducer_stateless7_bbpe/train.py \ --world-size 4 \ --num-epochs 50 \ --start-epoch 1 \ --use-fp16 1 \ --max-duration 800 \ --bpe-model data/lang_bbpe_500/bbpe.model \ --exp-dir pruned_transducer_stateless7_bbpe/exp \ --lr-epochs 6 \ --master-port 12535 ``` The decoding command: ```bash for m in greedy_search modified_beam_search fast_beam_search fast_beam_search_LG; do ./pruned_transducer_stateless7_bbpe/decode.py \ --epoch 48 \ --avg 29 \ --exp-dir ./pruned_transducer_stateless7_bbpe/exp \ --max-sym-per-frame 1 \ --ngram-lm-scale 0.25 \ --ilme-scale 0.2 \ --bpe-model data/lang_bbpe_500/bbpe.model \ --max-duration 2000 \ --decoding-method $m done ```
civitary/msbrew
civitary
2023-05-04T08:38:50Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T08:32:55Z
--- license: creativeml-openrail-m ---
pkufool/icefall_asr_librispeech_conformer_ctc
pkufool
2023-05-04T08:35:10Z
0
4
null
[ "en", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 language: - en --- # Pre-trained Conformer-CTC models for the LibriSpeech dataset with icefall. The model was trained on full [LibriSpeech](http://openslr.org/12/) with the scripts in [icefall](https://github.com/k2-fsa/icefall). See (https://github.com/k2-fsa/icefall/pull/13) for more details about this model. ## How to use See (https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/conformer_ctc/README.md) ## Training procedure The versions of the main repositories are listed below. k2: https://github.com/k2-fsa/k2/commit/81cec9ec736d2c603ad75d933bb3e3a3706fb0dd icefall: https://github.com/k2-fsa/icefall/commit/ef233486ae6d21bacb940de45efb35d0c334605c lhotse: https://github.com/lhotse-speech/lhotse/commit/5dfe0f4c02b1334ebb7db6d67e1141fe406ca76b * Install k2 and lhotse; the k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. It is better to use the versions given above, but the latest versions should also work. Also install the requirements listed in icefall. * Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above. ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout ef233486 ``` * Prepare the data. ``` cd egs/librispeech/ASR bash ./prepare.sh ``` * Training ```bash export CUDA_VISIBLE_DEVICES="0,1,2,3" python conformer_ctc/train.py --bucketing-sampler True \ --concatenate-cuts False \ --max-duration 200 \ --full-libri True \ --world-size 4 ``` ## Evaluation results The best decoding results (WERs) on LibriSpeech test-clean and test-other are listed below; we obtained these results by averaging models from epoch 15 to 34. ||test-clean|test-other| |--|--|--| |WER|2.57%|5.94%|
Kiriko/LunarLanderAgent
Kiriko
2023-05-04T08:31:39Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T08:31:17Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.09 +/- 11.64 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
yemiancheng/like-model
yemiancheng
2023-05-04T08:27:52Z
0
0
null
[ "region:us" ]
null
2023-05-04T05:15:02Z
# readme Saving some models I like. I will collect them here for easy use (downloading). ## why - [x] Sometimes I want to use a model but forget where to download it. ## disclaimer If there is any infringement, please notify me and I will delete it. My email: `ymc-github@gmail.com` or `yemiancheng1993@163.com`
MartinMarenz/q-Taxiv3-02
MartinMarenz
2023-05-04T08:21:18Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T08:21:13Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxiv3-02 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="MartinMarenz/q-Taxiv3-02", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
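The pickled artifact in cards like this one holds a Q-table; once loaded, acting greedily over it is just an arg-max per state. A toy sketch (the table below is made up, and `greedy_action` is an illustrative helper, not part of the repo):

```python
# Greedy action selection from a Q-table (states x actions).
# The toy table is illustrative only.

def greedy_action(qtable, state):
    """Return the index of the highest-valued action for a state."""
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)

qtable = [
    [0.1, 0.9, 0.0],  # state 0: action 1 is best
    [0.5, 0.2, 0.7],  # state 1: action 2 is best
]
print(greedy_action(qtable, 0))  # → 1
```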
Aleksandar/electra-srb-ner
Aleksandar
2023-05-04T08:14:22Z
117
0
transformers
[ "transformers", "pytorch", "safetensors", "electra", "token-classification", "generated_from_trainer", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model_index: - name: electra-srb-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: sr metric: name: Accuracy type: accuracy value: 0.9568394937134688 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.3406 - Precision: 0.8934 - Recall: 0.9087 - F1: 0.9010 - Accuracy: 0.9568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3686 | 1.0 | 625 | 0.2108 | 0.8326 | 0.8494 | 0.8409 | 0.9335 | | 0.1886 | 2.0 | 1250 | 0.1784 | 0.8737 | 0.8713 | 0.8725 | 0.9456 | | 0.1323 | 3.0 | 1875 | 0.1805 | 0.8654 | 0.8870 | 0.8760 | 0.9468 | | 0.0675 | 4.0 | 2500 | 0.2018 | 0.8736 | 0.8880 | 0.8807 | 0.9502 | | 0.0425 | 5.0 | 3125 | 0.2162 | 0.8818 | 0.8945 | 0.8881 | 0.9512 | | 0.0343 | 6.0 | 3750 | 0.2492 | 0.8790 | 0.8928 | 0.8859 | 0.9513 | | 0.0253 | 7.0 | 4375 | 0.2562 | 0.8821 | 0.9006 | 0.8912 | 0.9525 | | 0.0142 | 8.0 | 5000 | 0.2788 | 0.8807 | 0.9013 | 0.8909 | 0.9524 | | 0.0114 | 9.0 | 5625 | 0.2793 | 0.8861 | 0.9002 | 0.8931 | 0.9534 | | 0.0095 | 10.0 | 6250 | 0.2967 | 0.8887 | 0.9034 | 0.8960 | 0.9550 | | 0.008 | 11.0 | 6875 | 0.2993 | 0.8899 | 0.9067 | 0.8982 | 0.9556 | | 0.0048 | 12.0 | 7500 | 0.3215 | 0.8887 | 0.9038 | 0.8962 | 0.9545 | | 0.0034 | 13.0 | 8125 | 0.3242 | 0.8897 | 0.9068 | 0.8982 | 0.9554 | | 0.003 | 14.0 | 8750 | 0.3311 | 0.8884 | 0.9085 | 0.8983 | 0.9559 | | 0.0025 | 15.0 | 9375 | 0.3383 | 0.8943 | 0.9062 | 0.9002 | 0.9562 | | 0.0011 | 16.0 | 10000 | 0.3346 | 0.8941 | 0.9112 | 0.9026 | 0.9574 | | 0.0015 | 17.0 | 10625 | 0.3362 | 0.8944 | 0.9081 | 0.9012 | 0.9567 | | 0.001 | 18.0 | 11250 | 0.3464 | 0.8877 | 0.9100 | 0.8987 | 0.9559 | | 0.0012 | 19.0 | 11875 | 0.3415 | 0.8944 | 0.9089 | 0.9016 | 0.9568 | | 0.0005 | 20.0 | 12500 | 0.3406 | 0.8934 | 0.9087 | 0.9010 | 0.9568 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
SHENMU007/neunit_BASE_V4
SHENMU007
2023-05-04T08:11:57Z
80
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "1.1.0", "generated_from_trainer", "zh", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-05-04T06:13:58Z
--- language: - zh license: mit tags: - 1.1.0 - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: SpeechT5 TTS Dutch neunit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Dutch neunit This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.12.1
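In this card the reported `total_train_batch_size: 32` is derived rather than set directly: per-device batch size times gradient accumulation steps (times number of devices). A one-line sketch (the function name is illustrative):

```python
# How the Trainer's reported total_train_batch_size is derived.

def effective_batch_size(per_device: int, accum_steps: int,
                         num_devices: int = 1) -> int:
    return per_device * accum_steps * num_devices

print(effective_batch_size(8, 4))  # → 32, matching the card
```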
MartinMarenz/q-Taxiv3-01
MartinMarenz
2023-05-04T08:11:51Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T08:11:46Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxiv3-01 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="MartinMarenz/q-Taxiv3-01", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
leonardosaveri/DSChallenge_Roberta_Base
leonardosaveri
2023-05-04T08:08:51Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-04T07:52:31Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: DSChallenge_Roberta_Base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DSChallenge_Roberta_Base This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1755 - Accuracy: 0.9549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2974 | 1.0 | 793 | 0.1676 | 0.9419 | | 0.1491 | 2.0 | 1586 | 0.1755 | 0.9549 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
nozmenoz/bella
nozmenoz
2023-05-04T08:06:36Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-04-29T07:37:29Z
--- license: creativeml-openrail-m ---
zohaib99k/Bert_Arabic-SQuADv2-QA
zohaib99k
2023-05-04T07:42:02Z
115
1
transformers
[ "transformers", "pytorch", "electra", "question-answering", "ar", "dataset:ZeyadAhmed/Arabic-SQuADv2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-04T07:37:13Z
--- datasets: - ZeyadAhmed/Arabic-SQuADv2.0 language: - ar metrics: - name: exact_match type: exact_match value: 65.12 - name: F1 type: f1 value: 71.49 --- # AraElectra for Question Answering on Arabic-SQuADv2 This is the [AraElectra](https://huggingface.co/aubmindlab/araelectra-base-discriminator) model, fine-tuned using the [Arabic-SQuADv2.0](https://huggingface.co/datasets/ZeyadAhmed/Arabic-SQuADv2.0) dataset. It has been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering, with the help of an [AraElectra Classifier](https://huggingface.co/ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS) to predict unanswerable questions. ## Overview **Language model:** AraElectra <br> **Language:** Arabic <br> **Downstream-task:** Extractive QA **Training data:** Arabic-SQuADv2.0 **Eval data:** Arabic-SQuADv2.0 <br> **Test data:** Arabic-SQuADv2.0 <br> **Code:** [See More Info on Github](https://github.com/zeyadahmed10/Arabic-MRC) **Infrastructure**: 1x Tesla K80 ## Hyperparameters ``` batch_size = 8 n_epochs = 4 base_LM_model = "AraElectra" learning_rate = 3e-5 optimizer = AdamW padding = dynamic ``` ## Online Demo on Arabic Wikipedia and User Provided Contexts See the model in action hosted on streamlit [![Open in Streamlit](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/wissamantoun/arabic-wikipedia-qa-streamlit/main) ## Usage For best results use the AraBert [preprocessor](https://github.com/aub-mind/arabert/blob/master/preprocess.py) by aub-mind ```python from transformers import ElectraForQuestionAnswering, ElectraForSequenceClassification, AutoTokenizer, pipeline from preprocess import ArabertPreprocessor prep_object = ArabertPreprocessor("araelectra-base-discriminator") question = prep_object('ما هي جامعة الدول العربية ؟') context = prep_object(''' جامعة الدول العربية هي منظمة إقليمية تضم دولاً عربية في آسيا وأفريقيا. ينص ميثاقها على التنسيق بين الدول الأعضاء في الشؤون الاقتصادية، ومن ضمنها العلاقات التجارية الاتصالات، العلاقات الثقافية، الجنسيات ووثائق وأذونات السفر والعلاقات الاجتماعية والصحة. المقر الدائم لجامعة الدول العربية يقع في القاهرة، عاصمة مصر (تونس من 1979 إلى 1990). ''') # a) Get predictions qa_modelname = 'ZeyadAhmed/AraElectra-Arabic-SQuADv2-QA' cls_modelname = 'ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS' qa_pipe = pipeline('question-answering', model=qa_modelname, tokenizer=qa_modelname) cls_pipe = pipeline('text-classification', model=cls_modelname, tokenizer=cls_modelname) QA_input = { 'question': question, 'context': context } CLS_input = { 'text': question, 'text_pair': context } qa_res = qa_pipe(QA_input) cls_res = cls_pipe(CLS_input) threshold = 0.5 # hyperparameter, can be tweaked ## note: in the classification result, label0 is the probability the question can be answered and label1 the probability it cannot ## if the label1 probability > threshold, treat the output of qa_res as an empty string; otherwise take qa_res # b) Load model & tokenizer qa_model = ElectraForQuestionAnswering.from_pretrained(qa_modelname) cls_model = ElectraForSequenceClassification.from_pretrained(cls_modelname) tokenizer = AutoTokenizer.from_pretrained(qa_modelname) ``` ## Performance Evaluated on the Arabic-SQuAD 2.0 test set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/), except for small preprocessing changes to fit the Arabic language [the modified eval script](https://github.com/zeyadahmed10/Arabic-MRC/blob/main/evaluatev2.py). ``` "exact": 65.11555277951281, "f1": 71.49042547237256, "total": 9606, "HasAns_exact": 56.14535768645358, "HasAns_f1": 67.79623803036668, "HasAns_total": 5256, "NoAns_exact": 75.95402298850574, "NoAns_f1": 75.95402298850574, "NoAns_total": 4350 ```
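The thresholding logic described in the comments of the usage snippet above (return an empty answer when the classifier's label1 "unanswerable" probability exceeds the threshold, otherwise keep the QA span) boils down to a small pure function. A sketch (the function name is illustrative, not part of the repo):

```python
# Gate a QA span by the classifier's "unanswerable" probability, as
# described in the card: above the threshold, return an empty answer.

def gate_answer(qa_answer: str, p_unanswerable: float,
                threshold: float = 0.5) -> str:
    return "" if p_unanswerable > threshold else qa_answer

print(gate_answer("القاهرة", 0.2))  # below threshold: keeps the span
print(gate_answer("القاهرة", 0.8))  # above threshold: empty string
```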
maksim2000153/xlm-roberta-base-finetuned-panx-de-ner
maksim2000153
2023-05-04T07:39:10Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-04T07:18:16Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8653353814644136 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1339 - F1: 0.8653 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 | | 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 | | 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-simcse-roberta-large-semeval2015-restaurants
StevenLimcorn
2023-05-04T07:31:28Z
106
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T13:10:10Z
--- tags: - generated_from_keras_callback model-index: - name: unsup-simcse-roberta-large-semeval2015-restaurants results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # unsup-simcse-roberta-large-semeval2015-restaurants This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
MDOWNLOAD/ZNBAELORA
MDOWNLOAD
2023-05-04T07:28:45Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T07:27:12Z
--- license: creativeml-openrail-m ---
redstonehero/aiomonstergirls_v3
redstonehero
2023-05-04T07:18:12Z
29
0
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-04T06:54:17Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image ---
Theju/switch_low_2
Theju
2023-05-04T07:14:20Z
107
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-04T07:13:27Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: switch_low_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # switch_low_2 This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Theju/switch_medium_2
Theju
2023-05-04T07:10:38Z
105
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-04T07:09:11Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: switch_medium_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # switch_medium_2 This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-simcse-roberta-large-semeval2015-laptops
StevenLimcorn
2023-05-04T07:06:14Z
107
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T13:00:59Z
--- tags: - generated_from_keras_callback model-index: - name: unsup-simcse-roberta-large-semeval2015-laptops results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # unsup-simcse-roberta-large-semeval2015-laptops This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
ttj/sac-logos-ava1-l14-linearMSE
ttj
2023-05-04T06:57:58Z
0
0
null
[ "pytorch", "safetensors", "license:apache-2.0", "region:us" ]
null
2023-05-04T06:52:03Z
--- license: apache-2.0 --- Model ported from https://github.com/christophschuhmann/improved-aesthetic-predictor
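This checkpoint is an MLP score head that runs on top of CLIP ViT-L/14 image embeddings. A minimal loading sketch in plain PyTorch, assuming the layer sizes of the `MLP` class in the linked repository (treat the architecture and the checkpoint filename in the comment as assumptions, not guarantees):

```python
import torch
import torch.nn as nn

class AestheticPredictor(nn.Module):
    """Assumed architecture: 768-dim CLIP ViT-L/14 embedding -> scalar aesthetic score."""

    def __init__(self, input_size: int = 768):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(input_size, 1024),
            nn.Dropout(0.2),
            nn.Linear(1024, 128),
            nn.Dropout(0.2),
            nn.Linear(128, 64),
            nn.Dropout(0.1),
            nn.Linear(64, 16),
            nn.Linear(16, 1),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.layers(embedding)

model = AestheticPredictor()
model.eval()
# e.g. model.load_state_dict(torch.load("sac+logos+ava1-l14-linearMSE.pth"))
with torch.no_grad():
    score = model(torch.randn(1, 768))  # random stand-in for a CLIP embedding
print(score.shape)  # torch.Size([1, 1])
```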
soumi-maiti/libri23mix_eend_ss
soumi-maiti
2023-05-04T06:49:28Z
4
0
espnet
[ "espnet", "audio", "diarization", "en", "dataset:librimix", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2023-05-04T06:34:06Z
--- tags: - espnet - audio - diarization language: en datasets: - librimix license: cc-by-4.0 --- ## ESPnet2 DIAR model ### `soumi-maiti/libri23mix_eend_ss` This model was trained by soumimaiti using librimix recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout d837c97c88f13ffe655a30bcff93d814f212b225 pip install -e . cd egs2/librimix/enh_diar23 ./run.sh --skip_data_prep false --skip_train true --download_model soumi-maiti/libri23mix_eend_ss ``` ## DIAR config <details><summary>expand</summary> ``` config: conf/tuning/train_diar_enh_convtasnet_concat_feats_adapt.yaml print_config: false log_level: INFO dry_run: false iterator_type: chunk output_dir: exp/diar_enh_train_diar_enh_convtasnet_concat_feats_adapt ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: 4 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss_enh - min keep_nbest_models: 1 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 16 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: - ../enh_diar1/exp/diar_enh_train_diar_enh_convtasnet_concat_feats_raw/valid.loss_enh.best.pth 
ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 1 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/diar_enh_stats_8k/train/speech_shape - exp/diar_enh_stats_8k/train/text_shape - exp/diar_enh_stats_8k/train/speech_ref1_shape - exp/diar_enh_stats_8k/train/speech_ref2_shape - exp/diar_enh_stats_8k/train/speech_ref3_shape - exp/diar_enh_stats_8k/train/noise_ref1_shape valid_shape_file: - exp/diar_enh_stats_8k/valid/speech_shape - exp/diar_enh_stats_8k/valid/text_shape - exp/diar_enh_stats_8k/valid/speech_ref1_shape - exp/diar_enh_stats_8k/valid/speech_ref2_shape - exp/diar_enh_stats_8k/valid/speech_ref3_shape - exp/diar_enh_stats_8k/valid/noise_ref1_shape batch_type: folded valid_batch_type: null fold_length: - 800 - 80000 - 80000 - 80000 - 80000 - 80000 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 24000 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/espnet_rttm - text - rttm - - dump/raw/train/spk1.scp - speech_ref1 - sound - - dump/raw/train/spk2.scp - speech_ref2 - sound - - dump/raw/train/spk3.scp - speech_ref3 - sound - - dump/raw/train/noise1.scp - noise_ref1 - sound valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/espnet_rttm - text - rttm - - dump/raw/dev/spk1.scp - speech_ref1 - sound - - dump/raw/dev/spk2.scp - speech_ref2 - sound - - dump/raw/dev/spk3.scp - speech_ref3 - sound - - dump/raw/dev/noise1.scp - noise_ref1 - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-07 weight_decay: 0 scheduler: reducelronplateau scheduler_conf: mode: min factor: 0.5 patience: 1 token_list: null src_token_list: null init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true 
ignore_nan_grad: null zero_infinity: true enh_criterions: - name: si_snr conf: eps: 1.0e-07 wrapper: pit wrapper_conf: weight: 1.0 independent_perm: true flexible_numspk: true diar_num_spk: 3 diar_input_size: 128 enh_model_conf: loss_type: si_snr asr_model_conf: ctc_weight: 0.5 interctc_weight: 0.0 ignore_id: -1 lsm_weight: 0.0 length_normalized_loss: false report_cer: true report_wer: true sym_space: <space> sym_blank: <blank> extract_feats_in_collect_stats: true st_model_conf: stft_consistency: false loss_type: mask_mse mask_type: null diar_model_conf: diar_weight: 0.2 attractor_weight: 0.2 subtask_series: - enh - diar model_conf: calc_enh_loss: true bypass_enh_prob: 0 use_preprocessor: true token_type: bpe bpemodel: null src_token_type: bpe src_bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null enh_encoder: conv enh_encoder_conf: channel: 512 kernel_size: 16 stride: 8 enh_separator: tcn_nomask enh_separator_conf: layer: 8 stack: 3 bottleneck_dim: 128 hidden_dim: 512 kernel: 3 causal: false norm_type: gLN enh_decoder: conv enh_decoder_conf: channel: 512 kernel_size: 16 stride: 8 enh_mask_module: multi_mask enh_mask_module_conf: max_num_spk: 3 mask_nonlinear: relu bottleneck_dim: 128 frontend: default frontend_conf: {} specaug: null specaug_conf: {} normalize: utterance_mvn normalize_conf: {} asr_preencoder: null asr_preencoder_conf: {} asr_encoder: rnn asr_encoder_conf: {} asr_postencoder: null asr_postencoder_conf: {} asr_decoder: rnn asr_decoder_conf: {} st_preencoder: null st_preencoder_conf: {} st_encoder: rnn st_encoder_conf: {} st_postencoder: null st_postencoder_conf: {} st_decoder: rnn st_decoder_conf: {} st_extra_asr_decoder: rnn st_extra_asr_decoder_conf: {} st_extra_mt_decoder: rnn st_extra_mt_decoder_conf: {} diar_frontend: default diar_frontend_conf: hop_length: 64 fs: 8000 diar_specaug: null diar_specaug_conf: {} diar_normalize: utterance_mvn diar_normalize_conf: {} diar_encoder: transformer diar_encoder_conf: input_layer: conv2d8 
num_blocks: 4 linear_units: 512 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.1 diar_decoder: linear diar_decoder_conf: {} label_aggregator: label_aggregator label_aggregator_conf: win_length: 256 hop_length: 64 diar_attractor: rnn diar_attractor_conf: unit: 256 layer: 1 dropout: 0.0 attractor_grad: true required: - output_dir version: '202205' distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
VinayakMane47/mt5-small-finetuned-amazon-en-es
VinayakMane47
2023-05-04T06:46:38Z
5
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-04T05:58:15Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: VinayakMane47/mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # VinayakMane47/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.4768 - Validation Loss: 3.7299 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 6160, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 10.4984 | 4.9846 | 0 | | 6.4092 | 4.2145 | 1 | | 5.5483 | 3.9695 | 2 | | 5.0862 | 3.8716 | 3 | | 4.8314 | 3.8164 | 4 | | 4.6503 | 3.7648 | 5 | | 4.5296 | 3.7418 | 6 | | 4.4768 | 3.7299 | 7 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
BigXiang/Sea_side_shaonv
BigXiang
2023-05-04T06:28:12Z
0
3
null
[ "region:us" ]
null
2023-03-30T18:26:46Z
Introduction:

Last time I was banned for 3 days for arguing in the discussion section that the "banban" crowd who tag hanbok should stop coming over to steal hanfu (most likely they maliciously reported me), and I have just been unbanned on appeal. At the same time, my earlier shaonv worship model (LOL_style_v10) was deleted without any notice; I will find a way to re-upload it later. This time I bring a seaside shaonv, trained for 3 hours over 20 batches, and the result is very good. Without further ado, just look at the samples. Since the previous shaonv worship model caused some controversy in the comments because of differing tastes, this time I made both a "large" and a "small" version of the LoRA for different audiences. (Personally I strongly recommend the small version, which gives the best results; the large version is in internal testing for a day, so its fans will have to wait until tomorrow to download it.)

As before, you can get excellent results without many prompts; it handles both SFW and NSFW, and keeps things simple. The trigger word is as cute as ever. Questions, and good images you render with it, are welcome in the comments.

Sample mascot image:

![00001-0-00010-1010b2e32b1f870fc8c8d1fa5825d42872940f580.png](https://s3.amazonaws.com/moonup/production/uploads/642459e3956c16097c2673b8/2iCZS2XQu97Yk7BUzhNCq.png)

keai, <lora:SSS_style-000018:0.6> Negative prompt: bad-picture-chill-75v, negativeembed Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 4231914231, Size: 560x700,

XYZ weight reference values (for some reason, I find the images produced by the XYZ plot script are not actually accurate; treat them as reference only):

Attention! By downloading this LoRA, you certify your voluntary compliance with the following terms of use:

· This LoRA is for personal study and exchange only. Any form of reproduction or redistribution is prohibited, as is any commercial use.
· This LoRA must not be used for illegal activities; please observe the laws and regulations of your location when using it. I am not responsible for, and strongly oppose, any illegal actions by its users.
· Generating NSFW content with it is discouraged.

This LoRA is used to generate a shaonv at the beach. You are welcome to show your work in the comment section. Hope you enjoy it.
Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically
Zayn
2023-05-04T06:28:10Z
0
9
transformers
[ "transformers", "pytorch", "safetensors", "vision-encoder-decoder", "image-text-to-text", "image-to-text", "image-captioning", "doi:10.57967/hf/0658", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2022-10-09T09:34:50Z
--- tags: - image-to-text - image-captioning license: apache-2.0 widget: - src: https://pixabay.com/get/ga187b8f146a9fa30b1f553d63fa94271e023868cd247fbad7ce02b6ffb5718a52fc04809be440f997f57dad90614dde2e9821edf8e628925f0042c6584fc04ec809421a040e3bc9561324249ab6e09c4_1280.jpg example_title: Horse Riding - src: https://static1.bigstockphoto.com/6/8/2/large1500/286059499.jpg example_title: Bicycle --- This is an image captioning model trained by Zayn ```python import torch from PIL import Image from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer model = VisionEncoderDecoderModel.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically") feature_extractor = ViTFeatureExtractor.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically") tokenizer = AutoTokenizer.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 20 num_beams = 8 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_step(['Image URL.jpg']) ```
sd-concepts-library/ahx-beta-453407d
sd-concepts-library
2023-05-04T05:58:52Z
0
0
null
[ "license:mit", "region:us" ]
null
2023-05-04T05:58:48Z
--- license: mit --- ### ahx-beta-453407d on Stable Diffusion This is the `<ahx-beta-453407d>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<ahx-beta-453407d> 0](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/1.jpeg) ![<ahx-beta-453407d> 1](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/2.jpeg) ![<ahx-beta-453407d> 2](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/0.jpeg) ![<ahx-beta-453407d> 3](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/4.jpeg) ![<ahx-beta-453407d> 4](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/6.jpeg) ![<ahx-beta-453407d> 5](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/5.jpeg) ![<ahx-beta-453407d> 6](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/7.jpeg) ![<ahx-beta-453407d> 7](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/3.jpeg) ![<ahx-beta-453407d> 8](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/9.jpeg) ![<ahx-beta-453407d> 9](https://huggingface.co/sd-concepts-library/ahx-beta-453407d/resolve/main/concept_images/8.jpeg)
hanafuusen2001/LoRA_download_2
hanafuusen2001
2023-05-04T05:57:40Z
0
3
null
[ "license:other", "region:us" ]
null
2023-04-12T12:36:37Z
--- license: other ---

# Disclaimer

The models in this folder were not made by me; the copyright belongs to the original authors (see http://www.civitai.com for details on the copyright of each model). I uploaded them to this folder only for the convenience of fetching resources online, not for profit.

# List of Models

All the models in this folder are detailed in the table below.

| Model Name | Civitai Page Link | Civitai Download Link |
|----------------------|--------------------|--------------------|
|samdoesartsSamYang_offset.safetensors |https://civitai.com/models/6638 |https://civitai.com/api/download/models/7804 |
|samdoesartsSamYang_original.safetensors |expired |https://civitai.com/api/download/models/10864 |
|hipoly3DModelLora_v20.safetensors |https://civitai.com/models/8730?modelVersionId=44566 |https://civitai.com/api/download/models/44566 |
|hipoly3DModelLora_v10.safetensors |https://civitai.com/models/8730?modelVersionId=10301 |https://civitai.com/api/download/models/10301 |
|Zheng.safetensors |https://civitai.com/models/11034?modelVersionId=39348 |https://civitai.com/api/download/models/39348 |

Note 1: The trigger word for the samdoesartsSamYang models is: sam yang

Note 2: The trigger word for the hipoly3DModelLora_v10 model is: hiqcgbody

<img src="https://raw.githubusercontent.com/hanafuusen/images/main/samdoesartsSamYang_civitai.jpg" width="" height=""> <img src="https://raw.githubusercontent.com/hanafuusen/images/main/hipoly3DModelLora_v10_civitai.jpg" width="" height="">
adiga20/git-base-pokemon
adiga20
2023-05-04T05:44:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T05:44:11Z
--- license: creativeml-openrail-m ---
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2015-restaurants
StevenLimcorn
2023-05-04T05:42:15Z
98
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:04:57Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2015-restaurants results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2015-restaurants This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2015-laptops
StevenLimcorn
2023-05-04T05:41:13Z
94
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:00:54Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2015-laptops results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2015-laptops This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-promcse-bert-base-uncased-facebook-election-ads
StevenLimcorn
2023-05-04T05:40:44Z
97
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:03:37Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-facebook-election-ads results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-facebook-election-ads This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2016-laptops
StevenLimcorn
2023-05-04T05:36:20Z
93
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T16:59:00Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2016-laptops results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2016-laptops This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2014-restaurants
StevenLimcorn
2023-05-04T05:32:50Z
88
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:02:15Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2014-restaurants results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2014-restaurants This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
versae/wav2vec2-base-finetuned-coscan-sex
versae
2023-05-04T05:32:07Z
156
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:coscan-speech", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2022-09-06T23:00:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - coscan-speech metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-coscan-sex results: - task: name: Audio Classification type: audio-classification dataset: name: Coscan Speech type: NbAiLab/coscan-speech args: no metrics: - name: Test Accuracy type: accuracy value: 0.9993247805536799 - name: Validation Accuracy type: accuracy value: 0.9965283657917019 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-coscan-sex This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the coscan-speech dataset. It achieves the following results on the evaluation set: - Loss: 0.0229 - Accuracy: 0.9965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0034 | 1.0 | 6644 | 0.0229 | 0.9965 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.10.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2014-laptops
StevenLimcorn
2023-05-04T05:32:00Z
104
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:07:54Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2014-laptops results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2014-laptops This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-simcse-roberta-large-semeval2014-laptops
StevenLimcorn
2023-05-04T05:23:55Z
103
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-04-27T16:36:00Z
--- tags: - generated_from_keras_callback model-index: - name: unsup-simcse-roberta-large-semeval2014-laptops results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # unsup-simcse-roberta-large-semeval2014-laptops This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
alanwalk/ShirtTugPose_lora
alanwalk
2023-05-04T05:07:58Z
0
0
null
[ "region:us" ]
null
2023-05-04T05:05:21Z
https://civitai.com/models/7706/shirt-tug-pose-lora LoRA model for the shirt tug pose. Suggested LoRA weights: 0.5 ~ 1.5; the default weight of 1 should be good enough. If the pose doesn't show up with some checkpoints, try higher weights. Trigger words: shirt, naked shirt, shirt tug
imania/amir_take_home_result-2023_05_03-22_33_43
imania
2023-05-04T05:03:42Z
179
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-04T04:52:50Z
--- language: - en library_name: transformers pipeline_tag: text-classification ---
fiatrete/dan-used-models
fiatrete
2023-05-04T04:58:27Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-03-16T09:36:38Z
--- license: openrail --- Models used in [DAN](https://github.com/fiatrete/DAN-Stable-Diffusion-Computing-Network). All models are gathered from the network (most from [civitai](https://civitai.com)); this repository is used as a data store.
P1NHE4D/whisper-medium-nn-v3
P1NHE4D
2023-05-04T04:57:41Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "nn", "dataset:norwegian-parliament", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-03T12:25:27Z
--- language: - nn license: apache-2.0 tags: - generated_from_trainer datasets: - norwegian-parliament metrics: - wer model-index: - name: whisper-medium-nn-v3 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Stortingskorpuset type: norwegian-parliament config: default split: validation args: default metrics: - name: Wer type: wer value: 11.337582785573966 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-nn-v3 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Stortingskorpuset dataset. It achieves the following results on the evaluation set: - Loss: 0.2116 - Wer: 11.3376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 8000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.4413 | 0.25 | 2000 | 0.4447 | 26.7707 | | 0.1945 | 1.1 | 4000 | 0.3042 | 17.8344 | | 0.1013 | 1.35 | 6000 | 0.2421 | 14.2138 | | 0.0308 | 2.2 | 8000 | 0.2116 | 11.3376 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.2
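The Wer column in the card above is a word error rate. As a minimal sketch of how WER is computed (word-level edit distance divided by the number of reference words; the example sentence pair is hypothetical, not taken from the Stortingskorpuset evaluation):

```python
# Minimal WER computation: word-level Levenshtein distance over the
# number of reference words. The example sentences are hypothetical.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five reference words -> 20.0
print(wer("eg har røysta i stortinget", "eg har røysta i storting"))  # 20.0
```

Libraries such as `jiwer` (used by the `evaluate` WER metric) implement the same definition with extra normalization options.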
P1NHE4D/whisper-medium-nb-v3
P1NHE4D
2023-05-04T04:34:02Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "nb", "dataset:norwegian-parliament", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-03T12:17:13Z
--- language: - nb license: apache-2.0 tags: - generated_from_trainer datasets: - norwegian-parliament metrics: - wer model-index: - name: whisper-medium-nb-v3 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Stortingskorpuset type: norwegian-parliament config: default split: validation args: default metrics: - name: Wer type: wer value: 10.024541720925574 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-nb-v3 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Stortingskorpuset dataset. It achieves the following results on the evaluation set: - Loss: 0.1948 - Wer: 10.0245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 8000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.4018 | 0.25 | 2000 | 0.4179 | 25.0751 | | 0.1617 | 1.1 | 4000 | 0.2911 | 16.5849 | | 0.0885 | 1.35 | 6000 | 0.2264 | 12.5146 | | 0.0269 | 2.2 | 8000 | 0.1948 | 10.0245 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.2
abhishek/autotrain-m7xl-lpfp-h4qr-55209128847
abhishek
2023-05-04T04:21:15Z
0
0
null
[ "autotrain", "text-generation", "dataset:abhishek/autotrain-data-m7xl-lpfp-h4qr", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T19:55:21Z
--- tags: - autotrain - text-generation widget: - text: "I love 🤗 AutoTrain because " datasets: - abhishek/autotrain-data-m7xl-lpfp-h4qr co2_eq_emissions: emissions: 0 --- # Model Trained Using AutoTrain - Problem type: Text Generation - CO2 Emissions (in grams): 0.0000 ## Validation Metrics loss: 0.8759807348251343
shawt100/shawtsanders
shawt100
2023-05-04T04:14:50Z
36
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "dataset:OpenAssistant/oasst1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-04T03:46:42Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion datasets: - OpenAssistant/oasst1 metrics: - character library_name: diffusers pipeline_tag: text-to-image --- ### shawtsanders Dreambooth model trained by shawt100 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
joseph-t/purrfect-ai-test
joseph-t
2023-05-04T03:44:02Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T03:44:02Z
--- license: creativeml-openrail-m ---
muwenxin/autotrain-xgwbishe1-55280129012
muwenxin
2023-05-04T03:38:17Z
120
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain", "summarization", "en", "dataset:muwenxin/autotrain-data-xgwbishe1", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-05-04T03:34:03Z
--- tags: - autotrain - summarization language: - en widget: - text: "I love AutoTrain 🤗" datasets: - muwenxin/autotrain-data-xgwbishe1 co2_eq_emissions: emissions: 1.7354362265383152 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 55280129012 - CO2 Emissions (in grams): 1.7354 ## Validation Metrics - Loss: 3.123 - Rouge1: 15.575 - Rouge2: 2.825 - RougeL: 11.785 - RougeLsum: 13.616 - Gen Len: 20.000 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/muwenxin/autotrain-xgwbishe1-55280129012 ```
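The cURL call in the card above targets the Hugging Face Inference API, whose endpoint format is `https://api-inference.huggingface.co/models/{model_id}`. A small Python sketch that builds the same request without sending it (the API key is a placeholder, and no network call is made):

```python
import json

# Build the Inference API request for this model; nothing is sent here.
# "YOUR_HUGGINGFACE_API_KEY" is a placeholder, not a real token.
MODEL_ID = "muwenxin/autotrain-xgwbishe1-55280129012"
url = f"https://api-inference.huggingface.co/models/{MODEL_ID}"
headers = {
    "Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY",
    "Content-Type": "application/json",
}
payload = json.dumps({"inputs": "I love AutoTrain"})

print(url)
print(payload)
```

To actually call the endpoint you would POST `payload` with those headers, e.g. via `requests.post(url, headers=headers, data=payload)`.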
4bit/oasst-llama13b-4bit-128g
4bit
2023-05-04T03:10:55Z
6
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-04T02:57:06Z
https://wandb.ai/open-assistant/supervised-finetuning/runs/lguuq2c1 Quantized from https://huggingface.co/dvruette/oasst-llama-13b-2-epochs GGML Version: https://huggingface.co/Black-Engineer/oasst-llama13b-ggml-q4
yfyeung/icefall-asr-multidataset-pruned_transducer_stateless7-2023-05-04
yfyeung
2023-05-04T03:00:54Z
0
3
null
[ "tensorboard", "onnx", "license:apache-2.0", "region:us" ]
null
2023-05-04T02:34:06Z
--- license: apache-2.0 --- # Introduction This repo contains pre-trained models, checkpoints, training logs and decoding results for the following pull request: https://github.com/k2-fsa/icefall/pull/1010
4bit/koala-13B-GPTQ-4bit-128g
4bit
2023-05-04T02:54:46Z
7
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "koala", "ShareGPT", "gptq", "dataset:RyokoAI/ShareGPT52K", "dataset:Hello-SimpleAI/HC3", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-04T02:48:14Z
--- license: other library_name: transformers pipeline_tag: text-generation datasets: - RyokoAI/ShareGPT52K - Hello-SimpleAI/HC3 tags: - koala - ShareGPT - llama - gptq inference: false --- # Koala: A Dialogue Model for Academic Research This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original Llama 13B model. This version has then been quantized to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). ## My Koala repos I have the following Koala model repositories available: **13B models:** * [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF) * [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g) * [GPTQ quantized 4bit 13B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g-GGML) **7B models:** * [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF) * [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized) * [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g) * [GPTQ quantized 4bit 7B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g-GGML) ## Provided files Three model files are provided. You don't need all three - choose the one that suits your needs best! Details of the files provided: * `koala-13B-4bit-128g.pt` * pt format file, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code. 
* Command to create: * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save koala-13B-4bit-128g.pt` * `koala-13B-4bit-128g.safetensors` * newer `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code. * Command to create: * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors koala-13B-4bit-128g.safetensors` * `koala-13B-4bit-128g.no-act-order.ooba.pt` * `pt` format file, created with [oobabooga's older CUDA fork of GPTQ-for-LLaMa](https://github.com/oobabooga/GPTQ-for-LLaMa). * This file is included primarily for Windows users, as it can be used without needing to compile the latest GPTQ-for-LLaMa code. * It should hopefully therefore work with one-click-installers on Windows, which include the older GPTQ-for-LLaMa code. * The older GPTQ code does not support all the latest features, so the quality may be fractionally lower. * Command to create: * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save koala-13B-4bit-128g.no-act-order.ooba.pt` ## How to run in `text-generation-webui` File `koala-13B-4bit-128g.no-act-order.ooba.pt` can be loaded the same as any other GPTQ file, without requiring any updates to [oobaboogas text-generation-webui](https://github.com/oobabooga/text-generation-webui). The other two model files were created with the latest GPTQ code, and require that the latest GPTQ-for-LLaMa is used inside the UI. 
Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI: ``` git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa git clone https://github.com/oobabooga/text-generation-webui mkdir -p text-generation-webui/repositories ln -s GPTQ-for-LLaMa text-generation-webui/repositories/GPTQ-for-LLaMa ``` Then install this model into `text-generation-webui/models` and launch the UI as follows: ``` cd text-generation-webui python server.py --model koala-13B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want ``` The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information. If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch: ``` git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda cd GPTQ-for-LLaMa python setup_cuda.py install ``` Then link that into `text-generation-webui/repositories` as described above. Or just use `koala-13B-4bit-128g.no-act-order.ooba.pt` as mentioned above. 
## How the Koala delta weights were merged The Koala delta weights were originally merged using the following commands, producing [koala-13B-HF](https://huggingface.co/TheBloke/koala-13B-HF): ``` git clone https://github.com/young-geng/EasyLM git clone https://huggingface.co/TheBloke/llama-13b mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_13b_diff_v2 cd EasyLM PYTHONPATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.models.llama.convert_torch_to_easylm \ --checkpoint_dir=/content/llama-13b \ --output_file=/content/llama-13b-LM \ --streaming=True PYTHONPATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.scripts.diff_checkpoint --recover_diff=True \ --load_base_checkpoint='params::/content/llama-13b-LM' \ --load_target_checkpoint='params::/content/koala_diffs/koala_13b_diff_v2' \ --output_file=/content/koala_13b.diff.weights \ --streaming=True PYTHONPATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.models.llama.convert_easylm_to_hf --model_size=13b \ --output_dir=/content/koala-13B-HF \ --load_checkpoint='params::/content/koala_13b.diff.weights' \ --tokenizer_path=/content/llama-13b/tokenizer.model ``` ## Further info Check out the following links to learn more about the Berkeley Koala model. * [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/) * [Online demo](https://koala.lmsys.org/) * [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM) * [Documentation for running Koala locally](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md) ## License The model weights are intended for academic research only, subject to the [model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md), [Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use), and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb). 
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.
MDOWNLOAD/OMOECLORA
MDOWNLOAD
2023-05-04T02:48:02Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T02:46:16Z
--- license: creativeml-openrail-m ---
Smoden/pinocchio_diff_lora_1500
Smoden
2023-05-04T02:38:17Z
4
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-04T00:47:15Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - Smoden/pinocchio_diff_lora_1500 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
stablediffusionapi/theallys-mix-iv-veri
stablediffusionapi
2023-05-04T02:00:14Z
0
1
null
[ "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-04T02:00:06Z
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # TheAlly's Mix IV: Verisimilar API Inference ![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/1300711641683165602.png) ## Get API Key Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed. Replace Key in below code, change **model_id** to "theallys-mix-iv-veri" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Model link: [View model](https://stablediffusionapi.com/models/theallys-mix-iv-veri) Credits: [View credits](https://civitai.com/?query=TheAlly%27s%20Mix%20IV%3A%20Verisimilar) View all models: [View Models](https://stablediffusionapi.com/models) import requests import json url = "https://stablediffusionapi.com/api/v3/dreambooth" payload = json.dumps({ "key": "", "model_id": "theallys-mix-iv-veri", "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", 
"webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
platzi/platzi-distilroberta-base-mrpc-glue-cristian-durango
platzi
2023-05-04T01:52:35Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-04T01:33:56Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: platzi-distilroberta-base-mrpc-glue-cristian-durango results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8259803921568627 - name: F1 type: f1 value: 0.8794567062818336 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-distilroberta-base-mrpc-glue-cristian-durango This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.4245 - Accuracy: 0.8260 - F1: 0.8795 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5318 | 1.09 | 500 | 0.4245 | 0.8260 | 0.8795 | | 0.3704 | 2.18 | 1000 | 0.6045 | 0.8309 | 0.8739 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
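The accuracy and F1 in the card above follow the standard definitions. A small sketch, noting that MRPC's validation split has 408 pairs (so the reported accuracy is exactly 337/408) and using hypothetical confusion counts for the F1 illustration:

```python
# F1 is the harmonic mean of precision and recall.
# The tp/fp/fn counts below are hypothetical, not the model's actual ones.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# MRPC's validation split has 408 pairs; 337 correct matches the reported accuracy.
accuracy = 337 / 408
print(round(accuracy, 4))                    # 0.826
print(round(f1_score(tp=80, fp=10, fn=12), 4))  # 0.8791
```

`sklearn.metrics.f1_score` computes the same quantity directly from label arrays.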
TMZN/train_MEGA
TMZN
2023-05-04T01:36:35Z
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
2023-05-03T07:36:51Z
--- license: gpl-3.0 --- # train_MEGA 以马恩全集为主要数据集的训练,未完。<br> The training using the complete works of Marx and Engels as the primary dataset is incomplete.<br> Das Training mit den gesammelten Werken von Marx und Engels als primärem Datensatz ist unvollständig.<br> 2023-05-03 15:20: still building the dataset by hand; planning to try the approach used for training on novels. <br> Synced with https://github.com/tmzncty/train_MEGA
juan-barsce/my_awesome_eli5_clm-model
juan-barsce
2023-05-04T01:31:51Z
63
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-04T01:14:01Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: juan-barsce/my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juan-barsce/my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.7254 - Validation Loss: 3.7653 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.9035 | 3.7936 | 0 | | 3.7854 | 3.7763 | 1 | | 3.7254 | 3.7653 | 2 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
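For a causal language model like the fine-tuned distilgpt2 above, the validation loss converts directly into perplexity. A one-line sketch using the reported epoch-2 validation loss:

```python
import math

# Perplexity of a causal LM is exp(cross-entropy loss).
val_loss = 3.7653  # epoch-2 validation loss from the table above
perplexity = math.exp(val_loss)
print(round(perplexity, 2))  # 43.18
```

So this checkpoint's validation perplexity is roughly 43, i.e. on average the model is about as uncertain as a uniform choice over 43 tokens at each step.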
ToddGoldfarb/Cadet-Medium
ToddGoldfarb
2023-05-04T01:31:07Z
47
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "conversational", "en", "dataset:allenai/soda", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T02:36:53Z
--- license: openrail datasets: - allenai/soda language: - en pipeline_tag: conversational --- # What is Cadet-Medium? Inspired by Allen AI's **Cosmo-XL**, **Cadet-Medium** is a somewhat small conversational model trained off of the **SODA** dataset. **Cadet-Medium** is intended for inference at the edge (on something as small as a 2GB RAM Raspberry Pi). **Cadet-Medium** is trained off of the **t5-base** pretrained model from Google. If you have any questions, or any comments on improvements, please contact me at: **tcgoldfarb@gmail.com** # Google Colab Link Here is the link to the Google Colab file, where I walk through the process of training the model and using the SODA public dataset from AI2. https://colab.research.google.com/drive/1uekZ0gO3GqjPwno16tV1A4Gitrl7p3ur?usp=sharing # Get Started With Cadet-Medium Use the code snippet below to get started with Cadet-Medium! ``` import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import colorful as cf cf.use_true_colors() cf.use_style('monokai') class CadetMedAgent: def __init__(self): print(cf.bold | cf.purple("Waking up Cadet-Medium...")) self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") self.tokenizer = AutoTokenizer.from_pretrained("t5-base", model_max_length=512) self.model = AutoModelForSeq2SeqLM.from_pretrained("ToddGoldfarb/Cadet-Medium", low_cpu_mem_usage=True).to(self.device) self.conversation_history = "" def observe(self, observation): self.conversation_history = self.conversation_history + observation # The number 400 below is just a truncation safety net. It leaves room for 112 input tokens. 
if len(self.conversation_history) > 400: self.conversation_history = self.conversation_history[112:] def set_input(self, situation_narrative="", role_instruction=""): input_text = "dialog: " if situation_narrative != "": input_text = input_text + situation_narrative if role_instruction != "": input_text = input_text + " <SEP> " + role_instruction input_text = input_text + " <TURN> " + self.conversation_history # Uncomment the line below to see what is fed to the model. # print(input_text) return input_text def generate(self, situation_narrative, role_instruction, user_response): user_response = user_response + " <TURN> " self.observe(user_response) input_text = self.set_input(situation_narrative, role_instruction) inputs = self.tokenizer([input_text], return_tensors="pt").to(self.device) # I encourage you to change the hyperparameters of the model! Start by trying to modify the temperature. outputs = self.model.generate(inputs["input_ids"], max_new_tokens=512, temperature=1, top_p=.95, do_sample=True) cadet_response = self.tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) added_turn = cadet_response + " <TURN> " self.observe(added_turn) return cadet_response def reset_history(self): self.conversation_history = [] def run(self): def get_valid_input(prompt, default): while True: user_input = input(prompt) if user_input in ["Y", "N", "y", "n"]: return user_input if user_input == "": return default while True: continue_chat = "" # MODIFY THESE STRINGS TO YOUR LIKING :) situation_narrative = "Imagine you are Cadet-Medium talking to ???." role_instruction = "You are Cadet-Medium, and you are talking to ???." self.chat(situation_narrative, role_instruction) continue_chat = get_valid_input(cf.purple("Start a new conversation with new setup? [Y/N]:"), "Y") if continue_chat in ["N", "n"]: break print(cf.blue("CM: See you!")) def chat(self, situation_narrative, role_instruction): print(cf.green( "Cadet-Medium is running! 
Input [RESET] to reset the conversation history and [END] to end the conversation.")) while True: user_input = input("You: ") if user_input == "[RESET]": self.reset_history() print(cf.green("[Conversation history cleared. Chat with Cadet-Medium!]")) continue if user_input == "[END]": break response = self.generate(situation_narrative, role_instruction, user_input) print(cf.blue("CM: " + response)) def main(): print(cf.bold | cf.blue("LOADING MODEL")) CadetMed = CadetMedAgent() CadetMed.run() if __name__ == '__main__': main() ``` # Citations and Special Thanks Special thanks to Hyunwoo Kim for discussing with me the best way to use the SODA dataset. If you haven't looked into their work with SODA, Prosocial-Dialog, or COSMO, I recommend you do so! As well, read the paper on SODA! The article is listed below. ``` @article{kim2022soda, title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization}, author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi}, journal={ArXiv}, year={2022}, volume={abs/2212.10465} } ```
rcugarte/genfonts
rcugarte
2023-05-04T01:28:39Z
0
0
null
[ "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:rcugarte/genfonts_data", "region:us" ]
text-to-image
2023-05-04T01:19:53Z
--- datasets: - rcugarte/genfonts_data tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image ---
ZyXin/ppo-Pyramids_Training
ZyXin
2023-05-04T01:14:39Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-05-04T01:14:34Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: ZyXin/ppo-Pyramids_Training 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
1008611sS/159357258
1008611sS
2023-05-04T01:12:53Z
0
0
null
[ "region:us" ]
null
2023-05-04T01:11:38Z
--- license: bigscience-bloom-rail-1.0 --- Nanshan lies to the southeast. From this mountain onward, insects are called snakes and snakes are called fish. One account says Nanshan lies to the southeast of Jiexiong. The twin-winged birds are to its east; these birds are green and red, and the two birds fly wing to wing. One account says they are to the east of Nanshan.
DurangoFon/vit_model
DurangoFon
2023-05-04T00:55:55Z
216
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-04T00:07:22Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: vit_model results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9924812030075187 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0189 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1345 | 3.85 | 500 | 0.0189 | 0.9925 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
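The long-decimal accuracy in the card above is an exact fraction of the evaluation set. A quick check, assuming the standard 133-image validation split of the beans dataset:

```python
# beans' validation split has 133 images; 132 correct predictions
# reproduce the reported accuracy of ~0.9925 exactly.
correct, total = 132, 133
accuracy = correct / total
print(round(accuracy, 4))  # 0.9925
```

In other words, the fine-tuned ViT misclassified a single validation image.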
Smoden/Chronicles_diff_lora_1500
Smoden
2023-05-04T00:45:11Z
4
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-03T23:27:34Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - Smoden/Chronicles_diff_lora_1500 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
huggingtweets/marcash_uk
huggingtweets
2023-05-04T00:07:28Z
138
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-04T00:07:19Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1641976415481389056/XkRvxaLF_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MARC 🍊</div> <div style="text-align: center; font-size: 14px;">@marcash_uk</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from MARC 🍊. 
| Data | MARC 🍊 | | --- | --- | | Tweets downloaded | 349 | | Retweets | 44 | | Short tweets | 176 | | Tweets kept | 129 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/njtz7k2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marcash_uk's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/v9r62wtl) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/v9r62wtl/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/marcash_uk') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lewdryuna/A-Himawari
lewdryuna
2023-05-03T23:55:01Z
0
2
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "ja", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-03T23:55:01Z
--- license: creativeml-openrail-m language: - ja pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image library_name: diffusers duplicated_from: natsusakiyomi/HimawariMixs --- <div class="flex justify-center"> <div class="container p-0 w-100"> <img class="mt-0 object-cover rounded-t-lg w-100" style="height: 320px;" src="https://huggingface.co/natsusakiyomi/HimawariMixs/resolve/main/image/header.jpeg" width="100%"/> <div class="flex px-4"> <div class="flex-auto"> <h1 class="mb-2 text-3xl font-bold leading-tight" style="color: rgb(255, 151, 0/var(--tw-text-opacity));"> HimawariMixSeries </h1> <p class="mb-4 text-base text-neutral-600 dark:text-neutral-200"> A VAE-embedded model series merged from many models, with strong expressiveness in backgrounds and fine detail </p> </div> <div> <a href="https://twitter.com/min__san" class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md" style="background-color: #1da1f2"> <svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24"> <path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" /> </svg> </a> </div> </div> </div> </div> --- <h4>📄 ライセンス / License</h4> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> 修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" 
viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 
24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="blue_pencil-v7" class="mt-0 text-2xl"> <code>HimawariMix-v3</code> <small></small> </h3> <div> A model reworked mainly to strengthen backgrounds, with a larger share of realistic-style models in the merge.<br> Because it contains many realistic-style models, broken hands seem comparatively less likely than in the other versions. The B variant feels somewhat easier to work with. <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="blue_pencil-v7" class="mt-0 text-2xl"> <code>HimawariMix-v2</code> <small></small> </h3> <div> A model built with more weight on characters than on backgrounds.<br> Unlike v1.20, v1.10, and v1, it is usable in a wide variety of scenes.<br> The high saturation that characterizes HimawariMix begins to appear here.<br> Put unkindly, a jack of all trades and master of none. <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="blue_pencil-v7" class="mt-0 text-2xl"> <code>HimawariMix-v1.20 and 1.10</code> <small></small> </h3> <div> Minor-change versions of HimawariMix-v1 with adjusted merge ratios.<br> They specialize in close-ups and are rather unremarkable otherwise.<br> The minor changes reduced artifacts and improved stability. The only difference between v1.20 and v1.10 is the VAE. <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="blue_pencil-v7" class="mt-0 text-2xl"> <code>HimawariMix-v1</code> <small></small> </h3> <div> The original HimawariMix.<br> A model built to balance backgrounds and characters, a combination that was still uncommon at the time.<br> Its strength is fairly solid backgrounds, though that is ordinary by today's standards, and the high saturation characteristic of HimawariMix is not yet present at this stage. --- # Author & Contact Twitter: [@min__san](https://twitter.com/min__san)
hashiikhan/whisper-small-Eng-1
hashiikhan
2023-05-03T23:00:30Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:speech_commands", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-03T22:50:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - speech_commands metrics: - wer model-index: - name: whisper-small-Eng-1 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: speech_commands type: speech_commands config: v0.01 split: test args: v0.01 metrics: - name: Wer type: wer value: 239.6 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-Eng-1 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the speech_commands dataset. It achieves the following results on the evaluation set: - Loss: 5.0620 - Wer: 239.6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-----:| | 7.156 | 0.01 | 5 | 6.9727 | 256.4 | | 7.5392 | 0.02 | 10 | 5.0620 | 239.6 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
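The WER of 239.6 above is a percentage well over 100, which is possible because WER counts insertions against the reference. A minimal sketch of the metric (the card itself would have used the `evaluate`/`jiwer` implementation; this is an illustration):

```python
# Word error rate via Levenshtein distance over word lists.
# Returns a fraction; the card reports the value multiplied by 100.
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                      # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

# Long hallucinated hypotheses push WER past 100%:
print(wer("stop", "go go stop go"))  # 3 insertions / 1 reference word = 3.0
```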
jkkawach/ppo-Huggy
jkkawach
2023-05-03T23:00:20Z
15
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-05-03T23:00:12Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: jkkawach/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
poorviachar/my_awesome_qa_model
poorviachar
2023-05-03T22:51:55Z
61
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-02T22:31:23Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: poorviachar/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # poorviachar/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6838 - Validation Loss: 1.8516 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.6824 | 2.6359 | 0 | | 2.0129 | 1.8516 | 1 | | 1.6838 | 1.8516 | 2 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
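The `PolynomialDecay` entry in the optimizer config above, with `power: 1.0`, is a plain linear decay from 2e-05 to 0 over 500 steps. A pure-Python sketch of the schedule (no TensorFlow required):

```python
# Sketch of Keras PolynomialDecay with the config shown above.
def lr_at(step, initial_lr=2e-05, decay_steps=500, end_lr=0.0, power=1.0):
    step = min(step, decay_steps)        # schedule is flat after decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(lr_at(0))    # 2e-05
print(lr_at(250))  # 1e-05, halfway through the decay
print(lr_at(500))  # 0.0
```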
sqllama/lora-spider-dono
sqllama
2023-05-03T22:36:46Z
0
0
null
[ "region:us" ]
null
2023-04-30T01:00:50Z
## Setup Notes For this model, a VM with 2 T4 GPUs was used. Note 1. Output directory was initially lora-alpaca and then contents were moved to new folder when initializing git repository. ## Log (sqltest) chrisdono@deep-learning-duo-t4-3:~/alpaca-lora$ WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'spider' --output_dir './lora-alpaca' --num_epochs 10 --batch_size 32 --micro_batch_size 16 --learning_rate '9e-5' --add_eos_token Adding last loss values not included in trainer json file from last checkpoint. {'loss': 0.241, 'learning_rate': 1.0040816326530613e-05, 'epoch': 8.98} {'loss': 0.2343, 'learning_rate': 9.42857142857143e-06, 'epoch': 9.04} {'loss': 0.2376, 'learning_rate': 8.816326530612245e-06, 'epoch': 9.11} {'loss': 0.2355, 'learning_rate': 8.204081632653062e-06, 'epoch': 9.17} {'loss': 0.229, 'learning_rate': 7.591836734693877e-06, 'epoch': 9.24} {'loss': 0.2325, 'learning_rate': 6.979591836734694e-06, 'epoch': 9.3} {'loss': 0.24, 'learning_rate': 6.367346938775511e-06, 'epoch': 9.36} {'loss': 0.2438, 'learning_rate': 5.755102040816327e-06, 'epoch': 9.43} {'loss': 0.2391, 'learning_rate': 5.142857142857143e-06, 'epoch': 9.49} {'loss': 0.2351, 'learning_rate': 4.530612244897959e-06, 'epoch': 9.55} {'loss': 0.2289, 'learning_rate': 3.9183673469387755e-06, 'epoch': 9.62} {'loss': 0.2294, 'learning_rate': 3.3061224489795924e-06, 'epoch': 9.68} {'loss': 0.2344, 'learning_rate': 2.693877551020408e-06, 'epoch': 9.75} {'loss': 0.2358, 'learning_rate': 2.0816326530612247e-06, 'epoch': 9.81} {'loss': 0.2365, 'learning_rate': 1.469387755102041e-06, 'epoch': 9.87} {'loss': 0.2309, 'learning_rate': 8.571428571428572e-07, 'epoch': 9.94} {'loss': 0.2438, 'learning_rate': 2.4489795918367347e-07, 'epoch': 10.0} 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 
1570/1570 [4:45:44<00:00, 10.92s/it] {'train_runtime': 17144.6766, 'train_samples_per_second': 2.916, 'train_steps_per_second': 0.092, 'train_loss': 0.41175747267000234, 'epoch': 10.0}
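The learning-rate column in the log is consistent with transformers' linear warmup-then-decay schedule. A sketch — the 100 warmup steps are an assumption taken from alpaca-lora's defaults; only the 9e-5 base rate and the 1570 total steps appear in the log:

```python
# Linear LR schedule with warmup (a reconstruction; warmup_steps=100 is an
# assumed alpaca-lora default, not shown in the log above).
def lr_at(step, base_lr=9e-5, warmup_steps=100, total_steps=1570):
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

# Four steps before the end: 9e-5 * 4 / 1470 ~ 2.449e-07,
# matching the last learning rate logged above.
print(lr_at(1566))
```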
mattjmattj/HF_RL_unit4_reinforce_CartPole
mattjmattj
2023-05-03T22:32:50Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T22:32:40Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: HF_RL_unit4_reinforce_CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 441.80 +/- 87.01 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
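The "441.80 +/- 87.01" style of metric above is typically the mean and standard deviation of per-episode returns collected during evaluation. A sketch with invented returns (not the actual evaluation data):

```python
import statistics

# Mean and population std of per-episode returns from evaluation rollouts.
# The returns below are made up for illustration only.
episode_returns = [500.0, 310.0, 500.0, 402.0, 500.0, 287.0, 500.0, 455.0, 500.0, 464.0]
mean_reward = statistics.mean(episode_returns)
std_reward = statistics.pstdev(episode_returns)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```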
artemfilipenko/keyphrase-generation-bart-large-trained-on-augmented-and-default-inspec
artemfilipenko
2023-05-03T22:16:04Z
105
1
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "en", "dataset:midas/inspec", "dataset:artemfilipenko/synonyms-augmented-5x-inspec", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-03T22:06:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - midas/inspec - artemfilipenko/synonyms-augmented-5x-inspec model-index: - name: synonyms_5000_plus_3000_default_3_epoch results: [] language: - en metrics: - f1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # synonyms_5000_plus_3000_default_3_epoch This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the midas/inspec generation dataset, concatenated with data augmented custom artemfilipenko/synonyms-augmented-5x-inspec dataset. It achieves the following results on the evaluation set: - Loss: 1.7956 - F1@5ext: 0.4590 - P@5ext: 0.6116 - R@5ext: 0.4109 - F1@10ext: 0.5403 - P@10ext: 0.5953 - R@10ext: 0.5374 - F1@5abs: 0.2019 - P@5abs: 0.3080 - R@5abs: 0.1721 - F1@10abs: 0.2307 - P@10abs: 0.3066 - R@10abs: 0.2109 - F1@oext: 0.5427 - P@oext: 0.6045 - R@oext: 0.5246 - F1@oabs: 0.2316 - P@oabs: 0.3079 - R@oabs: 0.2094 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 10 - seed: 42 - gradient_accumulation_steps: 6 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 2.6.1 - Tokenizers 0.13.1
jajsmith/dsn_afrispeech
jajsmith
2023-05-03T21:54:22Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "en", "dataset:tobiolatunji/afrispeech-200", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-03T17:17:19Z
--- language: - en license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - tobiolatunji/afrispeech-200 model-index: - name: Whisper Small En - Owos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small En - Owos This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the AfriSpeech_j dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6865 - eval_wer: 29.3845 - eval_runtime: 1774.5798 - eval_samples_per_second: 1.691 - eval_steps_per_second: 0.211 - epoch: 0.06 - step: 250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.27.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
vldnechai/poca-SoccerTwos
vldnechai
2023-05-03T21:39:07Z
36
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-05-03T21:37:51Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: vldnechai/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
jploski/llama-7b-hf
jploski
2023-05-03T21:32:32Z
6
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T21:23:14Z
--- license: other --- Note: this is yahma/llama-7b-hf with checkpoint shards split into smaller files in order to enable loading in restricted memory environments like free Google Colab. The remaining description below is copied from yahma/llama-7b-hf. LLaMA-7B converted to work with git head Transformers/HuggingFace on April 8, 2023. This version should resolve the EOS token issues. This is under a special license; please see the LICENSE file for details. This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file). You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format. # LLaMA Model Card ## Model details **Organization developing the model** The FAIR team of Meta AI. **Model date** LLaMA was trained between December 2022 and February 2023. **Model version** This is version 1 of the model. **Model type** LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters. **Paper or resources for more information** More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/. **Citations details** https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/ **License** Non-commercial bespoke license **Where to send questions or comments about the model** Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue. 
## Intended use **Primary intended uses** The primary use of LLaMA is research on large language models, including: exploring potential applications such as question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models, and developing techniques to improve those, evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations. **Primary intended users** The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence. **Out-of-scope use cases** LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers. ## Factors **Relevant factors** One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model. **Evaluation factors** As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model. 
## Metrics **Model performance measures** We use the following measure to evaluate the model: - Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs, - Exact match for question answering, - The toxicity score from Perspective API on RealToxicityPrompts. **Decision thresholds** Not applicable. **Approaches to uncertainty and variability** Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training. ## Evaluation datasets The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. ## Training dataset The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing. 
## Quantitative analysis Hyperparameters for the model architecture <table> <thead> <tr> <th >LLaMA</th> <th colspan=6>Model hyper parameters </th> </tr> <tr> <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th> </tr> </thead> <tbody> <tr> <th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T</th> </tr> <tr> <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T</th> </tr> <tr> <th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T</th> </tr> <tr> <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T</th> </tr> </tbody> </table> *Table 1 - Summary of LLaMA Model Hyperparameters* We present our results on eight standard common sense reasoning benchmarks in the table below. <table> <thead> <tr> <th>LLaMA</th> <th colspan=9>Reasoning tasks </th> </tr> <tr> <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th> </tr> </thead> <tbody> <tr> <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93</th> </tr> <tr> <th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94</th> </tr> <tr> <th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92</th> </tr> <tr> <th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th> </tr> </tbody> </table> *Table 2 - Summary of LLaMA Model Performance on Reasoning tasks* We present our results on bias in the table below. Note that a lower value is better, indicating lower bias. 
| No | Category | FAIR LLM | | --- | -------------------- | -------- | | 1 | Gender | 70.6 | | 2 | Religion | 79 | | 3 | Race/Color | 57 | | 4 | Sexual orientation | 81 | | 5 | Age | 70.1 | | 6 | Nationality | 64.2 | | 7 | Disability | 66.7 | | 8 | Physical appearance | 77.8 | | 9 | Socioeconomic status | 71.5 | | | LLaMA Average | 66.6 | *Table 3 - Summary bias of our model output* ## Ethical considerations **Data** The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data. **Human life** The model is not intended to inform decisions about matters central to human life, and should not be used in such a way. **Mitigations** We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier. **Risks and harms** Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard. **Use cases** LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
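The 7B row of Table 1 above can be sanity-checked with a rough parameter count. A sketch — the SwiGLU FFN width (11008) and the 32k vocabulary are figures recalled from the LLaMA paper, not from this card:

```python
# Back-of-the-envelope parameter count for the 7B row (d=4096, 32 layers).
d, n_layers, vocab, ffn = 4096, 32, 32000, 11008
attn = 4 * d * d               # q, k, v, o projection matrices per layer
mlp = 3 * d * ffn              # SwiGLU: gate, up and down projections
embeddings = 2 * vocab * d     # input embedding plus untied output head
total = n_layers * (attn + mlp) + embeddings
print(f"{total / 1e9:.2f}B parameters")  # ~6.74B, i.e. the "7B" model
```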
jfecunha/arquivo-layoutxml-model
jfecunha
2023-05-03T21:32:18Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-04-27T08:20:22Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: arquivo-layoutxml-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # arquivo-layoutxml-model This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on an unknown dataset. It achieves the following results on the evaluation set (the auto-generated label names had their first letter stripped; restored here as Category/Title/None/Subtitle): - Loss: 0.2997 - Category Precision: 0.8719 - Category Recall: 0.8491 - Category F1: 0.8603 - Category Number: 497 - Title Precision: 0.8745 - Title Recall: 0.8971 - Title F1: 0.8857 - Title Number: 2508 - None Precision: 0.8855 - None Recall: 0.8855 - None F1: 0.8855 - None Number: 2951 - Subtitle Precision: 0.9494 - Subtitle Recall: 0.9774 - Subtitle F1: 0.9632 - Subtitle Number: 23695 - Overall Precision: 0.9356 - Overall Recall: 0.9593 - Overall F1: 0.9473 - Overall Accuracy: 0.9629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
tooucci/CartPole
tooucci
2023-05-03T21:22:55Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-04-21T23:38:15Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AliiaR/t5-small-finetuned-model
AliiaR
2023-05-03T21:01:54Z
63
0
transformers
[ "transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-02T20:28:10Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: AliiaR/t5-small-finetuned-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AliiaR/t5-small-finetuned-model This model is a fine-tuned version of [AliiaR/t5-small-finetuned-model](https://huggingface.co/AliiaR/t5-small-finetuned-model) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.4127 - Validation Loss: 1.1016 - Train Rouge1: 14.9189 - Train Rouge2: 3.7554 - Train Rougel: 13.6461 - Train Rougelsum: 13.6801 - Train Gen Len: 13.4191 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 1.4127 | 1.1016 | 14.9189 | 3.7554 | 13.6461 | 13.6801 | 13.4191 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
pandma/es_pipeline
pandma
2023-05-03T20:54:53Z
4
0
spacy
[ "spacy", "token-classification", "es", "model-index", "region:us" ]
token-classification
2023-05-03T20:54:28Z
--- tags: - spacy - token-classification language: - es model-index: - name: es_pipeline results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.998766394 - name: NER Recall type: recall value: 0.9988961039 - name: NER F Score type: f_score value: 0.9988312447 --- | Feature | Description | | --- | --- | | **Name** | `es_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.5.2,<3.6.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (13 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `BILLING_PERIOD_END`, `BILLING_PERIOD_START`, `BILL_OWNER`, `COMPANY_NAME`, `CUPS`, `DIRECTION`, `ENERGY_P1_PRICE`, `ENERGY_P2_PRICE`, `ENERGY_P3_PRICE`, `NIF`, `POWER_P1_PRICE`, `POWER_P2_PRICE`, `TOTAL_IMPORTE` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 99.88 | | `ENTS_P` | 99.88 | | `ENTS_R` | 99.89 | | `TRANSFORMER_LOSS` | 6425.46 | | `NER_LOSS` | 41888.91 |
AnshulRustogi/bert-base-multilingual-cased
AnshulRustogi
2023-05-03T20:52:58Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2023-05-03T19:55:21Z
--- tags: - generated_from_trainer model-index: - name: bert-base-multilingual-cased1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased1 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8440 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 214 | 4.0240 | | No log | 2.0 | 428 | 2.6347 | | 4.063 | 3.0 | 642 | 2.3167 | | 4.063 | 4.0 | 856 | 2.1420 | | 2.3039 | 5.0 | 1070 | 2.0258 | | 2.3039 | 6.0 | 1284 | 1.9483 | | 2.3039 | 7.0 | 1498 | 1.8992 | | 1.9096 | 8.0 | 1712 | 1.8669 | | 1.9096 | 9.0 | 1926 | 1.8460 | | 1.7069 | 10.0 | 2140 | 1.8440 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
uisikdag/ayla_ozetler2006_bertuncased
uisikdag
2023-05-03T20:52:16Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T16:04:41Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: ayla_ozetler200_bertuncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ayla_ozetler200_bertuncased This model is a fine-tuned version of [dbmdz/bert-base-turkish-uncased](https://huggingface.co/dbmdz/bert-base-turkish-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3311 - Accuracy: 0.9 ## Model description ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.89 | 6 | 1.6870 | 0.4278 | | 1.7467 | 1.93 | 13 | 1.1508 | 0.6972 | | 1.0982 | 2.96 | 20 | 0.7106 | 0.8028 | | 1.0982 | 4.0 | 27 | 0.5116 | 0.85 | | 0.5588 | 4.89 | 33 | 0.4031 | 0.8694 | | 0.3365 | 5.93 | 40 | 0.3696 | 0.8778 | | 0.3365 | 6.96 | 47 | 0.3394 | 0.8806 | | 0.2345 | 8.0 | 54 | 0.3397 | 0.9 | | 0.1791 | 8.89 | 60 | 0.3311 | 0.9 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.11.0
openmmlab/upernet-swin-base
openmmlab
2023-05-03T20:51:22Z
979
2
transformers
[ "transformers", "pytorch", "safetensors", "upernet", "vision", "image-segmentation", "en", "arxiv:1807.10221", "arxiv:2103.14030", "license:mit", "endpoints_compatible", "region:us" ]
image-segmentation
2023-01-13T14:34:17Z
--- language: en license: mit tags: - vision - image-segmentation model_name: openmmlab/upernet-swin-base --- # UperNet, Swin Transformer base-sized backbone UperNet framework for semantic segmentation, leveraging a Swin Transformer backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al. Combining UperNet with a Swin Transformer backbone was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030). Disclaimer: The team releasing UperNet + Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM). Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel. ![UperNet architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/upernet_architecture.jpg) ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for fine-tuned versions (with various backbones) on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation).
alesthehuman/dqn-SpaceInvadersNoFrameskip-v4
alesthehuman
2023-05-03T20:51:09Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T20:50:32Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 599.50 +/- 212.67 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alesthehuman -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alesthehuman -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alesthehuman ``` ## Hyperparameters ```python 
OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
ThanHitt/FishTreeRock_Classifier_v1
ThanHitt
2023-05-03T20:37:34Z
241
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-03T20:37:27Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: FishTreeRock_Classifier_v1 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9850746393203735 --- # FishTreeRock_Classifier_v1 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### fish ![fish](images/fish.jpg) #### rock ![rock](images/rock.jpg) #### tree ![tree](images/tree.jpg)
ratish/DBERT_MAKE_NewData_v1
ratish
2023-05-03T20:28:13Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T20:24:21Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ratish/DBERT_MAKE_NewData_v1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ratish/DBERT_MAKE_NewData_v1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5238 - Validation Loss: 0.6256 - Train Accuracy: 0.8909 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 240, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.2333 | 1.9085 | 0.6545 | 0 | | 1.6567 | 1.3839 | 0.6727 | 1 | | 1.2308 | 1.0679 | 0.8364 | 2 | | 0.9605 | 0.8879 | 0.8364 | 3 | | 0.8155 | 0.7807 | 0.8364 | 4 | | 0.7106 | 0.7242 | 0.8545 | 5 | | 0.6365 | 0.6794 | 0.8182 | 6 | | 0.5894 | 0.6334 | 0.8909 | 7 | | 0.5446 | 0.6293 | 0.8909 | 8 | | 0.5238 | 0.6256 | 0.8909 | 9 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
KatarLegacy/kebayabali
KatarLegacy
2023-05-03T20:23:39Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-03T20:23:02Z
--- license: creativeml-openrail-m ---
KatarLegacy/demon_cosplay_outfit
KatarLegacy
2023-05-03T20:15:14Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-03T20:14:19Z
--- license: creativeml-openrail-m ---
m5rcelo/a2c-AntBulletEnv-v0
m5rcelo
2023-05-03T20:10:00Z
1
0
stable-baselines3
[ "stable-baselines3", "tensorboard", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T18:26:18Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1556.28 +/- 442.97 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
nergaldarski/mistoonAnime
nergaldarski
2023-05-03T19:53:18Z
0
5
null
[ "region:us" ]
null
2023-05-03T19:41:13Z
CivitAI: https://civitai.com/models/24149/mistoonanime
Multi-Domain-Expert-Learning/expert-pubmed_abstracts
Multi-Domain-Expert-Learning
2023-05-03T19:48:41Z
6
1
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T13:01:42Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: expert-pubmed_abstracts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # expert-pubmed_abstracts This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2407 - Accuracy: 0.5368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.2802 | 0.01 | 500 | 2.2553 | 0.5345 | | 2.2277 | 0.02 | 1000 | 2.2407 | 0.5368 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
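The distributed settings in the card above (per-device batch size 1, 8 GPUs, 8 gradient-accumulation steps) multiply out to the reported total train batch size of 64. A quick sanity check; the helper function is illustrative, not part of the training code:

```python
def effective_batch_size(per_device: int, num_devices: int, accum_steps: int) -> int:
    """Effective batch size = per-device batch x devices x gradient-accumulation steps."""
    return per_device * num_devices * accum_steps

# The configuration reported in the card above.
print(effective_batch_size(per_device=1, num_devices=8, accum_steps=8))  # 64
```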
rohitraman/my_new_model
rohitraman
2023-05-03T19:44:13Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-21T10:19:17Z
--- tags: - generated_from_trainer model-index: - name: my_new_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_new_model This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.1.0 - Tokenizers 0.12.1
athenasarch/ppo-LunarLander-v2
athenasarch
2023-05-03T19:42:46Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-04-28T21:34:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.28 +/- 18.48 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
aashay96/indic-BloomLM
aashay96
2023-05-03T19:09:44Z
0
5
null
[ "region:us" ]
null
2023-04-27T10:07:45Z
# Indic Language Bloom Model Training This repository contains the code and resources for fine-tuning the Huggingface Bloom model on the Indic language dataset using Low-Rank Adaptation (LoRA). The goal is to create a high-performance language model specifically tailored to Indic languages. ## Dataset The dataset used for training is provided by AI4Bharat. I have uploaded it to huggingface hub at: - [Processed Indic Language Corpus](https://huggingface.co/datasets/aashay96/indic_language_corpus/tree/main) ## Progress ### Completed - [x] Low-Rank Adaptation fine-tuning of the Bloom model on streaming data - [x] Single checkpoint available (training logs at [Weights & Biases](https://wandb.ai/indic-lm/huggingface/runs/7kq2m62v/)) ### To Do - [ ] Benchmark current multilingual LLMs on IndicGLUE using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) - [ ] Integrate DeepSpeed for better resource utilization - [ ] Convert current instruction dataset to Indic languages and train (dolly v2 dataset, distilled from GPT, etc.) - [ ] Model doesn't stop producing text - how to fix? - [ ] Deploy RLHF community app using [Cheese](https://github.com/CarperAI/cheese) ## Using the Model ```python import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "aashay96/indic-BloomLM" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the Lora model model = PeftModel.from_pretrained(model, peft_model_id) batch = tokenizer("आप कैसे हैं", return_tensors='pt') with torch.cuda.amp.autocast(): output_tokens = model.generate(**batch, max_new_tokens=10) print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True)) ```
DKYoon/mt5-xxl-lm-adapt
DKYoon
2023-05-03T19:01:24Z
4
1
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "arxiv:2205.12647", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-04-13T19:03:29Z
--- license: apache-2.0 --- 🤗 Language model initialized from mT5 and trained for an additional 100K steps on the Prefix LM objective using mC4 data. Paper: [Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation](https://arxiv.org/abs/2205.12647) Authors: Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant PyTorch port of the original Flax checkpoint at [Google/T5X repository](https://github.com/google-research/t5x).
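For intuition, the Prefix LM objective mentioned above splits each training sequence into a visible prefix (the encoder/input side) and a continuation to be predicted left-to-right. A rough illustrative sketch of forming one such example — not the actual T5X preprocessing:

```python
import random

def make_prefix_lm_example(tokens, rng):
    """Split a token sequence at a random pivot: the prefix becomes the
    input and the remainder becomes the generation target."""
    pivot = rng.randint(1, len(tokens) - 1)  # keep both sides non-empty
    return tokens[:pivot], tokens[pivot:]

rng = random.Random(0)
inputs, targets = make_prefix_lm_example(list(range(10)), rng)
assert inputs + targets == list(range(10))  # nothing lost in the split
```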
ratish/gpt_v1.4.1
ratish
2023-05-03T18:58:06Z
60
0
transformers
[ "transformers", "tf", "gpt2", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T18:52:39Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: ratish/gpt_v1.4.1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ratish/gpt_v1.4.1 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9635 - Validation Loss: 0.8785 - Train Accuracy: 0.8889 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.9635 | 0.8785 | 0.8889 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
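The `PolynomialDecay` schedule in the optimizer config above (initial 2e-05, 3000 decay steps, end 0.0, power 1.0) reduces to plain linear decay. A hedged pure-Python sketch of the formula — not the Keras implementation itself:

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=3000, end_lr=0.0, power=1.0):
    """Polynomial learning-rate decay; with power=1.0 this is linear decay."""
    step = min(step, decay_steps)          # hold at end_lr past decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

assert polynomial_decay(0) == 2e-05       # full LR at the start
assert polynomial_decay(3000) == 0.0      # fully decayed at decay_steps
assert abs(polynomial_decay(1500) - 1e-05) < 1e-12  # halfway point
```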
SuzieCreamchease/God_Knitting_Sheep
SuzieCreamchease
2023-05-03T18:50:37Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-05-03T17:47:58Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bright1/fine-tuned-twitter-Roberta-base-sentiment
bright1
2023-05-03T18:39:08Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T01:13:01Z
--- tags: - generated_from_trainer model-index: - name: fine-tuned-twitter-Roberta-base-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-twitter-Roberta-base-sentiment This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5453 - eval_accuracy: {'accuracy': 0.7915} - eval_f1score: {'f1': 0.790972084150606} - eval_runtime: 68.7486 - eval_samples_per_second: 29.092 - eval_steps_per_second: 3.636 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-09 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - lr_scheduler_warmup_steps: 1399 - num_epochs: 7 ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
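The `linear` scheduler with 1399 warmup steps in the card above ramps the learning rate up from 0, then decays it linearly back to 0. A minimal sketch of that shape; `total_steps` here is an illustrative guess (warmup_steps divided by the 0.1 warmup ratio), since the real value depends on dataset size and epochs:

```python
def linear_schedule_with_warmup(step, base_lr=2e-07, warmup_steps=1399, total_steps=13990):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    frac = (total_steps - step) / (total_steps - warmup_steps)
    return base_lr * max(0.0, frac)

assert linear_schedule_with_warmup(0) == 0.0            # cold start
assert linear_schedule_with_warmup(1399) == 2e-07       # peak at end of warmup
assert linear_schedule_with_warmup(13990) == 0.0        # decayed to zero
```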